
Geek Culture / Timing code segments (differences between runs)

Rudolpho
Joined: 28th Dec 2005
Location: Sweden
Posted: 23rd Feb 2009 13:06
So, I basically need to time the execution of some code segments. The thing is that these times of course differ from run to run. I guess you could just run a few identical tests and take the average time as the result, but I was wondering if there might be a better way. Assume that, for some reason, one of the executions is excessively slow (i.e. the processor's full power was required by some other process for a while); that would seriously ruin any chance of registering a proper execution time.

Thanks for any ideas

IanM
Retired Moderator
Joined: 11th Sep 2002
Location: In my moon base
Posted: 23rd Feb 2009 14:34
You can try to reduce the amount of time your process is interrupted (depending on what you are doing), but there's no way to remove it completely. You can set the processor affinity for your process, and bump its priority to high for example.

If you are trying to do anything with the OS though, it's likely that your process will get bumped for a while, especially if you are doing any kind of IO, even if it's not explicit IO (allocating memory for example).

Basically, I'd say just grab a whole load of timings, ditch the first few, then use min/max/average depending on your processing.
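
A minimal sketch of that approach in C++, assuming Windows (the workToMeasure routine and the run counts are only placeholders for whatever you actually need to time):

// Pin the process to one core and raise its priority to reduce (not
// eliminate) interruptions, then take many timings and ditch the warm-up.
#include <windows.h>
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

void workToMeasure()
{
    // stand-in for the code segment being timed
    volatile long sum = 0;
    for (int i = 0; i < 100000; ++i) sum += i;
}

int main()
{
    HANDLE process = GetCurrentProcess();
    SetProcessAffinityMask(process, 1);             // run on core 0 only
    SetPriorityClass(process, HIGH_PRIORITY_CLASS); // get pre-empted less often

    const int runs = 200, warmup = 10;
    std::vector<double> times;
    for (int i = 0; i < runs; ++i)
    {
        auto start = std::chrono::steady_clock::now();
        workToMeasure();
        auto stop = std::chrono::steady_clock::now();
        if (i >= warmup)   // discard the first few runs (caches, paging, etc.)
            times.push_back(std::chrono::duration<double, std::micro>(stop - start).count());
    }

    std::sort(times.begin(), times.end());
    double total = 0;
    for (double t : times) total += t;
    std::printf("min %.2f us, median %.2f us, avg %.2f us\n",
                times.front(), times[times.size() / 2], total / times.size());
    return 0;
}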

Van B
Moderator
Joined: 8th Oct 2002
Location: Sunnyvale
Posted: 23rd Feb 2009 15:42
With these things I think it's best to use a comparison command. So instead of timing a block of instructions in isolation, you time a control first, then the command, and compare all the command results against the control result. It makes sense to pick a common command in whatever area you're testing. If you were benchmarking 2D, I would get the elapsed time to clear the screen, then check other 2D commands using the clear-screen time as the control, so everything is calculated as a multiplier of the control time. By doing it that way, as long as you have chosen a good control command, your results should be more indicative of other PCs. For instance, if you got a result of 0.05 for drawing a dot on the screen with clearing the screen as the control, you know you can draw 20 dots per screen clear; and if you ran it on a machine that is twice as slow, or twice as fast, most of the time any PC would give similar results.

It depends what you are benchmarking for - you either want to find the speed of the PC hardware, the speed of the language you're using, or a speed comparison of the commands being run. I personally would do it this way if I was looking to optimize code or qualify new coding techniques, or even to compare whole techniques - I'd make a function for each technique, like comparing SQRT against vectors for finding distances.

My point, though, is that with this technique it may not matter how well the PC is running as long as it's consistent: if your PC was running at half speed, it would take twice as long to clear the screen and twice as long to draw a dot, so the results would be the same, and still be valid.
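
To make the ratio idea concrete, a rough C++ sketch (clearScreen and drawDot here are only stand-ins for whatever control and test commands you actually care about):

#include <chrono>
#include <cstdio>
#include <cstring>

static unsigned char screen[640 * 480];

void clearScreen()                  // control operation
{
    std::memset(screen, 0, sizeof(screen));
}

void drawDot(int x, int y)          // operation under test
{
    screen[y * 640 + x] = 255;
}

// average seconds per call over a number of repeats
template <typename F>
double timeIt(F f, int repeats)
{
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < repeats; ++i) f();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(stop - start).count() / repeats;
}

int main()
{
    const int repeats = 10000;
    double control = timeIt(clearScreen, repeats);
    double test    = timeIt([] { drawDot(320, 240); }, repeats);

    // e.g. 0.05 means one dot costs 5% of a screen clear, so roughly
    // 20 dots fit in the time of one clear - a ratio that should hold
    // on faster or slower machines as long as they are consistent.
    std::printf("dot / clear ratio: %.4f\n", test / control);
    return 0;
}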


Health, Ammo, and bacon and eggs!
Rudolpho
Joined: 28th Dec 2005
Location: Sweden
Posted: 24th Feb 2009 19:25
Thanks for your answers.
What I'm trying to do is write a function that determines when it is faster to abandon a quicksort recursion and sort the last few elements using insertion sort instead.

I guess I'll try to just go with the average value of a few hundred timings and hope that it will be good enough.
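
For reference, a minimal sketch of that kind of hybrid sort in C++ (the cutoff of 16 is just a placeholder for whatever value the timings end up suggesting):

#include <algorithm>
#include <vector>

const int CUTOFF = 16;  // below this many elements, abandon the recursion

void insertionSort(std::vector<int>& a, int lo, int hi)
{
    for (int i = lo + 1; i <= hi; ++i)
    {
        int key = a[i], j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; --j; }
        a[j + 1] = key;
    }
}

void hybridQuicksort(std::vector<int>& a, int lo, int hi)
{
    if (hi - lo + 1 <= CUTOFF)      // small range: insertion sort instead
    {
        insertionSort(a, lo, hi);
        return;
    }
    int pivot = a[lo + (hi - lo) / 2];
    int i = lo, j = hi;
    while (i <= j)                  // partition around the pivot
    {
        while (a[i] < pivot) ++i;
        while (a[j] > pivot) --j;
        if (i <= j) { std::swap(a[i], a[j]); ++i; --j; }
    }
    if (lo < j) hybridQuicksort(a, lo, j);
    if (i < hi) hybridQuicksort(a, i, hi);
}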

IanM
Retired Moderator
Joined: 11th Sep 2002
Location: In my moon base
Posted: 24th Feb 2009 19:44
An insertion sort will do as many swaps as it does comparisons (on average for random data), while a selection sort will do twice as many comparisons, but will carry out the minimum number of swaps - basically, if it's 'expensive' to move the items around, use selection sort.

Oh, and I find that a range size of around 10 gives the best result for switching, but YMMV
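
A small C++ sketch of that trade-off, with counters added purely for illustration (the exact figures depend on the input data):

#include <cstdio>
#include <utility>
#include <vector>

int main()
{
    std::vector<int> a = {5, 2, 8, 1, 9, 3, 7, 4, 6, 0};
    std::vector<int> b = a;
    long cmpIns = 0, moveIns = 0, cmpSel = 0, swapSel = 0;

    // insertion sort: element moves roughly track comparisons on random data
    for (std::size_t i = 1; i < a.size(); ++i)
    {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && (++cmpIns, a[j - 1] > key))
        {
            a[j] = a[j - 1];  // shift one element to the right
            --j;
            ++moveIns;
        }
        a[j] = key;
    }

    // selection sort: more comparisons, but at most n - 1 swaps
    for (std::size_t i = 0; i + 1 < b.size(); ++i)
    {
        std::size_t min = i;
        for (std::size_t j = i + 1; j < b.size(); ++j)
        {
            ++cmpSel;
            if (b[j] < b[min]) min = j;
        }
        if (min != i) { std::swap(b[i], b[min]); ++swapSel; }
    }

    std::printf("insertion: %ld comparisons, %ld moves\n", cmpIns, moveIns);
    std::printf("selection: %ld comparisons, %ld swaps\n", cmpSel, swapSel);
    return 0;
}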

Rudolpho
Joined: 28th Dec 2005
Location: Sweden
Posted: 26th Feb 2009 12:10
Yeah, thanks.
I estimated a breakpoint at below 18 elements; it seems to work well.

