I've written two different ways of doing the same thing and I would like to compare which one will execute faster. Of course it is always possible to benchmark, but how a program benchmarks can differ from machine to machine and can be affected by many outside factors. How could I calculate which is faster without benchmarking? My thought would be that you would sum the times of all the operations done in the program. Is this a standard thing to do? It seems like when you benchmark there is a lot of room for error.
My thought would be that you would sum the times of all the operations done in the program.
Yes, but you can't easily or reliably figure out those times by any method other than benchmarking.
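A careful benchmark also has much less room for error than a single timed run. As a minimal sketch (not a real harness; `version_a` and `version_b` are placeholders for whatever two implementations you're comparing): repeat each measurement many times on the same data and summarize with the minimum or median, so one-off interference from the rest of the system matters less.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

// Placeholder implementations -- substitute the two versions you want to compare.
std::uint64_t version_a(const std::vector<int>& v) {
    std::uint64_t sum = 0;
    for (int x : v) sum += x;
    return sum;
}
std::uint64_t version_b(const std::vector<int>& v) {
    std::uint64_t sum = 0;
    for (std::size_t i = 0; i < v.size(); ++i) sum += v[i];
    return sum;
}

// Time one call and return nanoseconds.
template <typename F>
double time_once(F&& f, const std::vector<int>& v) {
    auto t0 = std::chrono::steady_clock::now();
    volatile std::uint64_t sink = f(v);   // keep the result live so the call isn't optimized away
    (void)sink;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::nano>(t1 - t0).count();
}

int main() {
    std::vector<int> data(1 << 20, 1);
    const int reps = 100;

    std::vector<double> a(reps), b(reps);
    for (int i = 0; i < reps; ++i) a[i] = time_once(version_a, data);
    for (int i = 0; i < reps; ++i) b[i] = time_once(version_b, data);

    // The minimum over many repetitions is usually the least noisy summary.
    std::cout << "version_a min: " << *std::min_element(a.begin(), a.end()) << " ns\n";
    std::cout << "version_b min: " << *std::min_element(b.begin(), b.end()) << " ns\n";
}
```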
The problem is that these times depend on the dynamic context of what happened previously in your program (or even system-wide). CPUs are complex beasts, and cache effects (data cache and instruction cache) are often a major factor. So is branch prediction: see "Why is it faster to process a sorted array than an unsorted array?"
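That sorted-vs-unsorted effect is easy to reproduce. A rough sketch (array size and threshold are arbitrary choices here): the instructions executed are identical in both cases, only the predictability of one branch changes.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Sum only the elements at or above a threshold; the branch is what matters here.
std::uint64_t sum_above(const std::vector<int>& v, int threshold) {
    std::uint64_t sum = 0;
    for (int x : v)
        if (x >= threshold)   // well predicted on sorted data, ~random on shuffled data
            sum += x;
    return sum;
}

double time_ms(const std::vector<int>& v) {
    auto t0 = std::chrono::steady_clock::now();
    volatile std::uint64_t sink = 0;
    for (int i = 0; i < 100; ++i) sink += sum_above(v, 128);
    (void)sink;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 255);
    std::vector<int> data(1 << 20);
    for (int& x : data) x = dist(rng);

    std::vector<int> sorted = data;
    std::sort(sorted.begin(), sorted.end());

    // Same code, same values -- only branch predictability differs.
    std::cout << "shuffled: " << time_ms(data)   << " ms\n";
    std::cout << "sorted:   " << time_ms(sorted) << " ms\n";
}
```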
Static analysis of a small loop in assembly language is possible. For example, I can accurately predict how many cycles per iteration a simple loop can run on Intel Haswell, assuming no cache misses, based on Agner Fog's microarchitecture PDF and instruction tables. Going beyond that involves more and more guesswork.
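To illustrate the kind of loop that static cycle counting can handle, here is a scalar array sum; the comments sketch the reasoning under stated assumptions (L1 cache hits, a plain scalar loop from the compiler), and the exact numbers are only valid while those assumptions hold.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// A scalar sum whose inner loop is a short loop-carried dependency chain
// through `sum` -- simple enough for pencil-and-paper cycle counting.
std::uint64_t scalar_sum(const std::vector<std::uint64_t>& v) {
    std::uint64_t sum = 0;
    for (std::uint64_t x : v)
        sum += x;   // the dependency chain that bounds throughput
    return sum;
}

// On Haswell, assuming every load hits in L1 cache and the compiler emits a
// plain scalar loop (add reg, [mem] / pointer increment / compare-and-branch),
// Agner Fog's tables give integer add a 1-cycle latency, so the chain through
// `sum` limits the loop to roughly 1 iteration per clock. Change any of those
// assumptions (cache misses, auto-vectorization, a different microarchitecture)
// and the prediction no longer holds.
int main() {
    std::vector<std::uint64_t> v(1 << 20, 1);
    std::cout << scalar_sum(v) << "\n";
}
```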
Performance in a high-level interpreted language like Ruby might be somewhat predictable for experts who spend a lot of time tuning code in it, but almost certainly not at the level of "this will take this many microseconds", only "this is probably a bit, or a lot, faster than that".