c# · algorithm · bit-manipulation · hardware · integer-arithmetic

Which is more likely to fail in the event of a hardware failure: for ( int i = 0; i != n; ++i ) or for ( int i = 0; i < n; ++i )?


Obviously,

for ( int i = 0; i != n; ++i )

works just the same as

for ( int i = 0; i < n; ++i )

But is it normal for a developer to feel uneasy about the former? I'm always afraid that some freak accident will make the i in the former loop "miss" the n. In the second loop, by contrast, if i somehow stops incrementing correctly, the < operator still acts as a barrier.

This made me wonder which of the two is actually more likely to fail, given the full spectrum of freak accidents. Someone here with knowledge of compilers or hardware might have an answer for me.


Solution

  • I can sympathize with you because I felt the same way when I started programming. It's just something that you have to get used to. There's a time and place for both methods. For example, if you're iterating with C++ Standard Library iterators, use the first method:

    for (std::vector<int>::iterator it = myvector.begin(); it != myvector.end(); ++it)
    

    If you're just looping a fixed number of times, the convention is to use the second method:

    for (unsigned int i = 0; i < N; i++)
    

    Don't waste your time worrying about hardware problems. That's someone else's job.