Recently I found this article, which claims that the preference for for(;;) over while(1) for infinite loops arose because the C compiler originally available on the PDP-11 generated an extra machine instruction for while(1). Incidentally, even Visual C++ warnings now tend to favor the former. How plausible is this attribution of the for(;;) idiom?
Here's what the V7 Unix compiler cc produces (using SIMH and an image from TUHS):
$ cat > a.c
main(){
    while(1);
}
$ cat > b.c
main(){
    for(;;);
}
$ cc -S a.c
$ cc -S b.c
a.c (the while version) compiles to:
.globl _main
.text
_main:
~~main:
jsr r5,csv
jbr L1
L2:L4:tst $1
jeq L5
jbr L4
L5:L3:jmp cret
L1:jbr L2
.globl
.data
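To read this (the annotations are mine; / begins a comment in the Unix PDP-11 assembler): csv and cret are V7's standard function entry and exit helpers, and jbr is a branch pseudo-instruction that the assembler resolves to a short br or a long jmp as needed. The loop kernel re-tests the controlling expression on every pass:

L2:L4:tst $1    / test the constant 1, setting the condition codes
jeq L5          / leave the loop if it was zero -- never taken
jbr L4          / branch back to the top of the loop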
While b.c (the for version) becomes:
.globl _main
.text
_main:
~~main:
jsr r5,csv
jbr L1
L2:L4:jbr L4
L5:L3:jmp cret
L1:jbr L2
.globl
.data
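In C terms, the unoptimized compiler appears to translate the two loops roughly like this (my sketch of the apparent code generation; the function names are mine, and this is not actual compiler source):

/* while (1); -- the constant condition is still tested each time around */
void spin_while(void)
{
loop:
    if (!1)            /* tst $1 / jeq L5 */
        goto done;
    goto loop;         /* jbr L4 */
done:	;
}

/* for (;;); -- no condition, so no test is emitted at all */
void spin_for(void)
{
loop:
    goto loop;         /* jbr L4 */
}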
So it's at least true that for(;;) compiled to fewer instructions when not optimizing. However, when compiling with -O, both programs produce exactly the same assembly:
.globl _main
.text
_main:
~~main:
jsr r5,csv
L4:jbr L4
.globl
.data
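In other words, the peephole pass (V7's -O runs the object-code optimizer, c2) deletes the test of the constant, collapses the chain of jbrs into a single branch-to-self, and drops the now-unreachable cret epilogue. Annotated (comments mine):

jsr r5,csv    / standard V7 function prologue via the csv helper
L4:jbr L4     / the entire loop: an unconditional branch to itself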
When I add a loop body of printf("Hello");, the two programs likewise compile to identical assembly. So it might indeed be that the idiom has its origins in the code the PDP-11 compiler generated, but by 1979 the difference was already largely irrelevant.