In most popular programming languages, when you write a for loop it looks like this:
for(int i = 0; condition; increment or decrement i) {}
My question is: why do we write int i = 0? Why couldn't the default be int a = 0 or int b = 0? I know those are possible, but is there any history behind int i = 0?
i and j have typically been used as subscripts in mathematics for a long time (e.g., even in papers that predate higher-level languages, you frequently see things like X_{i,j}, especially in summations).
When they designed Fortran, they (apparently) decided to carry that convention over, so all variables with names starting with "I" through "N" default to integer, and all others to real (floating point). For those who've missed it, this is the source of the old joke "God is real (unless declared integer)".
Most people seem to have seen little reason to change that. It's widely known and understood, and quite succinct. Every once in a while you see something written by some psychotic who thinks there's a real advantage to something like:
for (int outer_index_variable = 0; outer_index_variable < 10; outer_index_variable++)
    for (int inner_index_variable = 0; inner_index_variable < 10; inner_index_variable++)
        x[outer_index_variable][inner_index_variable] = 0;
Thankfully this is pretty rare though, and most style guides now point out that while long, descriptive variable names can be useful, you don't always need them, especially for something like this where the variable's scope is only a line or two of code.
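For contrast, here's the same loop written with the conventional one-letter indices (a minimal sketch, assuming x is a 10x10 int array as in the example above):

int x[10][10];                    /* the same array assumed in the example above */
for (int i = 0; i < 10; i++)      /* i: outer (row) index, by convention */
    for (int j = 0; j < 10; j++)  /* j: inner (column) index, by convention */
        x[i][j] = 0;

Same behavior, but the index names no longer drown out the one thing the loop actually does.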