I decided to compare the performance of the standard strict tail-recursive version of the Fibonacci program in Haskell to one written in C, using GMP to allow comparisons where the result is too big to fit in a word (in Haskell I use the multi-precision Integer
type). I'm going to omit the Haskell program, because this is a question about C and GMP. The C implementation is this:
#include <stdio.h>
#include <stdlib.h>
#include <gmp.h>

void fib(unsigned int n){
    mpz_t a, b, t;
    mpz_init_set_ui(a, 0);
    mpz_init_set_ui(b, 1);
    mpz_init(t);
    for(; n > 1; n--){
        mpz_add(t, a, b);
        mpz_set(a, b);
        mpz_set(b, t);
    }
    //mpz_out_str(stdout, 10, b);
}

int main(int argc, char **argv){
    if(argc != 2){
        printf("Usage: fibc <number>\n");
        return 1;
    }
    fib(atol(argv[1]));
    return 0;
}
Notice that I commented out the line that prints the result, since that output alone was taking about a second (the Haskell version skips the output in the same way).
The results are:
time ./fibhs 1000000
./fibhs 1000000 5.77s user 0.05s system 99% cpu 5.831 total
time ./fibc 1000000
./fibc 1000000 11.19s user 0.00s system 100% cpu 11.194 total
I figure I must be using GMP wrong. Can anyone see any performance improvement possibilities in the C code?
Play ping-pong. You have two variables a and b and a temporary t. You add a and b and put the result into t, then you copy b into a and t into b. Instead, alternate between adding b into a and adding a into b; that eliminates the temporary and both copies. The final result is either in a or in b, depending on whether n is odd or even.