I have to write a program for a university course, and there is a website that tests how much memory it uses. With the same input, if I compile the program on my PC and run it under Valgrind, it reports a total heap usage of about 77,000 bytes, roughly 75 KiB.
But when I submit it on the website, with the same input, the reported memory usage is 384 KiB, and I don't understand whether Valgrind is lying or the website is wrong. My suspicion is that it comes down to how the program is compiled: on my PC I use a simple
gcc myprog.c -o myc
while the university website compiles it with:
/usr/bin/gcc -DEVAL -std=c11 -O2 -pipe -static -s -o program programname.c -lm
I don't know anything about this compilation command; the professor just wrote that it is the one used on the website and that I can use it on my PC too. If I compile with it, the program runs just fine, but when I try to run Valgrind on the resulting executable, it stops and says it cannot continue.
So, in short: why do I see a difference in allocated memory? Is it caused by something this compilation command does?
> If I use this compilation command the program runs just fine, but when I try to use Valgrind on the executable file created by it, it stops and says it cannot continue.
You did not provide the exact error message, but in any case Valgrind does not work well with statically linked binaries (built with the -static option); see Valgrind errors when linked with -static -- Why?.
> Why do I see a difference in allocated memory?
Because you are building a dynamically linked executable while the website builds a statically linked one; see the difference between them in Static linking vs dynamic linking. A statically linked binary embeds copies of the library code it uses (here at least libc and libm), so its memory footprint at load time is larger even though your own code allocates exactly the same amount on the heap.
Note that Valgrind is not the only tool for measuring the memory usage of a binary. You can also run /usr/bin/time -v <binary_name> and look for Maximum resident set size in the output.
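For example, a quick sketch (note it must be /usr/bin/time, the GNU time binary, not the shell builtin; ./program stands in for your executable):

```shell
# Run the program under GNU time and extract its peak memory use.
# -v prints verbose resource statistics to stderr, hence the 2>&1.
/usr/bin/time -v ./program 2>&1 | grep "Maximum resident set size"
```

The value is reported in kibibytes, and it measures the whole process (code, stack, mapped libraries), not just heap allocations, which is closer to what the website appears to report.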