There are similar questions all over, and the closest one is from this Stack Exchange site. But even though I learned a lot reading them, none of them exactly answers my question.
Here is the question. Say I have this program (it is a very simplified version):
inline void execute(char *cmd)
{
    int exitstat = 0;

    // lots of other things happen
    exitstat = execve(cmd, cmdarg, environ);
    free(cmd), free(other_passed_address); // DO I NEED THIS, since I am already about to exit?
    _exit(exitstat);
}

int main(void)
{
    int childstat;
    char *str = malloc(6); // I also have many more allocated blocks on the heap in the parent process

    strcpy(str, "Hello");
    pid_t childid = fork(); // create a child process, which also gets a copy of all the heap memory even though it is CoW
    if (childid < 0)
        exit(-1);
    if (childid == 0)
    {
        execute(str);
    }
    else
    {
        wait(&childstat);
        free(str);
    }
    // do something else with str and other functions in the rest of the program
}
In this program, the parent process does its thing for a while, allocates a lot of memory on the heap, frees some, keeps the rest for later, and at some point it wants to execute some command. But it doesn't want to terminate, so it creates a child to do its bidding. The child then calls another function which does some task and in the end uses execve to execute the command passed to it. If this is successful there is no problem, since the executed program will handle all the allocated space, but if it fails the child exits with a status code. The parent waits for the child to answer, and when it does it moves on to its next routine.
The problem here is that when the child fails to execute, all the heap data remains allocated, but does that matter? Since a new process is created during fork, the memory leak shouldn't affect the parent, and the child is about to die anyway.
If my assumption is correct and these leaks don't actually matter, why does valgrind lament about them?
Would there be a better way to free all the memory in the child heap without actually passing all the allocations (str, and the others in this example) to the execute function and laboriously calling free each time? Does kill() have a mechanism for this?
Edit
Here is working code:
#include <unistd.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <string.h>

inline void execute(char *cmd)
{
    extern char **environ;
    int exitstat = 0;
    char *cmdarg[2];

    cmdarg[0] = cmd;
    cmdarg[1] = NULL;
    exitstat = execve(cmd, cmdarg, environ);
    free(cmd);
    _exit(exitstat);
}

int main(void)
{
    int childstat;
    char *str = malloc(6);
    pid_t childid;

    strcpy(str, "Hello");
    childid = fork();
    if (childid < 0)
        exit(-1);
    if (childid == 0)
    {
        execute(str);
    }
    else
    {
        wait(&childstat);
        free(str);
    }
    return (0);
}
When there is a free inside the child process, here is valgrind's result:
(valgrind output screenshot)
And here it is when the free inside the child process is commented out (the free(cmd) in the execute function):
(valgrind output screenshot)
As you can see, there is no overall error since this program is short-lived. But in my real program, which has an infinite loop, every problem in the child matters. So I am asking whether I should worry about this memory leaked in the child process just before it exits and hunt it down, or just leave it as it is in another virtual address space, to be cleaned up when its process exits (but in that case, why is valgrind still complaining about it if the owning process cleans it up on exit)?
exitstat = execve(cmd, cmdarg, environ); free(cmd), free(other_passed_address);
If successful, execve does not return. No code after it is executed: execution of the current program ends and the cmd program starts. All memory held by the current process is released by the operating system anyway. The line free(cmd), free(other_passed_address); will not execute if the execve call is successful.
The only case where those lines could be relevant is when execve fails. And if execve returns with an error, yes, you should free that memory. I doubt that most programs using execve actually do that; I believe they would just call abort() after a failed execve call.
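To make that concrete, here is a minimal sketch of the usual pattern (my own illustration, with a hypothetical run_or_die helper and /bin/echo as the command): everything after execve runs only when the call has failed, so any cleanup there is optional, because _exit hands the whole address space back to the kernel anyway.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

/* Runs cmd in the current (child) process.
 * Anything after execve() executes only if execve() failed. */
static void run_or_die(char *cmd)
{
    char *cmdarg[] = { cmd, NULL };

    execve(cmd, cmdarg, environ);

    /* Reached only on failure: execve() returned -1 and set errno. */
    perror("execve");
    free(cmd);    /* optional: _exit() releases the whole process anyway */
    _exit(127);   /* 127 is a common "could not exec" convention */
}

int main(void)
{
    char *cmd = strdup("/bin/echo");   /* hypothetical command path */
    pid_t pid = fork();

    if (pid < 0)
        exit(EXIT_FAILURE);
    if (pid == 0)
        run_or_die(cmd);               /* never returns */

    int status;
    wait(&status);
    free(cmd);                         /* the parent frees its own copy */
    return 0;
}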
inline void execute(char *cmd)
Please read about inline. inline is tricky, and I recommend not using it; forget it exists. I doubt the intention here is to make an inline function: there is no other function the compiler could choose from anyway. After fixing the typos in the code, I still could not compile it because of an undefined reference to execute; the inline has to be removed.
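For context, a small sketch of why that happens (this is my reading of the C99/C11 inline rules, with a hypothetical greet function, not something from the original question): a plain inline definition does not by itself provide an external definition, so an out-of-line call can fail at link time.

/* inline_demo.c: sketch of the C99/C11 inline pitfall.
 *
 * With a plain "inline" definition and no "extern" declaration,
 * this translation unit provides only an inline definition.
 * If the compiler decides not to inline the call (e.g. gcc -O0),
 * it emits a call to an external symbol "greet" that no object
 * file defines, and linking fails with "undefined reference". */
#include <stdio.h>

inline void greet(void)          /* inline definition only */
{
    puts("hello");
}

/* Either of these fixes the link error:
 *   static inline void greet(void) { puts("hello"); }   // internal linkage
 *   extern inline void greet(void);                      // forces an external definition here
 * or simply drop "inline" altogether, as suggested above. */

int main(void)
{
    greet();                     /* may become an out-of-line call */
    return 0;
}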
The problem here is that when the child fails to execute, all the heap data remains allocated, but does that matter?
Ultimately it depends on what you want to do after the child fails. If you just want to call exit and you do not care about memory leaks in that case, then simply don't care, and it will not matter.
I see that glibc's exec*.c functions try to avoid dynamic allocation as much as possible and use variable length arrays or alloca().
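If you want to imitate that, here is a rough sketch of the idea (my own illustration, not glibc's actual code, with a hypothetical exec_with_stack_argv helper and /bin/echo as the command): the argv array lives in a variable length array on the stack, so a failed execve leaves nothing on the heap to release.

#include <stdio.h>
#include <unistd.h>

extern char **environ;

/* Sketch: build argv in a variable length array on the stack,
 * so a failed execve() leaves nothing on the heap to clean up. */
static void exec_with_stack_argv(char *path, int nargs, char **args)
{
    char *argv[nargs + 2];          /* VLA: path + args + NULL terminator */

    argv[0] = path;
    for (int i = 0; i < nargs; i++)
        argv[i + 1] = args[i];
    argv[nargs + 1] = NULL;

    execve(path, argv, environ);
    perror("execve");               /* reached only on failure; nothing to free */
    _exit(127);
}

int main(void)
{
    char *args[] = { "hello from the stack" };      /* hypothetical arguments */

    exec_with_stack_argv("/bin/echo", 1, args);     /* replaces this process on success */
    return 1;                                       /* not reached: the helper never returns */
}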
If my assumption is correct and these leaks don't actually matter, why does valgrind lament about them?
It doesn't. If you correct all the mistakes in your code, valgrind will not "lament" about memory leaks before the execve call (well, unless the execve call fails).
Would there be a better way to free all the memory in the child heap without actually passing all the allocations (str, and the others in this example) to the execute function and laboriously calling free each time?
"Better" will tend to be opinion based. Subjectively: no. But writing/using own wrapper around dynamic allocation with garbage collector to free all memory is also an option.
does kill() have a mechanism for this?
No, sending a signal seems unrelated to freeing memory.