I have a program written in C++ for Windows which acts as the native messaging host for a browser extension. It receives messages from the browser through stdin and sends responses back through stdout. The messages are URLs that need to be downloaded with an external application such as curl, and the responses report the progress/completion of those downloads.
My program flow is as follows:
1- There is a main loop that constantly reads stdin and receives messages from the browser.
2- For each message the main loop creates an std::thread. The thread is given the URL to download and is started, and then the main loop goes back to listening for new messages.
3- In the thread I spawn a child process, say curl.exe, using CreateProcess() and keep reading its output.
4- These threads need to send the download progress to the browser, which they do by writing to the program's stdout. Since multiple threads may need to write to it at the same time, I have a function protected with an std::lock_guard<std::mutex>, and the threads write to stdout through this function.
Now I want to port this program to Linux, and I was hoping to simply replace CreateProcess() with popen(). I did a Google search about whether popen() is thread safe, and even though I couldn't find a definitive answer, most answers suggested that it is not: apparently it uses fork() under the hood, and fork() and threads don't get along well.
It looks like the Linux way is to fork() and then use pipes to communicate between the main program and the child processes, but that would require me to change the whole structure of the program, since it's currently based on threads. Also, I don't know how I could keep my main loop listening for new messages and at the same time read the pipes from all these forked child processes; I can't imagine doing that without using a thread, which gets us back to the first problem.
So I was wondering if there's another way to do this?
Here's a simplified version of how the program works:
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <string>
#include <thread>
#include <windows.h>

std::mutex theMutex;

void download_thread(const std::string url);
void write_to_stdout(const std::string& msg);

int main()
{
    while (true)
    {
        /* each native-messaging message is preceded by a 4-byte length header */
        uint32_t message_length = 0;
        if (fread(&message_length, sizeof(message_length), 1, stdin) != 1)
            break;

        std::string url(message_length, '\0');
        fread(&url[0], sizeof(char), message_length, stdin);

        /* hand the URL to a worker thread and go back to listening */
        std::thread th1(download_thread, url);
        th1.detach();
    }
    return 0;
}

void download_thread(const std::string url)
{
    /* create a pipe that will receive the child's stdout/stderr */
    SECURITY_ATTRIBUTES saAttr{ sizeof(SECURITY_ATTRIBUTES), NULL, TRUE };
    HANDLE h_child_stdout_r = NULL, h_child_stdout_w = NULL;
    CreatePipe(&h_child_stdout_r, &h_child_stdout_w, &saAttr, 0);
    SetHandleInformation(h_child_stdout_r, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOW siStartInfo{};
    siStartInfo.cb = sizeof(siStartInfo);
    siStartInfo.dwFlags = STARTF_USESTDHANDLES;
    siStartInfo.hStdOutput = h_child_stdout_w;
    siStartInfo.hStdError = h_child_stdout_w;
    PROCESS_INFORMATION piProcInfo{};
    DWORD processFlags = 0;

    /* create the process; naive narrow-to-wide conversion is fine for ASCII URLs */
    std::wstring cmdLine = L"curl.exe " + std::wstring(url.begin(), url.end());
    CreateProcessW(
        NULL,
        &cmdLine[0],        /* CreateProcessW needs a writable command line */
        NULL,
        NULL,
        TRUE,
        processFlags,
        NULL,
        NULL,
        &siStartInfo,
        &piProcInfo);
    CloseHandle(h_child_stdout_w);   /* so ReadFile sees EOF when curl exits */

    /* keep reading the output of the process until it exits */
    const int BUFSIZE = 1024;
    char buf[BUFSIZE];
    DWORD bytesRead = 0;
    BOOL bSuccess = FALSE;
    while (true)
    {
        bSuccess = ReadFile(h_child_stdout_r, buf, BUFSIZE, &bytesRead, NULL);
        if (!bSuccess || bytesRead == 0)
            break;

        std::string output(buf, buf + bytesRead);
        write_to_stdout(output);
    }

    CloseHandle(h_child_stdout_r);
    CloseHandle(piProcInfo.hProcess);
    CloseHandle(piProcInfo.hThread);
}

void write_to_stdout(const std::string& msg)
{
    std::lock_guard<std::mutex> lock(theMutex);
    fwrite(msg.c_str(), sizeof(char), msg.length(), stdout);
    fflush(stdout);
}
Regarding the thread-safety concern, here is what the C++ standard says about library functions and data races:
[res.on.objects]
1 The behavior of a program is undefined if calls to standard library functions from different threads may introduce a data race. The conditions under which this may occur are specified in [res.on.data.races].
[res.on.data.races]
2 A C++ standard library function shall not directly or indirectly access objects ([intro.multithread]) accessible by threads other than the current thread unless the objects are accessed directly or indirectly via the function's arguments, including this.
3 A C++ standard library function shall not directly or indirectly modify objects ([intro.multithread]) accessible by threads other than the current thread unless the objects are accessed directly or indirectly via the function's non-const arguments, including this.
So popen cannot possibly introduce a data race. And the glibc manual explicitly documents popen() and pclose() as thread-safe (MT-Safe):
$ man popen
...
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).
┌────────────────────────────────────────────────────┬───────────────┬─────────┐
│Interface │ Attribute │ Value │
├────────────────────────────────────────────────────┼───────────────┼─────────┤
│popen(), pclose() │ Thread safety │ MT-Safe │
└────────────────────────────────────────────────────┴───────────────┴─────────┘
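So you should be able to keep your thread-per-download structure and just swap CreateProcessW()/ReadFile() for popen()/fread()/pclose(). Below is a minimal, untested sketch of what the worker could look like; the curl flags, the example URL in main(), and the complete lack of shell-escaping of the URL are all simplifications you would want to revisit (popen() hands the command string to /bin/sh):

#include <cstdio>
#include <mutex>
#include <string>
#include <thread>

std::mutex theMutex;

void write_to_stdout(const std::string& msg)
{
    std::lock_guard<std::mutex> lock(theMutex);
    fwrite(msg.c_str(), 1, msg.length(), stdout);
    fflush(stdout);
}

void download_thread(const std::string url)
{
    /* popen() runs the command through /bin/sh and returns a FILE*
       connected to its stdout; 2>&1 also captures curl's progress
       meter, which curl writes to stderr */
    const std::string cmd = "curl -O '" + url + "' 2>&1";
    FILE* child = popen(cmd.c_str(), "r");
    if (child == NULL)
        return;

    char buf[1024];
    size_t bytesRead = 0;
    while ((bytesRead = fread(buf, 1, sizeof(buf), child)) > 0)
    {
        write_to_stdout(std::string(buf, buf + bytesRead));
    }

    pclose(child);   /* waits for curl and returns its exit status */
}

int main()
{
    /* in the real program each message read from stdin gets its own
       detached thread, exactly as before; one download is enough here */
    std::thread th(download_thread, std::string("https://example.com/file.bin"));
    th.join();
    return 0;
}

Since each thread gets its own FILE* from popen() and the only shared state is stdout behind theMutex, the overall structure stays exactly what you already have on Windows.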