I've recently switched from Kubuntu 16.04 to macOS High Sierra and started porting parts of the code I work with on a regular basis. Among that is a C++ wrapper library for libsctp-dev.
I'm using the SCTP kernel extension to have the same SCTP API as on my Linux machine. While the API calls themselves work without problems, using threads from the C++ standard library now seems to cause trouble. The problem can be seen in the following code snippets:
void Server::start(int32_t port) {
    // This allocates the socket with the SCTP protocol assigned
    Receiver::start();
    // This function now binds the desired port to the created socket
    this->_bind(port);
    // Here I register handlers for network events that can occur
    this->notificationHandler.setAssocChangeHandler(std::bind(&Server::handleAssocChange, this, _1));
    this->notificationHandler.setShutdownEventHandler(std::bind(&Server::handleShutdownEvent, this, _1));
    this->notificationHandler.setSendFailedHandler(std::bind(&Server::handleSendFailed, this, _1));
    // This is the interesting part - this will lead to BAD_ACCESS
    // exceptions in the receive function
    dummy = this;
    this->receiveThread = std::thread(dummyReceive);
    // However, if I run it in the same thread, everything works fine
    // (except that I need the receive loop to run in a separate thread)
    //dummyReceive();
    // This is the original call; I just used the dummy function to be
    // sure that the bind function does not create the problem
    //this->receiveThread = std::thread(std::bind(&Server::receive, this));
}
This is the part where the dummyReceive function is defined:
Server *dummy = NULL;

void dummyReceive() {
    dummy->receive();
}
Finally, this is the code of the receive method (Server is a subclass of Receiver, which is in turn a subclass of Endpoint):
void Receiver::receive() {
    uint8_t buffer[this->max_buffer_size];
    uint32_t buffer_size = 0;
    struct sockaddr_in peer_addr = {};
    socklen_t peer_addr_size = sizeof(peer_addr);
    struct sctp_sndrcvinfo info = {};
    int32_t flags = 0;
    while (this->can_receive) {
        buffer_size = Endpoint::receive(buffer, max_buffer_size, peer_addr, peer_addr_size, info, flags);
        if (buffer_size == 0) {
            // Notification was sent
        } else if (buffer_size == -1) {
            CERR("Endpoint::receive(...) returned -1" << std::endl);
        } else {
            this->receiveCallback(buffer, buffer_size, peer_addr, peer_addr_size, info);
        }
    }
}
The strange thing is that the BAD_ACCESS exception occurs when "peer_addr" gets initialized:
struct sockaddr_in peer_addr = {};
This is what CLion gives me as an error message:
EXC_BAD_ACCESS (code=1, address=0x70000807bb88)
I can avoid this by initializing the "peer_addr" and "info" structs right at the start of the function. However, the call to "Endpoint::receive" then crashes with another BAD_ACCESS exception, this time with the following message:
EXC_BAD_ACCESS (code=1, address=0x70000c4adb90)
Does anyone have an idea what is wrong here? I'm using the Xcode 9.4.1 toolchain (which internally uses clang, as far as I know) with CMake 3.12.0, and CLion as the IDE. If anyone needs the full library code, I can upload it to git and share a link (currently it only lives on a private git server).
Best, Pascal
If max_buffer_size is large, you are likely encountering a stack overflow.
The stack is very limited in size, and it is probably smaller on macOS than it was on Linux: the default pthread stack size on macOS is only 512 KB (see https://developer.apple.com/library/archive/qa/qa1419/_index.html).
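To see what you are actually getting, you can query a thread's stack size with pthread_get_stacksize_np, a non-portable macOS extension. A minimal standalone sketch (not part of your library; the "typical" values in the comments are what I'd expect on macOS, not guaranteed):

#include <iostream>
#include <pthread.h>
#include <thread>

int main() {
    // pthread_get_stacksize_np is a macOS-specific extension that
    // reports the stack size of the given thread.
    std::thread worker([] {
        std::cout << "worker stack: "
                  << pthread_get_stacksize_np(pthread_self())
                  << " bytes\n"; // typically 524288 (512 KB)
    });
    worker.join();
    std::cout << "main stack:   "
              << pthread_get_stacksize_np(pthread_self())
              << " bytes\n"; // typically 8388608 (8 MB)
}

This also explains why dummyReceive() works when called directly from start(): there it runs on the main thread, which has a much larger stack than the std::thread worker.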
Large buffers should be heap allocated rather than stack allocated.
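Applied to your receive loop, that could look like the sketch below. I'm assuming Endpoint::receive accepts a uint8_t* (hence buffer.data()) and that max_buffer_size and can_receive are members, as in your snippet; I also made buffer_size signed so the comparison with -1 is straightforward:

#include <vector>

void Receiver::receive() {
    // std::vector stores its elements on the heap; only the small
    // vector object itself (a pointer and two sizes) lives on this
    // thread's 512 KB stack.
    std::vector<uint8_t> buffer(this->max_buffer_size);
    ssize_t buffer_size = 0; // signed, so the -1 error check is well-defined
    struct sockaddr_in peer_addr = {};
    socklen_t peer_addr_size = sizeof(peer_addr);
    struct sctp_sndrcvinfo info = {};
    int32_t flags = 0;
    while (this->can_receive) {
        buffer_size = Endpoint::receive(buffer.data(), max_buffer_size, peer_addr,
                                        peer_addr_size, info, flags);
        if (buffer_size == 0) {
            // Notification was sent
        } else if (buffer_size == -1) {
            CERR("Endpoint::receive(...) returned -1" << std::endl);
        } else {
            this->receiveCallback(buffer.data(), buffer_size, peer_addr, peer_addr_size, info);
        }
    }
}

Allocating the vector once outside the loop, as here, also avoids paying for a heap allocation on every received message.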
macOS isn't very good at detecting stack overflows and often raises the confusing EXC_BAD_ACCESS error instead of a more helpful stack-overflow error.
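You can reproduce the symptom in isolation with a deliberately broken sketch like this one, which assumes the default 512 KB thread stack:

#include <thread>

int main() {
    std::thread([] {
        // A 1 MB local array blows through the default 512 KB thread
        // stack; touching it faults with EXC_BAD_ACCESS rather than a
        // "stack overflow" diagnostic.
        volatile char big[1 << 20];
        big[0] = 1;
    }).join();
}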