Usually I am able to solve almost all of my programming questions on my own, but this one really baffles me, and I guess you will find it interesting too.
So I am working on a low-level OpenGL application using GLX, and it crashes with a segmentation fault. I have boiled the code down to this minimal example:
#include <string>
#include <GL/glx.h>
int main(int argc, char** argv)
{
    Display* display = NULL;
    if(display)                  // display is NULL, so this call is never actually reached
        glXMakeCurrent(display, 0, 0);
    std::string title("Hello GLX");
    return 0;
}
I compile with
g++ -g -o wtf wtf.cpp -lGL
I am using 64-bit Linux Mint 17.3, by the way.
As you can see, there is nothing suspicious about it - it doesn't even do anything - but as I said, it crashes... The segfault disappears if I comment out the glXMakeCurrent call,
which makes absolutely no sense, because that line is never even reached.
It also doesn't crash if I remove the instantiation of the string. Swapping the two statements or the two includes doesn't help; it still crashes.
Here is a GDB backtrace:
Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0 0x0000000000000000 in ?? ()
#1 0x00007ffff3deb291 in init () at dlerror.c:177
#2 0x00007ffff3deb6d7 in _dlerror_run (operate=operate@entry=0x7ffff3deb130 <dlsym_doit>, args=args@entry=0x7fffffffdc50)
at dlerror.c:129
#3 0x00007ffff3deb198 in __dlsym (handle=<optimized out>, name=<optimized out>) at dlsym.c:70
#4 0x00007ffff7b4ee1e in ?? () from /usr/lib/nvidia-352/libGL.so.1
#5 0x00007ffff7af9b47 in ?? () from /usr/lib/nvidia-352/libGL.so.1
#6 0x00007ffff7dea0cd in call_init (l=0x7ffff7ff94c0, argc=argc@entry=1, argv=argv@entry=0x7fffffffdda8,
env=env@entry=0x7fffffffddb8) at dl-init.c:64
#7 0x00007ffff7dea1f3 in call_init (env=<optimized out>, argv=<optimized out>, argc=<optimized out>, l=<optimized out>)
at dl-init.c:36
#8 _dl_init (main_map=0x7ffff7ffe1c8, argc=1, argv=0x7fffffffdda8, env=0x7fffffffddb8) at dl-init.c:126
#9 0x00007ffff7ddb30a in _dl_start_user () from /lib64/ld-linux-x86-64.so.2
#10 0x0000000000000001 in ?? ()
#11 0x00007fffffffe101 in ?? ()
#12 0x0000000000000000 in ?? ()
(gdb)
and my glxinfo output (except for the extensions):
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
...
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
...
GLX version: 1.4
...
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 560 Ti/PCIe/SSE2
OpenGL core profile version string: 4.3.0 NVIDIA 352.63
OpenGL core profile shading language version string: 4.30 NVIDIA via Cg compiler
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
...
OpenGL version string: 4.5.0 NVIDIA 352.63
OpenGL shading language version string: 4.50 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
I am really amazed by this, because IMHO it makes absolutely no sense. Does any of you have an idea what the problem could be? I was thinking that some call might have corrupted a class's vtable, but in this example there aren't even any classes. It's also not a 32-bit vs. 64-bit conflict with libGL.so.
When I put a std::cerr << "foo";
and a std::cerr.flush();
(just to be sure, although it shouldn't be necessary) at the beginning of the main function, I get no output, so main() is never even entered - the backtrace agrees, since the crash happens under _dl_init, i.e. while the shared libraries' constructors are still running. So it looks like a problem with loading the library. But I can run the code from opengl.org/wiki/Tutorial:_OpenGL_3.0_Context_Creation_(GLX) just fine, so the problem cannot be the discovery of the library or the graphics chip being in some faulty state. (I even rebooted... a Linux machine... that's how out of ideas I am!)
This turned out to be a duplicate of Segmentation Fault before main() when using glut, and std::string?. The backtrace already points at the cause: NVIDIA's libGL.so calls dlsym() from one of its library constructors (frames #3-#5 under _dl_init), and glibc's dlerror bookkeeping (init() at dlerror.c:177) then jumps to a null function pointer - frame #0 at address 0x0 - apparently because libpthread has not been loaded at that point. The workaround from that question works for me as well: force libpthread to be loaded, for example with
export LD_PRELOAD=/lib/x86_64-linux-gnu/libpthread.so.0
./wtf
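Alternatively - and I have only verified the LD_PRELOAD variant myself - linking libpthread at build time should have the same effect, although depending on your distribution's default --as-needed linker behaviour you may have to force it to stay linked:
g++ -g -o wtf wtf.cpp -lGL -Wl,--no-as-needed -lpthread
You can check whether libpthread actually ended up as a dependency with ldd ./wtf | grep pthread.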
Still one of the most bizarre and illogical problems I have ever encountered.