I'm getting a segmentation fault while trying to compile a vertex shader. I think I have identified the problem: it is in how the vertex attributes are passed. The following lines compile (they might not work, but they do compile):
# version 330
layout(location=0) in vec4 in_Loc;
layout(location=1) in vec4 in_Color;
layout(location=2) in vec4 in_Norm;
Also
# version 330
layout(location=0) in vec4 in_Loc;
layout(location=1) in vec4 in_Color;
layout(location=25) in vec4 in_Norm;
but
# version 330
layout(location=0) in vec4 in_Loc;
layout(location=1) in vec4 in_Color;
layout(location=2) in vec4 in_Norm;
layout(location=33) in vec4 in_Anything
will not compile. I guessed that I could only define 3 vec4 attributes, but glGetIntegerv with GL_MAX_VERTEX_ATTRIBS returns 16, which is in accordance with the OpenGL standard. Is this some kind of bug related to my hardware? I'm using an Intel graphics card:
*-display
description: VGA compatible controller
product: 3rd Gen Core processor Graphics Controller
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
version: 09
width: 64 bits
clock: 33MHz
capabilities: msi pm vga_controller bus_master cap_list rom
configuration: driver=i915 latency=0
resources: irq:49 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:3000(size=64)
and Mesa 10:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.1.0
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile.
The OS is Ubuntu 14.04.
I found the mistake thanks to your comment. After creating a test case in plain C, the GLSL file compiled fine. It turned out that the segmentation fault happened in Cython but not when I called the library directly from C (although the "bug" was present in the C code too). Adding an extra line to the GLSL file triggered the segmentation fault, which made me think the error was in compiling the GLSL. However, the problem was in the function loading the GLSL file:
void load_file(const char* fname, char** buffer)
{
    FILE* fp;
    long fsize;

    fprintf(stdout, "Loading %s \n", fname);
    fp = fopen(fname, "r");
    fseek(fp, 0, SEEK_END);
    fsize = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    if (fp == NULL)
    {
        fprintf(stderr, "Can't open %s\n", fname);
        exit(1);
    }
    *buffer = (char*) malloc(fsize + 1);
    fprintf(stdout, "%d\n", (int) fsize);
    fread(*buffer, fsize, 1, fp);
    buffer[fsize] = 0;
    fclose(fp);
}
By replacing buffer[fsize] = 0; with buffer[0][fsize] = 0; I solved the problem. In hindsight this is simply a bug in my code, not anything Cython-specific: buffer has type char**, so buffer[fsize] = 0 writes a null pointer fsize elements past the single pointer the caller passed in, trampling unrelated memory instead of terminating the string. buffer[0][fsize] = 0 (equivalently (*buffer)[fsize] = 0) writes the terminator into the malloc'd string itself. The plain-C test case apparently survived the same out-of-bounds write by luck, while the memory layout under Cython made it crash.