I need to read in wave files as fast as possible. What I am currently doing is the following:
#include <sndfile.h>
#include <vector>

SF_INFO sfinfo;
SNDFILE *wavFilefd = sf_open ("mySong.wav", SFM_READ, &sfinfo);
int readBlockSize = 1024*1024; // 1MB
if( sfinfo.frames * 2 < readBlockSize )
{
// actually I don't know where this factor two comes from, but it works for me
readBlockSize = sfinfo.frames * 2;
}
short tmpSignal[readBlockSize];
int readcount = -1;
int nRead = 0;
std::vector< short > wavVector;
while ((readcount = sf_readf_short (wavFilefd, tmpSignal, readBlockSize)) > 0)
{
++nRead;
wavVector.reserve( nRead * readBlockSize );
wavVector.insert( wavVector.end(), tmpSignal, tmpSignal+readcount);
}
sf_close(wavFilefd);
This code works well so far.
Problem: when I increase the value of readBlockSize by, say, a factor of 10 and then try to read a wave file of 115212164 bytes, I get a segmentation fault inside sf_readf_short.
The documentation says: "The sf_readf_XXXX functions return the number of frames read. Unless the end of the file was reached during the read, the return value should equal the number of frames requested. Attempts to read beyond the end of the file will not result in an error but will cause the sf_readf_XXXX functions to return less than the number of frames requested or 0 if already at the end of the file."
So I expected this to also work for 10 MB.
Thanks for any hint.
The problem is the stack size: when accessing the variable `tmpSignal', allocated by
//..
short tmpSignal[readBlockSize];
//..
I get the segmentation fault. That variable-length array lives on the stack, and 10 * 1024 * 1024 shorts amount to 20 MB, which exceeds the default stack size limit on typical Linux systems (commonly 8 MB).
This behavior can be reproduced (at least on my machine) by:
short tmp[10485760];
for(unsigned int i = 0; i < 10485760; ++i )
{
tmp[i] = 0;
}
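For reference, a way around this is to put the read buffer on the heap instead of the stack, for example in a std::vector. Below is a minimal sketch of such a variant, assuming the same libsndfile calls as in the question; the function name readWav and the block size of 1024*1024 frames are just illustrative choices, not anything prescribed by the library.

#include <sndfile.h>
#include <vector>

std::vector<short> readWav(const char *path)
{
    SF_INFO sfinfo = {};   // zero-initialise; the format field must be 0 when opening for read
    SNDFILE *wavFilefd = sf_open(path, SFM_READ, &sfinfo);
    std::vector<short> wavVector;
    if (wavFilefd == NULL)
        return wavVector;  // open failed, return an empty vector

    // sf_readf_short() counts frames; each frame holds sfinfo.channels shorts,
    // which is presumably where the "factor two" for a stereo file comes from.
    const sf_count_t framesPerBlock = 1024 * 1024;
    std::vector<short> block(framesPerBlock * sfinfo.channels);  // heap, not stack

    // Reserve the final size once to avoid repeated reallocations.
    wavVector.reserve(static_cast<size_t>(sfinfo.frames) * sfinfo.channels);

    sf_count_t framesRead;
    while ((framesRead = sf_readf_short(wavFilefd, &block[0], framesPerBlock)) > 0)
    {
        wavVector.insert(wavVector.end(),
                         block.begin(),
                         block.begin() + framesRead * sfinfo.channels);
    }
    sf_close(wavFilefd);
    return wavVector;
}

Because block is heap-allocated, making it ten times larger only costs memory rather than stack space, so the segmentation fault from the variable-length array should not occur.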