I've written a program in C to write a series of characters to a file: 5 alphanumerics, followed by one (char)(30), a 'record-delimiter' character, repeat... No newlines. The program loops flawlessly until it reaches the 508th iteration -- 3048 characters -- and then dies, saying it can't access the file.
The structure of the program necessitates closing and reopening the file every time this sequence is written (the program is part of a larger pseudo-database-simulating module), so there are 508 cycles through the open/write/close process. Before I start dissecting the database module code (of which there are quite a few lines, so I'd rather avoid it if I can), I was wondering if anyone knows of a rarely-encountered read/write limit in Unix, or a problem with writing 3048 characters to a file within a certain time, or a problem with 508 (char)(30)s in a file, or something simple (but hard to catch) like that. I tried delaying the write by a few ms, on the off-chance the program was fopening the file before the previous fclose had finished and tripping over itself, but no cigar.
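For what it's worth, the write pattern boils down to roughly this (a heavily simplified sketch, not the real module code -- the filename and record contents are placeholders):

#include <stdio.h>
#include <stdlib.h>

#define RECORD_DELIM ((char)30)   /* ASCII "record separator", used as the delimiter */

int main(void)
{
    for (int i = 0; i < 1000; i++) {
        FILE *fp = fopen("records.dat", "a");    /* reopened on every cycle */
        if (fp == NULL) {
            perror("fopen");                     /* the real program dies here around iteration 508 */
            return EXIT_FAILURE;
        }
        fputs("AB12Z", fp);                      /* 5 alphanumerics... */
        fputc(RECORD_DELIM, fp);                 /* ...followed by one (char)(30) */
        if (fclose(fp) != 0)                     /* closed again before the next cycle */
            perror("fclose");
    }
    return EXIT_SUCCESS;
}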
508 is suspiciously close to 512, a common default limit on the number of open files per process (stdin, stdout, and stderr already take three of those descriptors, which accounts for most of the gap). Type the command ulimit -a
and see what limits are imposed. On my Fedora 15 system, the limit is 1024 open files per process:
[wally@lenovotower ~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 22084
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
If yours is 512, make sure the program is actually closing the file every time. Without seeing some code, all we can do is speculate.
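If you want to confirm this from inside the program, you can query the same limit with getrlimit(RLIMIT_NOFILE) and print strerror(errno) when fopen fails; a leaked FILE * (an fclose that is skipped or fails) eventually shows up as EMFILE, "Too many open files". A rough sketch, with a placeholder filename:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* The same number "ulimit -n" reports: open descriptors per process */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("open files: soft %llu, hard %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

    FILE *fp = fopen("records.dat", "a");
    if (fp == NULL) {
        /* EMFILE here means descriptors have leaked, i.e. earlier
           fclose calls were skipped or returned an error */
        fprintf(stderr, "fopen: %s\n", strerror(errno));
        return 1;
    }
    if (fclose(fp) != 0)             /* fclose can fail too; always check it */
        fprintf(stderr, "fclose: %s\n", strerror(errno));
    return 0;
}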