I need to implement something similar to tail -f that reads new lines appended to a log file and handles the log file rolling over. This is for Solaris 10. Currently, the application checks the status of the file every second and, if the file has changed, opens it, seeks to near the end, and reads from there to the end of the file.
That all seems to work fine, but I'm curious what the performance impact would be when the log file is very large. Does seek actually have to read through the whole file, or is it smart enough to jump straight to the end?
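For reference, the loop is roughly like the following sketch. This is an illustration of the approach described above, not the actual code: the inode-comparison rotation check and the truncation handling are my assumptions about how rollover is detected, and error handling is minimal.

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st, cur;
    char buf[4096];
    ssize_t n;
    off_t pos;

    if (argc < 2)
        return 2;
    const char *path = argv[1];

    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return 1;
    fstat(fd, &st);
    pos = lseek(fd, 0, SEEK_END);   /* start near the end, like tail -f */

    for (;;) {
        if (stat(path, &cur) == 0) {
            if (cur.st_ino != st.st_ino) {
                /* Rollover: the name now points at a new inode;
                 * reopen and read the new file from the top. */
                close(fd);
                fd = open(path, O_RDONLY);
                if (fd < 0)
                    return 1;
                fstat(fd, &st);
                pos = 0;
            } else if (cur.st_size < pos) {
                /* Truncation in place: the writer rewound the file. */
                pos = lseek(fd, 0, SEEK_SET);
            }
        }
        /* Drain anything appended since the last pass. */
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            fwrite(buf, 1, (size_t)n, stdout);
            pos += n;
        }
        fflush(stdout);
        sleep(1);   /* poll interval: one second */
    }
}
```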
lseek is fast even for huge files: it doesn't read through the file at all. For a regular file it just updates the current file offset the kernel keeps for that open file description, so the cost is constant regardless of file size. No data is touched until the read() that follows.
See the lseek man page for details.
In special circumstances it could conceivably be slower, but I've never seen that in real life.
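To illustrate the point, here is a small sketch of positioning a descriptor near the end of a file (the helper name and the look-back distance are made up for illustration):

```c
#include <fcntl.h>
#include <unistd.h>

/* Position fd `back` bytes before EOF (hypothetical helper).
 * Both lseek calls only update the stored offset; no file data is
 * read, so this costs the same on a 1 KB file and a 100 GB file. */
off_t seek_near_end(int fd, off_t back)
{
    off_t end = lseek(fd, 0, SEEK_END);   /* returns current size */
    if (end < 0)
        return -1;
    return lseek(fd, end > back ? end - back : 0, SEEK_SET);
}
```

The data between the start of the file and the new offset is never read; only the subsequent read() pulls bytes in, starting at the position you seeked to.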