
What happens if there are too many files under a single directory in Linux?


If there are something like 1,000,000 individual files (mostly around 100 KB in size) in a single flat directory (no subdirectories inside it), are there going to be any compromises in efficiency, or disadvantages in any other way?


Solution

  • ARG_MAX is going to take issue with that... for instance, rm -rf * (while in the directory) is going to fail with "Argument list too long". Utilities that need to take the whole expanded file list as arguments (or a shell globbing on their behalf) will have some functionality break.

    If that directory is available to the public (say via FTP or a web server) you may encounter additional problems.

    The effect on any given file system depends entirely on that file system. How frequently are these files accessed, and what is the file system? Remember, Linux (by default) prefers to keep recently accessed file data cached in memory, and may push process memory into swap to do so, depending on your settings. Is this directory served via HTTP? Is Google going to see and crawl it? If so, you might need to adjust VFS cache pressure and swappiness (see the sysctl sketch at the end of this answer).

    Edit:

    ARG_MAX is a system-wide limit on the total size of the argument list (and environment) that can be passed to a program when it is executed. So, let's take 'rm', and the example "rm -rf *" - the shell is going to expand '*' into the list of matching file names, which in turn become the arguments to 'rm'. With a million files, that list easily exceeds the limit.

    The same thing is going to happen with ls and several other tools. For instance, ls foo* might break if too many files start with 'foo'; the find/xargs sketch at the end of this answer shows the usual workaround.

    I'd advise (no matter what file system is in use) breaking it up into smaller directory chunks, for that reason alone; see the sharding sketch at the end of this answer.
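
For the VFS cache pressure and swappiness point above, here is a minimal sketch of how those knobs can be inspected and adjusted with sysctl. The values 50 and 10 are illustrative assumptions, not recommendations from the answer itself; tune them for your workload.

```sh
# Show the current values (typical defaults are 100 and 60).
sysctl vm.vfs_cache_pressure vm.swappiness

# Lower cache pressure so the kernel reclaims dentry/inode caches less
# aggressively, and make it less eager to swap process memory out.
# 50 and 10 are example values only.
sudo sysctl -w vm.vfs_cache_pressure=50
sudo sysctl -w vm.swappiness=10

# To persist across reboots, put the same settings in a file under
# /etc/sysctl.d/ (or in /etc/sysctl.conf) and reload them:
sudo sysctl --system
```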
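
On the ARG_MAX point: you can check the limit with getconf, and the standard workaround is to let find enumerate the directory instead of asking the shell to expand a glob into one giant argument list. A sketch, assuming GNU findutils; /path/to/bigdir and the foo* pattern are placeholders.

```sh
# The limit on the combined size of arguments plus environment, in bytes.
getconf ARG_MAX

# Delete every regular file in the directory without building a huge
# argument list at all - find removes each entry itself.
find /path/to/bigdir -maxdepth 1 -type f -delete

# Or batch the names through xargs, which splits them into argument
# lists that stay under ARG_MAX. -print0/-0 copes with awkward names.
find /path/to/bigdir -maxdepth 1 -type f -name 'foo*' -print0 | xargs -0 rm -f

# The same approach replaces a glob like 'ls foo*':
find /path/to/bigdir -maxdepth 1 -name 'foo*' -print
```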
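
And for breaking the directory into smaller chunks, here is one sketch of a sharding scheme: bucket each file into one of 256 subdirectories named after the first two hex characters of an md5 of its file name. The paths, the bucket count, and the choice of md5 are all assumptions for illustration, not part of the original answer.

```sh
#!/bin/sh
# Move files from one flat directory into hashed subdirectories.
# SRC and DST are placeholders; note this simple loop does not handle
# file names that contain newlines.
SRC=/path/to/bigdir
DST=/path/to/sharded

find "$SRC" -maxdepth 1 -type f | while IFS= read -r f; do
    name=$(basename "$f")
    # First two hex characters of the md5 of the name pick the bucket.
    bucket=$(printf '%s' "$name" | md5sum | cut -c1-2)
    mkdir -p "$DST/$bucket"
    mv "$f" "$DST/$bucket/"
done
```

Looking a file up later just means recomputing the same two-character bucket from its name.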