My question is: what is the best way to store a large number of files on a server? From my searching so far, I know that it is a bad idea to store all files in a single directory. I also know that some filesystems have a limit on the number of subdirectories, so creating a new directory for every file is not a good idea either. I also read about an approach that uses the hash of the file to build the path where the file is stored (roughly like the sketch below), but I suspect that would leave me with a lot of subdirectories, which may not be a perfect solution either.
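For reference, this is roughly the hash-based layout I mean, a minimal sketch in Python; the two-level split and the function names are my own, not from any particular library:

```python
import hashlib
import os

def hashed_path(base_dir: str, filename: str, data: bytes) -> str:
    """Derive a storage path from the file's content hash.

    Splitting the hex digest into two 2-character levels caps the tree
    at 256 directories per level (65,536 leaf directories in total),
    so no single directory ever grows unbounded.
    """
    digest = hashlib.sha256(data).hexdigest()
    # e.g. digest "3a7bd3..." -> base/3a/7b/3a7bd3..._filename
    return os.path.join(base_dir, digest[:2], digest[2:4], f"{digest}_{filename}")

def store(base_dir: str, filename: str, data: bytes) -> str:
    """Write the file under its hashed path and return that path."""
    path = hashed_path(base_dir, filename, data)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    return path
```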
There are many storage options available for storing large amounts of data, and the right solution often depends on your specific needs. If you are looking for a cheap and effective solution, I would suggest RAID (Redundant Array of Independent Disks).
1) RAID stores your data across multiple disks, so that if one hard drive fails, none of your data is lost. You can build your own server that uses RAID to protect your data files; for RAID-5, use a proper dedicated controller rather than an onboard one.
2) unRAID is not confined to the capabilities of a single OS. It lets you partition system resources, so you can store and protect data as well as run applications in isolated environments.
3) If you want to store a large number of files, keep each file under roughly 3-5 MB; the moment a file crosses that size, create a new file with the next revision number so the chain stays intact. Likewise, the moment a folder's size crosses 1 GB, create a new folder with the next revision number. Make sure the disk is NTFS-formatted and has enough space for your requirements. A sketch of the rollover logic follows below.
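Here is a minimal sketch of that folder-rollover logic; the rev_ naming scheme, the thresholds, and the function names are my own illustration, not a standard API:

```python
import os

MAX_FILE_BYTES = 5 * 1024 * 1024    # ~5 MB cap per file, per the rule above
MAX_FOLDER_BYTES = 1024 ** 3        # ~1 GB cap per folder

def folder_size(path: str) -> int:
    """Sum the sizes of the regular files directly inside a folder."""
    return sum(e.stat().st_size for e in os.scandir(path) if e.is_file())

def active_folder(base_dir: str) -> str:
    """Return the newest revision folder, creating the next revision
    once the current folder crosses the 1 GB threshold."""
    os.makedirs(base_dir, exist_ok=True)
    revs = [d for d in os.listdir(base_dir)
            if d.startswith("rev_") and d[4:].isdigit()]
    if not revs:
        current = os.path.join(base_dir, "rev_1")
        os.makedirs(current, exist_ok=True)
        return current
    # Sort numerically so rev_10 comes after rev_9, not after rev_1.
    latest = max(revs, key=lambda d: int(d[4:]))
    current = os.path.join(base_dir, latest)
    if folder_size(current) >= MAX_FOLDER_BYTES:
        current = os.path.join(base_dir, f"rev_{int(latest[4:]) + 1}")
        os.makedirs(current, exist_ok=True)
    return current
```

Files would be written into whatever `active_folder()` returns, applying the same next-revision idea to individual files once they cross the per-file cap.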
Hope it helps.