I'm developing an application which uses some large binary files - in the range 1GB - 25GB. The application will run primarily on servers and possibly the odd powerful/modern desktop PC. I could (a) split these large files up so each piece is always less than 4 GB, or (b) just keep each one as a single file.
FAT32 file systems only allow file sizes up to 4 GB. If I don't split up the files, they won't be usable on FAT32 systems.
Do I need to bother splitting these files?
This application is always going to be running on reasonably modern hardware. Are there any modern servers out there which are likely to use FAT32? Are there any cloud file systems which impose significant limits on file sizes? (e.g. AWS Elastic File System is fine, as it allows single files up to 47 TB).
You can keep the files as big as you need them, but there is one big question: do you have to move or copy these files?
If not, then I do not see a problem. Huge database files, swap files, and virtual machine image files work just fine. If the files have to be copied, moved, or uploaded, then I would split them.
First, copying, moving, uploading, downloading, and backups are usually file-based operations. There are tools that can split files into parts and rebuild them from the pieces, but you would have to look for them. Uploads and downloads can also be problematic, as transfers sometimes get interrupted and many tools do not support resuming.
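For illustration, here is a minimal sketch of what such a split-and-rebuild helper could look like in Python. The names (`split_file`, `join_files`), the `.partNNN` suffix, and the chunk/buffer sizes are hypothetical choices made for this example, not an existing tool:

```python
# Sketch: split a large file into FAT32-safe pieces and rebuild it later.
CHUNK_SIZE = 4 * 1024**3 - 1   # one byte under the 4 GiB FAT32 maximum
BUFFER_SIZE = 64 * 1024**2     # stream in 64 MiB buffers to keep memory use flat

def split_file(path, chunk_size=CHUNK_SIZE):
    """Write `path` out as path.part000, path.part001, ... each at most chunk_size bytes."""
    part = 0
    with open(path, "rb") as src:
        while True:
            first_read = src.read(min(BUFFER_SIZE, chunk_size))
            if not first_read:
                break  # source exhausted; no empty trailing part is created
            with open(f"{path}.part{part:03d}", "wb") as dst:
                dst.write(first_read)
                written = len(first_read)
                # Keep copying until this part reaches chunk_size or the source ends.
                while written < chunk_size:
                    data = src.read(min(BUFFER_SIZE, chunk_size - written))
                    if not data:
                        break
                    dst.write(data)
                    written += len(data)
            part += 1
    return part  # number of parts written

def join_files(path, part_count):
    """Rebuild `path` by concatenating its numbered part files in order."""
    with open(path, "wb") as dst:
        for part in range(part_count):
            with open(f"{path}.part{part:03d}", "rb") as src:
                while True:
                    data = src.read(BUFFER_SIZE)
                    if not data:
                        break
                    dst.write(data)
```

Whether you use something like this or an existing utility such as `split` and `cat` on Unix-like systems, the idea is the same: every piece stays under the 4 GB limit, and the original file is a straight concatenation of the parts.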