I'm reading the section "Configuring the Cache Object Size Limit", and I wonder how ATS handles large files (for example, a movie file) efficiently. Could you please shed some light on this?
Thanks in advance.
Generally speaking, the ATS cache storage is its own filesystem-like design, built to handle a mix of many small files and large files, and it handles both very efficiently.
Internally:
1. Disk writes are buffered: small objects are packed into the 1MB write buffer, so on spinning disks writes are sequential and therefore much more efficient than reads, which are random.
2. A large file is split into small fragments of 1MB (the default), which means that when dealing with large files, ATS does most of its read/write I/O in 1MB units (see the sketch below).
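To make the fragmentation concrete, here is a minimal sketch in plain Python (not ATS code) of how a large object maps onto fixed-size 1MB fragments; the movie size and offset are just example numbers:

    # Simplified illustration of ATS-style fragmentation -- conceptual only,
    # not the actual cache implementation.
    import math

    FRAGMENT_SIZE = 1 * 1024 * 1024  # 1MB, the default fragment size

    def fragment_count(object_size: int) -> int:
        # Number of 1MB fragments needed to store the object.
        return math.ceil(object_size / FRAGMENT_SIZE)

    def fragment_for_offset(offset: int) -> int:
        # Index of the fragment that contains a given byte offset,
        # e.g. when serving a Range request into the middle of a movie.
        return offset // FRAGMENT_SIZE

    movie = 700 * 1024 * 1024                      # a 700MB movie file
    print(fragment_count(movie))                   # -> 700 fragments of ~1MB each
    print(fragment_for_offset(300 * 1024 * 1024))  # a seek to 300MB -> fragment 300

So instead of one huge read or write, the cache only touches the ~1MB fragments that a request actually needs.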
If you find that ATS is not performing well for your traffic, tweak proxy.config.cache.min_average_object_size and proxy.config.cache.target_fragment_size.
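For example, in the classic records.config syntax these two settings look like the following (the values are only illustrative; choose them based on your actual object size distribution and fragment size needs):

    # records.config -- illustrative values, tune for your own traffic
    CONFIG proxy.config.cache.min_average_object_size INT 8000
    CONFIG proxy.config.cache.target_fragment_size INT 1048576

min_average_object_size influences how many directory entries the cache allocates, and target_fragment_size controls the fragment size discussed above.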
When it comes to real-world CDN and caching systems, most sites slice large files into smaller pieces to make transfers more efficient; you can also do that with ATS plugins if you would like ATS to do that job for you.
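One option is the slice plugin that ships with ATS, typically chained with the cache_range_requests plugin in remap.config. The hosts and block size below are placeholders, and you should double-check the exact plugin parameters against the docs for your ATS version:

    # remap.config -- example only; adjust hosts and block size to your setup
    map http://cdn.example.com/ http://origin.example.com/ @plugin=slice.so @pparam=--blockbytes=1048576 @plugin=cache_range_requests.so

With this kind of setup, a client request for a large file is turned into a series of smaller range requests, so each slice is cached and served as an ordinary ~1MB object.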