First of all, I'd like to say thank you for taking the time to read this.
I'm currently doing some research and planning out how best to make this setup work.
I have a main server running a custom CRM made for a client. The client wants to start uploading large files, which would fill up the server's disk in no time. For that reason, these files need to be saved on a remote/alternative server, but they also need to be accessible via HTTP (to be displayed on the CRM's front end).
How would I go about setting something like this up?
I thought about using FTP to transfer the files to the remote server, but that seems like an unnecessary extra step: the file would first be uploaded to the main server and then sent to the secondary one. That's double the bandwidth and response time.
Maybe there's a way to attach the secondary server as a "network location" on the main server, so the files can simply be moved onto it as they are uploaded.
Another issue that may arise is that the main server needs to be able to create directories dynamically on the storage server; since it's a CRM, new clients get added, so files would be uploaded to /clients/{ID}/{PROJECT}/*, for instance.
I thought about using Amazon S3 or another cloud storage service, but the client wants a dedicated server for their storage.
Another possibility might be to have users upload directly to the storage server, which would then send the file info back to the main server, but I'm not sure how this would work best.
The main server is running CentOS, managed with WHM/cPanel.
As already mentioned in the comments, you definitely don't want to use FTP. That's probably the worst solution.
Generally speaking, the solution should be transparent to the application, and if both servers are running a Unix-based OS then NFS is the typical way to go. Here's a short HowTo on setting up NFS on CentOS: https://www.howtoforge.com/tutorial/setting-up-an-nfs-server-and-client-on-centos-7/
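To make that concrete, here's a minimal sketch of the idea. The hostname storage.example.com, the client IP 192.0.2.10, and the /srv/storage and /mnt/storage paths are all placeholders for whatever your setup uses:

```
# On the storage server: export a directory (line in /etc/exports)
/srv/storage  192.0.2.10(rw,sync,no_subtree_check)

# Reload the export table
exportfs -ra

# On the main (CRM) server: mount the export
mkdir -p /mnt/storage
mount -t nfs storage.example.com:/srv/storage /mnt/storage

# Make the mount persistent (line in /etc/fstab)
storage.example.com:/srv/storage  /mnt/storage  nfs  defaults  0 0

# From here the CRM can create client directories as if they were local:
mkdir -p /mnt/storage/clients/1234/project-x
```

Once the mount is in place, the dynamic /clients/{ID}/{PROJECT} structure you mentioned is just ordinary directory creation from the CRM's point of view, and you can point an Apache alias (or a symlink inside the docroot) at the mount to keep the files accessible over HTTP.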
If for some reason you decide to move storage to Amazon in the future, the good news is that it already supports NFS (and SMB, for that matter), so the reconfiguration cost is kept to a minimum.
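For example, mounting an Amazon EFS filesystem over NFS is a one-liner; the filesystem ID and region below are made-up placeholders:

```
# Amazon EFS speaks NFSv4.1; fs-12345678 and us-east-1 are placeholder values
mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/storage
```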
SMB is good if one of the servers is on Windows. It also offers more flexibility with regard to access control (side note: IMHO, in most cases the typical Unix access scheme is more than enough). It's not hard to configure either, but it's just not native to Unix. Because of that, SMB is slower, though if you don't need top-notch real-time performance that won't be an issue (and honestly speaking, the biggest source of lag would be the connection from the client to the server, not server-to-server). Here you'll find both graphical and command-line instructions for configuring SMB on CentOS: https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-samba-configuring.html
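As a rough sketch of what that looks like (the share name, user, password, and paths are examples, and the uid/gid should match whatever user your web server runs as):

```
# On the storage server, in /etc/samba/smb.conf:
[storage]
    path = /srv/storage
    valid users = crmuser
    writable = yes
    browseable = no

# Create the Samba user (the system user must already exist) and restart Samba
smbpasswd -a crmuser
systemctl restart smb

# On the main server: mount the share (requires the cifs-utils package)
mount -t cifs //storage.example.com/storage /mnt/storage \
    -o username=crmuser,password=secret,uid=apache,gid=apache
```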
WebDAV is universal and has an advantage over SMB when used for file access over a high-latency network (like the Amazon cloud), though it can perform worse than SMB on a local network. Also note that some clients, notably the built-in Windows one, impose a file size limit of 4 GB.
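A bare-bones WebDAV setup could look like the sketch below; the /storage alias, paths, and credentials file are examples, and it assumes mod_dav/mod_dav_fs on the storage server plus the davfs2 package on the main one:

```
# Apache config on the storage server (mod_dav and mod_dav_fs loaded)
DavLockDB /var/lib/dav/lockdb
Alias /storage /srv/storage
<Directory "/srv/storage">
    DAV On
    AuthType Basic
    AuthName "Storage"
    AuthUserFile /etc/httpd/.davpasswd
    Require valid-user
</Directory>

# Create a user for the share:
htpasswd -c /etc/httpd/.davpasswd crmuser

# On the main server: mount it with davfs2 (prompts for the credentials)
mount -t davfs https://storage.example.com/storage /mnt/storage
```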