I'm interested to know whether anyone has done something like this before and, if so, how it worked out. We have a Jenkins farm with about 15 slaves. Right now each slave has its own local disk for workspaces, but our jobs are not tied to specific slaves. This means that if Job 1 originally ran on Slave1 but then had to switch to Slave2, it would have to pull the code again. That seems wasteful both in download time and in disk space, because the code ends up duplicated across two slaves.
Is it a good idea to mount a shared NFS drive (or some other shared drive) across all the slaves so that the jobs could run on any slave, but the disk would be the same for all? The obvious risk would be latency, but are there other risks associated with this as well?
Thanks!
Given how cheap and fast disks are these days, I really doubt you will see any benefit from your plan.
Instead, I can think of several downsides:

- Every build on every slave now goes through the same NFS server and network link, so the heavy small-file I/O of compiles and test runs becomes the bottleneck instead of the occasional checkout.
- Workspace allocation is managed per node, so two builds running on different slaves can end up in the same directory on the shared disk and corrupt each other's files.
- The shared drive becomes a single point of failure for the whole farm.
- File locking over NFS is notoriously flaky, and some build tools rely on it.
If you are worried about the checkout time, there are ways to optimize that:

- Don't wipe the workspace between builds: an incremental checkout on a slave that already has the code only pulls the changes since the last build there.
- Keep a local mirror or cache of the repository on each slave and clone from that instead of going over the network every time.

The details of how to do these vary a bit depending on which version control system you use. There may also be other tricks available: shallow clones, reference repos, and so on.
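As a rough illustration, here is a minimal sketch of what that can look like in the checkout step of a Pipeline job with the Git plugin; the repository URL, the branch and the `/var/cache/git/myrepo.git` reference path are placeholders, and the reference repo only helps if such a bare mirror actually exists locally on each slave. Freestyle jobs expose the same options in the UI under "Additional Behaviours".

```
checkout([
    $class: 'GitSCM',
    branches: [[name: '*/main']],
    userRemoteConfigs: [[url: 'https://example.com/myrepo.git']],
    // No "wipe workspace" / "clean before checkout" extension here, so later
    // builds on the same slave reuse the workspace and only fetch the delta.
    extensions: [
        [$class: 'CloneOption',
         // Shallow clone: fetch only the most recent commit, not the full history.
         shallow: true,
         depth: 1,
         // Reference repo: a local bare mirror the clone borrows objects from
         // (only helps if it already exists on the slave).
         reference: '/var/cache/git/myrepo.git']
    ]
])
```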
I'm pretty sure you can make the checkout time a non-issue. Disk space usage is harder to make go away, but disk is usually cheap enough. And if you only have a small, fast SSD, you can usually clean up generated files from the workspace at the end of the build to save space. (I have exactly that case at work.)
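For what it's worth, here is a minimal sketch of that end-of-build cleanup, assuming a declarative pipeline and the Workspace Cleanup plugin's `cleanWs` step; `./build.sh` and the `build/**` pattern are placeholders for whatever your job actually generates:

```
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build as usual; generated files land in the workspace.
                sh './build.sh'
            }
        }
    }
    post {
        cleanup {
            // Delete only the generated files and keep the checked-out sources,
            // so the next build on this slave can still do an incremental update.
            cleanWs(deleteDirs: true,
                    patterns: [[pattern: 'build/**', type: 'INCLUDE']])
        }
    }
}
```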