I am setting up a small, 256-core compute cluster at my university for fluid dynamics simulations. The code we use is written in a mix of C and Fortran and currently runs just fine on a large supercomputer.
For this cluster, we have 16 compute nodes with 16 AMD CPUs each. We also have an 8-core Dell box that we would like to use as a "head" or "login" node. This box, however, has Intel Xeon CPUs.
We would like to NFS-mount each user's home directory on the login node and restrict the users' direct access to the compute nodes. This would require users to compile and run their programs via mpirun from the login node. Our questions are:
If there's a good resource out there that could help, we'd appreciate that, too. We've found so many suggestions and ideas on various pages... It'd be nice to be pointed towards one that the community considers reputable. (Disclaimer... we aren't computer scientists, we are just regular scientists.)
Intel and AMD processors are by and large binary compatible, though differences in things like cache sizes and instruction scheduling could result in suboptimal performance of a particular code on AMD if it was compiled with optimisations for Intel, and vice versa. There are some differences in the instruction sets implemented by the two vendors, but the vendor-specific extensions are usually not very useful in scientific computing anyway.
Since (1) is not a problem, one does not need a workaround. Still, one has to keep in mind that some compilers enable by default instruction sets and optimisations specific to the processor on which the code is being compiled. Therefore one has to be extra careful with the compiler options when the head node uses CPUs from a different vendor, or even from the same vendor but a different generation. This is especially true for Intel's compiler suite, while GCC is less aggressive by default. On the other hand, one can usually tell the compiler explicitly which architecture to target and optimise for, e.g. by providing the appropriate -march=.../-mtune=... options to GCC.
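As a quick sanity check (just a sketch; the file name, compile flags, and feature list below are only examples and not part of your setup), the following C program uses GCC's __builtin_cpu_supports to report at run time which instruction-set extensions the local CPU provides. Compiling it conservatively on the Intel head node and then running it on the AMD compute nodes helps confirm that the flags you settle on do not assume features the compute nodes lack:

```c
/* cpufeatures.c - hypothetical helper, not from the original post.
 * Reports which x86 instruction-set extensions the local CPU supports,
 * using GCC's __builtin_cpu_supports (available since GCC 4.8).
 *
 * Compile it conservatively on the head node so the binary also runs on
 * the AMD compute nodes, for example:
 *     gcc -O2 -march=x86-64 -mtune=generic -o cpufeatures cpufeatures.c
 * (-march=native would instead target the head node's Xeon and may emit
 * instructions that the compute nodes do not implement.)
 */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();  /* run GCC's CPU feature detection */

    printf("sse2   : %s\n", __builtin_cpu_supports("sse2")   ? "yes" : "no");
    printf("sse4.2 : %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    printf("avx    : %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
    printf("avx2   : %s\n", __builtin_cpu_supports("avx2")   ? "yes" : "no");
    return 0;
}
```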
As for sharing the file system, it depends on how your data storage is organised. Parallel applications often need to access the same files from all ranks (e.g. configuration files, databases, etc.) and therefore require both the home and the work file systems to be shared (unless one uses the home file system as the working one). You might also want to share things like /opt (or wherever you store cluster-wide software packages) in order to simplify cluster administration.
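To verify that every compute node actually sees the shared mounts, a small MPI test along these lines can help (purely a sketch; the file names and paths are placeholders for whatever shared file you care about). Each rank simply tries to open the given path and reports the result together with its host name:

```c
/* sharedfs_check.c - a sketch, not from the original post: every MPI rank
 * tries to open the same file from a (presumably NFS-shared) file system
 * and reports whether it could.  Run it across all compute nodes to check
 * that the shared mounts are visible everywhere, e.g.:
 *     mpicc -O2 -o sharedfs_check sharedfs_check.c
 *     mpirun -np 256 ./sharedfs_check /home/someuser/input.cfg
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);

    /* default path is just an example; pass the real shared file as argv[1] */
    const char *path = (argc > 1) ? argv[1] : "input.cfg";
    FILE *fp = fopen(path, "r");
    printf("rank %3d on %s: %s is %s\n", rank, host, path,
           fp ? "readable" : "NOT readable");
    if (fp)
        fclose(fp);

    MPI_Finalize();
    return 0;
}
```

If some ranks report the file as not readable, the corresponding node is most likely missing the mount or has different export/mount options than the rest.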
It is hard to point you to a definitive source since there are about as many "best practices" as there are cluster installations around the world. Just stick with a working setup and tune it iteratively until you reach convergence. Installing a batch system such as TORQUE is a good start.