I have a few Python and Fortran codes developed for conducting scientific simulations for a research problem. About 90% of the code can only be executed as a serial process because the solution procedure is implicit. While running the code on my laptop, I have noticed that it occupies one full CPU core at 90-100% for the entire simulation, and each simulation lasts 10+ hours.
But now, as I need to run the same code multiple times, it has become too slow to do this on my laptop, and it disrupts other activities for the duration of the simulation. My code generally requires little memory (<1 GB) but high computational power, and it requires no data transfer while the simulation is running. Below are the two questions I seek answers to:
I would appreciate any guidance in this matter.
Thanks!
This answer might be a bit late for you (two and a half years on), but in case you're still interested!
The best machine for the job depends on what you need for the work you are doing; it is a tradeoff between cost and speed. For example, if you run 8 concurrent processes on an 8-core machine, the batch will finish roughly 8 times faster than running the same processes one after another on a single core of the same speed. You could also spread your processes across multiple virtual machines.
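Since your runs are independent, the "8 concurrent processes on 8 cores" idea can be sketched with the standard library alone. This is a hedged sketch: `run_simulation` is a hypothetical stand-in you would replace with your own Python entry point, or with a `subprocess` call that launches your Fortran executable.

```python
import concurrent.futures

def run_simulation(params):
    """Stand-in for one serial simulation run (hypothetical).
    Replace the body with your real solver, or with a subprocess
    call to your compiled Fortran binary."""
    x = params["x"]
    return x * x  # dummy result so the sketch is runnable

if __name__ == "__main__":
    # One dict of inputs per independent run.
    param_sets = [{"x": x} for x in range(8)]

    # max_workers=8 assumes an 8-core machine; each serial run
    # then occupies its own core, just like your single run does now.
    with concurrent.futures.ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_simulation, param_sets))

    print(results)
```

Each worker is a separate OS process, so the fact that each individual simulation is serial doesn't matter: the parallelism is across runs, not within one.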
With today's AWS instances, you can scale all the way up to 36 cores on a single machine, but within one AWS account you can run hundreds of cores concurrently. So developing a way to distribute runs across machines will help you as you need to scale out.
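One simple way to "run the code across machines" when the runs are independent: give every machine the full list of parameter sets plus its own index, and have each machine claim every N-th run. The `machine_index`/`machine_count` scheme below is an assumption for illustration, not an AWS feature; a shared work queue (e.g. SQS) balances better when runs take uneven time.

```python
def shard_runs(param_sets, machine_index, machine_count):
    """Return the subset of runs this machine should execute.

    Static round-robin split: machine k takes runs k, k+N, k+2N, ...
    Assumes all runs take roughly the same time.
    """
    return param_sets[machine_index::machine_count]

# Example: 10 runs split across 3 machines.
runs = list(range(10))
print(shard_runs(runs, 0, 3))  # → [0, 3, 6, 9]
```

You could pass the machine's index in via an environment variable or instance user data when the VM boots, so the same script works unchanged on every machine.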
At my company, we run hundreds of Fortran programs concurrently across 40 machines in AWS, and we have been using spot instances instead of on-demand instances to lower costs.