The answer to your question really depends on multiple factors, most of which are specific to your system, such as the type of simulation you plan to run, and, most importantly, the software you plan to use. The size of the molecule, the nature of the solvent (explicit or implicit), and the presence of constraints are some of the factors to take into consideration.

The best starting point is the documentation of the MD engine you will use, to see what it recommends. For example, GROMACS has a specific section dedicated to maximising hardware performance.
Looking at the literature can also help, in particular benchmarking studies that compare different software packages under different conditions.
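
If you end up doing your own small benchmark, something like the sketch below can help you compare settings on your hardware. It is only a rough illustration, assuming GROMACS with `gmx` on your PATH and a prepared `topol.tpr` input (both hypothetical names here); the script runs short test jobs with different OpenMP thread counts and reads the ns/day figure from the "Performance:" line at the end of each log.

```python
import re
import subprocess
from pathlib import Path

TPR = "topol.tpr"  # hypothetical pre-built run input file


def run_short_benchmark(ntomp: int, log_name: str, nsteps: int = 10000) -> float:
    """Run a short mdrun test and return the reported ns/day.

    -ntomp sets the OpenMP threads per rank, -nsteps overrides the step
    count so the test stays short, -g names the log file to parse.
    """
    subprocess.run(
        ["gmx", "mdrun", "-s", TPR,
         "-ntomp", str(ntomp),
         "-nsteps", str(nsteps),
         "-g", log_name],
        check=True,
    )
    # The md log ends with a "Performance:" line; the first number is ns/day.
    text = Path(log_name).read_text()
    match = re.search(r"Performance:\s+([\d.]+)", text)
    if match is None:
        raise RuntimeError(f"No Performance line found in {log_name}")
    return float(match.group(1))


if __name__ == "__main__":
    for ntomp in (2, 4, 6, 12):
        ns_per_day = run_short_benchmark(ntomp, f"bench_ntomp{ntomp}.log")
        print(f"{ntomp:>2} OpenMP threads: {ns_per_day:.1f} ns/day")
```

The thread counts and step count are arbitrary examples; in practice you would sweep whatever knobs your engine and documentation suggest (MPI ranks, GPU offload options, and so on) and keep the test runs long enough for the timing to stabilise.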

To give you an idea, ages ago, during my PhD I simulated a system of ~100,000 atoms (explicit solvent, no restraints) for up to one microsecond, using a single cluster node with two 6-core Intel Xeon X5650 CPUs and three to eight NVIDIA M2090 GPUs (512 CUDA cores each).

At the time, I was averaging between 9 and 12 ns/day.
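
To put those numbers in perspective, here is the back-of-the-envelope arithmetic (a trivial sketch, using the figures above):

```python
def wall_clock_days(target_ns: float, throughput_ns_per_day: float) -> float:
    """Wall time needed to reach a target simulation length."""
    return target_ns / throughput_ns_per_day


# ~1 microsecond at ~10 ns/day is roughly 100 days of wall time,
# which is why such runs are normally split into restartable chunks.
print(wall_clock_days(1000.0, 10.0))  # 100.0
```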

Today you can probably find better specs, but the core principles and the factors to consider should be the same.
