|Tech Articles ID:||KB8020268|
|Marc & Mentat|
|Environment:||Intel IA-32 (32-bit compat)|
"We're looking to upgrade our machines. We use [insert MSC product here]. Do you have any recommendations?"
Officially, MSC does not make specific hardware recommendations, particularly regarding vendor choices.
The Installation Guide for most products lists the tested/certified/supported platforms and operating systems. These are an excellent starting point for minimum system requirements, supported operating systems, and tested graphics cards. Look up the installation guide for the specific version of the product you wish to check.
Longer Answer with Suggestions:
Beyond that, Support will sometimes offer some "general" factors that influence performance. As a very simple, rough rule of thumb:
For almost everything, RAM is king. It is the fastest place to store and retrieve data: executables load into memory as they run, and much of the general responsiveness of the operating system depends on available RAM.
32-bit applications can address at most 4 GB of it, and commonly only 2-3 GB in practice. 64-bit applications can address far more (how much depends on the operating system and on how the application was compiled).
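The pointer width of a process determines which addressing regime it falls in. A minimal Python sketch of the arithmetic (the solver binaries themselves are native code; this only illustrates the limits):

```python
import struct

# Pointer width of the running process, in bits (32 or 64).
bits = struct.calcsize("P") * 8

# Theoretical address-space ceiling for that pointer width, in GB.
# A 32-bit process tops out at 2**32 bytes = 4 GB, and in practice
# only 2-3 GB of that is usable once the OS reserves its share.
limit_gb = 2 ** bits // 2 ** 30

print(f"{bits}-bit process, theoretical address limit: {limit_gb:,} GB")
```

Running this under a 32-bit interpreter reports a 4 GB ceiling; under a 64-bit interpreter the theoretical ceiling is vastly larger than any installed RAM.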
For products that use viewports to examine the model (such as Patran, SimXpert, ADAMS/View, Marc/Mentat), a decent graphics card is the next most noticeable factor in product performance. Again, the installation guides usually list tested cards; typically these use OpenGL. With current drivers, letting the graphics card handle the rendering of images gives a significant performance gain while manipulating the model.
Solvers execute repeated instructions (for example, as matrices are solved and equations are iterated through). A processor design supports a certain number of instructions per cycle, and multiplying that by the cycles per second (Hz) gives a gauge of how fast work can occur. Within a similar processor architecture, the clock speed (Hz) can serve as a quick index of which one may solve faster.
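The instructions-per-cycle (IPC) times clock-speed estimate above is simple arithmetic. A short sketch with illustrative IPC figures (assumed for the example, not vendor-published numbers):

```python
# Rough throughput gauge: IPC x clock (GHz) = billions of instructions/sec.
# The IPC values below are assumptions chosen to illustrate the comparison.
cpus = {
    "CPU A": {"ipc": 2.0, "ghz": 3.0},  # higher clock, simpler core
    "CPU B": {"ipc": 4.0, "ghz": 2.5},  # lower clock, wider core
}

for name, c in cpus.items():
    giga_instr = c["ipc"] * c["ghz"]
    print(f"{name}: ~{giga_instr:.1f} billion instructions/second")
```

Note that the lower-clocked CPU B comes out ahead here, which is why clock speed alone is only a fair comparison *within* a similar architecture.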
Some solvers support multi-threading (running tasks in multiple threads at the same time). For those that do, current multi-core processor designs are quite capable of handling these multi-threaded modes. Performance for certain solution types in certain applications may improve in this mode (though not by any linear extrapolation).
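The work-splitting pattern behind multi-threading can be sketched in Python. This only illustrates the pattern: native solvers run their threads on separate cores, whereas CPython's global interpreter lock limits the speedup threads give CPU-bound Python code. All names here are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    # Toy task standing in for one slice of a solver's workload.
    lo, hi = bounds
    return sum(range(lo, hi))

# Split one job into four independent slices.
slices = [(0, 250), (250, 500), (500, 750), (750, 1000)]

# Run the slices in four threads; map() returns results in slice order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(partial_sum, slices))

# Combining the per-thread results reproduces the serial answer.
assert sum(results) == sum(range(1000))
print("threaded results match the serial computation")
```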
Some solvers can also solve discrete portions of the model (sometimes referred to as domains) simultaneously, as separate processes. When available, for certain solution types and models, this may offer a significant reduction in solution time, and it often allows larger models to be solved (in parts). However, each split process takes up memory to run. Hence, see statement #1: "RAM is king".
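A rough sketch of that domain idea, using separate worker processes (the function and variable names are illustrative, not any MSC product's API; each worker process carries its own copy of its domain's data, which is where the extra memory goes):

```python
import multiprocessing as mp

def solve_domain(domain):
    # Stand-in for solving one partition of the model. Each worker
    # process holds its own domain data, so total RAM use grows with
    # the number of domains -- hence "RAM is king".
    return sum(x * x for x in domain)

if __name__ == "__main__":
    model = list(range(1_000))      # toy stand-in for a full model
    n_domains = 4
    size = len(model) // n_domains
    domains = [model[i * size:(i + 1) * size] for i in range(n_domains)]

    # Solve each domain in its own process, simultaneously.
    with mp.Pool(n_domains) as pool:
        partials = pool.map(solve_domain, domains)

    # Combining the per-domain results reproduces the whole-model answer.
    assert sum(partials) == sum(x * x for x in model)
    print("domain results combine to the full solution")
```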
This is where performance most often tends to bottleneck: when a machine does not have enough RAM (hardware memory), it writes to a swap file (disk memory). Hence the reason RAM is king.
For most solvers, there is a need to temporarily (and frequently) write/read information (often referred to as scratch operations). The more frequently that writing to disk occurs, the more noticeable the disk access speed becomes. When possible, some solvers may allow the user to specify that more of these "scratch" operations occur within memory. However, for operations that require frequent disk read/writes, faster disk access speeds help.
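The difference between disk-based and memory-based scratch can be sketched with Python's standard library. (The actual options for redirecting scratch operations to memory are product-specific; check the product documentation. The byte buffer below is just a stand-in for one scratch record.)

```python
import io
import tempfile

data = b"x" * 1024  # stand-in for one scratch record

# On-disk scratch: every write/read cycle pays for disk access,
# which is the cost that accumulates in frequent scratch operations.
with tempfile.TemporaryFile() as scratch:
    scratch.write(data)
    scratch.seek(0)
    assert scratch.read() == data

# In-memory scratch: the same write/read pattern, but against RAM only.
buf = io.BytesIO()
buf.write(data)
buf.seek(0)
assert buf.read() == data

print("both scratch paths round-trip the data; only the medium differs")
```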
For this reason, the recommendation is to install the software locally and to write model files and scratch files to local disk as well. "Local" here means as opposed to a disk served/shared out over the network: small network delays, accumulated over repeated read/write cycles, can add up to noticeably poor performance.