Implementation¶
| Type: | enum |
|---|---|
| Range: | MKL, MUMPS |
| Default: | -/- |
| Appearance: | optional |
One can choose between various direct solver implementations. The default solver depends on the platform:
- CPU, single node: Intel's MKL Pardiso sparse solver
- CPU, multi-node cluster: MUMPS sparse solver, https://mumps-solver.org
- CPU + NVIDIA GPU(s), single or multi-node: NVIDIA direct sparse solver (cuDSS), https://developer.nvidia.com/cudss
The GPU-based cuDSS solver supports two operation modes. Mode CUDSS uses GPU vRAM only, whereas with CUDSS-HybridMemory major parts of the matrix factorization are stored in CPU RAM. Since RAM is typically much larger than GPU memory, this mode allows solving larger problems. CUDSS-HybridMemory is therefore the default choice on platforms with an NVIDIA GPU.
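The platform-dependent default selection described above can be sketched as follows. This is an illustrative sketch only; the function name and parameters are assumptions for clarity, not part of the software's API:

```python
def default_solver(has_nvidia_gpu: bool, num_nodes: int) -> str:
    """Illustrative sketch of the default direct-solver choice per platform.

    Hypothetical helper; the real selection happens inside the software.
    """
    if has_nvidia_gpu:
        # CUDSS-HybridMemory keeps major parts of the factorization in CPU RAM,
        # which is typically much larger than GPU vRAM, so larger problems fit.
        return "CUDSS-HybridMemory"
    if num_nodes > 1:
        # Multi-node CPU cluster: MUMPS sparse solver.
        return "MUMPS"
    # Single-node CPU: Intel MKL Pardiso sparse solver.
    return "MKL"

print(default_solver(has_nvidia_gpu=False, num_nodes=1))  # MKL
```

Explicitly setting the parameter to one of the enum values overrides this default.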