Linear solvers
Once the integration scheme has described how the linear matrix system $\mathbf{A}x = b$ is built, this system must be solved in order to find the solution $x$ at the next time step.
To solve this system, two main categories of algorithms exist: the direct solvers and the iterative solvers.
Direct solvers
These solvers aim at finding the exact solution of the system by computing in one single step $x = \mathbf{A}^{-1}b$. To do so, various methods exist to compute the inverse of the matrix $\mathbf{A}$.
For small-size linear systems, direct methods are efficient. For large and sparse systems, however, computing the inverse of the matrix $\mathbf{A}$ may be time-consuming. The advantage of direct methods is that they manage to solve well-conditioned and even some fairly ill-conditioned problems. The computation of the inverse of $\mathbf{A}$ often relies on a decomposition of this matrix: Cholesky, LU or LDL decompositions and their sparse versions are available.
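As an illustration of the decompose-then-substitute idea (a minimal sketch in plain C++, independent of SOFA), the example below factorizes a small symmetric positive-definite matrix with a Cholesky decomposition $\mathbf{A} = \mathbf{L}\mathbf{L}^T$ and then obtains the solution by forward and backward substitution; the matrix, the right-hand side and the function names are chosen only for the illustration.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Dense Cholesky factorization A = L * L^T (A symmetric positive-definite).
// Returns the lower-triangular factor L, stored row-major.
std::vector<double> cholesky(const std::vector<double>& A, int n) {
    std::vector<double> L(n * n, 0.0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j <= i; ++j) {
            double sum = A[i * n + j];
            for (int k = 0; k < j; ++k)
                sum -= L[i * n + k] * L[j * n + k];
            L[i * n + j] = (i == j) ? std::sqrt(sum) : sum / L[j * n + j];
        }
    return L;
}

// Solve A x = b given the factor L: forward substitution (L y = b),
// then backward substitution (L^T x = y).
std::vector<double> solveWithFactor(const std::vector<double>& L,
                                    const std::vector<double>& b, int n) {
    std::vector<double> y(n), x(n);
    for (int i = 0; i < n; ++i) {
        double s = b[i];
        for (int k = 0; k < i; ++k) s -= L[i * n + k] * y[k];
        y[i] = s / L[i * n + i];
    }
    for (int i = n - 1; i >= 0; --i) {
        double s = y[i];
        for (int k = i + 1; k < n; ++k) s -= L[k * n + i] * x[k];
        x[i] = s / L[i * n + i];
    }
    return x;
}

int main() {
    const int n = 3;
    // Small SPD matrix and right-hand side, chosen only for illustration.
    std::vector<double> A = {4, 1, 1,
                             1, 3, 0,
                             1, 0, 2};
    std::vector<double> b = {6, 4, 3};
    std::vector<double> L = cholesky(A, n);            // decomposition step
    std::vector<double> x = solveWithFactor(L, b, n);  // substitution step
    std::printf("x = %f %f %f\n", x[0], x[1], x[2]);
    return 0;
}
```

Sparse direct solvers follow the same principle, but store and factorize only the non-zero entries of the matrix.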
Direct solver implementation
Direct solvers in SOFA are:
- SparseLDLSolver
- LULinearSolver / SparseLUSolver
- CholeskySolver / SparseCholeskySolver
- SVDLinearSolver (Jacobi SVD)
- BTDLinearSolver
In the SOFA code
The resolution of the linear system is computed in the solve() function of the LinearSolver. With direct solvers, the integration scheme successively calls two functions: first the invert() function, implementing the targeted decomposition method, and then the solve() function, which applies the stored decomposition to the right-hand side vector to return the solution.
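To show why the work is split into two calls (a hypothetical sketch, not the SOFA implementation), the example below caches an in-place LU factorization in an invert() step and reuses it in a solve() step for several right-hand sides, which is the typical factorize-once, solve-many pattern; the class name and the example matrix are assumptions.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical direct solver illustrating the two-phase pattern:
// invert() factorizes the matrix once, solve() reuses the factors
// for every subsequent right-hand side (e.g. at each time step).
class TinyLUSolver {
    std::vector<double> LU;  // combined L (unit diagonal) and U factors
    int n = 0;
public:
    // Phase 1: in-place LU factorization without pivoting
    // (sufficient for this illustrative, diagonally dominant matrix).
    void invert(const std::vector<double>& A, int size) {
        n = size;
        LU = A;
        for (int k = 0; k < n; ++k)
            for (int i = k + 1; i < n; ++i) {
                LU[i * n + k] /= LU[k * n + k];
                for (int j = k + 1; j < n; ++j)
                    LU[i * n + j] -= LU[i * n + k] * LU[k * n + j];
            }
    }
    // Phase 2: solve L y = b, then U x = y, reusing the stored factors.
    std::vector<double> solve(const std::vector<double>& b) const {
        std::vector<double> x = b;
        for (int i = 1; i < n; ++i)
            for (int k = 0; k < i; ++k) x[i] -= LU[i * n + k] * x[k];
        for (int i = n - 1; i >= 0; --i) {
            for (int k = i + 1; k < n; ++k) x[i] -= LU[i * n + k] * x[k];
            x[i] /= LU[i * n + i];
        }
        return x;
    }
};

int main() {
    TinyLUSolver solver;
    solver.invert({4, 1, 0,
                   1, 3, 1,
                   0, 1, 2}, 3);              // factorize once
    auto x1 = solver.solve({1, 2, 3});        // reuse for several
    auto x2 = solver.solve({0, 1, 0});        // right-hand sides
    std::printf("x1 = %f %f %f\n", x1[0], x1[1], x1[2]);
    std::printf("x2 = %f %f %f\n", x2[0], x2[1], x2[2]);
    return 0;
}
```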
Iterative solvers
Contrary to direct solvers, iterative methods converge towards the solution gradually: the solution is approximated a little more accurately at each iteration, rather than computed in one single large step. With iterative methods, the estimated error in the solution decreases with the number of iterations.
For well-conditioned problems (even large systems), the convergence remains monotonic. However, for ill-conditioned systems, the convergence may be much slower. Since these methods only require the residual at each iteration, the matrix $\mathbf{A}$ does not have to be built explicitly, which improves performance (only matrix-vector products are needed). The numerical settings of the solver (for instance the maximum number of iterations and the tolerance) must be defined appropriately. Two available methods are the conjugate gradient method (using the CGLinearSolver) and the minimal residual method (using the MinResLinearSolver).
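To illustrate the matrix-free character of these methods (a sketch in plain C++, not the SOFA implementation), the conjugate gradient below accesses the system only through a matrix-vector product callback and stops either when the residual norm drops below a tolerance or when the maximum number of iterations is reached; the function names and the small example system are assumptions.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <vector>

using Vec = std::vector<double>;
// The system matrix is only accessed through its action v -> A*v.
using MatVec = std::function<Vec(const Vec&)>;

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Conjugate gradient for a symmetric positive-definite system A x = b.
// Stops when ||r|| <= tolerance or after maxIterations iterations.
Vec conjugateGradient(const MatVec& A, const Vec& b,
                      int maxIterations, double tolerance) {
    Vec x(b.size(), 0.0);   // initial guess x0 = 0
    Vec r = b;              // residual r = b - A*x0 = b
    Vec p = r;              // first search direction
    double rr = dot(r, r);
    for (int it = 0; it < maxIterations && std::sqrt(rr) > tolerance; ++it) {
        Vec Ap = A(p);                    // the only use of the matrix
        double alpha = rr / dot(p, Ap);   // optimal step along p
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rrNew = dot(r, r);
        double beta = rrNew / rr;         // keep directions A-conjugate
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + beta * p[i];
        rr = rrNew;
    }
    return x;
}

int main() {
    // Small SPD system, used only for illustration.
    MatVec A = [](const Vec& v) {
        return Vec{4 * v[0] + 1 * v[1],
                   1 * v[0] + 3 * v[1]};
    };
    Vec b = {1, 2};
    Vec x = conjugateGradient(A, b, 25, 1e-9);
    std::printf("x = %f %f\n", x[0], x[1]);
    return 0;
}
```

In a simulation context, such a callback would typically accumulate the mass and stiffness contributions to the product $\mathbf{A}v$ without ever assembling the matrix.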
Iterative solver implementation
Iterative solvers in SOFA are:
- CGLinearSolver
- MinResLinearSolver
In the SOFA code
The resolution of the linear system is computed in the solve() function of the LinearSolver. With iterative solvers, the integration scheme only calls the solve() function, which handles its vectors as a TempVectorContainer and creates any new temporary vector using the function vtmp.createTempVector().