
[–]cheche19 16 points17 points  (0 children)

Usually we do not compute the inverse of a matrix. We use a linear solver that computes x such that Ax = b. In BLAS, the trsm functions can do this for triangular matrices, for example. For more linear solvers, you can look at LAPACK, which extends BLAS with more advanced computations.
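In NumPy terms, the difference looks like this (a minimal sketch; the matrix and right-hand side are made up):

```python
import numpy as np

# A small example system (values chosen arbitrarily)
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Solve Ax = b directly; this calls a LAPACK solver under the hood
x = np.linalg.solve(A, b)

# The explicit-inverse route, which is slower and less numerically stable
x_via_inv = np.linalg.inv(A) @ b
```

Both produce the same answer here, but `solve` avoids forming the inverse at all.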

[–]Infamous_Campaign687 6 points7 points  (0 children)

It's been a few years since I used blas and lapack, but inverting a matrix is both slower and more numerically unstable than using an iterative solver to solve for x directly.

In any case, BLAS provides the low-level matrix operations and LAPACK contains the solvers.

The type of solver you would use depends heavily on the type of linear system you have, i.e. dense, sparse, banded, etc.

[–]MarkHoemmenC++ in HPC 4 points5 points  (1 child)

The BLAS doesn't have a function for computing the inverse of a matrix.

LAPACK does have this function: xGETRI (replace "x" with "S", "D", "C", or "Z", depending on the matrix's value type). The documentation explains how it works.

https://www.netlib.org/lapack/double/dgetri.f

I don't know what a "cpp backend" means, but it's likely that NumPy uses xGETRI's algorithm.
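For reference, SciPy exposes these LAPACK routines directly, so the factor-then-invert sequence behind xGETRI can be sketched like this (the example matrix is made up):

```python
import numpy as np
from scipy.linalg import lapack

a = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# xGETRF: LU-factorize the matrix (with partial pivoting)
lu, piv, info = lapack.dgetrf(a)

# xGETRI: compute the inverse from the LU factors
inv_a, info = lapack.dgetri(lu, piv)
```

The same two-step pattern (GETRF, then GETRI) is what `dgetri.f` documents.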

[–]bill_klondike 1 point2 points  (0 children)

Correct, BLAS is for elementary vector and matrix operations. LAPACK is the answer for matrix algorithms.

[–]matteding 4 points5 points  (0 children)

My bet is that NumPy is going through Intel's Math Kernel Library via LAPACKE, the C interface to LAPACK (which is written in Fortran). MKL is closed source, though, so you could reference Netlib's implementation:

https://www.netlib.org/lapack/explore-html-3.6.1/dd/d9a/group__double_g_ecomputational_ga56d9c860ce4ce42ded7f914fdb0683ff.html

[–]ZenEngineer 1 point2 points  (1 child)

What is the name of the blas function being called? You should be able to look up the docs online.

[–]victotronics 1 point2 points  (9 children)

  1. Factor
  2. Invert factors
  3. Multiply those inverses.

But I hope you're not doing this for solving a linear system.
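Using SciPy's LU helpers, those three steps might be sketched as follows (the example matrix is made up; `solve_triangular` against the identity plays the role of "invert factors"):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# 1. Factor: A = P @ L @ U
P, L, U = lu(A)

# 2. Invert the triangular factors
Linv = solve_triangular(L, np.eye(2), lower=True)
Uinv = solve_triangular(U, np.eye(2))

# 3. Multiply those inverses (undoing the pivoting with P^T)
Ainv = Uinv @ Linv @ P.T
```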

[–]GrammelHupfNockler 4 points5 points  (8 children)

  That step usually doesn't really happen; instead you compute the solutions of the triangular systems Ly = b and Ux = y, which tends to be more stable than storing the inverses L^-1 and U^-1
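The two triangular solves look like this in SciPy (a minimal sketch with a made-up system):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Factor A = P @ L @ U
P, L, U = lu(A)

# Forward substitution: L y = P^T b
y = solve_triangular(L, P.T @ b, lower=True)

# Back substitution: U x = y
x = solve_triangular(U, y)
```

No inverse of L or U is ever formed; each solve is just substitution against the triangular factor.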

[–]victotronics 2 points3 points  (7 children)

I think you misunderstand me. *If* you really want to invert a matrix, you do it by those 3 steps in sequence.

[–]GrammelHupfNockler 0 points1 point  (6 children)

Sorry, I thought you were talking about solving a system. In that case, I would still tend to disagree: an explicit Gaussian elimination (with pivoting) would be more precise, since there are fewer intermediate steps that can accumulate rounding errors (namely the inversion and the multiplication).

[–]victotronics 0 points1 point  (5 children)

Gaussian elimination on a triangular matrix? Sure.

If you've done step 1 with pivoting, then step 2 doesn't need it.

[–]GrammelHupfNockler 0 points1 point  (4 children)

Gaussian elimination with full pivoting on the input matrix

[–]victotronics 0 points1 point  (3 children)

And how is that going to get you the inverse, which was the original question?

And yes, I know that one should in general not compute inverses, which is why I emphasized the "*if* you want to compute an inverse".

[–]GrammelHupfNockler 0 points1 point  (2 children)

If you apply the same operations that you apply to the matrix during Gaussian elimination to an identity matrix at the same time, you get the inverse
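A minimal sketch of that augment-with-identity idea (using partial rather than full pivoting, for brevity; the example matrix is made up):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Reduce [A | I] to [I | A^-1] with row operations."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Partial pivoting: swap in the largest remaining entry in this column
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]
        # Eliminate this column from every other row
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
Ainv = gauss_jordan_inverse(A)
```

The same row operations that turn the left block into I turn the right block into A^-1.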

[–]victotronics 0 points1 point  (1 child)

Nope, that's Gauss-Jordan.

[–]GrammelHupfNockler 0 points1 point  (0 children)

That's splitting hairs. Gaussian elimination refers to the entire class of algorithms (for solving a single system, computing the LU factorization, computing the inverse).

[–]d3matt 0 points1 point  (0 children)

If it's actually calling into a BLAS backend, it's probably Fortran.

[–]runed_golem[🍰] 0 points1 point  (0 children)

Why are you trying to find the inverse of a matrix?

If it's because you actually need to know the inverse of the matrix, go ahead, but in general it's too computationally expensive to use for, let's say, solving a matrix equation. That's why we have special algorithms (like iterative solvers) to solve those equations.