[python] Python Inverse of a Matrix

How do I get the inverse of a matrix in Python? I've implemented it myself, but it's pure Python, and I suspect there are faster modules out there to do it.

This question is related to: python, algorithm, matrix, linear-algebra, matrix-inverse



You could calculate the determinant of the matrix (which is a recursive computation) and then form the adjugate matrix.

Here is a short tutorial

I think this only works for square matrices.

Another way of computing these involves Gram-Schmidt orthogonalization and then transposing the matrix; the transpose of an orthogonal matrix is its inverse. (Note, though, that orthogonalization changes the matrix, so this gives the inverse of the orthogonalized matrix, not of the original one.)
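
For what it's worth, here is a rough pure-Python sketch of the determinant/adjugate route (my own illustration, not taken from the tutorial): expand the determinant recursively along the first row, then divide the transposed cofactor matrix by it. It's O(n!) work, so it only makes sense for small matrices.

def matrix_minor(m, i, j):
    # m with row i and column j removed
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    if len(m) == 1:
        return m[0][0]
    # recursive cofactor expansion along the first row
    return sum((-1) ** j * m[0][j] * det(matrix_minor(m, 0, j)) for j in range(len(m)))

def inverse(m):
    d = det(m)
    if d == 0:
        raise ValueError("matrix is singular")
    n = len(m)
    # adjugate (transposed cofactor matrix) divided by the determinant
    return [[(-1) ** (i + j) * det(matrix_minor(m, j, i)) / d for j in range(n)]
            for i in range(n)]

print(inverse([[4, 7], [2, 6]]))  # [[0.6, -0.7], [-0.2, 0.4]]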


If you hate numpy, break out RPy and your local copy of R, and use that instead.

(I would also echo the advice to make sure you really need to invert the matrix. numpy's linalg.solve, for example, and R's solve() function don't actually do a full inversion, since it is unnecessary.)
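
For comparison, the numpy route this answer alludes to is a one-liner (a quick sketch, assuming numpy is installed):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
A_inv = np.linalg.inv(A)                    # raises LinAlgError if A is singular
print(np.allclose(A @ A_inv, np.eye(2)))    # True

# If all you need is the solution of A x = b, skip the inversion entirely:
b = np.array([1.0, 0.0])
x = np.linalg.solve(A, b)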


It is a pity that the chosen matrix, repeated here again, is either singular or badly conditioned:

A = matrix( [[1,2,3],[11,12,13],[21,22,23]])

By definition, the inverse of A, when multiplied by A itself, must give the identity matrix. The A chosen in the much-praised explanation does not do that. In fact, just looking at the inverse gives a clue that the inversion did not work correctly: the magnitude of the individual terms is very, very big compared with the terms of the original A matrix...

It is remarkable how often humans, when picking an example matrix, manage to pick a singular one!

I did have a problem with the solution, so I looked into it further. On the Ubuntu/Kubuntu platform, the Debian numpy package does not include the matrix and linalg sub-packages, so in addition to importing numpy, scipy needs to be imported as well.

If the diagonal terms of A are multiplied by a large enough factor, say 2, the matrix will most likely cease to be singular or near-singular. So

A = matrix( [[2,2,3],[11,24,13],[21,22,46]])

becomes neither singular nor nearly singular, and the example gives meaningful results... When dealing with floating-point numbers one must be watchful for the effects of unavoidable round-off errors.
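
A quick numerical check of both matrices makes the point (a sketch assuming numpy is available):

import numpy as np

A = np.array([[1, 2, 3], [11, 12, 13], [21, 22, 23]], dtype=float)
print(np.linalg.det(A))    # ~0: the rows are linearly dependent, so A is singular
print(np.linalg.cond(A))   # an enormous condition number

B = np.array([[2, 2, 3], [11, 24, 13], [21, 22, 46]], dtype=float)
print(np.linalg.det(B))    # ~384: comfortably non-singular
print(np.allclose(B @ np.linalg.inv(B), np.eye(3)))   # True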

Thanks for your contribution,

OldAl.


For those like me, who were looking for a pure Python solution without pandas or numpy involved, check out the following GitHub project: https://github.com/ThomIves/MatrixInverse.

It generously provides a very good explanation of what the process looks like "behind the scenes". The author has nicely described the step-by-step approach and presented some practical examples, all easy to follow.

This is just a little code snippet from there to illustrate the approach very briefly (AM is the source matrix, IM is the identity matrix of the same size):

def invert_matrix(AM, IM):
    # Gauss-Jordan elimination (no pivoting): AM is reduced to the identity while
    # the same row operations turn IM, initially the identity, into AM's inverse.
    for fd in range(len(AM)):  # fd = "focus diagonal", the current pivot row/column
        fdScaler = 1.0 / AM[fd][fd]
        for j in range(len(AM)):  # scale the pivot row so the pivot itself becomes 1
            AM[fd][j] *= fdScaler
            IM[fd][j] *= fdScaler
        for i in list(range(len(AM)))[0:fd] + list(range(len(AM)))[fd+1:]:  # every other row
            crScaler = AM[i][fd]  # value to eliminate from the pivot column
            for j in range(len(AM)):
                AM[i][j] = AM[i][j] - crScaler * AM[fd][j]
                IM[i][j] = IM[i][j] - crScaler * IM[fd][j]
    return IM
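
For context, here is a hypothetical driver for that snippet (the helper name is mine, not from the project): make a working copy of the matrix, build an identity matrix of the same size, and pass both in, since the function overwrites its arguments in place.

def identity_matrix(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

A = [[2.0, 2.0, 3.0], [11.0, 24.0, 13.0], [21.0, 22.0, 46.0]]
AM = [row[:] for row in A]      # working copy; invert_matrix destroys it
IM = identity_matrix(len(A))
A_inv = invert_matrix(AM, IM)
print(A_inv)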

But please do follow the entire thing; you'll learn a lot more than by just copy-pasting this code! There's a Jupyter notebook as well, by the way.

Hope that helps someone. I personally found it extremely useful for my very particular task (an absorbing Markov chain) where I wasn't able to use any non-standard packages.


Numpy will be suitable for most people, but you can also do matrices in Sympy

Try running these commands at http://live.sympy.org/

M = Matrix([[1, 3], [-2, 3]])
M
M**-1

For fun, try M**(1/2)
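
If you want to run the same thing locally rather than in the live shell, a minimal self-contained sketch (assuming SymPy is installed):

from sympy import Matrix

M = Matrix([[1, 3], [-2, 3]])
print(M.inv())   # Matrix([[1/3, -1/3], [2/9, 1/9]]) -- exact rational entries
print(M ** -1)   # the same result via the power operator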


Make sure you really need to invert the matrix. This is often unnecessary and can be numerically unstable. When most people ask how to invert a matrix, they really want to know how to solve Ax = b, where A is a matrix and x and b are vectors. It's more efficient and more accurate to use code that solves Ax = b for x directly than to calculate A's inverse and then multiply the inverse by b. Even if you need to solve Ax = b for many b values, it's not a good idea to invert A. If you have to solve the system for multiple b values, save the Cholesky factorization of A (assuming A is symmetric positive definite; otherwise an LU factorization), but don't invert it.

See Don't invert that matrix.
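
As a sketch of the "factor once, solve many times" idea, using scipy's Cholesky helpers and a symmetric positive definite A invented for the example:

import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])    # symmetric positive definite
b1 = np.array([1.0, 2.0, 3.0])
b2 = np.array([-1.0, 0.5, 4.0])

factor = cho_factor(A)             # factor A once...
x1 = cho_solve(factor, b1)         # ...then reuse the factorization for every b
x2 = cho_solve(factor, b2)

# The slower, less accurate alternative this answer warns against:
# x1 = np.linalg.inv(A) @ b1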

