[c++] What are the most widely used C++ vector/matrix math/linear algebra libraries, and their cost and benefit tradeoffs?

It seems that many projects slowly come upon a need to do matrix math, and fall into the trap of first building some vector classes and slowly adding in functionality until they get caught building a half-assed custom linear algebra library, and depending on it.

I'd like to avoid that while not building in a dependence on some tangentially related library (e.g. OpenCV, OpenSceneGraph).

What are the commonly used matrix math/linear algebra libraries out there, and why would you decide to use one over another? Are there any that you would advise against using for some reason? I am specifically using this in a geometric/time context (2, 3, and 4 dimensions), but may be using higher-dimensional data in the future.

I'm looking for differences with respect to any of: API, speed, memory use, breadth/completeness, narrowness/specificness, extensibility, and/or maturity/stability.

Update

I ended up using Eigen3 which I am extremely happy with.

Tags: c++, math, matrix, linear-algebra



What about GLM?

It's based on the OpenGL Shading Language (GLSL) specification and released under the MIT license. It's clearly aimed at graphics programmers.
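
For a sense of how closely it mirrors GLSL, here is a minimal sketch using standard GLM headers and functions (the specific values are just for illustration):

    // Minimal GLM sketch: vector/matrix types and free functions mirror GLSL.
    #include <glm/glm.hpp>                  // vec3, mat4, dot, cross, length, radians, ...
    #include <glm/gtc/matrix_transform.hpp> // translate, rotate, perspective
    #include <cstdio>

    int main() {
        glm::vec3 a(1.0f, 0.0f, 0.0f);
        glm::vec3 b(0.0f, 1.0f, 0.0f);

        glm::vec3 n   = glm::cross(a, b);    // (0, 0, 1)
        float     d   = glm::dot(a, b);      // 0
        float     len = glm::length(a + b);  // sqrt(2)

        // Compose a transform: M = T * R (rotation applied first, then translation).
        glm::mat4 model = glm::rotate(glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, 0.0f)),
                                      glm::radians(90.0f), glm::vec3(0.0f, 0.0f, 1.0f));

        glm::vec4 p = model * glm::vec4(a, 1.0f);  // transform a point: ~(2, 1, 0)
        std::printf("n=(%g,%g,%g) d=%g len=%g p=(%g,%g,%g)\n",
                    n.x, n.y, n.z, d, len, p.x, p.y, p.z);
        return 0;
    }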


If you are looking for high performance matrix/linear algebra/optimization on Intel processors, I'd look at Intel's MKL library.

MKL is carefully optimized for fast run-time performance, much of it based on the very mature BLAS/LAPACK Fortran standards, and its performance scales with the number of available cores. Hands-free scalability across available cores is the future of computing, and I wouldn't use a math library for a new project that doesn't support multi-core processors.

Very briefly, it includes:

  1. Basic vector-vector, vector-matrix, and matrix-matrix operations
  2. Matrix factorization (LU decomposition, Hermitian, sparse)
  3. Least-squares fitting and eigenvalue problems
  4. Sparse linear system solvers
  5. Non-linear least-squares solvers (trust region)
  6. Plus signal processing routines such as FFT and convolution
  7. Very fast random number generators (Mersenne Twister)
  8. Much more...

A downside is that the MKL API can be quite complex, depending on the routines you need. You could also take a look at Intel's IPP (Integrated Performance Primitives) library, which is geared toward high-performance image processing operations but is nevertheless quite broad.
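
To give a feel for that API, here is a minimal sketch of a matrix-matrix multiply through the CBLAS interface that MKL ships (cblas_dgemm is standard CBLAS; the sizes and values are arbitrary, and you still need to link against MKL):

    // Sketch: C = alpha*A*B + beta*C via MKL's CBLAS interface.
    // Link against MKL (e.g. using Intel's link-line advisor for your platform).
    #include <mkl.h>     // umbrella header; provides cblas_dgemm
    #include <cstdio>

    int main() {
        const int m = 2, k = 3, n = 2;
        // Row-major storage, as signalled by CblasRowMajor below.
        double A[m * k] = {1, 2, 3,
                           4, 5, 6};
        double B[k * n] = {1, 0,
                           0, 1,
                           1, 1};
        double C[m * n] = {0, 0,
                           0, 0};

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k,
                    1.0, A, k,   // alpha, A, leading dimension of A
                         B, n,   // B, leading dimension of B
                    0.0, C, n);  // beta, C, leading dimension of C

        std::printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
        return 0;
    }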

Paul

CenterSpace Software, .NET math libraries, centerspace.net


I found this library quite simple and functional (http://kirillsprograms.com/top_Vectors.php). These are bare-bones vectors implemented via C++ templates. No fancy stuff, just what you need to do with vectors (add, subtract, multiply, dot, etc.).
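
For context, the kind of minimal template vector being described looks roughly like the sketch below. This is purely illustrative; the names (Vec, dot, length) are invented here and are not the library's actual API:

    // Illustrative sketch of a "bare-bones" fixed-size template vector.
    #include <cmath>
    #include <cstddef>
    #include <cstdio>

    template <typename T, std::size_t N>
    struct Vec {
        T v[N];
        T& operator[](std::size_t i)             { return v[i]; }
        const T& operator[](std::size_t i) const { return v[i]; }
    };

    template <typename T, std::size_t N>
    Vec<T, N> operator+(const Vec<T, N>& a, const Vec<T, N>& b) {
        Vec<T, N> r;
        for (std::size_t i = 0; i < N; ++i) r[i] = a[i] + b[i];
        return r;
    }

    template <typename T, std::size_t N>
    T dot(const Vec<T, N>& a, const Vec<T, N>& b) {
        T s = T(0);
        for (std::size_t i = 0; i < N; ++i) s += a[i] * b[i];
        return s;
    }

    template <typename T, std::size_t N>
    T length(const Vec<T, N>& a) { return std::sqrt(dot(a, a)); }

    int main() {
        Vec<double, 3> a = {{1, 2, 3}}, b = {{4, 5, 6}};
        std::printf("dot=%g len=%g\n", dot(a, b), length(a + b));
        return 0;
    }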


FLENS

http://flens.sf.net

It also implements a lot of LAPACK functions.


I'll add vote for Eigen: I ported a lot of code (3D geometry, linear algebra and differential equations) from different libraries to this one - improving both performance and code readability in almost all cases.

One advantage that wasn't mentioned: it's very easy to use SSE with Eigen, which significantly improves performance of 2D-3D operations (where everything can be padded to 128 bits).
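
A minimal sketch of that padding idea, assuming Eigen 3: Vector4f is one of Eigen's fixed-size vectorizable types, so arithmetic on it can map onto 128-bit SSE registers, while a zero fourth component leaves the 3D results unchanged.

    // Sketch: use 4-component, 16-byte types so Eigen's SSE path applies,
    // keeping the fourth component at 0 for 3D geometric data.
    #include <Eigen/Dense>
    #include <cstdio>

    int main() {
        Eigen::Vector4f a(1.0f, 2.0f, 3.0f, 0.0f);
        Eigen::Vector4f b(4.0f, 5.0f, 6.0f, 0.0f);

        Eigen::Vector4f sum = a + b;   // whole vector fits in one SSE register
        float d = a.dot(b);            // w components are 0, so the 3D dot product is unchanged
        float n = (a - b).norm();      // likewise for the 3D distance

        std::printf("sum=(%g,%g,%g) dot=%g norm=%g\n", sum.x(), sum.y(), sum.z(), d, n);
        return 0;
    }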


So I'm a pretty critical person, and figure if I'm going to invest in a library, I'd better know what I'm getting myself into. I figure it's better to go heavy on the criticism and light on the flattery when scrutinizing; what's wrong with it has many more implications for the future than what's right. So I'm going to go overboard here a little bit to provide the kind of answer that would have helped me and I hope will help others who may journey down this path. Keep in mind that this is based on what little reviewing/testing I've done with these libs. Oh and I stole some of the positive description from Reed.

I'll mention up top that I went with GMTL despite its idiosyncrasies, because the Eigen2 unsafeness was too big a downside. But I've recently learned that the next release of Eigen2 will contain defines that shut off the alignment code and make it safe. So I may switch over.

Update: I've switched to Eigen3. Despite its idiosyncrasies, its scope and elegance are too hard to ignore, and the optimizations which make it unsafe can be turned off with a define.

Eigen2/Eigen3

Benefits: LGPL/MPL2 (depending on version). Clean, well-designed API, fairly easy to use. Seems to be well maintained, with a vibrant community. Low memory overhead. High performance. Made for general linear algebra, but good geometric functionality is available as well. Header-only library, no linking required.
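
As a rough sketch of what that API looks like in practice (standard Eigen 3 headers and types; the values are arbitrary):

    // Sketch: Eigen's core and geometry modules side by side.
    #include <Eigen/Dense>     // matrices, vectors, decompositions
    #include <Eigen/Geometry>  // quaternions, angle-axis, transforms
    #include <cstdio>

    int main() {
        // General linear algebra: solve a small dense system A x = b.
        Eigen::Matrix3d A;
        A << 2, 0, 0,
             0, 3, 0,
             0, 0, 4;
        Eigen::Vector3d b(2, 6, 12);
        Eigen::Vector3d x = A.colPivHouseholderQr().solve(b);  // x = (1, 2, 3)

        // Geometry: rotate a vector 90 degrees about the Z axis.
        const double pi = 3.14159265358979323846;
        Eigen::Quaterniond q(Eigen::AngleAxisd(pi / 2.0, Eigen::Vector3d::UnitZ()));
        Eigen::Vector3d r = q * Eigen::Vector3d::UnitX();       // ~(0, 1, 0)

        std::printf("x=(%g,%g,%g) r=(%g,%g,%g)\n", x.x(), x.y(), x.z(), r.x(), r.y(), r.z());
        return 0;
    }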

Idiosyncrasies/downsides (some or all of these can be avoided by defines that are available in the current development branch, Eigen3):

  • Unsafe performance optimizations mean rules need to be followed carefully; failure to follow them causes crashes (a minimal sketch of the standard workarounds follows this list)
    • you simply cannot safely pass Eigen types by value
    • using Eigen types as class members requires special allocator customization (or you crash)
    • using Eigen types with STL container types, and possibly other templates, requires special allocation customization (or you will crash)
    • certain compilers need special care to prevent crashes on function calls (GCC on Windows)
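
A minimal sketch of the two documented workarounds referenced above (EIGEN_MAKE_ALIGNED_OPERATOR_NEW and Eigen::aligned_allocator are Eigen's own mechanisms; the MyBody class is invented for illustration):

    // Sketch: alignment workarounds for fixed-size vectorizable Eigen types.
    #include <Eigen/Dense>
    #include <Eigen/StdVector>   // historically required when mixing Eigen types and std::vector
    #include <vector>

    // 1) Fixed-size vectorizable members: override operator new so the
    //    object is 16-byte aligned even when heap-allocated.
    struct MyBody {                      // hypothetical class, for illustration only
        Eigen::Vector4f position;
        Eigen::Matrix4f transform;
        EIGEN_MAKE_ALIGNED_OPERATOR_NEW  // without this, `new MyBody` may crash
    };

    int main() {
        // 2) STL containers: supply Eigen's aligned allocator explicitly.
        std::vector<Eigen::Vector4f, Eigen::aligned_allocator<Eigen::Vector4f> > points;
        points.push_back(Eigen::Vector4f(1, 2, 3, 0));

        MyBody* body = new MyBody;       // safe thanks to the macro above
        body->position = points[0];
        delete body;
        return 0;
    }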

GMTL

Benefits: LGPL. Fairly simple API, specifically designed for graphics engines. Includes many primitive types geared towards rendering (such as planes, AABBs, quaternions with multiple interpolation methods, etc.) that aren't in any other packages. Very low memory overhead, quite fast, easy to use. All header based, no linking necessary.

Idiosyncrasies/downsides:

  • API is quirky
    • what might be myVec.x() in another lib is only available via myVec[0] (a readability problem)
      • an array or std::vector of points may force you to write something like pointsList[0][0] to access the x component of the first point
    • in a naive attempt at optimization, cross(vec,vec) was removed and replaced with makeCross(vec,vec,vec), even though the compiler eliminates unnecessary temporaries anyway
    • normal math operations don't return normal types unless you shut off some optimization features, e.g. vec1 - vec2 does not return a normal vector, so length( vecA - vecB ) fails even though vecC = vecA - vecB works. You must wrap it like: length( Vec( vecA - vecB ) )
    • operations on vectors are provided by external functions rather than members. This may require you to use scope resolution everywhere, since common symbol names may collide
    • you have to do
        length( makeCross( vecA, vecB ) )
      or
        gmtl::length( gmtl::makeCross( vecA, vecB ) )
      where otherwise you might try
        vecA.cross( vecB ).length()
  • not well maintained
    • still claimed as "beta"
    • documentation missing basic info like which headers are needed to use normal functionality
      • Vec.h does not contain operations for Vectors; VecOps.h contains some, and others are in Generate.h, for example: cross(vec&,vec&,vec&) is in VecOps.h, [make]cross(vec&,vec&) in Generate.h
  • immature/unstable API; still changing.
    • For example, "cross" was moved from "VecOps.h" to "Generate.h", and then its name was changed to "makeCross". Documentation examples fail because they still refer to old versions of functions that no longer exist.

NT2

Can't tell because they seem to be more interested in the fractal image header of their web page than the content. Looks more like an academic project than a serious software project.

Latest release was over two years ago.

Apparently there is no documentation in English, though supposedly there is something in French somewhere.

Can't find a trace of a community around the project.

LAPACK & BLAS

Benefits: Old and mature.

Downsides:

  • old as dinosaurs with really crappy APIs


I'm new to this topic, so I can't say a whole lot, but BLAS is pretty much the standard in scientific computing. BLAS is actually an API standard, which has many implementations. I'm honestly not sure which implementations are most popular, or why.

If you want to also be able to do common linear algebra operations (solving systems, least squares regression, decomposition, etc.) look into LAPACK.
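
To make the "crappy API" complaint above concrete: the reference interfaces are raw Fortran routines, so calling them from C++ typically means declaring the symbol yourself, passing everything by pointer, and using column-major storage. A minimal sketch follows; the trailing-underscore name mangling is the common convention on Linux/gfortran but is platform-dependent, so treat that detail as an assumption, and link against a LAPACK implementation (e.g. -llapack).

    // Sketch: solving A x = b with LAPACK's dgesv through its raw Fortran interface.
    #include <cstdio>

    // Fortran symbol; the exact name (dgesv_ vs dgesv) depends on platform/compiler.
    extern "C" void dgesv_(const int* n, const int* nrhs, double* a, const int* lda,
                           int* ipiv, double* b, const int* ldb, int* info);

    int main() {
        // 2x2 system, stored column-major as Fortran expects:
        //   [2 1] [x0]   [3]
        //   [1 3] [x1] = [5]
        double A[4] = {2, 1,   // first column
                       1, 3};  // second column
        double b[2] = {3, 5};
        int n = 2, nrhs = 1, ipiv[2], info = 0;

        dgesv_(&n, &nrhs, A, &n, ipiv, b, &n, &info);  // b is overwritten with the solution

        std::printf("info=%d x=(%g, %g)\n", info, b[0], b[1]);  // expect x=(0.8, 1.4)
        return 0;
    }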


I've heard good things about Eigen and NT2, but haven't personally used either. There's also Boost.UBLAS, which I believe is getting a bit long in the tooth. The developers of NT2 are building its next version with the intention of getting it into Boost, so that might count for something.

My linear algebra needs don't extend beyond the 4x4 matrix case, so I can't comment on advanced functionality; I'm just pointing out some options.


For what it's worth, I've tried both Eigen and Armadillo. Below is a brief evaluation.

Eigen advantages:

  1. Completely self-contained; no dependence on external BLAS or LAPACK.
  2. Decent documentation.
  3. Purportedly fast, although I haven't put it to the test.

Disadvantage: The QR algorithm returns just a single matrix, with the R matrix embedded in the upper triangle. I have no idea where the rest of the matrix comes from, and no Q matrix can be accessed.

Armadillo advantages:

  1. Wide range of decompositions and other functions, including QR (see the sketch after the disadvantages below).
  2. Reasonably fast (uses expression templates), but again, I haven't really pushed it to high dimensions.

Disadvantages:

  1. Depends on external BLAS and/or LAPACK for matrix decompositions.
  2. Documentation is lacking IMHO (including the specifics with respect to LAPACK, other than changing a #define statement).
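
For reference, a minimal sketch of the QR usage being compared above, using Armadillo's documented qr() interface (and assuming a working BLAS/LAPACK backend per disadvantage 1; link with -larmadillo):

    // Sketch: QR decomposition in Armadillo; Q and R come back as separate matrices.
    #include <armadillo>
    #include <cstdio>

    int main() {
        arma::mat A(5, 3, arma::fill::randu);  // random 5x3 matrix

        arma::mat Q, R;
        bool ok = arma::qr(Q, R, A);           // A == Q * R on success

        double err = arma::norm(A - Q * R, "fro");
        std::printf("qr %s, reconstruction error = %g\n", ok ? "ok" : "failed", err);
        return 0;
    }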

It would be nice if an open-source library were available that is self-contained and straightforward to use. I have run into this same issue for 10 years, and it gets frustrating. At one point, I used GSL for C and wrote C++ wrappers around it, but with modern C++, especially using the advantages of expression templates, we shouldn't have to mess with C in the 21st century. Just my tuppence ha'penny.


Okay, I think I know what you're looking for. It appears that GGT is a pretty good solution, as Reed Copsey suggested.

Personally, we rolled our own little library, because we deal with rational points a lot - lots of rational NURBS and Beziers.

It turns out that most 3D graphics libraries do computations with projective points that have no basis in projective math, because that's what gets you the answer you want. We ended up using Grassmann points, which have a solid theoretical underpinning and decreased the number of point types. Grassmann points are basically the same computations people are using now, with the benefit of a robust theory. Most importantly, it makes things clearer in our minds, so we have fewer bugs. Ron Goldman wrote a paper on Grassmann points in computer graphics called "On the Algebraic and Geometric Foundations of Computer Graphics".

Not directly related to your question, but an interesting read.

