[c#] What are the differences between a multidimensional array and an array of arrays in C#?

What are the differences between multidimensional arrays double[,] and array-of-arrays double[][] in C#?

If there is a difference, what is the best use for each one?

Tags: c#, multidimensional-array, jagged-arrays



Multidimensional arrays are n-dimensional rectangular blocks (matrices, cubes, and so on).

So int[,] square = new int[2,2] is a 2x2 square matrix, and int[,,] cube = new int[3,3,3] is a 3x3x3 cube. The dimensions do not have to be equal.

Jagged arrays are just arrays of arrays - an array where each cell contains another array.

So multidimensional arrays are always rectangular, while jagged arrays need not be: each cell can contain an array of arbitrary length.
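
A minimal sketch of the difference:

// Rectangular: every row has the same length, allocated as a single block.
int[,] rectangular = new int[3, 4];

// Jagged: an array of arrays; each inner array is allocated separately
// and can have its own length.
int[][] jagged = new int[3][];
jagged[0] = new int[4];
jagged[1] = new int[2];
jagged[2] = new int[7];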


In addition to the other answers, note that a multidimensional array is allocated as one big chunky object on the heap. This has some implications:

  1. Some multidimensional arrays will get allocated on the Large Object Heap (LOH) where their jagged-array counterparts would not have been.
  2. The GC will need to find a single contiguous free block of memory to allocate a multidimensional array, whereas a jagged array might be able to fill in gaps caused by heap fragmentation. This isn't usually an issue in .NET because of compaction, but the LOH doesn't get compacted by default; you have to ask for it, and you have to ask every time you want it (a short sketch follows this list).
  3. You'll want to look into <gcAllowVeryLargeObjects> for multidimensional arrays long before the issue ever comes up if you only ever use jagged arrays.
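
Regarding point 2: a minimal sketch of how the opt-in LOH compaction can be requested (available since .NET 4.5.1; the array size here is just an illustrative value large enough to land on the LOH):

// requires: using System; using System.Runtime;
var big = new double[200, 200];   // ~320 KB of doubles, large enough to be allocated on the LOH

// Ask for LOH compaction on the next blocking, full collection only;
// the flag resets after that GC, so it has to be set again each time.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();

GC.KeepAlive(big);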

This might have been mentioned in the above answers, but not explicitly: with a jagged array you can use array[row] to refer to a whole row of data, but this is not possible with multidimensional arrays.
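
A small illustration of that point:

int[][] jagged = { new[] { 1, 2, 3 }, new[] { 4, 5 } };
int[] row = jagged[0];                    // a whole row is a real int[] you can pass around

int[,] multi = new int[2, 3];
// int[] row2 = multi[0];                 // does not compile - there is no per-row array object
int[] copy = new int[multi.GetLength(1)]; // a row has to be copied out element by element
for (int j = 0; j < copy.Length; j++)
    copy[j] = multi[0, j];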


Simply put, a multidimensional array is similar to a table in a DBMS.
An array of arrays (jagged array) lets each element hold another array of the same type, of variable length.

So, if you are sure that the data is structured like a table (fixed rows and columns), you can use a multidimensional array. A jagged array has a fixed number of elements, and each element can hold an array of variable length.

For example:

int[,] data = new int[2,2];
data[0,0] = 1;
data[0,1] = 2;
data[1,0] = 3;
data[1,1] = 4;

Think of the above as a 2x2 table:

1 | 2
3 | 4

Now a jagged array, where each row can have a different length:

int[][] jagged = new int[3][]; 
jagged[0] = new int[4] {  1,  2,  3,  4 }; 
jagged[1] = new int[2] { 11, 12 }; 
jagged[2] = new int[3] { 21, 22, 23 }; 

Think of the above as each row having a variable number of columns:

 1 |  2 |  3 | 4
11 | 12
21 | 22 | 23

I am parsing .il files generated by ildasm to build a database of assemblies, classes, methods, and stored procedures for use in a conversion. I came across the following, which broke my parsing.

.method private hidebysig instance uint32[0...,0...] 
        GenerateWorkingKey(uint8[] key,
                           bool forEncryption) cil managed

The book Expert .NET 2.0 IL Assembler by Serge Lidin (Apress, 2006), Chapter 8, "Primitive Types and Signatures", pp. 149-150, explains the syntax.

<type>[] is termed a Vector of <type>,

<type>[<bounds> [<bounds>**] ] is termed an array of <type>

** means may be repeated, [ ] means optional.

Examples: Let <type> = int32.

1) int32[...,...] is a two-dimensional array of undefined lower bounds and sizes

2) int32[2...5] is a one-dimensional array of lower bound 2 and size 4.

3) int32[0...,0...] is a two-dimensional array of lower bounds 0 and undefined size.
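
For what it's worth, the uint32[0...,0...] in the signature above appears to be simply what an ordinary uint[,] compiles to, and arrays with non-zero lower bounds can be created from C# via Array.CreateInstance. A rough sketch:

// requires: using System;
uint[,] working = new uint[4, 4];                          // IL type: uint32[0...,0...]

// One-dimensional array with lower bound 2 and size 4 (cf. example 2, int32[2...5]).
Array bounded = Array.CreateInstance(typeof(int),
                                     new[] { 4 },          // lengths
                                     new[] { 2 });         // lower bounds
bounded.SetValue(42, 2);                                   // the first valid index is 2
Console.WriteLine(bounded.GetLowerBound(0));               // 2
Console.WriteLine(bounded.GetUpperBound(0));               // 5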

Tom


I would like to add an update here, because in .NET Core multidimensional arrays are faster than jagged arrays. I ran the tests from John Leidegren's answer, and these are the results on .NET Core 2.0 preview 2. I increased the dimension value to make any possible influence from background apps less visible.

Debug (code optimization disabled)
Running jagged 
187.232 200.585 219.927 227.765 225.334 222.745 224.036 222.396 219.912 222.737 

Running multi-dimensional  
130.732 151.398 131.763 129.740 129.572 159.948 145.464 131.930 133.117 129.342 

Running single-dimensional  
 91.153 145.657 111.974  96.436 100.015  97.640  94.581 139.658 108.326  92.931 


Release (code optimization enabled)
Running jagged 
108.503 95.409 128.187 121.877 119.295 118.201 102.321 116.393 125.499 116.459 

Running multi-dimensional 
 62.292  60.627  60.611  60.883  61.167  60.923  62.083  60.932  61.444  62.974 

Running single-dimensional 
 34.974  33.901  34.088  34.659  34.064  34.735  34.919  34.694  35.006  34.796 

I looked into the disassemblies, and this is what I found:

jagged[i][j][k] = i * j * k; needed 34 instructions to execute

multi[i, j, k] = i * j * k; needed 11 instructions to execute

single[i * dim * dim + j * dim + k] = i * j * k; needed 23 instructions to execute

I wasn't able to identify why single-dimensional arrays were still faster than multidimensional ones, but my guess is that it is related to some optimization done by the CPU.


A multidimensional array creates a nice linear memory layout while a jagged array implies several extra levels of indirection.

Looking up the value jagged[3][6] in a jagged array (say var jagged = new int[10][], where each inner array has 5 elements) works like this: look up the element at index 3 (which is an array), then look up the element at index 6 in that array (which is a value). Each dimension adds an extra look-up, which is an expensive memory access pattern.

A multidimensional array is laid out linearly in memory; the actual element is found by computing an offset from the indexes. Given the array var mult = new int[10,30], the Length property of that multidimensional array returns the total number of elements, i.e. 10 * 30 = 300.

The Rank property of a jagged array is always 1, but a multidimensional array can have any rank. The GetLength method of any array can be used to get the length of each dimension; for the multidimensional array in this example, mult.GetLength(1) returns 30.

Indexing the multidimensional array is faster. E.g. for the multidimensional array in this example, mult[1,7] is at offset 1 * 30 + 7 = 37 elements from the start of the array, so the element is fetched directly from that offset. This is a better memory access pattern because only one memory block is involved, starting at the base address of the array.

A multidimensional array therefore allocates one contiguous memory block, while a jagged array does not have to be rectangular: for example, jagged[1].Length does not have to equal jagged[2].Length, which would always be true for a multidimensional array.
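
A short sketch of the properties mentioned above:

int[,] mult = new int[10, 30];
Console.WriteLine(mult.Length);           // 300 - total number of elements
Console.WriteLine(mult.Rank);             // 2
Console.WriteLine(mult.GetLength(0));     // 10
Console.WriteLine(mult.GetLength(1));     // 30

int[][] jagged = new int[10][];
Console.WriteLine(jagged.Rank);           // 1  - it is just a vector of int[] references
Console.WriteLine(jagged.Length);         // 10 - number of inner arrays, not elements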

Performance

Performance-wise, multidimensional arrays should be faster. A lot faster, but due to a really bad CLR implementation they are not.

 23.084  16.634  15.215  15.489  14.407  13.691  14.695  14.398  14.551  14.252 
 25.782  27.484  25.711  20.844  19.607  20.349  25.861  26.214  19.677  20.171 
  5.050   5.085   6.412   5.225   5.100   5.751   6.650   5.222   6.770   5.305 

The first row shows timings of jagged arrays, the second shows multidimensional arrays, and the third, well, that's how it should be. The program is shown below; FYI, this was tested running Mono. (The Windows timings are vastly different, mostly due to CLR implementation variations.)

On Windows, the timings of the jagged arrays are greatly superior, about the same as my own interpretation of what a multidimensional array look-up should be like; see Single() below. Sadly the Windows JIT compiler is really stupid, which unfortunately makes these performance discussions difficult; there are too many inconsistencies.

These are the timings I got on Windows, same deal here: the first row is jagged arrays, the second multidimensional, and the third my own implementation of a multidimensional array. Note how much slower this is on Windows compared to Mono.

  8.438   2.004   8.439   4.362   4.936   4.533   4.751   4.776   4.635   5.864
  7.414  13.196  11.940  11.832  11.675  11.811  11.812  12.964  11.885  11.751
 11.355  10.788  10.527  10.541  10.745  10.723  10.651  10.930  10.639  10.595

Source code:

using System;
using System.Diagnostics;
static class ArrayPref
{
    const string Format = "{0,7:0.000} ";
    static void Main()
    {
        Jagged();
        Multi();
        Single();
    }

    static void Jagged()
    {
        const int dim = 100;
        for(var passes = 0; passes < 10; passes++)
        {
            var timer = new Stopwatch();
            timer.Start();
            var jagged = new int[dim][][];
            for(var i = 0; i < dim; i++)
            {
                jagged[i] = new int[dim][];
                for(var j = 0; j < dim; j++)
                {
                    jagged[i][j] = new int[dim];
                    for(var k = 0; k < dim; k++)
                    {
                        jagged[i][j][k] = i * j * k;
                    }
                }
            }
            timer.Stop();
            Console.Write(Format,
                (double)timer.ElapsedTicks/TimeSpan.TicksPerMillisecond);
        }
        Console.WriteLine();
    }

    static void Multi()
    {
        const int dim = 100;
        for(var passes = 0; passes < 10; passes++)
        {
            var timer = new Stopwatch();
            timer.Start();
            var multi = new int[dim,dim,dim];
            for(var i = 0; i < dim; i++)
            {
                for(var j = 0; j < dim; j++)
                {
                    for(var k = 0; k < dim; k++)
                    {
                        multi[i,j,k] = i * j * k;
                    }
                }
            }
            timer.Stop();
            Console.Write(Format,
                (double)timer.ElapsedTicks/TimeSpan.TicksPerMillisecond);
        }
        Console.WriteLine();
    }

    static void Single()
    {
        const int dim = 100;
        for(var passes = 0; passes < 10; passes++)
        {
            var timer = new Stopwatch();
            timer.Start();
            var single = new int[dim*dim*dim];
            for(var i = 0; i < dim; i++)
            {
                for(var j = 0; j < dim; j++)
                {
                    for(var k = 0; k < dim; k++)
                    {
                        single[i*dim*dim+j*dim+k] = i * j * k;
                    }
                }
            }
            timer.Stop();
            Console.Write(Format,
                (double)timer.ElapsedTicks/TimeSpan.TicksPerMillisecond);
        }
        Console.WriteLine();
    }
}

Preface: This comment is intended to address the answer provided by okutane, but because of SO's silly reputation system, I cannot post it where it belongs.

Your assertion that one is slower than the other because of method calls isn't correct. One is slower than the other because of more complicated bounds-checking algorithms. You can easily verify this by looking not at the IL but at the compiled assembly. For example, on my .NET 4.5 install, accessing an element (via a pointer in edx) stored in a two-dimensional array pointed to by ecx, with indexes stored in eax and edx, looks like this:

sub eax,[ecx+10]
cmp eax,[ecx+08]
jae oops //jump to throw out of bounds exception
sub edx,[ecx+14]
cmp edx,[ecx+0C]
jae oops //jump to throw out of bounds exception
imul eax,[ecx+0C]
add eax,edx
lea edx,[ecx+eax*4+18]

Here you can see that there's no overhead from method calls. The bounds checking is just very convoluted thanks to the possibility of non-zero lower bounds, which is functionality not on offer with jagged arrays. If we remove the sub, cmp, and jae instructions for the non-zero cases, the code pretty much resolves to (x*y_max+y)*sizeof(ptr)+sizeof(array_header). This calculation is about as fast as anything else for random access to an element (one multiply could be replaced by a shift, since that's the whole reason we choose bytes to be sized as powers of two bits).

Another complication is that there are plenty of cases where a modern compiler will optimize away the per-element bounds check while iterating over a single-dimensional array. The result is code that basically just advances an index pointer over the contiguous memory of the array. Naive iteration over multidimensional arrays generally involves an extra layer of nested logic, so the compiler is less likely to optimize the operation. So, even though the bounds-checking overhead of accessing a single element amortizes out to constant runtime with respect to array dimensions and sizes, a simple test case to measure the difference may take many times longer to execute.
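
To illustrate that last point, here is a rough sketch of the loop shape the JIT can typically optimize versus one it is less likely to; whether the check is actually elided depends on the exact runtime and version:

// The JIT can usually prove 0 <= i < data.Length and drop the per-access bounds check:
int[] data = new int[1000];
long sum1 = 0;
for (int i = 0; i < data.Length; i++)
    sum1 += data[i];

// With a rectangular array the proof involves GetLength per dimension plus the
// index-to-offset computation, so the per-element check is less likely to be removed:
int[,] grid = new int[100, 100];
long sum2 = 0;
for (int i = 0; i < grid.GetLength(0); i++)
    for (int j = 0; j < grid.GetLength(1); j++)
        sum2 += grid[i, j];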