[c#] Find size of object instance in bytes in c#

For any arbitrary instance (collections of different objects, compositions, single objects, etc.)

How can I determine its size in bytes?

(I've currently got a collection of various objects and I'm trying to determine the aggregated size of it.)

EDIT: Has someone written an extension method for Object that could do this? That'd be pretty neat imo.

This question is related to c#

The answer is


This is impossible to do at runtime.

There are various memory profilers that display object size, though.

EDIT: You could write a second program that profiles the first one using the CLR Profiling API and communicates with it through remoting or something.


You may be able to approximate the size by pretending to serialize it with a binary serializer (but routing the output to oblivion), if you're working with serializable objects.

class Program
{
    static void Main(string[] args)
    {
        A parent;
        parent = new A(1, "Mike");
        parent.AddChild("Greg");
        parent.AddChild("Peter");
        parent.AddChild("Bobby");

        System.Runtime.Serialization.Formatters.Binary.BinaryFormatter bf =
           new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
        SerializationSizer ss = new SerializationSizer();
        bf.Serialize(ss, parent);
        Console.WriteLine("Size of serialized object is {0}", ss.Length);
    }
}

[Serializable()]
class A
{
    int id;
    string name;
    List<B> children;
    public A(int id, string name)
    {
        this.id = id;
        this.name = name;
        children = new List<B>();
    }

    public B AddChild(string name)
    {
        B newItem = new B(this, name);
        children.Add(newItem);
        return newItem;
    }
}

[Serializable()]
class B
{
    A parent;
    string name;
    public B(A parent, string name)
    {
        this.parent = parent;
        this.name = name;
    }
}

class SerializationSizer : System.IO.Stream
{
    private long totalSize; // long, to match Stream.Length and avoid overflow
    public override void Write(byte[] buffer, int offset, int count)
    {
        this.totalSize += count;
    }

    public override bool CanRead
    {
        get { return false; }
    }

    public override bool CanSeek
    {
        get { return false; }
    }

    public override bool CanWrite
    {
        get { return true; }
    }

    public override void Flush()
    {
        // Nothing to do
    }

    public override long Length
    {
        get { return totalSize; }
    }

    public override long Position
    {
        get
        {
            throw new NotImplementedException();
        }
        set
        {
            throw new NotImplementedException();
        }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        throw new NotImplementedException();
    }

    public override long Seek(long offset, System.IO.SeekOrigin origin)
    {
        throw new NotImplementedException();
    }

    public override void SetLength(long value)
    {
        throw new NotImplementedException();
    }
}

For anyone looking for a solution that doesn't require [Serializable] classes, and where the result is an approximation rather than an exact science: the best method I could find is JSON serialization into a MemoryStream using UTF-32 encoding.

private static long? GetSizeOfObjectInBytes(object item)
{
    if (item == null) return 0;
    try
    {
        // hackish solution to get an approximation of the size;
        // JsonSerializerSettings comes from Newtonsoft.Json and
        // JsonMediaTypeFormatter from System.Net.Http.Formatting
        var jsonSerializerSettings = new JsonSerializerSettings
        {
            DateFormatHandling = DateFormatHandling.IsoDateFormat,
            DateTimeZoneHandling = DateTimeZoneHandling.Utc,
            MaxDepth = 10,
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore
        };
        var formatter = new JsonMediaTypeFormatter { SerializerSettings = jsonSerializerSettings };
        using (var stream = new MemoryStream()) { 
            formatter.WriteToStream(item.GetType(), item, stream, Encoding.UTF32);
            return stream.Length / 4; // 32 bits per character = 4 bytes per character
        }
    }
    catch (Exception)
    {
        return null;
    }
}

No, this won't give you the exact size that would be used in memory. As previously mentioned, that is not possible. But it'll give you a rough estimation.

Note that this is also pretty slow.


The simplest way is: int size = *((int*)type.TypeHandle.Value + 1); (this requires an unsafe context)

I know this is an implementation detail, but the GC relies on it, and it needs to be close to the start of the method table for efficiency; plus, considering how complex the GC code is, nobody will dare to change it in the future. In fact, it works for every minor/major version of .NET Framework + .NET Core. (I'm currently unable to test 1.0.)
If you want a more reliable way, emit a struct in a dynamic assembly with [StructLayout(LayoutKind.Auto)] with the exact same fields in the same order, and take its size with the sizeof IL instruction. You may want to emit a static method within the struct which simply returns this value. Then add 2 * IntPtr.Size for the object header. This should give you the exact value.
But if your class derives from another class, you need to find the size of each base class separately and add them, plus 2 * IntPtr.Size again for the header. You can do this by getting fields with the BindingFlags.DeclaredOnly flag.
For arrays and strings, just add length * element size to that base size. For the cumulative size of aggregate objects you need to implement a more sophisticated solution, which involves visiting every field and inspecting its contents; a sketch of the field-walking part follows.
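
A minimal sketch of that DeclaredOnly field walk (illustrative only: Marshal.SizeOf reports unmanaged sizes, e.g. 4 for bool, and throws for some field types, and padding/alignment is ignored, so treat the result as a rough approximation):

using System;
using System.Reflection;
using System.Runtime.InteropServices;

static class NaiveSizer
{
    // Sums declared fields up the inheritance chain and adds the
    // object header, as described above. Ignores padding/alignment.
    public static int ApproximateInstanceSize(Type type)
    {
        int size = 2 * IntPtr.Size; // sync block index + method table pointer

        for (Type t = type; t != null && t != typeof(object); t = t.BaseType)
        {
            foreach (FieldInfo field in t.GetFields(
                BindingFlags.Instance | BindingFlags.Public |
                BindingFlags.NonPublic | BindingFlags.DeclaredOnly))
            {
                size += field.FieldType.IsValueType
                    ? Marshal.SizeOf(field.FieldType) // unmanaged size, approximate
                    : IntPtr.Size;                    // a reference is one pointer
            }
        }
        return size;
    }
}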


For value types, you can use Marshal.SizeOf. Of course, it returns the number of bytes required to marshal the structure in unmanaged memory, which is not necessarily what the CLR uses.
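
For example, for a simple blittable struct:

using System;
using System.Runtime.InteropServices;

struct Point
{
    public int X;
    public int Y;
}

class Demo
{
    static void Main()
    {
        // prints 8: two 4-byte fields, no padding needed in the unmanaged layout
        Console.WriteLine(Marshal.SizeOf(typeof(Point)));
    }
}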


This doesn't apply to the current .NET implementation, but one thing to keep in mind with garbage collected/managed runtimes is the allocated size of an object can change throughout the lifetime of the program. For example, some generational garbage collectors (such as the Generational/Ulterior Reference Counting Hybrid collector) only need to store certain information after an object is moved from the nursery to the mature space.

This makes it impossible to create a reliable, generic API to expose the object size.


You can use reflection to gather all the public member or property information (given the object's type). There is no way to determine the size without walking through each individual piece of data on the object, though.
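
A hedged sketch of such a walk (a hypothetical helper that merely visits and prints fields; turning the visit into an exact byte count is the hard part):

using System;
using System.Reflection;

static class FieldWalker
{
    // Recursively prints every field of an object graph. The depth limit
    // guards against cycles; a real sizer would track visited objects.
    public static void Visit(object obj, int depth = 0)
    {
        if (obj == null || depth > 5) return;

        foreach (FieldInfo field in obj.GetType().GetFields(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
        {
            Console.WriteLine("{0}{1} {2}",
                new string(' ', depth * 2), field.FieldType.Name, field.Name);

            object value = field.GetValue(obj);
            if (value != null && !field.FieldType.IsPrimitive
                && field.FieldType != typeof(string))
                Visit(value, depth + 1); // descend into nested members
        }
    }
}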


First of all, a warning: what follows is strictly in the realm of ugly, undocumented hacks. Do not rely on this working - even if it works for you now, it may stop working tomorrow, with any minor or major .NET update.

You can use the information from this article on CLR internals: MSDN Magazine, May 2005 - "Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects". Last I checked, it was still applicable. Here's how it's done (it retrieves the internal "Basic Instance Size" field via the type's TypeHandle).

object obj = new List<int>(); // whatever you want to get the size of
RuntimeTypeHandle th = obj.GetType().TypeHandle;
int size = *(*(int**)&th + 1); // requires an unsafe context (compile with /unsafe)
Console.WriteLine(size);

This works on 3.5 SP1 32-bit. I'm not sure if field sizes are the same on 64-bit - you might have to adjust the types and/or offsets if they are not.

This will work for all "normal" types, for which all instances have the same, well-defined size. Those for which this isn't true are arrays and strings for sure, and I believe also StringBuilder. For them you'll have to add the size of all contained elements to their base instance size.
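
For example, the adjustment for strings and arrays looks roughly like this (hypothetical helpers; baseSize is the "Basic Instance Size" read via the hack above):

static int ApproxStringSize(string s, int baseSize)
{
    // 2 bytes per UTF-16 character on top of the base instance size
    return baseSize + s.Length * sizeof(char);
}

static int ApproxArraySize(Array a, int baseSize, int elementSize)
{
    // value-type elements are stored inline; for reference-type arrays
    // use IntPtr.Size as elementSize instead
    return baseSize + a.Length * elementSize;
}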


A safe solution, with some optimizations, is the CyberSaving/MemoryUsage code. Some example cases:

/* test nullable type */      
TestSize<int?>.SizeOf(null); //-> 4 B

/* test StringBuilder */    
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 100; i++) sb.Append("0123456789");
TestSize<StringBuilder>.SizeOf(sb); //-> 3132 B

/* test Simple array */    
TestSize<int[]>.SizeOf(new int[100]); //-> 400 B

/* test Empty List<int>*/    
var list = new List<int>();  
TestSize<List<int>>.SizeOf(list); //-> 205 B

/* test List<int> with 100 items*/
for (int i = 0; i < 100; i++) list.Add(i);
TestSize<List<int>>.SizeOf(list); //-> 717 B

It works also with classes:

class twostring
{
    public string a { get; set; }
    public string b { get; set; }
}
TestSize<twostring>.SizeOf(new twostring() { a = "0123456789", b = "0123456789" }); //-> 28 B

For arrays of structs/values, I have different results with:

// the array must actually be pinned for these addresses to be stable
GCHandle handle = GCHandle.Alloc(array, GCHandleType.Pinned);
long first = Marshal.UnsafeAddrOfPinnedArrayElement(array, 0).ToInt64();
long second = Marshal.UnsafeAddrOfPinnedArrayElement(array, 1).ToInt64();
long arrayElementSize = second - first;
handle.Free();

(oversimplified example)

Whatever the approach, you really need to understand how .NET works to correctly interpret the results. For instance, the returned element size is the "aligned" element size, including some padding. The overhead, and thus the size, is different depending on the usage of a type: "boxed" on the GC heap, on the stack, as a field, or as an array element.

(I wanted to know what the memory impact would be of using "dummy" empty structs (without any fields) to mimic "optional" arguments of generics; testing different layouts involving empty structs, I can see that an empty struct uses (at least) 1 byte per element; I vaguely remember it is because .NET needs a distinct address for each field, which wouldn't work if a field really were empty/0-sized.)
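
The 1-byte minimum is easy to confirm with the sizeof operator, which needs an unsafe context for user-defined structs (a small sketch):

using System;

struct Empty { } // no fields at all

class Demo
{
    static unsafe void Main()
    {
        // prints 1, not 0: each field/element still needs a distinct address
        Console.WriteLine(sizeof(Empty));
    }
}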


Use Son of Strike (SOS), which has an ObjSize command.

Note that the actual memory consumed is always larger than what ObjSize reports, due to a syncblk which resides directly before the object data.

Read more about both here: MSDN Magazine, May 2005 - "Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects".


For unmanaged types, a.k.a. value types (structs):

        Marshal.SizeOf(myStruct); // myStruct is an instance of your struct

For managed objects, the closest I got is an approximation:

        long start_mem = GC.GetTotalMemory(true);

        aclass[] array = new aclass[1000000];
        for (int n = 0; n < 1000000; n++)
            array[n] = new aclass();

        double used_mem_average = (GC.GetTotalMemory(false) - start_mem) / 1000000D; // average bytes per instance

Do not use serialization. A binary formatter adds headers so that you can change your class and still load an old serialized file into the modified class.

Also, it won't tell you the real size in memory, nor will it take memory alignment into account.

[Edit] By using BitConverter.GetBytes(propValue) recursively on every property of your class, you would get the contents in bytes. That doesn't count the weight of the class itself or references, but it is much closer to reality. If size matters, I would recommend using a byte array for the data and an unmanaged proxy class to access values using pointer casting. Note that this means unaligned memory, so on old computers it will be slow, but for HUGE datasets on MODERN RAM it will be considerably faster, as minimizing the amount read from RAM has a bigger impact than alignment.
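
A tiny sketch of that BitConverter idea for a flat class (the Payload type here is hypothetical; this counts field contents only, not object headers, padding, or reference overhead):

using System;

class Payload
{
    public int Id;
    public double Value;
    public string Name = "";
}

static class ContentSizer
{
    public static long ContentBytes(Payload p)
    {
        long size = 0;
        size += BitConverter.GetBytes(p.Id).Length;    // 4 bytes
        size += BitConverter.GetBytes(p.Value).Length; // 8 bytes
        size += p.Name.Length * sizeof(char);          // 2 bytes per char
        return size;
    }
}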


I have created a benchmark test for different collections in .NET: https://github.com/scholtz/TestDotNetCollectionsMemoryAllocation

Results are as follows for .NET Core 2.2, with 1,000,000 objects with 3 properties allocated:

Testing with string: 1234567
Hashtable<TestObject>:                                     184 672 704 B
Hashtable<TestObjectRef>:                                  136 668 560 B
Dictionary<int, TestObject>:                               171 448 160 B
Dictionary<int, TestObjectRef>:                            123 445 472 B
ConcurrentDictionary<int, TestObject>:                     200 020 440 B
ConcurrentDictionary<int, TestObjectRef>:                  152 026 208 B
HashSet<TestObject>:                                       149 893 216 B
HashSet<TestObjectRef>:                                    101 894 384 B
ConcurrentBag<TestObject>:                                 112 783 256 B
ConcurrentBag<TestObjectRef>:                               64 777 632 B
Queue<TestObject>:                                         112 777 736 B
Queue<TestObjectRef>:                                       64 780 680 B
ConcurrentQueue<TestObject>:                               112 784 136 B
ConcurrentQueue<TestObjectRef>:                             64 783 536 B
ConcurrentStack<TestObject>:                               128 005 072 B
ConcurrentStack<TestObjectRef>:                             80 004 632 B

For measuring memory, I found it best to use

GC.GetAllocatedBytesForCurrentThread()
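
A sketch of how that can be used (TestObject stands in for the benchmarked type; note that the measurement also includes the array holding the instances):

const int Count = 1000000;

long before = GC.GetAllocatedBytesForCurrentThread();

var items = new TestObject[Count];
for (int i = 0; i < Count; i++)
    items[i] = new TestObject();

long after = GC.GetAllocatedBytesForCurrentThread();

// average allocated bytes per object, including the array's own share
Console.WriteLine((after - before) / (double)Count);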

AFAIK, you cannot, without actually deep-counting the size of each member in bytes. But again, does the size of a member (like elements inside a collection) count towards the size of the object, or does only a pointer to that member count towards the size of the object? It depends on how you define it.

I have run into this situation before where I wanted to limit the objects in my cache based on the memory they consumed.

Well, if there is some trick to do that, I'd be delighted to know about it!


From Pavel and jnm2:

private int DumpApproximateObjectSize(object toWeight)
{
    // reads the "basic instance size" field at offset 4 in the method
    // table pointed to by the TypeHandle (32-bit layout assumption)
    return Marshal.ReadInt32(toWeight.GetType().TypeHandle.Value, 4);
}

On a side note, be careful, because it only works with contiguous memory objects.