Large matrices and vectors

Customers frequently ask us the size of the largest matrix they can instantiate. With recent developments at Microsoft, the maximum object size now depends significantly on whether your process is 32-bit (x86) or 64-bit (x64) and on the version of .NET your application targets. With .NET 4.5, huge matrices can be created that far exceed the former 2 GB per-object limit.

Pre .NET 4.5

Until 2012, all 32-bit Microsoft .NET processes were limited to roughly 1.2 GB of usable memory. Theoretically, a matrix could take up most of that space. Let's suppose it's feasible in our application to have a 1 GB matrix. That matrix would contain 134,217,728 doubles or 268,435,456 floats: for example, an 11,585 x 11,585 square DoubleMatrix or a 16,384 x 16,384 square FloatMatrix.
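As a sanity check on those figures, the arithmetic is straightforward (sketched here in Java for brevity; the same calculation applies in any .NET language):

```java
// Back-of-the-envelope check: how many elements fit in a 1 GB
// allocation, and how large a square matrix that implies.
public class MatrixBudget {
    public static void main(String[] args) {
        long budgetBytes = 1L << 30;       // 1 GB
        long doubles = budgetBytes / 8;    // 8 bytes per double
        long floats = budgetBytes / 4;     // 4 bytes per float
        System.out.println(doubles + " doubles -> "
                + (long) Math.sqrt(doubles) + " square");  // 134217728 -> 11585
        System.out.println(floats + " floats  -> "
                + (long) Math.sqrt(floats) + " square");   // 268435456 -> 16384
    }
}
```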

There is a workaround in 32-bit .NET that increases the available process memory to roughly 2.4 GB.

  1. Add the /3GB switch to boot.ini.
  2. After building the application, run the linker as follows:

    link -edit -LARGEADDRESSAWARE application.exe

Increasingly, our customers are switching to 64-bit computing, in part to get around these memory limitations. Although a 64-bit .NET process's memory is limited only by the available RAM, the .NET runtime nevertheless limits any single object to 2 GB. For that reason, our matrices are limited to a theoretical maximum of 268,435,456 doubles or 536,870,912 floats: for example, a 16,384 x 16,384 square DoubleMatrix or a 23,170 x 23,170 square FloatMatrix.

The good news is that this memory limit has finally been lifted with the release of .NET 4.5. A big thanks to the Microsoft .NET GC team! As they well know, many developers have been requesting this change.

.NET 4.5

With the .NET 4.5 release, developers can now create objects that exceed the 2 GB per-object limit, but only in 64-bit (x64) environments. To create these large objects, the application must enable the gcAllowVeryLargeObjects element in the runtime schema, which controls the behavior of the .NET garbage collector.

    <gcAllowVeryLargeObjects enabled="true" />
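For context, this element belongs under the runtime section of the application's configuration file; a minimal App.config following the standard .NET configuration schema would look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- Permit arrays larger than 2 GB on 64-bit platforms (.NET 4.5+) -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```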

These very large objects are subject to the following reasonable restrictions:

  1. The maximum number of elements in an array is UInt32.MaxValue.
  2. The maximum index in any single dimension is 2,147,483,591 (0x7FFFFFC7) for byte arrays and arrays of single-byte structures, and 2,146,435,071 (0x7FEFFFFF) for other types.
  3. The maximum size for strings and other non-array objects is unchanged.

The brief Microsoft documentation note can be found here.
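Those two per-dimension limits can be double-checked against their hexadecimal forms (a quick sketch in Java):

```java
// The per-dimension limits from the restrictions above, decimal vs. hex.
public class ArrayLimits {
    public static void main(String[] args) {
        System.out.println(0x7FFFFFC7 == 2_147_483_591);  // byte arrays: true
        System.out.println(0x7FEFFFFF == 2_146_435_071);  // other types: true
    }
}
```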

What does this mean for NMath?

First, NMath has not yet been released against .NET 4.5 (as of July 2012), so all of the following will hold under such a future release; all current releases are still subject to the limits outlined above. Looking to the near future: underlying all NMath vectors and matrices is a contiguous 1-D array. This means that for matrices the number of elements must be less than 2,146,435,071, and the same holds for vectors. The following table summarizes the maximum size of various NMath objects under .NET 4.5 on an x64 OS with gcAllowVeryLargeObjects enabled.

    Class          Maximum size (elements)   Memory size (GB)
    FloatVector    2,146,435,071             7.996
    DoubleVector   2,146,435,071             15.992
    FloatMatrix    2,146,435,071             7.996
    DoubleMatrix   2,146,435,071             15.992

The Complex versions of these classes would have the same maximum number of elements but occupy twice the memory. This also means that we will soon be able to allocate a square matrix as large as 46,329 x 46,329.
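The 46,329 figure falls out of the element limit directly; a small check (in Java, with the limit hard-coded from the table above):

```java
// Largest square matrix whose element count stays under the
// 2,146,435,071-element array limit.
public class MaxSquare {
    public static void main(String[] args) {
        long maxElements = 2_146_435_071L;
        long side = (long) Math.sqrt((double) maxElements);
        while ((side + 1) * (side + 1) <= maxElements) side++;  // guard against fp rounding
        while (side * side > maxElements) side--;
        System.out.println(side + " x " + side);  // 46329 x 46329
    }
}
```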

– Trevor & Paul

NMath 5.3

We have confirmed that the currently shipping version of NMath (version 5.3) can handle very large objects if you use gcAllowVeryLargeObjects and you are targeting .NET 4.5. We have created DoubleMatrix objects as large as 30,000 x 30,000 in this way. That’s a 6.7GB object.
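For reference, the 6.7 GB figure is just the element count times eight bytes per double (sketched in Java):

```java
// Size of a 30,000 x 30,000 DoubleMatrix in bytes and GB.
public class BigMatrixSize {
    public static void main(String[] args) {
        long elements = 30_000L * 30_000L;  // 900,000,000 doubles
        long bytes = elements * 8;          // 7,200,000,000 bytes
        System.out.printf("%.1f GB%n", (double) bytes / (1L << 30));  // 6.7 GB
    }
}
```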

– Trevor

Large Matrices in a Web Environment

It was reported to CenterSpace that this configuration change did not work in web environments, and we confirmed this to be true. The good news is that there's a workaround developed by Microsoft. A Microsoft developer explains:

Essentially, the reason it can’t be done at the application-level Web.config is that this particular setting can only be set per-process and only when the CLR is initializing. Since the CLR is already initialized by the time we start reading your Web.config, it’s too late to do anything about it at that time. Using CLRConfigFile should work around this by allowing you to specify an intermediate configuration file that the CLR can use for initialization. Then once the CLR is up, ASP.NET will read your Web.config and run your application as normal.
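In practice this means pointing the IIS application pool at a separate CLR configuration file in applicationHost.config. A sketch of the two pieces involved; the pool name and file path below are hypothetical placeholders:

```xml
<!-- In applicationHost.config: set the app pool's CLRConfigFile attribute
     (pool name and path are hypothetical placeholders) -->
<applicationPools>
  <add name="LargeObjectPool" managedRuntimeVersion="v4.0"
       CLRConfigFile="C:\inetpub\CLRConfig.config" />
</applicationPools>

<!-- Contents of C:\inetpub\CLRConfig.config: the same runtime element
     shown earlier, so the CLR sees it at initialization -->
<configuration>
  <runtime>
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```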

More info here.

We have verified that this works.

– Trevor

3 thoughts on “Large matrices and vectors”

  1. Trevor,
    Any further update on this? The 2GB limit on 64-bit systems is restricting. Large engineering simulations can easily have 100,000+ equations. Thanks.

  2. It’s certainly a very frustrating .NET limitation. Microsoft, in general, still seems unaware of the seriousness of the issue. I believe that they feel that no one would ever create such a large object. The technical computing guys get it but they can’t promise a fix. It could be years, unfortunately.

    Internally, we have gotten around this problem by allocating memory on the native heap inside our data structures. We created a custom LU factorization for a customer, along with underlying matrix and vector classes, all using long indices over native data arrays. It all worked flawlessly.

    We hope to have a version of the product in the future that uses the native heap. Stay tuned.

    – Trevor

  3. NMath assemblies built against .NET 4.5 are available upon request. Contact support AT

    – Trevor
    CEO, CenterSpace Software
