Problem Description
I work on the Visual Studio performance and reliability team. We have internal memory dumps that we automatically analyze to attribute large memory usage inside VS. In a few dumps I have looked at recently, FastReport is rooting/responsible for DataColumnCollection objects that in aggregate exceed 1 GB in size. In some dumps there are many of them (say 36), each holding a reasonably sized (30-36 MB) DataColumnCollection. In other dumps I have seen 1-2, each holding a DataColumnCollection whose size is > 1 GB.
DataColumnCollection is unfortunately not built to scale, as it internally holds its items in an ArrayList. Using segmented or non-contiguous data structures would be much friendlier in terms of address space when dealing with very large numbers of items.
An example of a segmented collection can be seen here: https://github.com/dotnet/roslyn/blob/main/src/Dependencies/Collections/SegmentedArray%601.cs
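To illustrate the idea only (this is not FastReport's actual code, nor the Roslyn SegmentedArray linked above), a minimal sketch of a segmented list in C# could look like the following. The SegmentedList name and the 8,192-element segment size are assumptions, picked so that each segment of object references stays around 64 KB on 64-bit, well under the 85,000-byte LOH cutoff:

```csharp
using System;
using System.Collections.Generic;

// Sketch of a segmented list: items live in many small fixed-size segments
// instead of one contiguous array, so no single allocation approaches the
// LOH threshold and growth never copies existing elements.
public sealed class SegmentedList<T>
{
    // 8,192 references per segment = 64 KB on 64-bit, below the 85,000-byte LOH cutoff.
    private const int SegmentSize = 8192;

    private readonly List<T[]> _segments = new List<T[]>();
    private int _count;

    public int Count => _count;

    public void Add(T item)
    {
        int segmentIndex = _count / SegmentSize;
        int offset = _count % SegmentSize;

        // Grow by one small segment at a time; previously filled segments are untouched.
        if (segmentIndex == _segments.Count)
            _segments.Add(new T[SegmentSize]);

        _segments[segmentIndex][offset] = item;
        _count++;
    }

    public T this[int index]
    {
        get
        {
            if ((uint)index >= (uint)_count)
                throw new ArgumentOutOfRangeException(nameof(index));
            return _segments[index / SegmentSize][index % SegmentSize];
        }
        set
        {
            if ((uint)index >= (uint)_count)
                throw new ArgumentOutOfRangeException(nameof(index));
            _segments[index / SegmentSize][index % SegmentSize] = value;
        }
    }
}
```

Compared to an ArrayList-style contiguous backing store, the trade-off is an extra division/modulo per index operation in exchange for bounded allocation sizes and no large copies on growth.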
To Reproduce
Unclear, as these reports come from dumps taken due to large memory usage. The GC root chains of these items tend to look like this, if it helps any:
Expected behavior
Perhaps the large memory usage is unavoidable to satisfy the user's request, but it would be nice to think about ways to scale up to larger and larger requests without simply adding more and more items to a DataColumnCollection.
In large-scale situations like this, using contiguous data structures is non-optimal because it requires very large LOH allocations; those stick around for a long time even once they are GC-eligible and tend to bloat the overall memory usage of VS.
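For context on why contiguous backing stores end up on the LOH, here is a small sketch (the 85,000-byte threshold is the documented .NET default; the demo class name and counts are illustrative assumptions) showing roughly where an object[] backing store, like the one inside ArrayList, crosses onto the LOH:

```csharp
using System;

// Rough illustration of the LOH math for a contiguous object[] backing store.
class LohThresholdDemo
{
    static void Main()
    {
        const int LohThresholdBytes = 85_000; // documented .NET LOH cutoff
        const int ReferenceSize = 8;          // bytes per object reference on 64-bit

        // An object[] crosses the LOH threshold at roughly this many elements;
        // anything larger is one big contiguous LOH allocation, and each
        // capacity-doubling resize copies into an even bigger one.
        int elementsAtThreshold = LohThresholdBytes / ReferenceSize;
        Console.WriteLine($"~{elementsAtThreshold:N0} references put the backing array on the LOH");

        // A single large backing array: ~1.6 MB of contiguous address space.
        var backing = new object[200_000];
        Console.WriteLine($"Array generation: {GC.GetGeneration(backing)} (LOH objects report gen 2)");
    }
}
```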