Describe the bug
A collection contains millions of documents.
Calling DeleteMany(x => array.Contains(x.Foo)) to delete documents, which results in millions of documents being removed, takes around 40 minutes to complete and causes the log (WAL) file to grow to hundreds of gigabytes during this time.
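For reference, a minimal sketch of the call pattern described above (the Item class, the "items" collection name and the LoadFooValuesToDelete helper are hypothetical placeholders, not the actual model):

```csharp
using System.Linq;
using LiteDB;

public class Item
{
    public int Id { get; set; }
    public string Foo { get; set; }
}

public static class Repro
{
    public static void Main()
    {
        using var db = new LiteDatabase("data.db");
        var col = db.GetCollection<Item>("items");

        // ~1,000,000 values to delete by (hypothetical helper below).
        string[] array = LoadFooValuesToDelete();

        // Single call that matches ~1M documents: takes ~40 minutes and
        // the -log file grows to hundreds of gigabytes while it runs.
        col.DeleteMany(x => array.Contains(x.Foo));
    }

    static string[] LoadFooValuesToDelete() =>
        Enumerable.Range(0, 1_000_000).Select(i => $"value-{i}").ToArray();
}
```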
Workaround
If the process is split into smaller batches, e.g. deleting no more than 1000 elements at a time with the DeleteMany(x => array.Contains(x.Foo)) call, the WAL file size remains small. A sketch of the batching is shown below.
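This continues the hypothetical setup from the snippet above (db, col and array); the Checkpoint() call between batches is my own assumption and not part of the original report:

```csharp
const int batchSize = 1000;

for (int i = 0; i < array.Length; i += batchSize)
{
    // Take the next slice of up to 1000 Foo values.
    var batch = array.Skip(i).Take(batchSize).ToArray();

    // Each call now matches at most 1000 documents, so the WAL
    // written per transaction stays small.
    col.DeleteMany(x => batch.Contains(x.Foo));

    // Assumption: explicitly checkpointing between batches merges the
    // log file back into the data file so it does not keep growing.
    db.Checkpoint();
}
```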
Concern
The documents in the database where this happens have a simple structure and are not exceptionally large (a couple of string fields of up to 100 characters each).
While the time it takes might make sense, given that a million documents are deleted by an expression on a field that has no index, such aggressive growth of the WAL file doesn't seem to have a reasonable explanation. Even if every deleted document caused a new page to be written to the WAL, that would be roughly 1,000,000 pages × 8 KB (LiteDB's page size) ≈ 8 GB, which still wouldn't explain hundreds of gigabytes.
So the concern is that there might be something suboptimal in how the WAL is written in this scenario.
Version
5.0.17