Update Linux_Memory_Management_Essentials.md
Signed-off-by: Igor Stoppa <istoppa@nvidia.com>
igor-stoppa authored Sep 19, 2024
1 parent 2ccc621 commit c3496f6
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion Contributions/Linux_Memory_Management_Essentials.md
@@ -109,7 +109,7 @@ The following section presents a set of statements that can be objectively verif
1. this is implemented through the concept of the "buddy allocator": whenever a certain amount of linear memory is requested (either sub-page or multi-page size), the kernel always tries to obtain it from the smallest free slot available, breaking up larger free slots only when no alternative is available.
2. the kernel also keeps a certain amount of pre-diced memory allocations ready, to avoid incurring the penalty of having to search for free memory at the moment an allocation request arrives.
3. folios are structures introduced to simplify the management of what have traditionally been called compound pages, and to reduce memory fragmentation: a compound page represents a group of contiguous pages that is treated as a single logical unit. Folios could eventually support optimisations provided by certain architectures (e.g. ARM64 allows a single page table entry to represent 16 pages, through the "contiguous bit" flag in the page table, as long as the pages are physically contiguous and aligned to a 16-page boundary). This can be useful e.g. when keeping a chunk of file data in the page cache: should the memory be released, this could free several physically contiguous pages, instead of scattered ones.
-6. whenever possible, allocations happen through caches, which means that said caches must be re-filled whenever they hit a low watermark, and this re-filling can happen in two ways:
+6. whenever possible, allocations happen through caches (e.g. kmalloc caches, per-CPU caches, ad-hoc object caches, etc.), which means that said caches must be re-filled whenever they hit a low watermark, and this re-filling can happen in two ways:
1. through recycling memory as it gets freed: for example, a core running short of pages in its own local queue might "capture" a page that it is in the process of freeing.
2. through a dedicated thread that can asynchronously dice higher-order pages into smaller portions, which are then placed into the caches that need refilling.
7. the kernel can also employ an Out Of Memory (OOM) Killer, which is invoked in extreme cases, when all existing stashes of memory have been depleted: the killer picks a user space process and evicts it, releasing all the resources it had allocated. This is far from desirable, but it is a method sometimes employed.
