mm: clean up mlock_page / munlock_page references in comments
Change documentation and comments that refer to now-renamed functions.

Link: https://lkml.kernel.org/r/20230116192827.2146732-5-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
This commit is contained in:

parent 672aa27d0b
commit e0650a41f7

3 changed files with 19 additions and 17 deletions
@@ -298,7 +298,7 @@ treated as a no-op and mlock_fixup() simply returns.
 If the VMA passes some filtering as described in "Filtering Special VMAs"
 below, mlock_fixup() will attempt to merge the VMA with its neighbors or split
 off a subset of the VMA if the range does not cover the entire VMA. Any pages
-already present in the VMA are then marked as mlocked by mlock_page() via
+already present in the VMA are then marked as mlocked by mlock_folio() via
 mlock_pte_range() via walk_page_range() via mlock_vma_pages_range().

 Before returning from the system call, do_mlock() or mlockall() will call
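To make the call chain in this hunk concrete, here is a minimal userspace model of the mlock side, assuming simplified stand-ins for the kernel's folio, mlock_folio() and the page-range walk; it is a sketch of the idea, not the kernel's implementation:

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for a kernel folio: just the two fields the
 * documentation talks about. */
struct folio {
	bool mlocked;      /* models the mlocked page flag            */
	int  mlock_count;  /* how many VM_LOCKED VMAs cover the folio */
};

/* Models mlock_folio(): mark the folio mlocked and count this lock. */
static void mlock_folio(struct folio *folio)
{
	if (!folio->mlocked) {
		folio->mlocked = true;
		folio->mlock_count = 0;
	}
	folio->mlock_count++;
}

/* Models mlock_pte_range() via walk_page_range(): visit every folio
 * mapped by the (merged or split) VMA and mlock it. */
static void mlock_vma_pages_range(struct folio *folios, int nr)
{
	for (int i = 0; i < nr; i++)
		mlock_folio(&folios[i]);
}

int main(void)
{
	struct folio range[4] = { 0 };

	mlock_vma_pages_range(range, 4);   /* first VM_LOCKED VMA      */
	mlock_vma_pages_range(range, 4);   /* overlapping mlock region */
	printf("mlocked=%d mlock_count=%d\n",
	       range[0].mlocked, range[0].mlock_count);   /* 1 and 2 */
	return 0;
}
```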
@@ -373,20 +373,21 @@ Because of the VMA filtering discussed above, VM_LOCKED will not be set in
 any "special" VMAs. So, those VMAs will be ignored for munlock.

 If the VMA is VM_LOCKED, mlock_fixup() again attempts to merge or split off the
-specified range. All pages in the VMA are then munlocked by munlock_page() via
+specified range. All pages in the VMA are then munlocked by munlock_folio() via
 mlock_pte_range() via walk_page_range() via mlock_vma_pages_range() - the same
 function used when mlocking a VMA range, with new flags for the VMA indicating
 that it is munlock() being performed.

-munlock_page() uses the mlock pagevec to batch up work to be done under
-lru_lock by __munlock_page(). __munlock_page() decrements the page's
-mlock_count, and when that reaches 0 it clears PG_mlocked and clears
-PG_unevictable, moving the page from unevictable state to inactive LRU.
+munlock_folio() uses the mlock pagevec to batch up work to be done
+under lru_lock by __munlock_folio(). __munlock_folio() decrements the
+folio's mlock_count, and when that reaches 0 it clears the mlocked flag
+and clears the unevictable flag, moving the folio from unevictable state
+to the inactive LRU.

-But in practice that may not work ideally: the page may not yet have reached
+But in practice that may not work ideally: the folio may not yet have reached
 "the unevictable LRU", or it may have been temporarily isolated from it. In
 those cases its mlock_count field is unusable and must be assumed to be 0: so
-that the page will be rescued to an evictable LRU, then perhaps be mlocked
+that the folio will be rescued to an evictable LRU, then perhaps be mlocked
 again later if vmscan finds it in a VM_LOCKED VMA.

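Seen as code, the batching this hunk describes looks roughly like the following userspace model; PAGEVEC_SIZE, the field names and the locking comments are assumptions of the sketch, not the kernel's exact implementation:

```c
#include <stdbool.h>
#include <stdio.h>

#define PAGEVEC_SIZE 15  /* assumed batch size for this sketch */

struct folio {
	bool mlocked;
	bool unevictable;
	int  mlock_count;
};

struct pagevec {
	int nr;
	struct folio *folios[PAGEVEC_SIZE];
};

/* Models __munlock_folio(): in the kernel this runs under lru_lock.
 * Decrement mlock_count; only when it reaches 0 clear the mlocked and
 * unevictable flags, i.e. move the folio to the inactive LRU. */
static void __munlock_folio(struct folio *folio)
{
	if (folio->mlock_count > 0 && --folio->mlock_count > 0)
		return;                 /* still mlocked by another VMA */
	folio->mlocked = false;
	folio->unevictable = false;
}

/* Models munlock_folio(): queue folios, then drain the whole batch in
 * one go, so the contended lock is taken once per batch. */
static void munlock_folio(struct pagevec *pvec, struct folio *folio)
{
	pvec->folios[pvec->nr++] = folio;
	if (pvec->nr == PAGEVEC_SIZE) {
		/* kernel: take lru_lock here, once for the batch */
		for (int i = 0; i < pvec->nr; i++)
			__munlock_folio(pvec->folios[i]);
		pvec->nr = 0;
	}
}

int main(void)
{
	struct pagevec pvec = { 0 };
	struct folio folios[PAGEVEC_SIZE];

	for (int i = 0; i < PAGEVEC_SIZE; i++) {
		folios[i] = (struct folio){ .mlocked = true,
					    .unevictable = true,
					    .mlock_count = 1 };
		munlock_folio(&pvec, &folios[i]);  /* drains on the 15th */
	}
	printf("folio 0 unevictable=%d\n", folios[0].unevictable);  /* 0 */
	return 0;
}
```

The caveat in the final paragraph of the hunk corresponds to the mlock_count == 0 path in __munlock_folio() above: with no usable count, the flags are simply cleared so the folio can be rescued to an evictable LRU.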
@@ -489,15 +490,16 @@ For each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
 munlock_vma_folio(), which calls munlock_folio() when the VMA is VM_LOCKED
 (unless it was a PTE mapping of a part of a transparent huge page).

-munlock_page() uses the mlock pagevec to batch up work to be done under
-lru_lock by __munlock_page(). __munlock_page() decrements the page's
-mlock_count, and when that reaches 0 it clears PG_mlocked and clears
-PG_unevictable, moving the page from unevictable state to inactive LRU.
+munlock_folio() uses the mlock pagevec to batch up work to be done
+under lru_lock by __munlock_folio(). __munlock_folio() decrements the
+folio's mlock_count, and when that reaches 0 it clears the mlocked flag
+and clears the unevictable flag, moving the folio from unevictable state
+to the inactive LRU.

-But in practice that may not work ideally: the page may not yet have reached
+But in practice that may not work ideally: the folio may not yet have reached
 "the unevictable LRU", or it may have been temporarily isolated from it. In
 those cases its mlock_count field is unusable and must be assumed to be 0: so
-that the page will be rescued to an evictable LRU, then perhaps be mlocked
+that the folio will be rescued to an evictable LRU, then perhaps be mlocked
 again later if vmscan finds it in a VM_LOCKED VMA.

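The gate described in the first two lines of this hunk (munlock only for VM_LOCKED VMAs, skipping PTE mappings of part of a THP) can be sketched as below; the types and the THP flag are simplified stand-ins, and munlock_folio() is reduced to a stub of the batching model above, so this is an illustration rather than kernel source:

```c
#include <stdbool.h>

#define VM_LOCKED 0x00002000UL  /* value borrowed from the kernel's vm_flags */

struct vma { unsigned long vm_flags; };
struct folio { int mlock_count; };

/* Stub of the batching entry point modeled earlier. */
static void munlock_folio(struct folio *folio) { folio->mlock_count--; }

/* Models munlock_vma_folio() on the unmap path: only VM_LOCKED VMAs
 * munlock, and a PTE mapping of part of a THP is deliberately skipped. */
static void munlock_vma_folio(struct folio *folio, struct vma *vma,
			      bool pte_maps_part_of_thp)
{
	if (!(vma->vm_flags & VM_LOCKED))
		return;
	if (pte_maps_part_of_thp)
		return;
	munlock_folio(folio);
}

int main(void)
{
	struct vma locked = { .vm_flags = VM_LOCKED };
	struct folio folio = { .mlock_count = 1 };

	munlock_vma_folio(&folio, &locked, false);  /* decrements the count */
	return folio.mlock_count;                   /* 0 */
}
```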
@@ -2167,7 +2167,7 @@ try_again:
 	}

 	/*
-	 * __munlock_pagevec may clear a writeback page's LRU flag without
+	 * __munlock_folio() may clear a writeback page's LRU flag without
 	 * page_lock. We need wait writeback completion for this page or it
 	 * may trigger vfs BUG while evict inode.
 	 */
@@ -201,7 +201,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
 	 * Is an smp_mb__after_atomic() still required here, before
 	 * folio_evictable() tests the mlocked flag, to rule out the possibility
 	 * of stranding an evictable folio on an unevictable LRU? I think
-	 * not, because __munlock_page() only clears the mlocked flag
+	 * not, because __munlock_folio() only clears the mlocked flag
 	 * while the LRU lock is held.
 	 *
 	 * (That is not true of __page_cache_release(), and not necessarily
@@ -216,7 +216,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
 		folio_set_unevictable(folio);
 		/*
 		 * folio->mlock_count = !!folio_test_mlocked(folio)?
-		 * But that leaves __mlock_page() in doubt whether another
+		 * But that leaves __mlock_folio() in doubt whether another
 		 * actor has already counted the mlock or not. Err on the
 		 * safe side, underestimate, let page reclaim fix it, rather
 		 * than leaving a page on the unevictable LRU indefinitely.
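The "err on the safe side" reasoning in this comment can be made concrete with a small model; lru_add_unevictable() is a hypothetical name for the branch of lru_add_fn() quoted above (which, as far as I recall, resolves the question by setting mlock_count to 0), and the sketch only encodes the trade-off, not the real code:

```c
#include <stdbool.h>
#include <stdio.h>

struct folio {
	bool mlocked;
	int  mlock_count;
};

/* Models the choice when a folio first reaches the unevictable LRU:
 * start mlock_count at 0 (an underestimate) instead of guessing 1
 * from the mlocked flag. */
static void lru_add_unevictable(struct folio *folio)
{
	/*
	 * Underestimating is safe: if the count is too low, reclaim will
	 * find the folio in a VM_LOCKED VMA and mlock it again.  Over-
	 * estimating would strand it on the unevictable LRU forever,
	 * because its count could never drop to zero.
	 */
	folio->mlock_count = 0;
}

int main(void)
{
	struct folio folio = { .mlocked = true };

	lru_add_unevictable(&folio);
	printf("mlock_count=%d\n", folio.mlock_count);  /* 0 */
	return 0;
}
```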