Commits (2)
  • mm: slub: add missing TID bump in kmem_cache_alloc_bulk() · fd4d9c7d
    Jann Horn authored
    When kmem_cache_alloc_bulk() attempts to allocate N objects from a percpu
    freelist of length M, and N > M > 0, it will first remove the M elements
    from the percpu freelist, then call ___slab_alloc() to allocate the next
    element and repopulate the percpu freelist. ___slab_alloc() can re-enable
    IRQs via allocate_slab(), so the TID must be bumped before ___slab_alloc()
    to properly commit the freelist head change.
    Fix it by unconditionally bumping c->tid when entering the slowpath.
    Cc: stable@vger.kernel.org
    Fixes: ebe909e0 ("slub: improve bulk alloc strategy")
    Signed-off-by: Jann Horn <jannh@google.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  • mm: slub: be more careful about the double cmpxchg of freelist · 5076190d
    Linus Torvalds authored
    This is just a cleanup addition to Jann's fix to properly update the
    transaction ID for the slub slowpath in commit fd4d9c7d ("mm: slub:
    add missing TID bump..").
    The transaction ID is what protects us against any concurrent accesses,
    but we should really also make sure to make the 'freelist' comparison
    itself always use the same freelist value that we then used as the new
    next free pointer.
    Jann points out that if we do all of this carefully, we could skip the
    transaction ID update for all the paths that only remove entries from
    the lists, and only update the TID when adding entries (to avoid the ABA
    issue with cmpxchg and list handling re-adding a previously seen value).
    But this patch just does the "make sure to cmpxchg the same value we
    used" part, rather than trying to be clever.
    Acked-by: Jann Horn <jannh@google.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
@@ -2997,11 +2997,13 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	if (likely(page == c->page)) {
-		set_freepointer(s, tail_obj, c->freelist);
+		void **freelist = READ_ONCE(c->freelist);
+
+		set_freepointer(s, tail_obj, freelist);
 
 		if (unlikely(!this_cpu_cmpxchg_double(
 				s->cpu_slab->freelist, s->cpu_slab->tid,
-				c->freelist, tid,
+				freelist, tid,
 				head, next_tid(tid)))) {
 			note_cmpxchg_failure("slab_free", s, tid);
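The pattern in the hunk above — load the freelist head once, then use that same snapshot both as the object's next-free link and as the "expected" value of the cmpxchg — can be sketched in user space with C11 atomics. This is a minimal illustration, not SLUB's implementation: the single-word CAS loop and the `push`/`pop` names are hypothetical stand-ins for `set_freepointer()` and `this_cpu_cmpxchg_double()`.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct node { struct node *next; };

static _Atomic(struct node *) freelist = NULL;

/* Push mirrors the fixed pattern: take ONE snapshot of the head, link
 * the object through that snapshot, and pass the very same snapshot as
 * the expected value of the CAS.  Re-reading the head between the link
 * write and the CAS (as the pre-patch code effectively allowed) could
 * link against one value but compare against another. */
static void push(struct node *obj)
{
	struct node *old;

	do {
		old = atomic_load(&freelist);	/* single READ_ONCE-style load */
		obj->next = old;		/* link uses the snapshot ... */
	} while (!atomic_compare_exchange_weak(&freelist, &old, obj));
		/* ... and the CAS compares against that same snapshot */
}

static struct node *pop(void)
{
	struct node *old;

	do {
		old = atomic_load(&freelist);
		if (!old)
			return NULL;
	} while (!atomic_compare_exchange_weak(&freelist, &old, old->next));
	return old;
}
```

The CAS loop retries on contention, so a concurrent update between the load and the compare-exchange is detected rather than silently corrupting the list.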
@@ -3174,6 +3176,15 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 		void *object = c->freelist;
 
 		if (unlikely(!object)) {
+			/*
+			 * We may have removed an object from c->freelist using
+			 * the fastpath in the previous iteration; in that case,
+			 * c->tid has not been bumped yet.
+			 * Since ___slab_alloc() may reenable interrupts while
+			 * allocating memory, we should bump c->tid now.
+			 */
+			c->tid = next_tid(c->tid);
+
 			/*
 			 * Invoking slow path likely have side-effect
 			 * of re-populating per CPU c->freelist
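The role of the TID bump can be sketched with a toy model of the per-cpu state: a fastpath commit that checks both the freelist head and the transaction ID against an earlier snapshot, and a slowpath entry that bumps the TID so any stale snapshot can no longer commit. This is a hypothetical single-threaded illustration, not kernel code; `commit_pop`, `enter_slowpath`, and the plain increment standing in for `next_tid()` are all made up for the example.

```c
#include <assert.h>
#include <stddef.h>

struct obj { struct obj *next; };

/* Miniature stand-in for the per-cpu slab state guarded by a
 * transaction ID, as in SLUB's this_cpu_cmpxchg_double() fastpath. */
struct cpu_slab {
	struct obj *freelist;
	unsigned long tid;
};

/* Fastpath commit: succeeds only if BOTH the freelist head and the TID
 * still match the snapshot taken earlier (the double cmpxchg). */
static int commit_pop(struct cpu_slab *c, struct obj *snap_head,
		      unsigned long snap_tid)
{
	if (c->freelist != snap_head || c->tid != snap_tid)
		return 0;		/* cmpxchg_double failure */
	c->freelist = snap_head->next;
	c->tid = snap_tid + 1;		/* stand-in for next_tid() */
	return 1;
}

/* Slowpath entry, per the fix: bump c->tid before anything that may
 * re-enable interrupts, so an interleaved fastpath working from a
 * stale snapshot fails its commit instead of corrupting the list. */
static void enter_slowpath(struct cpu_slab *c)
{
	c->tid = c->tid + 1;
}
```

A commit attempted with a pre-slowpath snapshot fails even though the freelist pointer itself may look unchanged, which is exactly the ABA hazard the TID exists to catch.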