rust, memory, memory-management, allocation, allocator

Why is a nested allocator in Rust causing heap corruption?


I've been experimenting with the Rust allocator_api feature and have developed a simple linear allocator. My test cases show this works as expected. However, I want to be able to nest allocators, e.g.:

let linear1 = LinearAllocator::with_capacity(32);
let linear2 = LinearAllocator::with_capacity_in(4, linear1.clone());

assert!(linear1.allocate(Layout::new::<u32>()).is_ok());
assert!(linear2.allocate(Layout::new::<u32>()).is_ok());

But when I do this, my test fails with the following output:

error: test failed, to rerun pass `-p allocators --bin allocators`

Caused by:
  process didn't exit successfully: `D:\Dev\allocators\target\debug\deps\allocators-a76735fe34fe79e2.exe test::linear_alloc_nested_with_linear_alloc --exact --nocapture` (exit code: 0xc0000374, STATUS_HEAP_CORRUPTION)

I can see from a dbg!(&self) in Allocator::deallocate() that it's trying to deallocate from the Global allocator rather than the inner LinearAllocator:

[src\main.rs:126:9] &self = LinearAllocator {
    size: 32,
    allocated: 4,
    data: 0x00000158a8cd6400,
    layout: Layout {
        size: 32,
        align: 1 (1 << 0),
    },
    alloc: Global,
}

Maybe this is expected: IIRC Rust drops locals in reverse order of declaration (here linear2, then linear1). But I get the same error whichever order I drop them in manually, so I'm not 100% sure I'm on the right track.

I do call deallocate in Drop, and when I debug the test, the debugger also points there as the cause of the problem. But I can't figure out why. I need to be able to deallocate, e.g. if I nest a linear allocator inside another allocator that does deallocate. If I exclude the call to deallocate in Drop, the test passes. If I do nothing in Drop, will the right thing happen?

Here's the full LinearAllocator code:

#![feature(allocator_api)]

use std::alloc::{AllocError, Allocator, Global, Layout};
use std::fmt::Debug;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

#[derive(Clone, Debug)]
pub struct LinearAllocator<A: Allocator = Global> {
    size: usize,
    allocated: Arc<AtomicUsize>,
    data: *mut u8,
    layout: Layout,
    alloc: A,
}

impl Default for LinearAllocator<Global> {
    fn default() -> Self {
        Self {
            size: 0,
            allocated: Arc::new(AtomicUsize::new(0)),
            data: std::ptr::null_mut(),
            layout: Layout::new::<u8>(),
            alloc: Global,
        }
    }
}

impl LinearAllocator<Global> {
    pub fn with_capacity(size: usize) -> Self {
        Self::with_capacity_in(size, Global)
    }
}

impl<A: Allocator> LinearAllocator<A> {
    pub fn with_capacity_in(size: usize, alloc: A) -> Self {
        let layout = Layout::array::<u8>(size).unwrap();
        let data = unsafe { alloc.allocate(layout).unwrap().as_mut() as *mut [u8] as *mut u8 };

        Self {
            size,
            allocated: Arc::new(AtomicUsize::new(0)),
            data,
            layout,
            alloc,
        }
    }

    pub fn reset(&self) {
        self.allocated.store(0, Ordering::Relaxed);
    }
}

unsafe impl<A: Allocator + Debug> Allocator for LinearAllocator<A> {
    fn allocate(&self, layout: Layout) -> Result<std::ptr::NonNull<[u8]>, AllocError> {
        let size = layout.size();
        let align = layout.align();

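        // Pad the requested size up to the next multiple of its alignment.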
        let remainder = size % align;
        let aligned_size = if remainder == 0 {
            size
        } else {
            size + align - remainder
        };

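        // Reserve aligned_size bytes by bumping the offset with a CAS loop,
        // so concurrent allocations never hand out overlapping ranges.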
        let mut current = self.allocated.load(Ordering::Relaxed);
        loop {
            if self.size - current < aligned_size {
                return Err(AllocError);
            }

            let new = current + aligned_size;

            if let Err(actual) =
                self.allocated
                    .compare_exchange(current, new, Ordering::Relaxed, Ordering::Relaxed)
            {
                current = actual;
            } else {
                break;
            }
        }

        // Safety: We previously checked that there is enough space in data.
        let data = unsafe { self.data.add(current) };
        let ptr = std::ptr::slice_from_raw_parts_mut(data, aligned_size);

        std::ptr::NonNull::new(ptr).ok_or(AllocError)
    }

    unsafe fn deallocate(&self, _ptr: std::ptr::NonNull<u8>, _layout: Layout) {
        // Linear allocator does not deallocate.
        dbg!(&self);
    }
}

impl<A: Allocator> Drop for LinearAllocator<A> {
    fn drop(&mut self) {
        self.reset();

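        // data is null for the Default instance, so only deallocate a real buffer.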
        if let Some(ptr) = std::ptr::NonNull::new(self.data) {
            unsafe { A::deallocate(&self.alloc, ptr, self.layout) };
        }
    }
}

Why is this error occurring and what can be done to resolve it?


Solution

  • The problem isn't the layered allocator. By simply cloning the allocator, which copies self.data, you cause a double free in drop: both instances try to deallocate the same pointer.

    i.e. the minimal reproducible test is:

        let linear1 = LinearAllocator::with_capacity(32);
        linear1.clone();
    

    You'd have to make sure the pointer is passed to deallocate only once, for example with a reference count. I use Arc for convenience here, and ManuallyDrop so I can take it in Drop:

    #[derive(Clone, Debug)]
    pub struct LinearAllocator<A: Allocator = Global> {
        size: usize,
        allocated: Arc<AtomicUsize>,
        data: ManuallyDrop<Arc<*mut u8>>,
        layout: Layout,
        alloc: A,
    }
    
    impl<A: Allocator> Drop for LinearAllocator<A> {
        fn drop(&mut self) {
            self.reset();
    
            if let Some(ptr) = Arc::into_inner(unsafe { ManuallyDrop::take(&mut self.data) })
                .and_then(|data| std::ptr::NonNull::new(data))
            {
                unsafe { self.alloc.deallocate(ptr, self.layout) };
            }
        }
    }
    

    Some small adjustments are necessary in the rest of the code as well.
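
    For reference, here is a minimal sketch of what those adjustments could look like, assuming the struct above: with_capacity_in wraps the freshly allocated pointer in ManuallyDrop<Arc<_>> so clones share ownership of it, and allocate has to deref through the Arc to reach the raw pointer:

    use std::mem::ManuallyDrop;

    impl<A: Allocator> LinearAllocator<A> {
        pub fn with_capacity_in(size: usize, alloc: A) -> Self {
            let layout = Layout::array::<u8>(size).unwrap();
            let data = unsafe { alloc.allocate(layout).unwrap().as_mut() as *mut [u8] as *mut u8 };

            Self {
                size,
                allocated: Arc::new(AtomicUsize::new(0)),
                // Clones of the allocator now share ownership of the buffer pointer.
                data: ManuallyDrop::new(Arc::new(data)),
                layout,
                alloc,
            }
        }
    }

    and in allocate:

        // Deref ManuallyDrop and then Arc to get back to the raw *mut u8.
        let data = unsafe { (**self.data).add(current) };

    Default::default() needs the same wrapping for its null pointer: data: ManuallyDrop::new(Arc::new(std::ptr::null_mut())).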