#[repr(align(128))]
pub struct CachePadding {
    atomic: AtomicUsize,
}
Intentionally forces alignment to 128 bytes in a best-effort attempt to place each atomic on its own cache line. This reduces contention and improves performance under common CPU cache-coherence protocols such as MESI.
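As a minimal sketch of the technique (using a hypothetical Padded type that mirrors the declaration above, not this crate's API): aligning each wrapper to 128 bytes forces neighboring values onto different cache lines, so writers on different cores stop invalidating each other's lines (false sharing). The 128-byte figure covers both the common 64-byte line size and CPUs that fetch lines in adjacent 128-byte pairs.

use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical stand-in that mirrors CachePadding's layout.
#[repr(align(128))]
struct Padded {
    atomic: AtomicUsize,
}

fn main() {
    // Two padded counters placed side by side in one array.
    let counters = [
        Padded { atomic: AtomicUsize::new(0) },
        Padded { atomic: AtomicUsize::new(0) },
    ];
    // align(128) rounds the struct's size up to 128 bytes, so the two atomics
    // can never share a cache line even though they are adjacent in memory.
    assert_eq!(std::mem::size_of::<Padded>(), 128);
    counters[0].atomic.fetch_add(1, Ordering::Relaxed);
    counters[1].atomic.fetch_add(1, Ordering::Relaxed);
    assert_eq!(counters[0].atomic.load(Ordering::Relaxed), 1);
}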
Fields
atomic: AtomicUsize
Implementations
impl CachePadding
Convenience wrapper methods around atomic operations. Both the start and end indices are
packed into a single atomic so that we can use Relaxed ordering, which is the fastest and
the easiest to reason about.
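A hedged sketch of the packing scheme described above (the crate's actual bit layout and method names are not shown on this page, so the helpers below are assumptions): the low half of the word holds the start index, the high half holds the end index, and a single Relaxed load or store therefore always observes or publishes a consistent pair.

use std::sync::atomic::{AtomicUsize, Ordering};

// Split the usize word in half: low bits = start index, high bits = end index.
const HALF_BITS: u32 = usize::BITS / 2;
const HALF_MASK: usize = (1 << HALF_BITS) - 1;

fn pack(start: usize, end: usize) -> usize {
    debug_assert!(start <= HALF_MASK && end <= HALF_MASK);
    (end << HALF_BITS) | start
}

fn unpack(word: usize) -> (usize, usize) {
    (word & HALF_MASK, word >> HALF_BITS)
}

fn main() {
    let state = AtomicUsize::new(pack(3, 7));
    // One Relaxed load reads both indices together; no ordering with other
    // memory operations is needed because they live in the same atomic word.
    let (start, end) = unpack(state.load(Ordering::Relaxed));
    assert_eq!((start, end), (3, 7));
    // One Relaxed store likewise updates both indices atomically.
    state.store(pack(start + 1, end), Ordering::Relaxed);
}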
Auto Trait Implementations
impl !Freeze for CachePadding
impl RefUnwindSafe for CachePadding
impl Send for CachePadding
impl Sync for CachePadding
impl Unpin for CachePadding
impl UnwindSafe for CachePadding
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.