
Send/Sync Bounds

Understanding thread safety


What Are Send and Sync?

Send and Sync are auto traits in Rust that determine whether types are safe to transfer or share across thread boundaries. These marker traits are fundamental to Rust's thread safety guarantees and enable compile-time verification of concurrent code correctness.
  • Send: A type is Send if it's safe to transfer ownership across thread boundaries
  • Sync: A type is Sync if it's safe to share references across thread boundaries (T is Sync if &T is Send)

The Problem

When writing concurrent code, you need to ensure that:

  1. Data transferred between threads is safe to move
  2. Data shared between threads is safe to access concurrently
  3. Thread safety is enforced at compile time, not runtime

Without Send and Sync, runtime data races could occur, leading to undefined behavior.

Example Code

use std::thread;
use std::sync::{Arc, Mutex};
use std::rc::Rc;

// Example 1: Send types can be moved between threads
#[derive(Debug)]
struct SendableData {
    value: i32,
    name: String,
}

// Automatically implements Send because all fields are Send
fn example_send() {
    let data = SendableData {
        value: 42,
        name: String::from("test"),
    };

    // OK: SendableData is Send, can move to another thread
    let handle = thread::spawn(move || {
        println!("Data in thread: {:?}", data);
    });

    handle.join().unwrap();
}

// Example 2: Sync types can be shared via references
fn example_sync() {
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));
    let mut handles = vec![];

    for i in 0..3 {
        let data_clone = Arc::clone(&data);
        let handle = thread::spawn(move || {
            let mut lock = data_clone.lock().unwrap();
            lock.push(i);
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Final data: {:?}", data.lock().unwrap());
}

// Example 3: Rc is NOT Send or Sync
fn example_rc_not_send() {
    let rc = Rc::new(42);

    // ❌ Compile error: Rc is not Send
    // let handle = thread::spawn(move || {
    //     println!("Value: {}", *rc);
    // });

    // ✅ Use Arc instead for thread-safe reference counting
    let arc = Arc::new(42);
    let handle = thread::spawn(move || {
        println!("Value: {}", *arc);
    });
    handle.join().unwrap();
}

// Example 4: Raw pointers are NOT Send or Sync by default
struct NotSend {
    ptr: *const i32, // Raw pointer is not Send
}

// NotSend is already !Send because of the raw pointer field.
// Opting back in requires unsafe and a safety argument:
// unsafe impl Send for NotSend {} // Don't do this unless you can prove it's safe!

// Example 5: PhantomData for correct auto-trait inference
use std::marker::PhantomData;

struct MyBox<T> {
    ptr: *mut T,
    _marker: PhantomData<T>, // Tells the compiler MyBox<T> logically owns a T
}

// The raw pointer suppresses the auto impls, so we restore them conditionally:
// MyBox<T> is Send only if T is Send, and Sync only if T is Sync.

unsafe impl<T: Send> Send for MyBox<T> {}
unsafe impl<T: Sync> Sync for MyBox<T> {}

fn main() {
    example_send();
    example_sync();
    example_rc_not_send();
}

Why It Works

Send Trait

  • A type T is Send if transferring ownership to another thread is safe
  • Most types are Send by default
  • Compiler automatically implements Send if all fields are Send
  • Raw pointers and Rc are NOT Send (RefCell<T> is Send if T is Send, but it is not Sync)

Sync Trait

  • A type T is Sync if &T is Send (immutable references can be shared)
  • Immutable types are usually Sync
  • Interior mutability types need synchronization (Mutex, RwLock)
  • Cell and RefCell are NOT Sync because they use non-atomic operations

Relationship

// T is Sync if &T is Send
// Arc<T> is Send + Sync if T is Send + Sync
// Mutex<T> is Send + Sync if T is Send

When to Use

Use Send when:

  • Transferring data ownership between threads
  • Implementing work-stealing schedulers
  • Moving data into thread pools
  • Passing messages through channels

Use Sync when:

  • Sharing immutable data across threads
  • Implementing shared caches
  • Creating thread-safe singletons
  • Building concurrent data structures

Explicit implementations when:

  • Wrapping raw pointers in safe abstractions
  • Using FFI types with known thread-safety properties
  • Building custom concurrent data structures
  • Optimizing with interior mutability

⚠️ Anti-patterns

⚠️ Mistake #1: Blindly Implementing Send/Sync

// ❌ DON'T: Unsafe without proper analysis
struct MyType {
    ptr: *mut i32,
}

// Dangerous! Only if you can guarantee thread safety
// unsafe impl Send for MyType {}
// unsafe impl Sync for MyType {}

// ✅ DO: Use proper synchronization
struct SafeType {
    data: Arc<Mutex<i32>>,
}
// Automatically Send + Sync

⚠️ Mistake #2: Ignoring Interior Mutability

// ❌ DON'T: RefCell is not Sync
// let shared = Arc::new(RefCell::new(vec![1, 2, 3]));
// Won't compile: RefCell is not Sync

// ✅ DO: Use Mutex for interior mutability across threads
let shared = Arc::new(Mutex::new(vec![1, 2, 3]));

⚠️ Mistake #3: Unsafe Casts Without Guarantees

// ❌ DON'T: Transmuting to bypass Send/Sync
use std::mem;

// fn unsafe_send<T>(value: T) -> Box<dyn Send> {
//     unsafe { mem::transmute(Box::new(value)) }
// }

// ✅ DO: Respect the type system
// ('static is required because Box<dyn Send> implies Box<dyn Send + 'static>)
fn safe_send<T: Send + 'static>(value: T) -> Box<dyn Send> {
    Box::new(value)
}

Advanced Example: Thread-Safe Cache

use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

/// Thread-safe cache with generic keys and values
/// K and V must be Send + Sync for the cache to be thread-safe
pub struct ThreadSafeCache<K, V>
where
    K: Eq + std::hash::Hash + Send + Sync,
    V: Clone + Send + Sync,
{
    data: Arc<RwLock<HashMap<K, V>>>,
}

impl<K, V> ThreadSafeCache<K, V>
where
    K: Eq + std::hash::Hash + Send + Sync,
    V: Clone + Send + Sync,
{
    pub fn new() -> Self {
        Self {
            data: Arc::new(RwLock::new(HashMap::new())),
        }
    }

    pub fn get(&self, key: &K) -> Option<V> {
        let read_lock = self.data.read().unwrap();
        read_lock.get(key).cloned()
    }

    pub fn insert(&self, key: K, value: V) {
        let mut write_lock = self.data.write().unwrap();
        write_lock.insert(key, value);
    }

    pub fn len(&self) -> usize {
        self.data.read().unwrap().len()
    }
}

// Clone is cheap because we're cloning Arc
impl<K, V> Clone for ThreadSafeCache<K, V>
where
    K: Eq + std::hash::Hash + Send + Sync,
    V: Clone + Send + Sync,
{
    fn clone(&self) -> Self {
        Self {
            data: Arc::clone(&self.data),
        }
    }
}

// Example usage
fn cache_example() {
    let cache = ThreadSafeCache::<String, i32>::new();
    let mut handles = vec![];

    // Writer threads
    for i in 0..5 {
        let cache_clone = cache.clone();
        let handle = thread::spawn(move || {
            cache_clone.insert(format!("key_{}", i), i * 10);
            thread::sleep(Duration::from_millis(10));
        });
        handles.push(handle);
    }

    // Reader threads
    for i in 0..5 {
        let cache_clone = cache.clone();
        let handle = thread::spawn(move || {
            thread::sleep(Duration::from_millis(5));
            if let Some(value) = cache_clone.get(&format!("key_{}", i)) {
                println!("Read key_{}: {}", i, value);
            }
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Final cache size: {}", cache.len());
}

Performance Characteristics

Send Overhead

  • Zero cost: Send is a marker trait with no runtime overhead
  • Transfer cost depends on the size of the data being moved
  • Compiler optimizations can eliminate unnecessary moves

Sync Overhead

  • Synchronization cost: Types like Mutex and RwLock have runtime overhead
  • Lock contention can significantly impact performance
  • RwLock allows many concurrent readers, so read-heavy workloads scale better than with Mutex
  • Consider lock-free alternatives for hot paths

Trade-offs

  • Arc<Mutex<T>>: Simple but can cause contention
  • Arc<RwLock<T>>: Better for read-heavy workloads
  • Lock-free structures: Complex but highest performance

Exercises

Beginner

  1. Create a struct with a Vec field. Is it Send? Is it Sync? Why?
  2. Why is Rc not Send or Sync? What would you use instead?
  3. Write a function that spawns a thread and moves a String into it

Intermediate

  1. Implement a thread-safe counter using Arc<Mutex<i32>>
  2. Create a type that wraps a raw pointer and correctly implements Send
  3. Build a simple thread pool that accepts Send + 'static closures

Advanced

  1. Implement a thread-safe LRU cache with Send + Sync bounds
  2. Create a custom smart pointer type with correct Send/Sync inference
  3. Build a work-stealing queue using atomic operations

Real-World Usage

Tokio Runtime

// Tokio requires Send for futures that are spawned
use tokio::runtime::Runtime;

let rt = Runtime::new().unwrap();
rt.spawn(async {
    // This closure must be Send
    println!("Running in Tokio task");
});

Rayon Parallel Iterator

use rayon::prelude::*;

// Elements must be Send to parallelize
let sum: i32 = (0..1000).into_par_iter().sum();

std::sync primitives

  • Mutex<T>: is Sync if T: Send
  • RwLock<T>: is Sync if T: Send + Sync (readers hand out &T across threads)
  • Arc<T>: is Send + Sync if T: Send + Sync

crossbeam

use crossbeam::channel;

// Channel works with any Send type
let (tx, rx) = channel::unbounded::<String>();
