RAII Guards

Custom drop guards for resource management

Level: intermediate · Tags: drop, resources, raii

What is RAII?

RAII (Resource Acquisition Is Initialization) is a programming pattern in which a resource's lifetime is tied to an object's lifetime. In Rust, this pattern is enforced through the Drop trait.

The Core Principle:
  • Resource acquired in constructor
  • Resource automatically released in destructor (Drop)
  • No manual cleanup needed - compiler guarantees cleanup
  • Exception-safe (panic-safe in Rust)
{
    let file = File::open("data.txt")?;  // Resource acquired
    // Use file...
}  // Drop called automatically, file closed - even if panic occurs!
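This behavior can be demonstrated with a tiny custom type (the `Guard` name here is purely illustrative): `drop` runs at scope exit, and it also runs during panic unwinding.

```rust
struct Guard(&'static str);

impl Drop for Guard {
    fn drop(&mut self) {
        println!("cleaned up: {}", self.0);
    }
}

fn main() {
    {
        let _g = Guard("scope 1");
        println!("working...");
    } // "cleaned up: scope 1" prints here, at scope exit

    // Drop also runs while the stack unwinds during a panic:
    let result = std::panic::catch_unwind(|| {
        let _g = Guard("panicking scope");
        panic!("boom");
    });
    // "cleaned up: panicking scope" was printed during unwinding
    assert!(result.is_err());
}
```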

Why RAII Guards Are Critical

Problems They Solve:

  1. Forgot to Unlock: Mutex left locked forever (deadlock)
  2. Early Return: Resource leaked because cleanup code never reached
  3. Panic Safety: Resource leaked when panic occurs
  4. Complex Control Flow: Hard to ensure cleanup in all paths
  5. API Misuse: Users forget to call cleanup functions

RAII Solution:

  • Automatic: Cleanup happens automatically when guard drops
  • Exception-Safe: Works even during panics
  • Type-Safe: Can't forget to unlock/release
  • Zero Cost: Same assembly as manual cleanup

Real-World Example 1: Database Transaction Guard (Systems/Web)

use std::sync::Mutex;

/// Database connection with transaction support
pub struct DbConnection {
    conn: Mutex<Connection>,
}

struct Connection {
    transaction_active: bool,
    // In real code: TCP socket, prepared statements, etc.
}

/// Transaction guard - automatically commits or rolls back
pub struct Transaction<'conn> {
    conn: &'conn DbConnection,
    committed: bool,
}

impl DbConnection {
    pub fn new() -> Self {
        Self {
            conn: Mutex::new(Connection {
                transaction_active: false,
            }),
        }
    }

    /// Begin a transaction - returns a guard
    pub fn transaction(&self) -> Result<Transaction, &'static str> {
        let mut conn = self.conn.lock().unwrap();

        if conn.transaction_active {
            return Err("Transaction already active");
        }

        conn.transaction_active = true;
        println!("BEGIN TRANSACTION");

        Ok(Transaction {
            conn: self,
            committed: false,
        })
    }

    /// Execute a query (simplified)
    pub fn execute(&self, sql: &str) -> Result<(), &'static str> {
        println!("EXEC: {}", sql);
        Ok(())
    }
}

impl<'conn> Transaction<'conn> {
    /// Execute query within transaction
    pub fn execute(&self, sql: &str) -> Result<(), &'static str> {
        self.conn.execute(sql)
    }

    /// Explicit commit
    pub fn commit(mut self) -> Result<(), &'static str> {
        let mut conn = self.conn.conn.lock().unwrap();
        println!("COMMIT");
        conn.transaction_active = false;
        self.committed = true;
        Ok(())
    }

    /// Explicit rollback
    pub fn rollback(mut self) -> Result<(), &'static str> {
        let mut conn = self.conn.conn.lock().unwrap();
        println!("ROLLBACK");
        conn.transaction_active = false;
        self.committed = true;  // Mark as handled
        Ok(())
    }
}

impl Drop for Transaction<'_> {
    fn drop(&mut self) {
        if !self.committed {
            // Auto-rollback if not explicitly committed
            let mut conn = self.conn.conn.lock().unwrap();
            println!("AUTO-ROLLBACK (transaction guard dropped without commit)");
            conn.transaction_active = false;
        }
    }
}

// Usage examples
fn transaction_example() {
    let db = DbConnection::new();

    // Example 1: Explicit commit
    {
        let tx = db.transaction().unwrap();
        tx.execute("INSERT INTO users VALUES (1, 'Alice')").unwrap();
        tx.execute("INSERT INTO orders VALUES (1, 100)").unwrap();
        tx.commit().unwrap();  // Explicitly commit
    }  // Transaction completed

    // Example 2: Auto-rollback on early return
    {
        let tx = db.transaction().unwrap();
        tx.execute("INSERT INTO users VALUES (2, 'Bob')").unwrap();

        if some_error_condition() {
            // Early return - tx is dropped and auto-rolls back!
            // (Because some_error_condition() returns true here,
            // Example 3 below never actually executes.)
            return;
        }

        tx.execute("INSERT INTO orders VALUES (2, 200)").unwrap();
        tx.commit().unwrap();
    }

    // Example 3: Auto-rollback on panic
    {
        let result = std::panic::catch_unwind(|| {
            let tx = db.transaction().unwrap();
            tx.execute("INSERT INTO users VALUES (3, 'Charlie')").unwrap();
            panic!("Something went wrong!");
            // Never reached, but transaction still rolls back!
        });

        if result.is_err() {
            println!("Panic occurred, but transaction was rolled back safely");
        }
    }
}

fn some_error_condition() -> bool {
    true  // Simulated error
}

Why RAII is Perfect for Transactions:

  1. Auto-Rollback: Forget to commit? Automatically rolled back
  2. Panic-Safe: Even if code panics, transaction rolls back
  3. Early Return: Complex logic with multiple returns? Still safe
  4. Type System Enforcement: Can't accidentally commit twice
  5. Zero Runtime Cost: Drop is inline, no overhead

Real-World Example 2: Mutex Guard Implementation (Concurrency)

Here's a simplified model of how std::sync::MutexGuard works. (The real std implementation uses OS primitives and lock poisoning; this sketch uses a spinlock for clarity.)

use std::ops::{Deref, DerefMut};
use std::cell::UnsafeCell;

/// Simplified Mutex implementation
pub struct MyMutex<T> {
    locked: std::sync::atomic::AtomicBool,
    data: UnsafeCell<T>,
}

unsafe impl<T: Send> Send for MyMutex<T> {}
unsafe impl<T: Send> Sync for MyMutex<T> {}

/// The RAII guard - automatically unlocks on drop
pub struct MyMutexGuard<'a, T> {
    mutex: &'a MyMutex<T>,
}

impl<T> MyMutex<T> {
    pub fn new(data: T) -> Self {
        Self {
            locked: std::sync::atomic::AtomicBool::new(false),
            data: UnsafeCell::new(data),
        }
    }

    /// Lock the mutex - returns guard
    pub fn lock(&self) -> MyMutexGuard<'_, T> {
        // Spin until we acquire the lock
        while self.locked.swap(true, std::sync::atomic::Ordering::Acquire) {
            std::hint::spin_loop();
        }

        println!("Mutex locked");

        MyMutexGuard { mutex: self }
    }
}

impl<T> Deref for MyMutexGuard<'_, T> {
    type Target = T;

    fn deref(&self) -> &T {
        unsafe { &*self.mutex.data.get() }
    }
}

impl<T> DerefMut for MyMutexGuard<'_, T> {
    fn deref_mut(&mut self) -> &mut T {
        unsafe { &mut *self.mutex.data.get() }
    }
}

impl<T> Drop for MyMutexGuard<'_, T> {
    fn drop(&mut self) {
        // Unlock when guard is dropped
        self.mutex.locked.store(false, std::sync::atomic::Ordering::Release);
        println!("Mutex unlocked");
    }
}

// Usage
fn mutex_guard_example() {
    let data = MyMutex::new(vec![1, 2, 3]);

    {
        let mut guard = data.lock();  // Acquires lock
        guard.push(4);                // Can mutate through guard
        guard.push(5);
    }  // Guard dropped, mutex automatically unlocked

    // Can lock again
    let guard2 = data.lock();
    println!("Data: {:?}", *guard2);
}  // Unlocked again

Key RAII Techniques:

  1. Deref/DerefMut: Guard acts like &T or &mut T
  2. Lifetime Binding: Guard can't outlive the Mutex
  3. Automatic Unlock: Drop trait handles cleanup
  4. Move Semantics: Can't duplicate guard (ownership prevents double-unlock)

Real-World Example 3: File Lock Guard (Systems)

use std::fs::File;
use std::io::{self, Write};

/// Exclusive file lock that releases on drop
pub struct FileLock {
    file: File,
    path: String,
}

impl FileLock {
    /// Acquire exclusive lock on file
    pub fn acquire(path: &str) -> io::Result<Self> {
        // In real code: use OS-level file locking (flock/LockFileEx)
        println!("Acquiring lock on {}", path);

        let file = File::create(format!("{}.lock", path))?;

        Ok(FileLock {
            file,
            path: path.to_string(),
        })
    }

    /// Write to file while holding lock
    pub fn write(&mut self, data: &[u8]) -> io::Result<()> {
        self.file.write_all(data)
    }
}

impl Drop for FileLock {
    fn drop(&mut self) {
        println!("Releasing lock on {}", self.path);
        // In real code: release OS lock
        std::fs::remove_file(format!("{}.lock", self.path)).ok();
    }
}

// Usage
fn file_lock_example() -> io::Result<()> {
    let mut lock = FileLock::acquire("important_data.txt")?;

    // Critical section - file is locked
    lock.write(b"Critical data")?;

    if some_error_occurs() {
        return Err(io::Error::new(io::ErrorKind::Other, "Error occurred"));
        // Lock automatically released even though we're returning early!
    }

    lock.write(b" more data")?;

    Ok(())
}  // Lock released here automatically

fn some_error_occurs() -> bool {
    false
}

Real-World Example 4: Span Guard for Tracing (Observability)

use std::time::Instant;

/// Tracing span that records timing automatically
pub struct Span {
    name: &'static str,
    start: Instant,
}

impl Span {
    /// Enter a new span
    pub fn enter(name: &'static str) -> Self {
        println!("[TRACE] Entering span: {}", name);
        Self {
            name,
            start: Instant::now(),
        }
    }
}

impl Drop for Span {
    fn drop(&mut self) {
        let elapsed = self.start.elapsed();
        println!("[TRACE] Exiting span: {} (took {:?})", self.name, elapsed);
    }
}

// Usage - automatic timing!
fn process_request() {
    let _span = Span::enter("process_request");

    {
        let _span = Span::enter("parse_headers");
        std::thread::sleep(std::time::Duration::from_millis(10));
    }  // parse_headers span ends, timing recorded

    {
        let _span = Span::enter("fetch_from_db");
        std::thread::sleep(std::time::Duration::from_millis(50));
    }  // fetch_from_db span ends

    {
        let _span = Span::enter("serialize_response");
        std::thread::sleep(std::time::Duration::from_millis(5));
    }  // serialize_response span ends

}  // process_request span ends

Example output (exact timings will vary):

[TRACE] Entering span: process_request
[TRACE] Entering span: parse_headers
[TRACE] Exiting span: parse_headers (took 10.123ms)
[TRACE] Entering span: fetch_from_db
[TRACE] Exiting span: fetch_from_db (took 50.456ms)
[TRACE] Entering span: serialize_response
[TRACE] Exiting span: serialize_response (took 5.789ms)
[TRACE] Exiting span: process_request (took 66.789ms)

Tracing Benefits:

  • Automatic Timing: No manual start/stop calls
  • Nested Spans: RAII handles nesting automatically
  • Panic-Safe: Timing recorded even if code panics
  • Zero Boilerplate: Just wrap code in span

Advanced Pattern: Scoped Guards

/// Thread pool with scoped thread guard
pub struct ThreadPool {
    threads: Vec<std::thread::JoinHandle<()>>,
}

/// Guard that ensures all threads join before dropping
pub struct ThreadPoolGuard<'pool> {
    pool: &'pool mut ThreadPool,
    joined: bool,
}

impl ThreadPool {
    pub fn new(size: usize) -> Self {
        Self {
            threads: Vec::with_capacity(size),
        }
    }

    /// Spawn threads - returns guard
    pub fn scoped(&mut self) -> ThreadPoolGuard {
        ThreadPoolGuard {
            pool: self,
            joined: false,
        }
    }
}

impl ThreadPoolGuard<'_> {
    /// Spawn a thread in this pool
    pub fn spawn<F>(&mut self, f: F)
    where
        F: FnOnce() + Send + 'static,
    {
        let handle = std::thread::spawn(f);
        self.pool.threads.push(handle);
    }

    /// Explicitly join all threads
    pub fn join(mut self) {
        for handle in self.pool.threads.drain(..) {
            handle.join().ok();
        }
        self.joined = true;
    }
}

impl Drop for ThreadPoolGuard<'_> {
    fn drop(&mut self) {
        if !self.joined {
            println!("Auto-joining threads...");
            for handle in self.pool.threads.drain(..) {
                handle.join().ok();
            }
        }
    }
}
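A minimal usage sketch for the scoped pool above. The pool and guard types are reproduced here in condensed form (with the explicit `join()` method omitted) so the snippet compiles on its own:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

pub struct ThreadPool {
    threads: Vec<thread::JoinHandle<()>>,
}

pub struct ThreadPoolGuard<'pool> {
    pool: &'pool mut ThreadPool,
}

impl ThreadPool {
    pub fn new(size: usize) -> Self {
        Self { threads: Vec::with_capacity(size) }
    }

    pub fn scoped(&mut self) -> ThreadPoolGuard<'_> {
        ThreadPoolGuard { pool: self }
    }
}

impl ThreadPoolGuard<'_> {
    pub fn spawn<F: FnOnce() + Send + 'static>(&mut self, f: F) {
        self.pool.threads.push(thread::spawn(f));
    }
}

impl Drop for ThreadPoolGuard<'_> {
    fn drop(&mut self) {
        // Guard dropped: join every spawned thread
        for handle in self.pool.threads.drain(..) {
            handle.join().ok();
        }
    }
}

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let mut pool = ThreadPool::new(4);
    {
        let mut scope = pool.scoped();
        for _ in 0..4 {
            let c = Arc::clone(&counter);
            scope.spawn(move || { c.fetch_add(1, Ordering::SeqCst); });
        }
    } // guard dropped here: all threads are joined before we continue
    assert_eq!(counter.load(Ordering::SeqCst), 4);
}
```

Because the guard joins on drop, the worker threads are guaranteed to have finished before the counter is read.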

Guard Design Patterns

Pattern 1: Explicit vs Implicit Completion

impl Transaction {
    // Explicit: user must call commit()
    pub fn commit(self) {
        // Consumes self - can't use after commit
    }
}

impl Drop for Transaction {
    fn drop(&mut self) {
        if !self.committed {
            // Auto-rollback if not committed
        }
    }
}
Trade-off:
  • ✅ Safe: forgot to commit? Auto-rolls back
  • ❌ Potential surprise: users might expect auto-commit

Pattern 2: Defusing Guards

impl MutexGuard {
    /// Forget to unlock (dangerous!)
    pub fn leak(self) {
        std::mem::forget(self);  // Drop never called
    }
}
When to use: Transferring ownership to C FFI
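A sketch of what defusing means in practice, using a hypothetical `Flagged` type and an atomic flag to observe whether Drop ran:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

struct Flagged;

impl Drop for Flagged {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    // Defused: mem::forget consumes the value without running Drop
    std::mem::forget(Flagged);
    assert!(!DROPPED.load(Ordering::SeqCst));

    // Normal path: dropping the value runs Drop
    drop(Flagged);
    assert!(DROPPED.load(Ordering::SeqCst));
}
```

Forgetting a real guard this way leaves the underlying resource held (e.g. a mutex stays locked), which is exactly why it is reserved for cases like handing ownership across an FFI boundary.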

Pattern 3: Guard Stacking

// Multiple guards can stack
let guard1 = mutex1.lock();
let guard2 = mutex2.lock();
let guard3 = mutex3.lock();
// All unlock in reverse order (LIFO)
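The LIFO drop order can be verified with a small illustrative `Named` type that logs its own drop into a shared list:

```rust
use std::sync::Mutex;

static LOG: Mutex<Vec<&'static str>> = Mutex::new(Vec::new());

struct Named(&'static str);

impl Drop for Named {
    fn drop(&mut self) {
        LOG.lock().unwrap().push(self.0);
    }
}

fn main() {
    {
        let _a = Named("a");
        let _b = Named("b");
        let _c = Named("c");
    }
    // Locals drop in reverse declaration order: c, then b, then a
    assert_eq!(*LOG.lock().unwrap(), ["c", "b", "a"]);
}
```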

āš ļø Anti-Patterns and Common Mistakes

āš ļø āŒ Mistake #1: Ignoring Guard (Immediate Drop)

// BAD: Guard is a temporary - dropped at the end of the statement!
mutex.lock();  // Lock acquired...and immediately released!
data.push(1);  // NOT protected!

// GOOD: Bind the guard to a variable
let _guard = mutex.lock();  // Stays locked until end of scope
data.push(1);  // Protected
// Note: `let _ = mutex.lock();` also drops immediately - `_` is not a binding

āš ļø āŒ Mistake #2: Holding Guards Too Long

// BAD: Holding lock while doing I/O
let _guard = data.lock();
expensive_network_call();  // Lock held during slow operation!

// GOOD: Copy what you need, then release the lock
let snapshot = {
    let guard = data.lock();
    guard.clone()
};  // Lock released here
expensive_network_call_with(snapshot);

āš ļø āŒ Mistake #3: Deadlock with Guard Ordering

// Thread 1
let _g1 = mutex_a.lock();
let _g2 = mutex_b.lock();  // Deadlock if Thread 2 locked b first!

// Thread 2
let _g1 = mutex_b.lock();
let _g2 = mutex_a.lock();  // Deadlock!

// FIX: Always acquire locks in same order
let _g1 = mutex_a.lock();
let _g2 = mutex_b.lock();
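A runnable sketch of the fix, assuming two std Mutexes shared via Arc: both threads take the locks in the same a-then-b order, so no wait cycle can form.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));

    let mut handles = Vec::new();
    for _ in 0..2 {
        let (a, b) = (Arc::clone(&a), Arc::clone(&b));
        handles.push(thread::spawn(move || {
            // Both threads lock `a` before `b`: consistent order, no deadlock
            let mut ga = a.lock().unwrap();
            let mut gb = b.lock().unwrap();
            *ga += 1;
            *gb += 1;
        })); // guards drop here in reverse order: b, then a
    }
    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*a.lock().unwrap(), 2);
    assert_eq!(*b.lock().unwrap(), 2);
}
```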

When to Use RAII Guards

✅ Use RAII Guards When:

  1. Paired Operations: Acquire/release, lock/unlock, begin/end
  2. Resource Management: Files, sockets, memory, handles
  3. Scope-Based Cleanup: Want cleanup at scope exit
  4. Panic Safety Required: Must cleanup even during panics
  5. Complex Control Flow: Multiple return paths

āŒ Avoid RAII Guards When:

  1. Resource Shared Across Scopes: Need manual control over when release happens
  2. Async Code: A blocking guard held across an .await can stall the executor (prefer async-aware locks)
  3. Performance Critical: Drop overhead unacceptable (rare)
  4. Simple Operations: Overkill for trivial cleanup

Performance Characteristics

| Aspect | Cost | Notes |
|--------|------|-------|
| Guard creation | ~0 cycles | Inline initialization |
| Guard drop | ~0-10 cycles | Inline cleanup |
| Deref | 0 cycles | Compiler optimizes it away |
| Memory | 1-2 words | Pointer plus metadata |
| Binary size | Minimal | Inline expansion |

Conclusion: RAII guards are effectively zero-cost.

Exercises

Exercise 1: Build a Timer Guard

Create a guard that measures execution time of a scope.

Hints:
  • Start timer in constructor
  • Print elapsed in Drop
  • Use std::time::Instant

Exercise 2: Resource Pool Guard

Implement a connection pool with guards that return connections on drop.

Hints:
  • Pool tracks available connections
  • Guard holds connection
  • Drop returns to pool

Exercise 3: Scope Guard with Callback

Create a generic guard that runs a callback on drop.

Hints:
  • Store closure in guard
  • Call closure in Drop
  • Use for custom cleanup logic

Further Reading

Real-World Usage

🦀 Tokio

Uses guards for runtime context, task locals, and tracing spans.

🦀 tracing

Span guards automatically record timing on drop.

🦀 parking_lot

High-performance Mutex with guard pattern.
