Dynamic Programming

Memoization and tabulation patterns

Level: advanced
Tags: dp, memoization, optimization

What is Dynamic Programming?

Dynamic Programming (DP) solves complex problems by breaking them into overlapping subproblems. By storing solutions to subproblems (memoization or tabulation), we avoid redundant computation.

The Problem

Implementing DP in Rust involves:

  • Memoization: Using HashMap or arrays to cache results
  • Ownership: Passing memoization tables through recursive calls (see the sketch after this list)
  • Lifetimes: When caching references vs. owned values
  • Performance: Choosing between HashMap and array-based storage
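
A minimal sketch of the ownership point above (not part of the lesson code; the FibMemo name is made up): instead of threading a &mut HashMap parameter through every recursive call, the cache can live in a struct that the method borrows mutably. Caching owned u64 values also sidesteps the lifetime questions that come with caching references.

use std::collections::HashMap;

struct FibMemo {
    cache: HashMap<u64, u64>,
}

impl FibMemo {
    fn fib(&mut self, n: u64) -> u64 {
        if n <= 1 {
            return n;
        }
        if let Some(&cached) = self.cache.get(&n) {
            return cached;
        }
        // The cache travels with `self`, so no extra parameter is threaded through.
        let result = self.fib(n - 1) + self.fib(n - 2);
        self.cache.insert(n, result);
        result
    }
}

For example, FibMemo { cache: HashMap::new() }.fib(40) returns the same value as fibonacci_memo(40) below.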

Example Code

use std::collections::HashMap;
use std::cmp::{max, min};

/// Fibonacci with memoization (top-down DP)
pub fn fibonacci_memo(n: u64) -> u64 {
    fn fib_helper(n: u64, memo: &mut HashMap<u64, u64>) -> u64 {
        if n <= 1 {
            return n;
        }

        if let Some(&cached) = memo.get(&n) {
            return cached;
        }

        let result = fib_helper(n - 1, memo) + fib_helper(n - 2, memo);
        memo.insert(n, result);
        result
    }

    let mut memo = HashMap::new();
    fib_helper(n, &mut memo)
}

/// Fibonacci with tabulation (bottom-up DP)
pub fn fibonacci_tab(n: u64) -> u64 {
    if n <= 1 {
        return n;
    }

    let mut dp = vec![0u64; (n + 1) as usize];
    dp[1] = 1;

    for i in 2..=n as usize {
        dp[i] = dp[i - 1] + dp[i - 2];
    }

    dp[n as usize]
}

/// Fibonacci with space optimization
pub fn fibonacci_optimized(n: u64) -> u64 {
    if n <= 1 {
        return n;
    }

    let (mut prev, mut curr) = (0u64, 1u64);
    for _ in 2..=n {
        let next = prev + curr;
        prev = curr;
        curr = next;
    }
    curr
}

/// Longest Common Subsequence
pub fn lcs(s1: &str, s2: &str) -> String {
    let chars1: Vec<char> = s1.chars().collect();
    let chars2: Vec<char> = s2.chars().collect();
    let (m, n) = (chars1.len(), chars2.len());

    // dp[i][j] = length of LCS of s1[0..i] and s2[0..j]
    let mut dp = vec![vec![0usize; n + 1]; m + 1];

    for i in 1..=m {
        for j in 1..=n {
            if chars1[i - 1] == chars2[j - 1] {
                dp[i][j] = dp[i - 1][j - 1] + 1;
            } else {
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]);
            }
        }
    }

    // Backtrack to find the actual subsequence
    let mut result = Vec::new();
    let (mut i, mut j) = (m, n);
    while i > 0 && j > 0 {
        if chars1[i - 1] == chars2[j - 1] {
            result.push(chars1[i - 1]);
            i -= 1;
            j -= 1;
        } else if dp[i - 1][j] > dp[i][j - 1] {
            i -= 1;
        } else {
            j -= 1;
        }
    }

    result.reverse();
    result.into_iter().collect()
}

/// 0/1 Knapsack Problem
pub fn knapsack(weights: &[u32], values: &[u32], capacity: u32) -> u32 {
    let n = weights.len();
    let cap = capacity as usize;

    // dp[i][w] = max value using items 0..i with capacity w
    let mut dp = vec![vec![0u32; cap + 1]; n + 1];

    for i in 1..=n {
        for w in 0..=cap {
            // Don't take item i-1
            dp[i][w] = dp[i - 1][w];

            // Take item i-1 if it fits
            let item_weight = weights[i - 1] as usize;
            if item_weight <= w {
                dp[i][w] = max(dp[i][w], dp[i - 1][w - item_weight] + values[i - 1]);
            }
        }
    }

    dp[n][cap]
}

/// Edit Distance (Levenshtein Distance)
pub fn edit_distance(s1: &str, s2: &str) -> usize {
    let chars1: Vec<char> = s1.chars().collect();
    let chars2: Vec<char> = s2.chars().collect();
    let (m, n) = (chars1.len(), chars2.len());

    // dp[i][j] = edit distance between s1[0..i] and s2[0..j]
    let mut dp = vec![vec![0usize; n + 1]; m + 1];

    // Base cases
    for i in 0..=m {
        dp[i][0] = i; // Delete all characters
    }
    for j in 0..=n {
        dp[0][j] = j; // Insert all characters
    }

    for i in 1..=m {
        for j in 1..=n {
            if chars1[i - 1] == chars2[j - 1] {
                dp[i][j] = dp[i - 1][j - 1]; // No operation needed
            } else {
                dp[i][j] = 1 + min(
                    dp[i - 1][j - 1], // Replace
                    min(dp[i - 1][j], dp[i][j - 1]), // Delete or Insert
                );
            }
        }
    }

    dp[m][n]
}

/// Coin Change - minimum coins needed
pub fn coin_change(coins: &[u32], amount: u32) -> Option<u32> {
    let amount = amount as usize;
    let mut dp = vec![u32::MAX; amount + 1];
    dp[0] = 0;

    for i in 1..=amount {
        for &coin in coins {
            let coin = coin as usize;
            if coin <= i && dp[i - coin] != u32::MAX {
                dp[i] = min(dp[i], dp[i - coin] + 1);
            }
        }
    }

    if dp[amount] == u32::MAX {
        None
    } else {
        Some(dp[amount])
    }
}

/// Maximum Subarray Sum (Kadane's Algorithm)
pub fn max_subarray_sum(arr: &[i32]) -> i32 {
    if arr.is_empty() {
        return 0;
    }

    let mut max_ending_here = arr[0];
    let mut max_so_far = arr[0];

    for &num in &arr[1..] {
        max_ending_here = max(num, max_ending_here + num);
        max_so_far = max(max_so_far, max_ending_here);
    }

    max_so_far
}

fn main() {
    // Fibonacci
    println!("Fib(40) memo: {}", fibonacci_memo(40));
    println!("Fib(40) tab: {}", fibonacci_tab(40));
    println!("Fib(40) opt: {}", fibonacci_optimized(40));

    // Longest Common Subsequence
    let lcs_result = lcs("ABCDGH", "AEDFHR");
    println!("LCS: {}", lcs_result); // "ADH"

    // Knapsack
    let weights = [10, 20, 30];
    let values = [60, 100, 120];
    println!("Knapsack: {}", knapsack(&weights, &values, 50)); // 220

    // Edit Distance
    println!("Edit distance: {}", edit_distance("kitten", "sitting")); // 3

    // Coin Change
    println!("Coin change: {:?}", coin_change(&[1, 2, 5], 11)); // Some(3)

    // Max Subarray
    println!("Max subarray: {}", max_subarray_sum(&[-2, 1, -3, 4, -1, 2, 1, -5, 4])); // 6
}

Why This Works

  1. HashMap for sparse subproblems: When not all states are needed
  2. Vec<Vec<T>> for dense DP: When most states will be computed
  3. Space optimization: Often only the previous row/state is needed (see the knapsack sketch after this list)
  4. Iterative over recursive: Avoids stack overflow for large inputs
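
To illustrate point 3, here is a sketch of the 0/1 knapsack above with the 2-D table collapsed to a single row (the knapsack_1d name is made up for this sketch; it takes the same inputs as knapsack):

/// Space-optimized 0/1 knapsack: keeps only one row of the DP table.
pub fn knapsack_1d(weights: &[u32], values: &[u32], capacity: u32) -> u32 {
    let cap = capacity as usize;
    let mut dp = vec![0u32; cap + 1];

    for i in 0..weights.len() {
        let item_weight = weights[i] as usize;
        // Walk capacities downward so dp[w - item_weight] still holds the
        // previous item's value (the "row above" in the 2-D version).
        for w in (item_weight..=cap).rev() {
            dp[w] = dp[w].max(dp[w - item_weight] + values[i]);
        }
    }

    dp[cap]
}

knapsack_1d(&[10, 20, 30], &[60, 100, 120], 50) returns the same 220 as the 2-D version while using O(capacity) memory instead of O(n * capacity).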

Memoization vs. Tabulation

| Approach               | Pros                                   | Cons                             |
|------------------------|----------------------------------------|----------------------------------|
| Memoization (top-down) | Only computes needed states, intuitive | Recursion overhead, stack limits |
| Tabulation (bottom-up) | No recursion, easier to optimize space | May compute unnecessary states   |
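
To make the "only computes needed states" row concrete, here is a top-down sketch of the coin-change problem from the example code (the coin_change_memo name is made up): only amounts actually reachable from the target are ever inserted into the cache.

use std::collections::HashMap;
use std::cmp::min;

/// Top-down counterpart to `coin_change`: memoizes only the amounts
/// reached while recursing down from `amount`.
pub fn coin_change_memo(coins: &[u32], amount: u32) -> Option<u32> {
    fn helper(coins: &[u32], amount: u32, memo: &mut HashMap<u32, Option<u32>>) -> Option<u32> {
        if amount == 0 {
            return Some(0);
        }
        if let Some(&cached) = memo.get(&amount) {
            return cached;
        }
        let mut best: Option<u32> = None;
        for &coin in coins {
            if coin <= amount {
                if let Some(sub) = helper(coins, amount - coin, memo) {
                    best = Some(best.map_or(sub + 1, |b| min(b, sub + 1)));
                }
            }
        }
        memo.insert(amount, best);
        best
    }
    helper(coins, amount, &mut HashMap::new())
}

With coins = &[5] and amount = 100, only the 20 reachable amounts are cached, whereas the tabulated coin_change fills all 101 entries.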

⚠️ Anti-patterns

// DON'T: Recompute without memoization
fn naive_fib(n: u64) -> u64 {
    if n <= 1 { n }
    else { naive_fib(n - 1) + naive_fib(n - 2) } // O(2^n) time!
}

// DON'T: Use HashMap when an array or Vec suffices
fn overcomplicated_fib(n: u64) -> u64 {
    let mut memo: HashMap<u64, u64> = HashMap::from([(0, 0), (1, 1)]);
    for i in 2..=n {
        let value = memo[&(i - 1)] + memo[&(i - 2)];
        memo.insert(i, value);
    }
    memo[&n] // Consecutive keys: a Vec (or two variables) would be faster and simpler
}

Real-World Usage

  • Text editors: Diff algorithms (edit distance)
  • Bioinformatics: Sequence alignment (LCS)
  • Finance: Portfolio optimization (knapsack)
  • Compilers: Optimal code generation

Exercises

  1. Implement longest increasing subsequence (LIS)
  2. Add path reconstruction to knapsack
  3. Solve the matrix chain multiplication problem
  4. Implement word break with dictionary lookup

🎮 Try it Yourself

Run the example code above in the official Rust Playground (https://play.rust-lang.org/).