Article

Exploring Memoization in Swift

Chris Sessions

August 11, 2025

This is the first article in the Swift Macro Testing series. The purpose of the series is to explore a useful triad of testing approaches we use at Livefront when building Swift macros.

In this article, we look at an optimization technique called memoization, where you cache the results of pure function calls and return the cached results on subsequent calls. In the rest of the series, we’ll use our findings here to build a @Memoized Swift macro. The purpose of this article is to see what memoization looks like and why you’d use it.
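As a minimal sketch of the general idea, memoization can be expressed as a generic wrapper around any pure function. The memoize helper below is illustrative only and is not the approach the rest of this series builds toward:

// Illustrative sketch: wrap a pure (Hashable) -> Output function so that
// repeated calls with the same input return a cached result.
func memoize<Input: Hashable, Output>(
    _ body: @escaping (Input) -> Output
) -> (Input) -> Output {
    var cache = [Input: Output]()
    return { input in
        if let cached = cache[input] {
            return cached
        }
        let result = body(input)
        cache[input] = result
        return result
    }
}

let slowSquare = memoize { (n: Int) -> Int in
    // Imagine an expensive pure computation here.
    n * n
}
slowSquare(12) // computed on the first call
slowSquare(12) // returned from the cache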

Fibonacci Numbers

Memoization is valuable when computing the result of a pure function takes a long time. When results are cached, you only have to compute the result once for any given input. Each subsequent call returns the cached result.

A classic example of a pure function that can become computationally expensive is one that recursively computes the nᵗʰ Fibonacci number.

func fibonacci(_ n: Int) -> Int {
    if n < 0 {
        0
    } else if n < 2 {
        n
    } else {
        fibonacci(n - 1) + fibonacci(n - 2)
    }
}

While the early Fibonacci numbers don’t present a problem, the function has a time complexity of O(2ⁿ), which means that its compute time grows exponentially.

fibonacci(10) // 177 calls             Instant
fibonacci(15) // 1,973 calls           Instant
fibonacci(20) // 21,891 calls          Nearly instant
fibonacci(25) // 242,785 calls         Several seconds
fibonacci(50) // >40 billion calls     Hours or days

The first 20 or so Fibonacci numbers generate without any noticeable delay, but by the 25th Fibonacci number, computation takes several seconds. By the 50th Fibonacci number, compute time estimates range from hours to days.
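Those call counts follow directly from the recursion: every call to fibonacci(n) with n ≥ 2 makes two more calls, so the total count T(n) satisfies T(n) = T(n - 1) + T(n - 2) + 1. A quick sketch with a hypothetical callCount helper reproduces the numbers above:

// Illustrative helper: counts the calls the naive recursion makes,
// following T(n) = T(n - 1) + T(n - 2) + 1 with T(0) = T(1) = 1.
func callCount(_ n: Int) -> Int {
    n < 2 ? 1 : callCount(n - 1) + callCount(n - 2) + 1
}

callCount(10) // 177
callCount(20) // 21,891
callCount(25) // 242,785
// The closed form is 2 * fibonacci(n + 1) - 1, which puts the count for
// fibonacci(50) at roughly 40.7 billion calls.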

To make matters worse, each identical call carries the same cost. If calling fibonacci(n) takes one minute, then calling it twice takes two minutes, calling it three times takes three minutes, and so on.

Shallow Memoization

There’s a clear need to cache the fibonacci results, so let’s implement a memoized variant. To do so we need three things:

  1. A fibonacci function that computes Fibonacci numbers
  2. A cache that stores the computed results
  3. A memoizedFibonacci function that reads and writes cached results

final class Sample {
    var counter = 0

    func fibonacci(_ n: Int) -> Int {
        counter += 1

        return if n < 0 {
            0
        } else if n < 2 {
            n
        } else {
            fibonacci(n - 1) + fibonacci(n - 2)
        }
    }

    private var _fibonacciCache = [Int: Int]()

    func memoizedFibonacci(_ n: Int) -> Int {
        if let cached = _fibonacciCache[n] {
            return cached
        }
        let result = fibonacci(n)
        _fibonacciCache[n] = result
        return result
    }
}

let original = Sample()
original.fibonacci(25) // result: 75,025
original.counter // 242,785 calls

original.fibonacci(25) // result: 75,025
original.counter // 485,570 calls

let memoized = Sample()
memoized.memoizedFibonacci(25) // result: 75,025
memoized.counter // 242,785 calls

memoized.memoizedFibonacci(25) // result: 75,025
memoized.counter // 242,785 calls

Adding a counter allows us to track how many times fibonacci is called. Running the code in a playground reveals that calling fibonacci(25) recomputes the result every time, whereas calling memoizedFibonacci(25) computes the result only once, caches it, and returns the cached value for all subsequent calls.

Holding the line at 242,785 calls is better than letting the count grow with every repeated call. However, calculating the 25th Fibonacci number is still expensive the first time, and we’re still looking at hours or days to calculate the 50th Fibonacci number. Such is the reality of O(2ⁿ) time complexity.

We could at this point explore other options for calculating large Fibonacci numbers, but since this article focuses on memoization, let’s stick with our recursive algorithm and see if we can further optimize it.
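One such option, shown purely for comparison, is an iterative implementation; the iterativeFibonacci name below is illustrative:

// For comparison only: computes the nth Fibonacci number in linear time
// with a running pair of values instead of recursion or a cache.
func iterativeFibonacci(_ n: Int) -> Int {
    guard n > 0 else { return 0 }
    var (previous, current) = (0, 1)
    for _ in 1..<n {
        (previous, current) = (current, previous + current)
    }
    return current
}

iterativeFibonacci(50) // 12,586,269,025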

Deep Memoization

Looking at the fibonacci function, it’s clear that the O(2ⁿ) time complexity stems from its two recursive calls:

fibonacci(n - 1) + fibonacci(n - 2)

If memoizedFibonacci is more efficient than fibonacci, perhaps we could improve performance by using it for these recursive calls.

final class Sample {
    var counter = 0

    func fibonacci(_ n: Int) -> Int {
        counter += 1

        return if n < 0 {
            0
        } else if n < 2 {
            n
        } else {
            memoizedFibonacci(n - 1) + memoizedFibonacci(n - 2)
        }
    }

    private var _fibonacciCache = [Int: Int]()

    func memoizedFibonacci(_ n: Int) -> Int {
        if let cached = _fibonacciCache[n] {
            return cached
        }
        let result = fibonacci(n)
        _fibonacciCache[n] = result
        return result
    }
}

let sample10 = Sample()
sample10.fibonacci(10) // result: 55
sample10.counter // 11 calls

let sample25 = Sample()
sample25.fibonacci(25) // result: 75,025
sample25.counter // 26 calls

let sample50 = Sample()
sample50.fibonacci(50) // result: 12,586,269,025
sample50.counter // 51 calls

Running this code in a playground demonstrates that this modification delivers remarkable results. It changes the time complexity from exponential O(2ⁿ) to linear O(n).

This dramatic improvement occurs because now each Fibonacci number is calculated exactly once and then cached. When calculating fibonacci(n), the function computes and caches values from fibonacci(0) through fibonacci(n - 1), building the cache from the bottom up. Each subsequent recursive call benefits from these cached results, which eliminates the exponential branching of calculations that occurred in the original implementation.
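One way to watch the bottom-up fill happen is to peek at the cache size after a single call. The cachedEntryCount property below is a hypothetical addition for illustration; because an extension in the same playground file can read Sample’s private members, it can see the cache directly:

// Hypothetical helper: expose the cache size so we can confirm that
// fibonacci(0) through fibonacci(24) were each computed and cached once.
extension Sample {
    var cachedEntryCount: Int { _fibonacciCache.count }
}

let inspected = Sample()
inspected.fibonacci(25)    // result: 75,025
inspected.cachedEntryCount // 25 entries: fibonacci(0) through fibonacci(24)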

Calling fibonacci(50) demonstrates the magnitude of this change. Without caching, the recursive algorithm makes more than 40 billion function calls. With caching, that enormous number is reduced to 51, and the result is instantaneous.

Imagining a @Memoized Macro

This article has demonstrated the power and utility of memoization. The only drawback is that the cache storage, lookup, and update mechanisms add clutter, which obscures the original function’s logic.

Fortunately, Swift provides a way to eliminate this clutter. The cache-related mechanisms are essentially the same regardless of which function we memoize, and this pattern of repetitive boilerplate code is exactly the kind of problem that Swift macros were designed to solve.
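To see how repetitive that boilerplate is, imagine memoizing a second, unrelated pure function. The AnotherSample and slowTransform names below are hypothetical, but the cache storage, lookup, and update code follows exactly the same pattern:

final class AnotherSample {
    // Stands in for any expensive pure computation.
    func slowTransform(_ input: String) -> Int {
        input.unicodeScalars.reduce(0) { $0 + Int($1.value) }
    }

    // Only the names change; the memoization mechanics are identical.
    private var _slowTransformCache = [String: Int]()

    func memoizedSlowTransform(_ input: String) -> Int {
        if let cached = _slowTransformCache[input] {
            return cached
        }
        let result = slowTransform(input)
        _slowTransformCache[input] = result
        return result
    }
}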

The Swift Macro Testing series continues in “Swift Macro Expansion Testing,” where we begin developing a macro that encapsulates the memoization logic we’ve just explored. By the end of the series, memoizing a function will be as simple as this:

struct Sample {
    @Memoized
    func fibonacci(_ n: Int) -> Int {
        if n < 0 {
            0
        } else if n < 2 {
            n
        } else {
            memoizedFibonacci(n - 1) + memoizedFibonacci(n - 2)
        }
    }
}

Sample().fibonacci(42) // result: 267,914,296

In the next article, we’ll explore how to create this macro using expansion testing, the first approach in our testing triad.