I need to perform some expensive calculation, such as determining a Fibonacci number:

```
/// Calculate the Nth Fibonacci number (inefficiently)
func fib(n: Int) -> Int {
    n > 1 ? fib(n: n-1) + fib(n: n-2) : n
}
```

My project contains a number of value types that need to perform calculations like `fib` based on their properties:

```
struct Fibber : Hashable {
    /// The index in the sequence
    var n: Int
    /// The calculated Fibonacci number at _n_
    var fibcalc: Int { fib(n: n) }
}
```

It works fine. But it is slow, because the doubly recursive `fib` does an exponential amount of work!

```
import XCTest

class FibberTests: XCTestCase {
    func testFibCalc() {
        measure { // average: 1.291, relative standard deviation: 1.5%
            var fibber = Fibber(n: 1)
            XCTAssertEqual(1, fibber.fibcalc)
            fibber.n = 25
            XCTAssertEqual(75_025, fibber.fibcalc)
            fibber.n = 39
            XCTAssertEqual(63_245_986, fibber.fibcalc)
        }
    }
}
```

So I make a single global dictionary that is keyed on *source code* location, and contains a map from a `Hashable` instance to the result of some arbitrary calculation:

```
/// Singleton global memoization cache, keyed on source code location and input hashable
private var memoized = Dictionary<String, Dictionary<AnyHashable, Any>>()
```

The cache key will be something like: “function:fibcalc file:Fibber.swift line:47”.

Any `Hashable` instance can use this function to perform and memoize a calculation based on the key type, and to return the cached value on subsequent invocations of the same call. Since `#function`, `#file`, and `#line` appear as default arguments, they capture the *call site*, not the body of `memoize`:

```
extension Hashable {
    /// Caches and returns the result of the `calculation` function.
    public func memoize<T>(function: StaticString = #function, file: StaticString = #file, line: Int = #line, _ calculation: (Self) -> T) -> T {
        let cacheKey = "function:\(function) file:\(file) line:\(line)"
        let hashKey = AnyHashable(self)
        if let cached = memoized[cacheKey]?[hashKey] as? T { return cached }
        if memoized[cacheKey] == nil { memoized[cacheKey] = Dictionary() }
        let calculated = calculation(self)
        memoized[cacheKey]?[hashKey] = calculated
        return calculated
    }
}
```

Memoizing these expensive calculations is now very simple:

```
extension Fibber {
    /// The cached fib. Repeated calls on the same source instance will return the memoized result for this instance.
    var fibmemo: Int { memoize(\.fibcalc) }
}
```

And we get an order-of-magnitude speedup! (The enormous relative standard deviation is expected: only the first of `measure`’s ten iterations pays the full cost, and the rest hit the cache.)

```
extension FibberTests {
    func testFibMemo() {
        measure { // average: 0.132, relative standard deviation: 299.9%
            var fibber = Fibber(n: 1)
            XCTAssertEqual(1, fibber.fibmemo)
            fibber.n = 25
            XCTAssertEqual(75_025, fibber.fibmemo)
            fibber.n = 39
            XCTAssertEqual(63_245_986, fibber.fibmemo)
        }
    }
}
```

*Assumptions:*

- the `Hashable` key will always be a value type (this isn’t currently enforceable in Swift; a debug-time check is sketched below)
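
While the type system can’t express this constraint, a debug-build runtime check is possible. A minimal sketch (the `assertValueTypeKey` helper and its name are my own, not part of the cache above):

```
extension Hashable {
    /// Debug-only guard: traps if a reference type is used as a memoization
    /// key. A metatype matches `AnyClass` only for class types.
    func assertValueTypeKey() {
        let keyType: Any.Type = Self.self
        assert(!(keyType is AnyClass), "memoize keys should be value types; \(keyType) is a class")
    }
}
```

Calling this at the top of `memoize` would catch accidental class keys during development without affecting release builds.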

*Non-Issues:*

- Thread-safety: locking can be added to the cache later (a sketch follows this list)
- Unbounded memory growth: the `memoized` `Dictionary` will be converted to an `NSCache`
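
For reference, the locking could eventually look something like this (a sketch only; `memoizedLock` and `withMemoizedLock` are hypothetical names, and `NSLock` requires Foundation):

```
import Foundation

/// Serialize every read and write of `memoized` through a single lock.
private let memoizedLock = NSLock()

private func withMemoizedLock<T>(_ body: () -> T) -> T {
    memoizedLock.lock()
    defer { memoizedLock.unlock() }
    return body()
}

// Each cache access inside `memoize` would then be wrapped, e.g.:
//     if let cached = withMemoizedLock({ memoized[cacheKey]?[hashKey] }) as? T { return cached }
```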

*Valid Issues:*

- Duck typing: the keys are `AnyHashable` and the values are `Any`, so runtime type conversion is used (yuck; illustrated below)
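
To make that concrete, the hit path in `memoize` depends on a runtime downcast, so a failed cast doesn’t crash anything, it just silently recomputes:

```
// The cached value is stored as `Any` and must be downcast on every hit:
//     if let cached = memoized[cacheKey]?[hashKey] as? T { return cached }
// If the same cacheKey were ever reused with a different T, the cast would
// fail and the value would be silently recomputed and overwritten.
```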

My main question is: is this a good idea? Are there any issues with the cache key using the source location?