functional programming – Efficiently calculate value of Pascal’s Triangle using memoization and recursion (updated)

Follow up to Efficiently calculate value of Pascal’s Triangle using memoization and recursion

Based on feedback given by jvwh in the answers

import scala.collection.mutable

def pascal(c: Int, r: Int): Int = {
  if (c < 0 || r < 0 || c > r) throw new IllegalArgumentException()
  // use a cache; map (c, r) pair to value
  // fill in values recursively
  val cache: mutable.Map[(Int, Int), Int] =
    mutable.Map((0 to r).flatMap(n => Seq((0, n) -> 1, (n, n) -> 1)): _*)

  def getPascalValue(c: Int, r: Int): Int =
    cache.getOrElseUpdate((c, r), getPascalValue(c, r - 1) +
      getPascalValue(c - 1, r - 1))

  getPascalValue(c, r)
}

This adds explicit handling of the invariants, and uses flatMap to populate the edges with 1 values rather than a for loop. The key difference is that it uses a mutable map that saves previously computed values. Run time for pascal(5, 105) dropped from around 20 s to 3 ms. I'm pretty happy with the efficiency, but more feedback is definitely welcome.
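For readers more comfortable in Python, the same top-down recurrence can be sketched with `functools.lru_cache` standing in for the mutable map (a minimal sketch, not the Scala code under review):

```python
from functools import lru_cache

def pascal(c: int, r: int) -> int:
    if c < 0 or r < 0 or c > r:
        raise ValueError("require 0 <= c <= r")

    @lru_cache(maxsize=None)
    def value(c: int, r: int) -> int:
        # edges of the triangle are 1
        if c == 0 or c == r:
            return 1
        # interior entries are the sum of the two entries above
        return value(c, r - 1) + value(c - 1, r - 1)

    return value(c, r)

print(pascal(2, 4))  # → 6
```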

rust – n-th Fibonacci number with memoization

I am learning Rust and decided to create a simple program to calculate n-th Fibonacci number. It uses a vector for memoization. The program works, but what would you suggest to improve here? Maybe store memo inside fib function as a static variable?

I tried to use u32 everywhere, but 48-th fib number doesn’t fit into u32, so I decided to use u64.
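The cutoff is easy to verify with a quick sketch in Python, whose integers do not overflow: F(47) = 2971215073 still fits in a u32 (max 4294967295), but F(48) = 4807526976 does not:

```python
def fib(n: int) -> int:
    # simple iterative Fibonacci, F(0) = 0, F(1) = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

U32_MAX = 2**32 - 1
print(fib(47), fib(47) <= U32_MAX)  # 2971215073 True
print(fib(48), fib(48) <= U32_MAX)  # 4807526976 False
```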

use std::io;
use std::io::Write;
use std::str::FromStr;

fn fib(n: u32, memo: &mut Vec<u64>) -> u64 {
    if n < 2 {
        return n as u64;
    } else if memo[(n - 1) as usize] != 0 {
        return memo[(n - 1) as usize];
    }

    memo[(n - 1) as usize] = fib(n - 1, memo) + fib(n - 2, memo);
    memo[(n - 1) as usize]
}

fn main() {
    loop {
        print!("n = ");
        io::stdout().flush().unwrap();

        let mut str: String = String::new();
        io::stdin().read_line(&mut str)
            .expect("Error getting number");

        match u32::from_str(str.trim()) {
            Ok(n) => {
                let mut memo: Vec<u64> = vec![0; n as usize];
                println!("fib({})={}", n, fib(n, &mut memo));
            }
            Err(_e) => {
                println!("Error parsing number.");
                println!("Try again");
            }
        }
    }
}

c++ – C++17 Recursive Fibonacci calculation with memoization

Compiler is g++ 4.2. I’m new to C++, but I’ve done a lot of data science, web scraping, and some socket programming in Python. This code generates the nth Fibonacci number, either with a naive implementation or with caching.

#include <iostream>
#include <string>
#include <unordered_map>
#define BIG unsigned long long int

std::unordered_map<int, BIG> umap;

int fib(int n) {
    if (n < 2) {
        return n;
    }
    return fib(n-1) + fib(n-2);
}

BIG fib_with_memo(int n) {
    if (umap.find(n) == umap.end()) { // if F_n not in cache, calculate, cache, and return it
        BIG val = fib_with_memo(n-1) + fib_with_memo(n-2);
        umap[n] = val;
        return val;
    }
    return umap[n]; // otherwise return the cached value
}

int main(int argc, char** argv) {
    umap[0] = 0;
    umap[1] = 1;

    int input = std::stoi( std::string(argv[1]) );
    std::cout << fib_with_memo(input) << std::endl;
    return 0;
}

I used a std::unordered_map (hash table) because its lookup/insertion times are $O(1)$ (compared to std::map, which is $O(\log n)$ for both).

C++ Implementation of a Python-like memoization decorator

Coming from a Python background, one thing I really miss in C++ is a memoization decorator (like functools.lru_cache). As I sometimes compete on Codeforces, I found myself implementing something similar in C++17 in case I ever need a quick and easy way to memoize function calls. I was wondering whether I could get some feedback on my implementation, and whether something like this could be effective and practical to use in production code.
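For context, this is roughly the Python pattern being emulated: `functools.lru_cache` wraps a function and caches results keyed on its arguments:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # each distinct n is computed only once; repeats hit the cache
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # → 832040, computed in linear rather than exponential time
```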


#include <bits/stdc++.h>

// Using a map
template <typename RetType, typename... ArgTypes>
auto memoize(std::function<RetType(ArgTypes...)> func) -> decltype(func) {
    return [
        memo = std::map<std::tuple<ArgTypes...>, RetType>(),
        func = std::move(func)
    ](ArgTypes... args) mutable -> RetType {
        auto key = std::make_tuple(args...);
        if (auto it = memo.find(key); it != memo.end()) {
            return it->second;
        }
        auto [it, emplaced] = memo.emplace(key, func(args...));
        return it->second;
    };
}

// Using an unordered_map

// From boost
template<typename T>
inline void hash_combine(std::size_t& seed, const T& val) {
    std::hash<T> hasher;
    seed ^= hasher(val) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

template <typename RetType, typename... ArgTypes>
auto memoizeHashed(std::function<RetType(ArgTypes...)> func) -> decltype(func) {
    using KeyType = std::tuple<ArgTypes...>;
    struct KeyHashFnc {
        size_t operator() (const KeyType& key) const {
            size_t result = 0;
            std::apply([&result](auto&&... args) {
                (hash_combine(result, std::forward<decltype(args)>(args)), ...);
            }, key);
            return result;
        }
    };
    return [
        memo = std::unordered_map<KeyType, RetType, KeyHashFnc>(),
        func = std::move(func)
    ](ArgTypes... args) mutable -> RetType {
        auto key = std::make_tuple(args...);
        if (auto it = memo.find(key); it != memo.end()) {
            return it->second;
        }
        auto [it, emplaced] = memo.emplace(key, func(args...));
        return it->second;
    };
}

// Intended usage (solves the problem):
using namespace std;
int INF = 1e9+5;
class Solution {
public:
    int minDifficulty(const vector<int>& arr, int d) {
        int N = arr.size();

        using DpFuncType = function<int(int,int)>;
        DpFuncType dp = [&dp, &arr, N](int i, int k) -> int {
            int n = N - i;

            if (k <= 0) {
                return -1;
            } else if (k > n) {
                return -1;
            } else if (k == n) {
                return accumulate(arr.begin()+i, arr.end(), 0);
            } else { // k < n
                if (k == 1) {
                    return *max_element(arr.begin()+i, arr.end());
                } else {
                    int result = INF;
                    int maxVal = arr[i];
                    for (int j = i+1; j < N; ++j) {
                        int curr = dp(j, k-1);
                        if (curr != -1) {
                            result = min(result, maxVal + curr);
                        }
                        maxVal = max(maxVal, arr[j]);
                    }

                    if (result == INF) {
                        return -1;
                    } else {
                        return result;
                    }
                }
            }
        };
        dp = memoizeHashed(dp); // memoize(...) can also be used here

        return dp(0, d);
    }
};

memoization – Generic memoize function in Swift

I need to perform some expensive calculation, such as determining a Fibonacci number:

/// Calculate the Nth Fibonacci number (inefficiently)
func fib(n: Int) -> Int {
    n > 1 ? fib(n: n-1) + fib(n: n-2) : n
}

My project contains a number of value types that need to perform calculations like fib based on their properties:

struct Fibber : Hashable {
    /// The index in the sequence
    var n: Int

    /// The calculated Fibonacci number at _n_
    var fibcalc: Int { fib(n: n) }
}

It works fine. But it is slow!

class FibberTests: XCTestCase {
    func testFibCalc() {
        measure { // average: 1.291, relative standard deviation: 1.5%
            var fibber = Fibber(n: 1)
            XCTAssertEqual(1, fibber.fibcalc)
            fibber.n = 25
            XCTAssertEqual(75_025, fibber.fibcalc)
            fibber.n = 39
            XCTAssertEqual(63_245_986, fibber.fibcalc)
        }
    }
}

So I make a single global dictionary that is keyed on source code location, and contains a map from a Hashable instance to the result of some arbitrary calculation:

/// Singleton global memoization cache, keyed on source code location and input hashable
private var memoized = Dictionary<String, Dictionary<AnyHashable, Any>>()

The cache key will be something like: “function:fibcalc file:Fibber.swift line:47”.

Any Hashable instance can utilize this function to perform and memoize a calculation based on the key type, and return that cached value on subsequent invocations of the same call:

extension Hashable {
    /// Caches and returns the result of the `calculation` function.
    public func memoize<T>(function: StaticString = #function, file: StaticString = #file, line: Int = #line, _ calculation: (Self) -> T) -> T {
        let cacheKey = "function:\(function) file:\(file) line:\(line)"
        let hashKey = AnyHashable(self)
        if let cached = memoized[cacheKey]?[hashKey] as? T { return cached }
        if memoized[cacheKey] == nil { memoized[cacheKey] = Dictionary() }
        let calculated = calculation(self)
        memoized[cacheKey]?[hashKey] = calculated
        return calculated
    }
}

Memoizing these expensive calculations is now very simple:

extension Fibber {
    /// The cached fib. Repeated calls on the same source instance will return the memoized result for this instance.
    var fibmemo: Int { memoize(\.fibcalc) }
}

And we get an order-of-magnitude speedup!

extension FibberTests {
    func testFibMemo() {
        measure { // average: 0.132, relative standard deviation: 299.9%
            var fibber = Fibber(n: 1)
            XCTAssertEqual(1, fibber.fibmemo)
            fibber.n = 25
            XCTAssertEqual(75_025, fibber.fibmemo)
            fibber.n = 39
            XCTAssertEqual(63_245_986, fibber.fibmemo)
        }
    }
}


Assumptions:

  • the Hashable key will always be a value type (this isn’t currently enforceable in Swift)


Deferred Issues:

  • Thread-safety: locking can be added to the cache later
  • Unbounded memory growth: the memoized Dictionary will be converted to an NSCache

Valid Issues:

  • Duck typing: the keys are AnyHashable and the values are Any, so runtime type conversion is used (yuck)

My main question is: is this a good idea? Are there any issues with the cache key using the source location?

memoization – An algorithm for finding "truncatable" words?

My son, who is just learning to read, likes to hide a letter of a word with his finger and ask what the rest of the word says. His favorite is turning "her" into "he" and "then" into "the".

As he starts learning bigger words, I want him to find "truncatable" words, similar to the concept of a truncatable prime number. These are dictionary words that are still dictionary words when a character is removed from the left or right side. A fully "truncatable" word would remain a word all the way down to one letter, but I am only searching for those that can be truncated down to three letters:


What is the best algorithm to do this? My initial naive approaches are horribly inefficient for the dictionary file I use. Currently, I literally test every word in the dictionary, recursively; the time and space requirements are intractable, and the method ends up testing each word several times.

Is there an efficient way, memoized or otherwise optimized, to find truncatable words?
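Since no code accompanies the question, one common approach can be sketched in Python, assuming the dictionary fits in memory: load the words into a set for O(1) lookups and memoize the truncatability check so each word is examined at most once (the tiny WORDS list here is a hypothetical stand-in for a real dictionary file):

```python
from functools import lru_cache

# hypothetical stand-in word list; in practice, load your dictionary file into a set
WORDS = frozenset(["her", "he", "then", "the", "eat", "at", "a"])
MIN_LEN = 3  # only truncate down to three letters, per the question

@lru_cache(maxsize=None)
def truncatable(word: str) -> bool:
    """True if `word` can shrink to MIN_LEN letters, removing one character
    at a time from either end, staying a dictionary word at every step."""
    if word not in WORDS:
        return False
    if len(word) <= MIN_LEN:
        return True
    return truncatable(word[1:]) or truncatable(word[:-1])

found = sorted(w for w in WORDS if len(w) > MIN_LEN and truncatable(w))
print(found)  # → ['then']
```

The memoization matters because many long words share short suffixes and prefixes; each candidate substring is resolved once and reused.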

javascript – Recursive knapsack with memoization does not behave as expected

I know that similar questions have already been asked on the internet, but I'm hoping to find someone who can help me understand why my code, in particular, does not work. I'd be happy if you can help me! 🙂

Write a function that, given an array of items ([weight, value]) and a capacity limit, returns the largest possible value with the total weight less than or equal to the given limit.

I'm having trouble implementing the memoization and I'm a little stuck… Without memoization, the code seems to work. With memoization added, it gives wrong results. Can you help me identify my problem?


const backpack = (items, capacity) => {

    // memo matrix of size (number of items) x capacity
    let m = [...Array(items.length)].map(_ => [...Array(capacity)])

    // helper function for the recursion
    const recursion = (items, capacity) => {

        // memo lookup
        if (m[items.length - 1][capacity - 1])
            return m[items.length - 1][capacity - 1]

        // base case
        if (items.length <= 0 || capacity <= 0)
            return 0

        let weight, value, rest
        let result = 0
        let best = 0

        // loop over all given items
        for (let i = 0; i < items.length; i++) {

            // destructure the item into weight and value variables
            [weight, value] = items[i]

            // remove the current item to avoid picking it twice
            rest = [...items]
            rest.splice(i, 1)

            // if there is room, descend into the recursion
            if (weight <= capacity)
                result = value + recursion(rest, capacity - weight)

            if (best < result)
                best = result
        }

        // store the best result in m
        m[items.length - 1][capacity - 1] = best
        return best
    }

    // initial recursion call
    return recursion(items, capacity)
}

Some entries:

test1: {
    items: [
    ],
    capacity: 5
}

test2: {
    items: [
    ],
    capacity: 10
}
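For comparison, a minimal memoized 0/1 knapsack can be sketched in Python: rather than splicing the item array, it keys the cache on (index, remaining capacity), so each cache entry identifies its subproblem unambiguously (a sketch of the standard technique, not a fix of the code above):

```python
from functools import lru_cache

def knapsack(items, capacity):
    """items: list of (weight, value) pairs; returns the best total value."""
    @lru_cache(maxsize=None)
    def best(i, cap):
        # no items left or no capacity left: nothing more to gain
        if i == len(items) or cap <= 0:
            return 0
        weight, value = items[i]
        skip = best(i + 1, cap)                   # don't take item i
        if weight > cap:
            return skip
        take = value + best(i + 1, cap - weight)  # take item i
        return max(skip, take)

    return best(0, capacity)

print(knapsack([(2, 3), (3, 4), (4, 5), (5, 6)], 5))  # → 7
```

Keying on (index, capacity) works because the recursion only ever considers suffixes of the item list; splicing arbitrary elements out, as above, produces subsets that a (length, capacity) key cannot distinguish.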