Dynamic Programming is this topic that scares the crap out of everyone (myself included). This was one of those topics that took me a super long time to think about before it clicked. But I think I figured it out! My simple way of remembering Dynamic Programming is to think of *iteration with memoization*.

If those two things don’t mean anything to you, read on, otherwise, let’s recap with the previous Daily Problem as a warmup…

This article is part of the Data Structures and Algorithms Series. If you missed the previous article, check that out first.

## Answers to the previous Daily Problem

Multisets are allowed to have repeated elements. A multiset of `n` items may thus have fewer than `n!` distinct permutations. For example, `{1,1,2,2}` has only six different permutations: `{1,1,2,2}`, `{1,2,1,2}`, `{1,2,2,1}`, `{2,1,1,2}`, `{2,1,2,1}`, and `{2,2,1,1}`. Design and implement an efficient algorithm for constructing all permutations of a multiset.

This question is basically asking “give me all of the shapes of these items”. Each permutation is a shape, or arrangement, of the 4 components, 2 of which are similar.

One approach would be to leverage backtracking: starting from those end results, figure out how you get to a result with `n-1` values left. This is a call back to our previous articles, so use it to re-assert your knowledge of backtracking algorithms. I will offer an alternative approach below:

Another naive approach is to make an `n^2` loop between the number of possibilities and the number of slots and just insert each item into each slot. So 1 goes into the first slot, then the second slot, then the third… then the second item goes into the first slot, then the second slot… and you build out the possibilities by filling in the remaining options.

The problem is that doing the example above would arrive at duplicates (inserting the first 1 into the 4 slots yields the same shape of numbers as would if you inserted the second 1 into the 4 slots). So how do we eliminate these possibilities?

**Caching.** As we’ll see in this article, when we cache previous results, we can throw out unnecessary moves like constructing an unnecessary permutation. Once we’ve inserted the four options for the first 1, we don’t ever have to calculate them again for any other 1s because they yield the same output. Same with the 2s. So in the multiset example above, there are 24 different arrangements, duplicated twice (once for the 1 and once for the 2). In other words, we have `6x2x2 = 24`, or 6 permutations that are duplicated once for `1` (x2) and once again for `2` (x2).
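The duplicate-skipping idea can be sketched out in code. This helper is my own illustration (the name and shape are assumptions, not from the book): at each slot we only branch on values we haven’t already tried in that slot, which prunes exactly the duplicate “shapes” described above.

```
// Sketch: generate all distinct permutations of a multiset by only
// branching on *distinct* values at each slot, skipping repeats.
const multisetPermutations = (items) => {
  if (items.length === 0) return [[]]; // one way to arrange nothing
  const results = [];
  const seen = new Set(); // values already placed in this slot
  for (let i = 0; i < items.length; i++) {
    if (seen.has(items[i])) continue; // a repeated value yields repeated shapes
    seen.add(items[i]);
    const rest = items.slice(0, i).concat(items.slice(i + 1));
    for (const perm of multisetPermutations(rest)) {
      results.push([items[i], ...perm]);
    }
  }
  return results;
}
```

Running this on `[1, 1, 2, 2]` produces exactly the six permutations listed above, with no duplicates to filter out afterwards.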

If that doesn’t make sense to you, check out how we handle generalized recurrence relations below.

## Dynamic Programming = Iteration + Memoization

*Recursion* is when a function calls itself. *Memoization* is just storing previously-computed results. Honestly, the best way to think about this is with a classic example, the Fibonacci Sequence:

```
const fibonacci = (n) => {
  if (n === 1) return 0;
  if (n === 2) return 1;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
```

All this algorithm says is:

> To get the number you’re looking for, add up the two previous numbers in the sequence

The first few numbers of Fibonacci are `0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55`. Simple, right? What’s the 24th number of Fibonacci? Oof! Yeah, this one gets rough pretty quickly (in fact, it is *exponentially* bad!)

So we’ve got the recursive part in this function, but we’re missing the memoization. If we add in memoization, this problem becomes virtually linear in time, with the tradeoff of an array to store previous results:

```
const fibonacciCached = (n, cache = []) => {
  if (n === 1) return 0;
  if (n === 2) return 1;
  // pass the cache down so every recursive call shares the same results
  if (!cache[n]) cache[n] = fibonacciCached(n - 1, cache) + fibonacciCached(n - 2, cache);
  return cache[n];
}
```

Literally one extra line to cache the result. The first solution will heat up your laptop if you find the 45th Fibonacci number from how hard it will be working. The latter will run extremely fast. All we did was make sure that if we’ve found this number before, we look it up in an array (constant-time lookup) as opposed to computing it all over again.

Simple, right? Well, not exactly.

For starters, this only pays off if you actually reuse the things you’ve computed. It works great here because we always refer back to the numbers already computed (since each number is built from lower ones). But what about backtracking a path or depth-first search? You may not revisit everything there, in which case you’ve stored things unnecessarily.

Recursion also makes things painful. Recursion really eats up stack space. If we can cache with primitives and not recurse, that’s even better. With Fibonacci, it’s pretty easy:

```
const fibonacciArr = (n, cache = [0, 1]) => {
  for (let i = 2; i < n; i++) cache[i] = cache[i - 1] + cache[i - 2];
  return cache[n - 1];
}
```

Brilliant! The shortest and fastest solution yet, with *O(n)* time and space! Now, to get really weird, we have to remember one thing that you probably noticed as you were counting Fibonacci: you only ever looked back two numbers when finding the next. Could our computer do that and only have to store 2 numbers as opposed to *n*? You bet!

```
const fibonacci2 = (n) => {
  if (n === 1) return 0;
  if (n === 2) return 1;
  let first = 0, second = 1, next;
  for (let i = 3; i <= n; i++) {
    next = first + second;
    first = second;
    second = next;
  }
  return second;
}
```

A couple more lines, but all we’re doing is computing the next number by adding the two previous numbers (like we first specified in our definition of Fibonacci). To save space, we overwrite the variables and keep chugging along. The solution still runs in *O(n)* time but the space is now constant (the two recorded numbers plus the variable for our upcoming to-be-computed number).

One final thing I’ll say about these last two examples: you may have noticed we traded in the full caching solution for less space. If constant space and linear time are fine with you (which they probably are) then stick with this.

However, if you plan to call a function over, and over, and over again, permanently storing a singleton of the cached values, it might be better to call something like `fibonacciCached` since you can always check if someone else has done this work before. The storage space is linear while the lookup time is constant for all but the uncharted territory. If you expect to visit previous values often, consider this alternative.
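For illustration, one way to keep that singleton cache around is to close over it. This wrapper is my own sketch (the `makeFibonacci` name is an assumption, not code from the book):

```
// A closure keeps one shared cache alive across every call, so repeat
// lookups are constant-time. (Sketch only, not the article's own code.)
const makeFibonacci = () => {
  const cache = [undefined, 0, 1]; // cache[n] is the nth number, 1-indexed
  const fib = (n) => {
    if (cache[n] === undefined) cache[n] = fib(n - 1) + fib(n - 2);
    return cache[n];
  };
  return fib;
}

const fib = makeFibonacci();
fib(45); // fast, and every number up to the 45th is now cached for later calls
```

Every caller that holds onto `fib` shares the same cache, so the work is only ever done once per number.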

## Another example: binomial coefficients

Everyone uses Fibonacci to talk about Dynamic Programming. The problem is if you only ever hear Fibonacci, your brain will train itself to remember the answer to calculating Fibonacci, and not necessarily how to apply Dynamic Programming.

Let’s take another example from the book: binomial coefficients. This is just another way of saying:

> How many different ways can I choose *k* things out of a group of *n*?

So if I have the numbers `1,2,3,4` and I want to pick 2 numbers from them, how many pairs will that generate? The answer is 6, because I can grab `(1,2)`, `(1,3)`, `(1,4)`, `(2,3)`, `(2,4)`, and `(3,4)`.

A technique for quickly generating these coefficients is Pascal’s triangle, which you can read about on your own for fun. Suffice it to say, it looks an awful lot like how you would generate a Fibonacci sequence: in this case, each number is the sum of the two numbers directly above it in the triangle.

We can map this idea out as a function for our binomial coefficient pretty quickly in code. Remember that for Fibonacci we iterated on our code three times: once with recursion, once with recursion and memoization, and finally a third with in-place memoization over a loop. To illustrate that we don’t need recursion to make Dynamic Programming work, let’s see binomial coefficients mapped out as a loop:

```
const binomialCoefficient = (n, k) => {
  // initialize each row of the triangle
  const bc = [];
  for (let i = 0; i <= n; i++) bc[i] = [];
  // fill along the edges of the triangle
  for (let i = 0; i <= n; i++) bc[i][0] = 1;
  for (let j = 0; j <= n; j++) bc[j][j] = 1;
  // fill in the middle
  for (let i = 1; i <= n; i++)
    for (let j = 1; j < i; j++)
      bc[i][j] = bc[i - 1][j - 1] + bc[i - 1][j];
  return bc[n][k];
}
```

You’ll see that the bigger `for` loop is just the definition of the recurrence relation for binomial coefficients.

You can see there’s a pretty similar pattern here to the last Fibonacci algorithm we drew up. First, we fill in the default values and edge cases. Next, we iterate through our numbers to precompute values in an array. Finally, we draw values from that array to grab our coefficient value.

The only real difference is that we have to supply two values now instead of one and the math of computation is slightly varied given that the recurrence relation isn’t exactly the same as that of Fibonacci.

So these two examples are pretty similar. For my last trick, let’s go in a completely different direction to really cement this idea.

## Checking for spelling errors

Did you know that you can create a spellchecker with Dynamic Programming? Yeah, you can! It’s the same formula: try recursion, then memoize, then optimize. So what happens when you spell something? You’re dealing with 4 scenarios for the spellchecker to validate:

- **It matched.** Each letter of `soil` is evaluated and compared to be correct.
- **It swapped.** For `soel`, you have to swap the `e` for an `i` when you get there.
- **It added.** For `sol`, you have to add in the `i` when you get to `l` too early.
- **It removed.** For `soile`, you have to remove the `e` when you expected the string to end.

And all but the first scenario carries a cost since the correct string requires no changes but each of the other three requires a change.

To map this out, we have a function that takes our correct word (from something like a dictionary) and compares it to the word we typed out. To do this we’re going to create a matrix of all possibilities from how to get from a letter in our target word to a letter in our compared word. I’ll write it all out now then we can explain it a bit more afterwards:

```
const compareWords = (targetWord, comparedWord) => {
  const costs = [0, 0, 0]; // each slot is associated with a MATCH, INSERT, or DELETE cost
  // matrix[i][j] holds the cheapest cost of turning the first i letters
  // of targetWord into the first j letters of comparedWord
  const matrix = [];
  let i, j, k; // counters
  // initialize the matrix 2D array
  for (i = 0; i <= targetWord.length; i++) {
    matrix[i] = [];
    for (j = 0; j <= comparedWord.length; j++) matrix[i][j] = {};
  }
  // our first row: building up from nothing costs one insert per letter
  for (j = 0; j <= comparedWord.length; j++) {
    matrix[0][j].cost = j;
    matrix[0][j].parent = j > 0 ? 'I' : -1; // no parent in the top corner
  }
  // our first column: tearing down to nothing costs one delete per letter
  for (i = 0; i <= targetWord.length; i++) {
    matrix[i][0].cost = i;
    matrix[i][0].parent = i > 0 ? 'D' : -1;
  }
  // determine costs
  for (i = 1; i <= targetWord.length; i++) {
    for (j = 1; j <= comparedWord.length; j++) {
      // match cost: 0 for an exact match, 1 for a swap
      costs[0] = matrix[i - 1][j - 1].cost + (targetWord[i - 1] === comparedWord[j - 1] ? 0 : 1);
      // insert cost: always costs 1 to insert a letter
      costs[1] = matrix[i][j - 1].cost + 1;
      // delete cost: always costs 1 to delete a letter
      costs[2] = matrix[i - 1][j].cost + 1;
      // assume the match cost is cheapest at first
      matrix[i][j].cost = costs[0];
      matrix[i][j].parent = 'M'; // for an exact match (or swap) being made
      for (k = 1; k < 3; k++) {
        if (costs[k] < matrix[i][j].cost) {
          matrix[i][j].cost = costs[k];
          matrix[i][j].parent = k === 1 ? 'I' : 'D';
        }
      }
    }
  }
  // find the final cost at the end of both words
  return matrix[targetWord.length][comparedWord.length].cost;
}
```

So what is this really doing? There are really 4 parts to this solution:

### 1. Matrix initialization

We know the costs along the first row and column up front. When you compare a word to nothing, the cost scales linearly by letter as you add one more. And we know that comparing a blank word to a blank word carries no cost.

### 2. Cost calculation

Next, we compare the current position in the matrix to some previous adjacent cell. That leaves us with 3 calculations to compute:

- **Moving up and to the left.** We assume we matched the previous cell (i.e. if we matched, both characters can advance), so all that is left to calculate is whether the current cell matches or not. It costs 0 for an exact match and 1 if we have to substitute one letter for the other.
- **Moving up.** Our target character had to advance once without advancing our comparison word. In other words, we had to insert a character, which carries a cost of 1 no matter what.
- **Moving left.** Conversely, if our target character cannot advance but our comparison word did, we would need to delete a character to reduce matching costs, which, again, has a price of 1 no matter what.

### 3. Optimize our path forward

Now that our costs are calculated in comparison to previous adjacent cells, we can find the minimum cost thus far. We just iterate through all of the cost enumerations and pick the cheapest one of the three. From here we can begin moving onto the next cell in the matrix.

### 4. Determine the total cost

Once we’ve enumerated through every cell in the cost matrix, we simply find the final cost at the very bottom of our matrix, which has compared every letter in our `targetWord` with every letter in our `comparedWord`. That final cost is the cheapest cost to navigate from the target word to our comparison word.

## String comparisons solve a few problems

The beauty of this solution is that it can really work in a variety of other problems:

### Substring matching

Need to spellcheck just a word within a whole text? Initialize the first row to zero cost instead of assigning an insertion cost, since a match can start anywhere in the text rather than building one string into the other. Then in step 4, rather than taking the cost in the bottom corner, scan the last row and grab the cheapest cell, since the match can also end anywhere in the text.
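Here’s that variation sketched as a standalone helper (the name and shape are my own, in the spirit of `compareWords`, not code from the book):

```
// Approximate substring matching: the first row is all zeros so a match
// can begin anywhere in the text, and we take the cheapest cell in the
// last row so it can end anywhere too.
const substringMatchCost = (pattern, text) => {
  const dp = [];
  for (let i = 0; i <= pattern.length; i++) {
    dp[i] = [];
    for (let j = 0; j <= text.length; j++) {
      if (i === 0) dp[i][j] = 0;      // starting the match here is free
      else if (j === 0) dp[i][j] = i; // nothing left to match against
      else dp[i][j] = Math.min(
        dp[i - 1][j - 1] + (pattern[i - 1] === text[j - 1] ? 0 : 1), // match or swap
        dp[i][j - 1] + 1, // insert
        dp[i - 1][j] + 1  // delete
      );
    }
  }
  return Math.min(...dp[pattern.length]); // cheapest ending spot in the text
}
```

A correctly spelled word buried in the text comes back with a cost of 0; a near-miss like `soel` inside “the soil is rich” costs just the one swap.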

### Longest Common Subsequence (LCS)

This is one of those classic problems in Dynamic Programming: you want to find all of the shared characters between two words if you only advance character-by-character. The example in the book uses `democrat` and `republican` with an LCS of `eca` (i.e. `e` is followed by a `c` is followed by an `a` in each of the two words, and it’s of length 3, which is longer than any single letter in both words and longer than, say, `ra`, which is also a subsequence of the two words).

Leveraging our existing code, if we just make the cost of a substitution in our match calculation `2` instead of `1`, a swap is never cheaper than deleting one letter and inserting another, so we’d never want to swap out letters.

Why is this the solution? Think about it this way: all we ever want are matches. We want all of the other letters out of our way, which means either insertion or deletion just to get the exact sequence of something like `LETTER-e-LETTER-LETTER-c-LETTER-LETTER-a`. The least number of insertions/deletions to form an equal pattern between our target and comparison words gets us our common subsequence.
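Here’s that trick sketched end-to-end as a standalone helper (my own illustration, not the book’s code): price a swap at 2 so it’s never cheaper than a delete plus an insert, then recover the LCS length from the final cost.

```
// Edit distance where a swap costs 2: the cheapest path only ever
// matches, inserts, or deletes. Every matched letter saves one insert
// and one delete, so cost = a.length + b.length - 2 * LCS.
const lcsLength = (a, b) => {
  const dp = [];
  for (let i = 0; i <= a.length; i++) {
    dp[i] = [];
    for (let j = 0; j <= b.length; j++) {
      if (i === 0) dp[i][j] = j;      // insert everything
      else if (j === 0) dp[i][j] = i; // delete everything
      else dp[i][j] = Math.min(
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 2), // match or pricey swap
        dp[i][j - 1] + 1, // insert
        dp[i - 1][j] + 1  // delete
      );
    }
  }
  return (a.length + b.length - dp[a.length][b.length]) / 2;
}
```

With the book’s example, `lcsLength('democrat', 'republican')` works out to 3, the length of `eca`.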

## We did it! But we aren’t done yet

We’ve now introduced the fundamental aspects of Dynamic Programming with a few examples and variations. Read over the code a few times to let it sink in (trust me, it took a *really* long time for me to see this). We’ve got 2 more parts of this chapter to cover, so in the meantime, take a look at the Daily Problem and we’ll see you soon!

## Onto the next Daily Problem

Suppose you are given three strings of characters: `X`, `Y`, and `Z`, where `|X| = n`, `|Y| = m`, and `|Z| = n + m`. `Z` is said to be a *shuffle* of `X` and `Y` iff `Z` can be formed by interleaving the characters from `X` and `Y` in a way that maintains the left-to-right ordering of the characters from each string. (a) Show that `cchocohilaptes` is a shuffle of `chocolate` and `chips`, but `chocochilatspe` is not. (b) Give an efficient dynamic-programming algorithm that determines whether `Z` is a shuffle of `X` and `Y`. Hint: the values of the dynamic programming matrix you construct should be `Boolean`, not numeric.

Think on this and we’ll check it out in the next few days as we explore more examples of Dynamic Programming.
