Last time we covered a bunch of examples utilizing dynamic programming. As always, I recommend checking out that article first before reading this one; otherwise you're not going to gain much here. Today we will be concluding our section on dynamic programming by discussing its limitations and when *not* to use it as a strategy.

This article is part of the Data Structures and Algorithms Series. If you missed the previous article, check that out first.

## Answers to the previous Daily Problem

A certain string processing language allows the programmer to break a string into two pieces. It costs `n` units of time to break a string of `n` characters into two pieces, since this involves copying the old string. A programmer wants to break a string into many pieces, and the order in which the breaks are made can affect the total amount of time used. For example, suppose we wish to break a 20-character string after characters 3, 8, and 10. If the breaks are made in left-right order, then the first break costs 20 units of time, the second break costs 17 units of time, and the third break costs 12 units of time, for a total of 49 steps. If the breaks are made in right-left order, the first break costs 20 units of time, the second break costs 10 units of time, and the third break costs 8 units of time, for a total of only 38 steps. Give a dynamic programming algorithm that takes a list of character positions after which to break and determines the cheapest break cost in `O(n^3)` time.

As always, we’re going to approach this with the three-step process outlined before for solving dynamic programming problems:

- Describe the problem recursively
- Ensure parameters are polynomial and not exponential
- If so, we can store partial results along the way to be efficient

Can we describe this recursively? We know we want to compute something along the lines of `breakCost(str, start, end)`, which says “find the minimum break cost of string `str` starting at position `start` and ending at position `end`”.

Taking a step back, we know each split costs the length of the piece being split. With `O(n)` positions we can possibly break after, exhaustively storing a result for every (start, end) pair means our partial solution matrix has `O(n^2)` entries. Just like partitioning books or comparing words for edit distance, the cubic time comes in when filling each entry: `breakCost` tries every possible first break between `start` and `end`, adds the cost of that split to the cached costs of the two resulting sub-pieces, and keeps the minimum. Since there are up to `O(n)` candidate break positions per entry, the total work is `O(n^2) * O(n) = O(n^3)`.
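To make that recurrence concrete, here is a minimal Python sketch of the interval DP (the function name `min_break_cost` and the bottom-up table layout are my own choices, not part of the original problem statement):

```python
def min_break_cost(n, breaks):
    """Cheapest total cost to break an n-character string at the given positions."""
    # Cut points: both ends of the string plus every requested break position.
    points = [0] + sorted(breaks) + [n]
    m = len(points)
    # cost[i][j]: min cost to make every requested break strictly between
    # cut points i and j. Spans with no interior break point cost nothing.
    cost = [[0] * m for _ in range(m)]
    # Fill shorter spans first so sub-piece costs are already cached.
    for span in range(2, m):
        for i in range(m - span):
            j = i + span
            # Splitting the piece points[i]..points[j] costs its length,
            # plus the cheapest way to finish breaking both halves.
            cost[i][j] = (points[j] - points[i]) + min(
                cost[i][k] + cost[k][j] for k in range(i + 1, j)
            )
    return cost[0][m - 1]
```

Running `min_break_cost(20, [3, 8, 10])` on the example above returns 37, beating both orderings in the problem statement: break at 10 first (cost 20), then break the left piece at 3 (cost 10), then at 8 (cost 7).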

So given we’ve had so many examples of dynamic programming working, when doesn’t it work?

## Algorithmic limitations of dynamic programming

In general we must use algorithms that are both correct and efficient. What conditions cause dynamic programming to fail on each of these fronts?

### In correctness

**Dynamic programming fails correctness if the recurrence relation cannot be expressed.** Imagine we wanted to find the most *expensive* path from point A to point B without visiting a location more than once. Can we do it with dynamic programming?

One problem is we can never enforce the *not visiting more than once* part. When you create a recurrence relation for this problem, you’re trying to compute a future cost for all remaining locations against your previously-stored solution up to some intermediate point. But in dynamic programming, computing that max cost could include points you’ve already visited. You could very easily create a cycle and an infinite loop here.

The other problem is ordering. Sure, you know you start at A and end at B, but nothing dictates the order in which we transition from point to point between A and B. Is it biggest to smallest? Left to right? Without an order we also have the opportunity to loop forever if the costs push us around a cycle.

As we saw in step 3 of our dynamic programming checklist, if we can’t utilize a partial solution to help compute the state after the partial solution, the caching doesn’t get us anywhere. In all of our previous examples we have an implicit ordering to evaluate things like strings or arrays. But when you have something like an undirected graph such as a network of cities on a map, you lose that order and you can’t create a reliable recurrence relation.
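To see why caching buys nothing here, consider a brute-force sketch (this is an illustrative toy, not an algorithm from the article): the only way to respect the no-revisits rule is to carry the entire visited set in the recursion state, which gives `O(2^n)` distinct subproblems, so memoizing them still leaves exponential work.

```python
def longest_path(graph, node, target, visited):
    """Most expensive simple path from node to target.

    The true subproblem state is (node, visited). Since visited can be
    any subset of locations, there are O(2^n) states, and a cache keyed
    on them cannot reduce the work below exponential.
    """
    if node == target:
        return 0
    best = float("-inf")  # -inf means "no simple path from here"
    for neighbor, cost in graph[node]:
        if neighbor not in visited:
            rest = longest_path(graph, neighbor, target, visited | {neighbor})
            best = max(best, cost + rest)
    return best

# Triangle of cities: the direct A-to-C edge costs 5, but the detour
# through B costs 1 + 10 = 11, which the exhaustive search prefers.
graph = {
    "A": [("B", 1), ("C", 5)],
    "B": [("A", 1), ("C", 10)],
    "C": [("A", 5), ("B", 10)],
}
```

Here `longest_path(graph, "A", "C", {"A"})` returns 11. Dropping `visited` from the state to make caching effective is exactly what re-introduces the cycle problem described above.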

### In efficiency

**Dynamic programming fails efficiency without an ordering of the input object.** Again, without order, we don’t have a reliable recurrence relation, which means we can’t solve the problem in polynomial time; we’re stuck with the full exponential search. Since the partial results do nothing for us, our recursion is unbounded, and unbounded recursion blows up to exponential time in the worst case (see our naive Fibonacci algorithm in part 1 for proof).
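As a quick recap of that part 1 example, the contrast is easy to see in Python; the natural ordering on the parameter `n` is exactly what makes the cache effective:

```python
from functools import lru_cache

def fib_naive(n):
    # No caching: the same subproblems are recomputed over and over,
    # so the call tree grows exponentially (roughly 1.6^n calls).
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Subproblems are ordered by n, so each one is computed once and
    # cached, giving linear time instead of exponential.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```

With no such ordering, as in the graph case above, there is no small, well-ordered parameter for the cache key, and memoization loses its power.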

So yeah…if you have maps, networks, graphs without any sort of ordering…don’t use dynamic programming.

## Concluding on Dynamic Programming

Dynamic programming is hard. You need to look at this stuff a bunch of times before it clicks. If you still need more examples, check out this page for 10 more problems to practice. If you follow the three-step process of defining the recurrence relation, ensuring the parameters are polynomial rather than exponential, and storing the partial results in an array/matrix, you can pretty much solve any dynamic programming problem.

With enough examples and banging your head against the wall, these will start to become second nature (but you’ve got to practice!). And with that, onto the Daily Problem…

## Onto the next Daily Problem

Eggs break when dropped from great enough height. Specifically, there must be a floor `f` in any sufficiently tall building such that an egg dropped from the `f`th floor breaks, but one dropped from the `(f − 1)`st floor will not. If the egg always breaks, then `f = 1`. If the egg never breaks, then `f = n + 1`. You seek to find the critical floor `f` using an `n`-story building. The only operation you can perform is to drop an egg off some floor and see what happens. You start out with `k` eggs, and seek to drop eggs as few times as possible. Broken eggs cannot be reused. Let `E(k,n)` be the minimum number of egg droppings that will always suffice. (a) Show that `E(1, n) = n`. (b) Show that `E(k,n) = Θ(n^(1/k))`. (c) Find a recurrence for `E(k,n)`. What is the running time of the dynamic program to find `E(k,n)`?
