Shortest Path Problems in the Real World

We’ve come a long way with graphs so far. We know how to structure them, how to explore them, and how to traverse them in a way that minimizes cost (however you choose to define the cost between each pair of vertices). A special kind of cost-saving traversal deals with the shortest path between two points.

Have you ever used Google Maps or Waze? If you’re reading this I’m going to assume you have. Map directions are probably the best real-world example of finding the shortest path between two points. Today we’re going to explore the algorithms for solving the shortest path problem so that you can implement your very own (vastly simplified version of) Google Maps or Waze!

This article is part of the Data Structures and Algorithms Series following the course The Analysis of Algorithms. If you missed the previous article, check that out first, and grab a copy of The Algorithm Design Manual as well.

Answers to the previous Daily Problem

Suppose we are given the minimum spanning tree T of a given graph G (with n vertices and m edges) and a new edge e = (u, v) of weight w that we will add to G. Give an efficient algorithm to find the minimum spanning tree of the graph G + e. Your algorithm should run in O(n) time to receive full credit.

Prim’s or Kruskal’s will suffice in solving this, but to run it in linear time we’d probably prefer Kruskal’s. The graph G + e is no different to solve than G, since G is just a subgraph of G + e. So why should the algorithm change? Kruskal’s can be used as is, but here are the distinguishing cases to look out for:

  • If e is a back edge, we ignore it because it isn’t helping us explore more of the tree. But it’s still important: a back edge means we’ve detected a cycle, and the cycle tells us which edge we can throw out (the heaviest one on it, if that isn’t our current edge e) during our DFS exploration.
  • If e is a forward edge, we also ignore it for the same reason, since forward edges have already been explored.
  • If e is a tree edge, we have to compare it against all other edges leading to v to determine if we have to include it in our MST. If e’s weight w is less than that of the other incident weights, we will add e to our MST for G + e. Otherwise our MST hasn’t changed from graph G.
  • If e is a cross edge, it doesn’t factor into the MST and we can safely ignore it.

This runs in O(n) time because the DFS only has to walk the tree T, which has just n - 1 edges, and once we’re there it’s only a few constant-time comparisons to decide whether the new edge gets swapped into the MST or not.
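Here’s one way to sketch that cycle-based swap in code. This is a minimal sketch only, assuming the spanning tree T is stored as a plain Map from each vertex to an array of { to, weight } entries (a hypothetical structure just for this example, not the Graph class used later in this article):

// Given the MST as an adjacency-list Map and a new edge (u, v) of weight w:
// find the tree path from u to v (the cycle the new edge would close),
// and swap the new edge in if it beats the heaviest edge on that path.
function updateMST(tree, u, v, w) {
  const path = [];
  const visited = new Set();

  // DFS over the tree costs O(n) because a tree only has n - 1 edges
  function dfs(current) {
    if (current === v) return true;
    visited.add(current);
    for (const edge of tree.get(current) || []) {
      if (visited.has(edge.to)) continue;
      path.push({ from: current, to: edge.to, weight: edge.weight });
      if (dfs(edge.to)) return true;
      path.pop();
    }
    return false;
  }
  dfs(u);

  // The heaviest edge on the u-to-v tree path is the one we'd throw out
  let heaviest = null;
  for (const edge of path) {
    if (heaviest === null || edge.weight > heaviest.weight) heaviest = edge;
  }

  if (heaviest && w < heaviest.weight) {
    return { remove: heaviest, add: { from: u, to: v, weight: w } };
  }
  return { remove: null, add: null }; // the MST is unchanged
}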

Applications for shortest paths

As we saw above, transportation problems (with solutions like Google Maps, Waze, and countless others) are a prime example of real-world applications for shortest path problems. There are a few others to consider as well if you aren’t convinced yet.

Motion planning

Have you ever used a flip book to animate a scene? How would you animate someone walking in that book? We all have an idea in our head as to what a human looks like when they walk. At each page of the flip book, you’re using the path of the limbs to anticipate the next frame. Flip book animation is like a shortest path problem.

When we flip between frames in a flip book, to get to the next one, we’re having our character move in the most natural (i.e. shortest) path from one point in space to the next. If you swing your leg up, it’s not going to move erratically. Instead, it will move in a smooth motion, traveling along an arc that is provided in part by the contraction of your muscles and the anatomy of your bones to allow for that motion.

Communication networks

What’s the fastest way to send an email? Where’s the best place to provide a CDN of your images and JavaScript? These are both shortest path problems. The answers lie in distributed networks such as Amazon Web Services and relay networks of mail servers.

This is why, for example, you are asked to choose where you want your servers to live on AWS. If you are starting a blog that caters to local businesses in Boston, it’s going to be faster to serve them images and content from the us-east region instead of ap-southeast. Information travels pretty fast, but even basic spatial reasoning can convince us it will take less time to reach our Boston customers from servers in Ohio or Virginia than from servers in Singapore or Sydney.

How do we solve shortest path problems?

So given all of these kinds of applications, how would we go about solving them? For unweighted graphs, BFS is sufficient. Since all edges have equal weight, it doesn’t matter how we get from A to B, just that we get there in the fewest hops, and BFS will find that for us in O(n + m) time, which is linear and very efficient. But as we saw with MSTs, unweighted graphs aren’t very interesting problems.
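As a rough sketch (assuming the graph is stored as a Map from each vertex to an array of its neighbors; the structure and names here are just for illustration), BFS plus parent pointers is all we need for the unweighted case:

// Breadth-first search from start, recording each vertex's parent so we
// can walk backwards and recover the path with the fewest hops.
function bfsShortestPath(graph, start, end) {
  const parents = new Map([[start, null]]);
  const queue = [start];

  while (queue.length > 0) {
    const current = queue.shift();
    if (current === end) break;

    for (const neighbor of graph.get(current) || []) {
      if (!parents.has(neighbor)) {
        parents.set(neighbor, current);
        queue.push(neighbor);
      }
    }
  }

  if (!parents.has(end)) return null; // no path exists

  const path = [];
  for (let v = end; v !== null; v = parents.get(v)) path.push(v);
  return path.reverse(); // start, ..., end in fewest-hops order
}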

Weighted graphs are much more challenging. BFS is insufficient for finding shortest paths in a weighted graph because BFS minimizes the number of edges, not the total weight: the path with the fewest hops might cross a few very expensive edges, while the truly cheapest path might take many more hops over cheap ones.

This is like when you ask Google Maps for the fastest route and it has you taking all of these weird shortcuts, even though you know there are more direct routes with fewer turns and stop signs (but probably more traffic). So how do we solve the shortest path problem for weighted graphs? We’re going to explore two solutions: Dijkstra’s Algorithm and the Floyd-Warshall Algorithm.

Dijkstra’s Algorithm

Dijkstra’s is the premier algorithm for solving shortest path problems with weighted graphs. It’s also an example of dynamic programming, a concept that seems to freak out many a developer.

Dynamic programming is a divide-and-conquer-style technique where we reuse the results of subproblems to help answer the general problem we are trying to solve. That’s it!

Dijkstra’s is a dynamic programming application because of one observation: if the shortest path s->v->e runs from a starting vertex s through a middle vertex v to an ending vertex e, then the s->v portion must itself be a shortest path from s to v.

In other words, we can step back from e all the way to s with subproblems: “if we want to know the shortest path from s->e, can we compute the shortest path from s->(e-1)? If we can find that, can we compute the shortest path from s->(e-2)?” and so on, until we’ve reached the shortest path from s to its closest neighbor. This is applying dynamic programming in the form of Dijkstra’s Algorithm.

// Bookkeeping arrays shared by the traversal methods (re-initialized on every run)
let ADDED, DISTANCES, PARENTS;

class Graph {
  // rest of structure from previous articles

  dijkstra(startVertex) {
    ADDED = new Array(this.vertices.length).fill(false);
    DISTANCES = new Array(this.vertices.length).fill(Number.POSITIVE_INFINITY);
    PARENTS = new Array(this.vertices.length).fill(-1);

    // The start vertex is zero distance away from itself
    DISTANCES[startVertex] = 0;
    let currentVertex = startVertex, currentEdge;

    while (!ADDED[currentVertex]) {
      ADDED[currentVertex] = true;
      currentEdge = this.connections[currentVertex];

      // Relax every edge leaving the current vertex
      while (currentEdge) {
        let nextVertex = currentEdge.adjacencyInfo;
        let weight = currentEdge.weight;

        if (DISTANCES[nextVertex] > (DISTANCES[currentVertex] + weight)) {
          DISTANCES[nextVertex] = DISTANCES[currentVertex] + weight;
          PARENTS[nextVertex] = currentVertex;
        }

        currentEdge = currentEdge.nextVertex; // next edge node in the adjacency list
      }

      // Pick the closest vertex not yet in the tree; if none remains reachable,
      // currentVertex stays on an already-added vertex and the loop ends
      currentVertex = startVertex;
      let bestCurrentDistance = Number.POSITIVE_INFINITY;

      for (let i = 0; i < this.vertices.length; i++) {
        if (!ADDED[i] && bestCurrentDistance > DISTANCES[i]) {
          bestCurrentDistance = DISTANCES[i];
          currentVertex = i;
        }
      }
    }
  }
}

Does this algorithm look familiar? It should. Dijkstra’s Algorithm is, in fact, extremely similar to Prim’s Algorithm from the last article. It’s so similar that I only had to change 3 lines (1 of which was the name of the function). The only real difference between Prim’s and Dijkstra’s is in how they compare distances.

In Prim’s, we check to see if the next vertex’s distance is greater than the current edge weight and if it has been added yet. If it hasn’t, then we set the distance to the next vertex equal to that current edge weight and make the current vertex the parent of the next.

In Dijkstra’s, all we do differently is check to see if the next vertex’s distance is greater than the current edge weight PLUS the distance of the current vertex. That summed value is what gets stored in the distance array for the next vertex, and we record the current vertex as the parent of the next vertex as normal. In sum, all we are doing extra in Dijkstra’s is factoring in both the new edge weight and the distance from the starting vertex to the tree vertex it is adjacent to.
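Side by side, the two relaxation checks look something like this (the Prim’s version is reconstructed from the description above, and the variable names are reused from the code block; treat it as a sketch rather than the exact code from the last article):

// Prim's: is this single edge a cheaper way to attach nextVertex to the tree?
if (!ADDED[nextVertex] && DISTANCES[nextVertex] > weight) {
  DISTANCES[nextVertex] = weight;
  PARENTS[nextVertex] = currentVertex;
}

// Dijkstra's: is going through currentVertex a shorter route from the start?
if (DISTANCES[nextVertex] > DISTANCES[currentVertex] + weight) {
  DISTANCES[nextVertex] = DISTANCES[currentVertex] + weight;
  PARENTS[nextVertex] = currentVertex;
}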

The implication here is that Dijkstra’s not only finds the shortest path from s to e, it also finds the shortest paths from s to every other vertex in the graph. By inspecting all non-tree vertices at every step, Dijkstra’s maps the shortest route from the starting vertex to every other vertex. Powerful stuff, but at a cost of O(n^2), since every vertex is compared against every other vertex.
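Once dijkstra(startVertex) has run, the PARENTS and DISTANCES arrays are all we need to recover an actual route. A quick sketch (shortestPathTo is a hypothetical helper, not part of the algorithm itself):

// Walk the PARENTS array backwards from the destination to rebuild the route.
function shortestPathTo(endVertex) {
  if (DISTANCES[endVertex] === Number.POSITIVE_INFINITY) return null; // unreachable

  const path = [];
  for (let v = endVertex; v !== -1; v = PARENTS[v]) path.push(v);
  return path.reverse(); // startVertex, ..., endVertex
}

// e.g. graph.dijkstra(0); then shortestPathTo(5) returns the cheapest route from 0 to 5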

Floyd-Warshall Algorithm

With most of these graph problems so far, our examples lead us to pick vertices on the outer ends of the graph, much like how we start with the root node of a tree. But what if you wanted to start in the middle? What if you wanted to know the most centrally-located vertex in a graph? In fact, the first example I could think of is Sim City.

In Sim City, the “goal” (which I put in quotes because the game is open-ended and has no real objective ending) is to create a vibrant, happy city of people, or “sims.” It’s essentially one condensed simulation of urban planning. You have to provide power for people’s homes, roads for them to travel to work (and places to work), and all of the amenities a local municipality needs, like schools, police stations, and parks. But where do you place all of this stuff to make people happy?

Many of the buildings, like police stations, can only operate within a certain radius if they’re going to stop crime before it’s too late. Logically, if you put a police station on the edge of town and someone commits a crime on the other end, it’s going to take more time for the police cars to arrive on the scene than if the station were centrally located.

And since cities in Sim City can be quite large, it’s not sufficient to just place one police station in the very middle of the map and hope for the best. You’ll need several stations to cover the entire map. And your map, like the real world, is not simply a square grid of flat grass and plains. Natural features like rivers, oceans, and mountains can complicate how a station can effectively police an area.

Lucky for you, there is an algorithm called Floyd-Warshall that can objectively find the best spot to place your buildings by finding the all-pairs shortest path. In other words, for every vertex we could start from, we find the shortest path to every other vertex in the graph. Each time we start over, we keep score of the total distance required to reach each vertex. The vertex with the shortest average distance to everything else is the most central one, and we can calculate all of this with an adjacency matrix. And since we are now adding another layer of checking every vertex on top of what is essentially Dijkstra’s, this algorithm runs in O(n^3) time.

Floyd-Warshall doesn’t actually produce a single return value naming the optimal location. Instead, it returns the distance matrix with all of the optimal paths mapped out, which is often sufficient for problems of this scope. Even though cubic time may seem slow, this algorithm runs fast in practice, in part because it uses an adjacency matrix to store all of its distance values (one of the rare instances, which we mentioned earlier, where an adjacency matrix is a better data structure than an adjacency list). It helps that the algorithm is simple to implement, too:

class AdjacencyMatrix {
  constructor(size) {
    // Build each row separately so the rows don't all share one array reference
    this.weights = Array.from({ length: size }, () =>
      new Array(size).fill(Number.POSITIVE_INFINITY)
    );
    // Every vertex is zero distance from itself
    for (let i = 0; i < size; i++) {
      this.weights[i][i] = 0;
    }
    this.vertices = size;
  }

  floydWarshall() {
    let distance;
    for (let k = 0; k < this.vertices; k++) {
      for (let i = 0; i < this.vertices; i++) {
        for (let j = 0; j < this.vertices; j++) {
          // Is going through vertex k shorter than the best path we know of?
          distance = this.weights[i][k] + this.weights[k][j];
          if (distance < this.weights[i][j]) {
            this.weights[i][j] = distance;
          }
        }
      }
    }
    return this;
  }
}
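To tie this back to the Sim City idea, once floydWarshall() has filled in the matrix you could average each row to find the most central vertex. Here’s a rough sketch (graphCenter is a hypothetical helper, not part of the algorithm itself):

// The vertex with the smallest average distance to every other vertex is
// the most "central" spot on the map (a decent place for a police station).
function graphCenter(matrix) {
  let best = -1;
  let bestAverage = Number.POSITIVE_INFINITY;

  for (let i = 0; i < matrix.vertices; i++) {
    let total = 0;
    for (let j = 0; j < matrix.vertices; j++) {
      if (i !== j) total += matrix.weights[i][j];
    }
    const average = total / (matrix.vertices - 1);
    if (average < bestAverage) {
      bestAverage = average;
      best = i;
    }
  }
  return best;
}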

You can see very clearly from the triple-nested for loops that this is indeed an O(n^3) worst-case algorithm. That cost is acceptable for finding the all-pairs shortest path, and the algorithm is also a good candidate for solving what are called transitive closure problems. This is best explained with an example.

Who has the most power in a friend group? If mapped on a graph, you might think it’s the person at the center of the friend group, because he/she has the most immediate friends (i.e. the most direct connections to other people, or the vertex with the highest degree). In fact, it’s the person who has the farthest reach into the entire graph.

The President of the United States is the most powerful person in the world not because he has the most friends, but because he has the largest, most powerful network at his disposal. He may not have everyone in his phone, but the people in his phone can eventually connect him to virtually anyone.

In fact, there’s a popular phenomenon around this very concept of transitive closures called Six Degrees of Kevin Bacon. It asserts that Kevin Bacon is the most powerful celebrity because “he had worked with everybody in Hollywood or someone who’s worked with them.”

To prove this statement true once and for all, you could plot every Hollywood celebrity on an adjacency matrix and map their relationships with each other as edges, with weights for the strength of each relationship. If Kevin Bacon really does have the shortest paths to every other celebrity in Hollywood, then this Wikipedia entry is not just a parlor game, but a true account!
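And if all you care about is reachability (can this person get to that person at all?), the same triple loop works with booleans instead of weights. A minimal sketch of that transitive closure idea:

// reachable starts as the boolean adjacency matrix (true wherever a direct
// relationship exists); afterwards reachable[i][j] is true if any path
// connects i to j, no matter how many hops it takes.
function transitiveClosure(reachable) {
  const n = reachable.length;
  for (let k = 0; k < n; k++) {
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (reachable[i][k] && reachable[k][j]) {
          reachable[i][j] = true;
        }
      }
    }
  }
  return reachable;
}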

Onto the next Daily Problem

Let G = (V, E) be an undirected weighted graph, and let T be the shortest-path spanning tree rooted at a vertex v. Suppose now that all the edge weights in G are increased by a constant number k. Is T still the shortest-path spanning tree from v?

More problems to practice on

  1. Problem 6-15.
  2. Problem 6-17.

