 # A-Star is Born

What’s the relationship between Wikipedia articles, protein networks, and mapping software? All of them can be represented in a mathematical structure called a graph. A graph models relations between objects, which are formally called nodes. The connections between them are called edges, and a connection may be strong or weak, which is represented by a weight on the edge.

In the above case, every Wikipedia article is a node and every hyperlink between articles is an edge. You may have started reading the article on the Calabi-Yau manifold, and fifteen minutes later you find yourself reading about panzerkampfwagens. Now you start searching for the most random thing you can think of. You start at Villarreal CF and try to get to Ayahuasca. Jump to Argentina, then Freedom of Religion, and you have it: Ayahuasca. The idea that any two nodes are linked by a short chain like this is popularly known as six degrees of separation, and most of the time the trip is possible in six hops or fewer. The game here is to get from A to B along the graph as fast as possible. Since hyperlinks are all equal, weights don’t play a significant role; so at every step you click the link that seems most closely related to your endpoint, trying to get as close as possible with each move. This is often a good strategy, and it can get you there pretty quickly along a short path. It is, however, a greedy strategy: you blindly race ahead, and every iteration is just one step forward plus an estimate of how far the goal still is.
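That greedy link-chasing strategy can be sketched in a few lines of Python. This is a minimal sketch, not Wikipedia’s actual link graph: `graph` maps each node to its neighbours, and the caller supplies a hypothetical `closeness` heuristic that estimates how near a node is to the goal (smaller is better).

```python
def greedy_walk(graph, start, goal, closeness):
    """Purely greedy walk: always step to the unvisited neighbour the
    heuristic rates closest to the goal. No backtracking, so it can
    dead-end; it also offers no guarantee of finding the shortest path."""
    path, visited = [start], {start}
    current = start
    while current != goal:
        candidates = [n for n in graph[current] if n not in visited]
        if not candidates:
            return None  # stuck in a dead end
        current = min(candidates, key=closeness)
        visited.add(current)
        path.append(current)
    return path
```

On a toy graph with an invented closeness table, `greedy_walk(graph, "A", "D", closeness)` simply follows whichever neighbour looks nearest to `"D"` at each step.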

Consider a transport network. The intersections, bus stops, or metro stations would form nodes. We assign lower weights to the edges that can be crossed easily; hence railways, freeways, and highways would have low weights, since they can be traversed quickly. The weights would also depend on factors like the distance between the nodes, roadworks, and traffic. You are given this dense and complex network and asked to build mapping software that gives people directions from one place to another. One of the simplest algorithms that solves the problem is Dijkstra’s algorithm.
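Such a network can be written down as a weighted graph directly in Python, as a dictionary of adjacency maps. The stop names and weights below are invented for illustration; each inner dictionary maps a neighbouring stop to the cost of the connecting edge.

```python
# A toy transport network: nodes are stops, weights are travel costs
# (lower = easier to traverse, e.g. a fast rail link).
transport = {
    "Central": {"Museum": 2, "Harbour": 5},
    "Museum":  {"Central": 2, "Harbour": 3, "Airport": 9},
    "Harbour": {"Central": 5, "Museum": 3, "Airport": 4},
    "Airport": {"Museum": 9, "Harbour": 4},
}
```

This `node -> {neighbour: weight}` shape is the representation the later code sketches assume.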

Edsger W. Dijkstra, a Dutch computer scientist, sat thinking about an optimal algorithm to find the shortest path from A to B along a weighted graph. He came up with what is now famously known as Dijkstra’s shortest path algorithm. Let’s say the start is at a node A. Look at all the neighbouring nodes it’s connected to, and find out the cost to go to each. Now pick the unexplored node that is cheapest to reach, take it as your current node, and repeat the process while ignoring nodes already traversed. If, however, you happen to find a path to a node that costs less than the one already recorded, update that node’s cost to the new, lower value. In doing so, you are guaranteed to find the shortest path, and to reach it far more efficiently than a brute-force search of every route.

## Pseudocode for Dijkstra’s Algorithm

```
function Dijkstra(Graph, source):
    create vertex set Q                    // set of unexplored nodes

    for each vertex v in Graph:            // initialise every node
        dist[v] ← INFINITY                 // unknown distance from source
        prev[v] ← UNDEFINED                // no known predecessor yet
        add v to Q
    dist[source] ← 0

    while Q is not empty:
        u ← vertex in Q with min dist[u]   // cheapest unexplored node
        remove u from Q

        for each neighbor v of u:          // only v that are still in Q
            alt ← dist[u] + length(u, v)
            if alt < dist[v]:              // found a cheaper path to v
                dist[v] ← alt
                prev[v] ← u

    return dist[], prev[]
```

However, it is not perfect. Dijkstra’s algorithm is a greedy algorithm: it always expands the cheapest node it currently knows about, and it makes no use of information about where the destination actually lies. We have optimization methods to counteract this problem partially, but this is not the major issue. Another flaw can be exposed with a graph whose weighted edges are strategically placed so that the lighter edges are all on one side and the heavier ones on the other. The algorithm will end up prioritizing the search on the side with the lighter edges, even if the destination is on the heavier side.
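The pseudocode translates almost line for line into runnable Python. This is a minimal sketch using the standard library’s `heapq` as the priority queue, on the `node -> {neighbour: weight}` graph shape; instead of decreasing keys in place, it pushes duplicate entries and skips the stale ones.

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest paths from `source`.

    graph: dict mapping node -> {neighbor: edge_weight}.
    Returns (dist, prev) as in the pseudocode: dist[v] is the cheapest
    known cost to v, prev[v] the predecessor on that path."""
    dist = {v: float("inf") for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    queue = [(0, source)]              # priority queue keyed on dist
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:                # stale entry; a cheaper path was found
            continue
        for v, w in graph[u].items():
            alt = d + w
            if alt < dist[v]:          # relax the edge u -> v
                dist[v], prev[v] = alt, u
                heapq.heappush(queue, (alt, v))
    return dist, prev
```

Following the `prev` chain backwards from any node reconstructs the shortest route to it.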

This algorithm runs in O(E + V log V) time with a Fibonacci-heap priority queue, for a graph with V nodes and E edges (or O((V + E) log V) with the more common binary heap). But is this the most efficient approach? No. Can we do better? How? The answer is a mixture of both the strategies we’ve seen until now, and it’s called the A* algorithm.

Developed by Peter Hart, Nils Nilsson, and Bertram Raphael, the A* algorithm is superior to Dijkstra’s algorithm on multiple grounds. It combines the best of both worlds. If you implement plain Dijkstra’s, you might end up wasting time and precious computing resources searching along highways for the route to the nearest hospital, which happens to be on a country road.

The A* algorithm tracks both the cost travelled so far and an estimate of the distance remaining while searching for routes. You start the same way as in Dijkstra’s, but you also keep a cheap heuristic estimate of each node’s distance to the destination (for a road network, the straight-line distance works well). You add this remaining-distance estimate to the cost of travel and use the sum as the node’s priority. If a route starts straying far from the destination, the growing estimate pushes its nodes down the priority queue, checking their influence and ensuring the search retains focus. You spend your resources exploring more promising nodes and hence find a good route faster.

## Pseudocode for the A* Algorithm

```
function A_Star(start, goal, h):
    openSet ← {start}                  // nodes discovered but not yet expanded
    prev ← empty map                   // prev[n] is the node before n on the
                                       // cheapest path currently known
    gScore[n] ← INFINITY for every node n
    gScore[start] ← 0                  // cost of cheapest known path from start

    // For node n, fScore[n] = gScore[n] + h(n),
    // where h(n) estimates the distance from n to the goal
    fScore[n] ← INFINITY for every node n
    fScore[start] ← h(start)

    while openSet is not empty:
        current ← node in openSet with lowest fScore
        if current = goal:
            return reconstruct_path(prev, current)

        remove current from openSet
        for each neighbor of current:
            // d(current, neighbor) is the weight of that edge;
            // tentative_gScore is the cost from start to neighbor via current
            tentative_gScore ← gScore[current] + d(current, neighbor)
            if tentative_gScore < gScore[neighbor]:
                // this path to neighbor beats any previous one; record it
                prev[neighbor] ← current
                gScore[neighbor] ← tentative_gScore
                fScore[neighbor] ← gScore[neighbor] + h(neighbor)
                if neighbor not in openSet:
                    add neighbor to openSet

    return failure                     // the goal was never reached
```

Despite not being the most efficient, Dijkstra’s algorithm is still the holy grail. It can also be modified to search for the path in both directions simultaneously, i.e., from A to B and from B to A at the same time. Applying the same idea in A* also gives a faster result, with the modification that instead of searching for the goal, the frontier growing from the start should aim for the current frontier node of the backward search, and vice-versa. Another optimization technique is to implement the road hierarchy properly, with the edges that form part of the shortest path for many routes being given lower weights.
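Again, the pseudocode maps closely onto Python. This is a minimal sketch on the same `node -> {neighbour: weight}` graph shape; the caller supplies the heuristic `h`, which must never overestimate the true remaining cost (be admissible) for the returned path to be optimal.

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search. graph: node -> {neighbor: weight}; h(n) is an
    admissible estimate of the cost remaining from n to `goal`.
    Returns the node list of a cheapest path, or None if unreachable."""
    g = {v: float("inf") for v in graph}   # cheapest known cost from start
    g[start] = 0
    prev = {}
    open_heap = [(h(start), start)]        # priority = g + h
    while open_heap:
        f, current = heapq.heappop(open_heap)
        if current == goal:                # cheapest route found; walk it back
            path = [current]
            while current in prev:
                current = prev[current]
                path.append(current)
            return path[::-1]
        for neighbor, w in graph[current].items():
            tentative = g[current] + w
            if tentative < g[neighbor]:    # better path to neighbor: record it
                prev[neighbor] = current
                g[neighbor] = tentative
                heapq.heappush(open_heap, (tentative + h(neighbor), neighbor))
    return None                            # goal unreachable
```

For a road map, `h` would typically be the straight-line distance from each node to the goal, computed from coordinates.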

To speed up this process even further, the map application’s cache can also store useful routes it discovered along the way during previous searches. Say, for example, while searching for the route from A to Z, it happened to find that the shortest path from J to T is not J-K-L-M-N-O-P-Q-R-S-T, but J-H-G-T. It saves that information. The next time you search for a route from F to W that goes via J and T, the software already knows this shortcut.
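A toy sketch of that caching idea, with invented node names: since every contiguous sub-path of a shortest path is itself a shortest path between its endpoints, a search can memoise all the sub-routes it discovers and splice them into later answers.

```python
# Maps (from_node, to_node) -> known shortest route between them.
route_cache = {}

def remember(path):
    """Cache every contiguous sub-route of a found shortest path.
    Valid because sub-paths of shortest paths are themselves shortest."""
    for i in range(len(path)):
        for j in range(i + 1, len(path)):
            route_cache[(path[i], path[j])] = path[i:j + 1]

# The A-to-Z search revealed the J -> T shortcut; store it.
remember(["J", "H", "G", "T"])
```

A later F-to-W query that passes through J and T can now look up `route_cache[("J", "T")]` instead of re-searching that stretch.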

With actual road networks, we also have to account for real-time changes in the graph. A new road might be built across the city, which would reduce travelling distance, but as people start to use it more often, Braess’ paradox would take over and bless that road with eternal traffic, giving it a high weight. Landslides may wipe out roads in hilly regions, which might not only force us to search for longer but also disconnect that subgraph from the main graph. The graphs may also have to be “directed” to account for one-ways in the road system.

Graph algorithms are not restricted to mapping software; they have applications across many other subjects, from sociology to chemical interactions, and from electrical power supply to linguistic analysis. In fact, graph theory is an entire field of research in mathematics and computer science, and it is quite heavily funded due to its immense industrial applications. The premise behind such generous funding is that even the most abstract concepts may end up providing the solution to a very practical problem.
