## Building an undirected graph and finding shortest path using Dictionaries in Python


In this article, we will look at how to build an undirected graph and then find the shortest path between two nodes/vertices of that graph easily using dictionaries in Python.


### Building a Graph using Dictionaries

**Approach:** The idea is to store the adjacency list in a dictionary, which lets the graph hold vertices of any type, not only integers. Here we have used characters as vertex labels; any custom objects could be used in their place.

Below is the implementation of the above approach:

### Python3
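The original listing was not preserved here, so the following is a minimal sketch consistent with the output shown below; the helper name `add_edge` is illustrative:

```python
from collections import defaultdict

def add_edge(graph, u, v):
    # An undirected edge is stored in both directions.
    graph[u].append(v)
    graph[v].append(u)

# Keys can be any hashable objects, not just integers.
graph = defaultdict(list)
for u, v in [('A', 'B'), ('A', 'E'), ('A', 'C'), ('B', 'D'),
             ('B', 'E'), ('C', 'F'), ('C', 'G'), ('D', 'E')]:
    add_edge(graph, u, v)

print(dict(graph))
```

Note that modern Python dictionaries preserve insertion order, so the printed key order may differ from the output shown below.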

**Output:**

```
{ 'G': ['C'], 'F': ['C'], 'E': ['A', 'B', 'D'], 'A': ['B', 'E', 'C'], 'B': ['A', 'D', 'E'], 'D': ['B', 'E'], 'C': ['A', 'F', 'G'] }
```

### Shortest Path between two nodes of graph

**Approach:** The idea is to use a queue and visit every adjacent node of the starting node, i.e. to traverse the graph in breadth-first order, to find the shortest path between two nodes of the graph.

Below is the implementation of the above approach:

### Python3
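The original listing was not preserved; a breadth-first sketch consistent with the output shown below (the graph and helper names repeat the earlier example so the snippet runs on its own):

```python
from collections import defaultdict, deque

def add_edge(graph, u, v):
    graph[u].append(v)
    graph[v].append(u)

def bfs_shortest_path(graph, start, goal):
    # Each queue entry holds the full path from start to its last node.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no path between start and goal

graph = defaultdict(list)
for u, v in [('A', 'B'), ('A', 'E'), ('A', 'C'), ('B', 'D'),
             ('B', 'E'), ('C', 'F'), ('C', 'G'), ('D', 'E')]:
    add_edge(graph, u, v)

print('Shortest path =', ' '.join(bfs_shortest_path(graph, 'A', 'D')))
```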

**Output:**

```
Shortest path = A B D
```


### Introduction

Dijkstra's algorithm is an algorithm which finds the shortest paths between nodes in a graph. It was designed by the Dutch computer scientist Edsger Wybe Dijkstra in 1956, when he thought about how he might calculate the shortest route from Rotterdam to Groningen. It was published three years later.

In computer science (as well as in our everyday life), there are a lot of problems that require finding the shortest path between two points. We encounter problems such as:

- Finding the shortest path from our house to our school or workplace
- Finding the quickest way to get to a gas station while traveling
- Finding a list of friends that a particular user on a social network may know

These are just a few of the many "shortest path problems" we see daily. Dijkstra's algorithm is one of the best-known shortest path algorithms because of its efficiency and simplicity.

### Graphs and Types of Graphs

Dijkstra's algorithm works on undirected, connected, weighted graphs (with non-negative edge weights). Before we get into it, let's explain what an undirected, connected, and weighted graph actually is.

A graph is a data structure represented by two things:

- Vertices: Groups of certain objects or data (also known as a "node")
- Edges: Links that connect those groups of objects or data

Let's explain this with an example: We want to represent a country with a graph. That country is made out of cities of different sizes and characteristics. Those cities would be represented by vertices. Between some of those cities, there are roads that connect them. Those roads would be represented by edges.

One more example of a graph can be a circle route of a train: the train stations are represented by vertices, and the rails between them are represented by the edges of a graph.

A graph can even be as abstract as a representation of human relationships: We can represent each person in a group of people as vertices in a graph, and if they know each other, we connect them with an edge.

Graphs can be divided based on the types of their edges or based on the number of components that represent them.

When it comes to their edges, graphs can be:

A directed graph is a graph in which the edges have orientations. The second example mentioned above is an example of a directed graph: the train goes from one station to the other, but it can't go in reverse.

An undirected graph is a graph in which the edges do not have orientations (therefore, all edges in an undirected graph are considered to be bidirectional). An example of an undirected graph can be the third example mentioned above: it's impossible for person A to know person B without person B knowing person A back.

Another division by the edges in a graph can be:

A weighted graph is a graph whose edges have a certain numerical value joined to them: their weight. For example, if we knew the lengths of every road between each pair of cities in a country, we could assign them to the edges in a graph representation of the country and create a weighted graph, i.e. the weight or "cost" of an edge is the distance between the cities.

An unweighted graph is a graph whose edges don't have weights, like the aforementioned graph of relationships between people - we can't assign a numerical value to "knowing a person"!

Based on the number of components that represent them, graphs can be:

A connected graph is a graph that only consists of one component - *every vertex inside of it has a path to every other vertex*. If we were at a party, and every person there knew the host, that would be represented with a connected graph - everyone could meet anyone they want via the host.

A disconnected graph is a graph that consists of more than one component. Using the same example: if somehow an outsider who doesn't know anyone ended up at the party, the graph of all the people at the party and their relationships would now be disconnected, because there would be one vertex with no edges to connect it to the other vertices present.

### Dijkstra's Algorithm

The original Dijkstra's algorithm finds the shortest path between two nodes in a graph. Another, more common variation of this algorithm finds the shortest path between the first vertex and all of the other vertices in a graph. In this article, we will focus on this variation.

We will now go over this algorithm step by step:

First, we will create a set of visited vertices, to keep track of all of the vertices that have been assigned their correct shortest path.

We will also need to set the "costs" of all vertices in the graph (the lengths of the current shortest paths that lead to them). All of the costs will be set to infinity at the beginning, to make sure that every other cost we may compare them to will be smaller than the starting one. The only exception is the cost of the first, starting vertex - this vertex will have a cost of 0, because the path from it to itself has no edges.

As long as there are vertices without the shortest path assigned to them, we do the following:

- We pick a vertex with the shortest current cost, visit it, and add it to the visited vertices set
- We update the costs of all its adjacent vertices that are not visited yet. We do this by comparing the current (old) cost with the sum of the parent's cost and the weight of the edge between the parent (the current node) and the adjacent node in question.

Let's explain this a bit better using an example:

Let's say the vertex 0 is our starting point. We are going to set the initial costs of vertices in this graph like this:

| vertex | cost to get to it from vertex 0 |
|---|---|
| 0 | 0 |
| 1 | inf |
| 2 | inf |
| 3 | inf |
| 4 | inf |
| 5 | inf |
| 6 | inf |
| 7 | inf |
| 8 | inf |

We pick the vertex with a minimum cost - that vertex is 0. We will mark it as visited and add it to our set of visited vertices.

Then, we will update the cost of adjacent vertices (1 and 6).

Since 0 + 4 = 4 < inf and 0 + 7 = 7 < inf, the new costs will be the following (the rows in bold correspond to visited vertices):

| vertex | cost to get to it from vertex 0 |
|---|---|
| **0** | **0** |
| 1 | 4 |
| 2 | inf |
| 3 | inf |
| 4 | inf |
| 5 | inf |
| 6 | 7 |
| 7 | inf |
| 8 | inf |

Now we visit the next smallest cost node, which is 1.

We mark it as visited, and then update the adjacent vertices: 2, 6, and 7:

- Since 4 + 9 = 13 < inf, the new cost of vertex 2 will be 13
- Since the path to vertex 6 through vertex 1 is not shorter than its current cost, the cost of vertex 6 will remain 7
- Since 4 + 20 = 24 < inf, the new cost of vertex 7 will be 24

These are our new costs:

| vertex | cost to get to it from vertex 0 |
|---|---|
| **0** | **0** |
| **1** | **4** |
| 2 | 13 |
| 3 | inf |
| 4 | inf |
| 5 | inf |
| 6 | 7 |
| 7 | 24 |
| 8 | inf |

The next vertex we're going to visit is vertex 6. We mark it as visited and update its adjacent vertices' costs:

Since 7 + 1 = 8 < 24, the new cost for vertex 7 will be 8.

These are our new costs:

| vertex | cost to get to it from vertex 0 |
|---|---|
| **0** | **0** |
| **1** | **4** |
| 2 | 13 |
| 3 | inf |
| 4 | inf |
| 5 | inf |
| **6** | **7** |
| 7 | 8 |
| 8 | inf |

The next vertex we are going to visit is vertex 7:

Now we're going to update the adjacent vertices: vertices 4 and 8.

- Since 8 + 1 = 9 < inf, the new cost of vertex 4 is 9
- Since 8 + 3 = 11 < inf, the new cost of vertex 8 is 11

These are the new costs:

| vertex | cost to get to it from vertex 0 |
|---|---|
| **0** | **0** |
| **1** | **4** |
| 2 | 13 |
| 3 | inf |
| 4 | 9 |
| 5 | inf |
| **6** | **7** |
| **7** | **8** |
| 8 | 11 |

The next vertex we'll visit is vertex 4:

We will now update the adjacent vertices: 2, 3, 5, and 8.

Check out our hands-on, practical guide to learning Git, with best-practices, industry-accepted standards, and included cheat sheet. Stop Googling Git commands and actually *learn* it!

- Since 9 + 2 = 11 < 13, the new cost of vertex 2 is 11
- Since 9 + 10 = 19 < inf, the new cost of vertex 3 is 19
- Since 9 + 15 = 24 < inf, the new cost of vertex 5 is 24
- Since the path to vertex 8 through vertex 4 is not shorter than its current cost, the cost of vertex 8 remains 11

The new costs are:

| vertex | cost to get to it from vertex 0 |
|---|---|
| **0** | **0** |
| **1** | **4** |
| 2 | 11 |
| 3 | 19 |
| **4** | **9** |
| 5 | 24 |
| **6** | **7** |
| **7** | **8** |
| 8 | 11 |

The next vertex we are going to visit is vertex 2:

The only vertex we're going to update is vertex 3:

- Since the path to vertex 3 through vertex 2 is not shorter than its current cost of 19, the cost of vertex 3 stays the same

Therefore, the current costs for all nodes will stay the same as before.

The next vertex we are going to visit is vertex 8:

We are updating the vertex 5:

- Since 11 + 12 = 23 < 24, the new cost of vertex 5 is going to be 23

The costs of vertices right now are:

| vertex | cost to get to it from vertex 0 |
|---|---|
| **0** | **0** |
| **1** | **4** |
| **2** | **11** |
| 3 | 19 |
| **4** | **9** |
| 5 | 23 |
| **6** | **7** |
| **7** | **8** |
| **8** | **11** |

The next vertex we are going to visit is vertex 3.

We are updating the remaining adjacent vertex - vertex 5.

- Since the path to vertex 5 through vertex 3 is not shorter than its current cost of 23, the cost of vertex 5 remains the same

The final vertex we are going to visit is vertex 5:

There are no more unvisited vertices that may need an update.

Our final costs represent the shortest paths from node 0 to every other node in the graph:

| vertex | cost to get to it from vertex 0 |
|---|---|
| 0 | 0 |
| 1 | 4 |
| 2 | 11 |
| 3 | 19 |
| 4 | 9 |
| 5 | 23 |
| 6 | 7 |
| 7 | 8 |
| 8 | 11 |

### Implementation

Now that we've gone over an example, let's see how we can implement Dijkstra's algorithm in Python!

Before we start, we are first going to have to import a priority queue:
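The original snippet was not preserved; in the Python standard library, one option is the `PriorityQueue` class from the `queue` module:

```python
# A thread-safe min-priority queue from the standard library.
from queue import PriorityQueue
```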

We will use a priority queue to keep the vertices we haven't visited yet sorted, which lets us easily pick the one with the smallest current cost.

Now, we'll implement a constructor for a class representing the graph:
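The original snippet was not preserved; a sketch of such a constructor, with illustrative names (`Graph`, `v`, `edges`, `visited` are assumptions of this sketch):

```python
class Graph:
    def __init__(self, num_of_vertices):
        self.v = num_of_vertices
        # Adjacency matrix; -1 means "no edge between these two vertices".
        self.edges = [[-1] * num_of_vertices for _ in range(num_of_vertices)]
        # The set of vertices whose shortest path is already final.
        self.visited = set()
```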

In this simple parametrized constructor, we provided the number of vertices in the graph as an argument, and we initialized three fields:

- The number of vertices in the graph
- The edges, stored as a matrix: for nodes u and v, the corresponding matrix entry holds the weight of the edge between them
- A set which will contain the visited vertices

Now, let's define a function which is going to add an edge to a graph:
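A sketch of such a method, shown together with the constructor so the snippet runs on its own (names are illustrative). Because the graph is undirected, the weight is stored in both directions:

```python
class Graph:
    def __init__(self, num_of_vertices):
        self.v = num_of_vertices
        self.edges = [[-1] * num_of_vertices for _ in range(num_of_vertices)]
        self.visited = set()

    def add_edge(self, u, v, weight):
        # Store the edge in both directions, since the graph is undirected.
        self.edges[u][v] = weight
        self.edges[v][u] = weight
```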

Now, let's write the function for Dijkstra's algorithm:
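The original listing was not preserved; here is one way the method might look, with the class pieces from above repeated so the snippet runs on its own (all names are illustrative):

```python
from queue import PriorityQueue

class Graph:
    def __init__(self, num_of_vertices):
        self.v = num_of_vertices
        self.edges = [[-1] * num_of_vertices for _ in range(num_of_vertices)]
        self.visited = set()

    def add_edge(self, u, v, weight):
        self.edges[u][v] = weight
        self.edges[v][u] = weight

    def dijkstra(self, start_vertex):
        # Cost of the shortest known path to every vertex; infinity at first.
        D = {v: float('inf') for v in range(self.v)}
        D[start_vertex] = 0

        pq = PriorityQueue()
        pq.put((0, start_vertex))

        while not pq.empty():
            (dist, current_vertex) = pq.get()
            self.visited.add(current_vertex)

            for neighbor in range(self.v):
                if self.edges[current_vertex][neighbor] != -1:
                    distance = self.edges[current_vertex][neighbor]
                    if neighbor not in self.visited:
                        old_cost = D[neighbor]
                        new_cost = D[current_vertex] + distance
                        if new_cost < old_cost:
                            pq.put((new_cost, neighbor))
                            D[neighbor] = new_cost
        return D
```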

In this code, we first created a list whose size equals the number of vertices, with every entry initialized to infinity. This is going to be the list where we keep the shortest path costs from the start vertex to all of the other nodes.

We set the value of the start vertex to 0, since that is its distance from itself.

Then, we initialize a priority queue, which we will use to quickly sort the vertices from the least to most distant. We put the start vertex in the priority queue.

Now, for each vertex we pull from the priority queue, we will first mark it as visited, and then we will iterate through its neighbors.

If the neighbor is not visited, we will compare its old cost and its new cost.

The old cost is the current value of the shortest path from the start vertex to the neighbor, while the new cost is the value of the sum of the shortest path from the start vertex to the current vertex and the distance between the current vertex and the neighbor.

If the new cost is lower than the old cost, we put the neighbor and its cost to the priority queue, and update the list where we keep the shortest paths accordingly.

Finally, after all of the vertices are visited and the priority queue is empty, we return the list of shortest distances. Our function is done!

Let's put the graph we used in the example above as the input of our implemented algorithm:

Now, we will simply call the function that performs Dijkstra's algorithm on this graph and print out the results:
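Putting it all together into one runnable program. The original figure with the example graph was not preserved, so the edge weights below were reconstructed to be consistent with the worked example above; treat the exact values as an assumption:

```python
from queue import PriorityQueue

class Graph:
    def __init__(self, num_of_vertices):
        self.v = num_of_vertices
        self.edges = [[-1] * num_of_vertices for _ in range(num_of_vertices)]
        self.visited = set()

    def add_edge(self, u, v, weight):
        self.edges[u][v] = weight
        self.edges[v][u] = weight

    def dijkstra(self, start_vertex):
        D = {v: float('inf') for v in range(self.v)}
        D[start_vertex] = 0
        pq = PriorityQueue()
        pq.put((0, start_vertex))
        while not pq.empty():
            (dist, current_vertex) = pq.get()
            self.visited.add(current_vertex)
            for neighbor in range(self.v):
                if self.edges[current_vertex][neighbor] != -1:
                    distance = self.edges[current_vertex][neighbor]
                    if neighbor not in self.visited:
                        old_cost = D[neighbor]
                        new_cost = D[current_vertex] + distance
                        if new_cost < old_cost:
                            pq.put((new_cost, neighbor))
                            D[neighbor] = new_cost
        return D

g = Graph(9)
# Weights reconstructed from the worked example (an assumption).
for u, v, w in [(0, 1, 4), (0, 6, 7), (1, 2, 9), (1, 6, 11), (1, 7, 20),
                (2, 3, 10), (2, 4, 2), (3, 4, 10), (3, 5, 5), (4, 5, 15),
                (4, 7, 1), (4, 8, 5), (5, 8, 12), (6, 7, 1), (7, 8, 3)]:
    g.add_edge(u, v, w)

D = g.dijkstra(0)
for vertex in range(g.v):
    print("Distance from vertex 0 to vertex", vertex, "is", D[vertex])
```

With these (assumed) weights, the printed distances agree with the final cost table from the example: 0, 4, 11, 19, 9, 23, 7, 8, and 11.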

Once we run the program, it prints the shortest distance from vertex 0 to every other vertex, matching the final cost table from the worked example above.

Which is exactly what we wanted!

### Time Complexity

In this algorithm, we pass each edge once, which results in a time complexity of O(E), where E represents the number of edges.

We also visit each node once, which results in a time complexity of O(V), where V represents the number of vertices. Each vertex is put into a priority queue; finding the next closest vertex is done in constant time, but we spend O(log(V)) time per insertion to keep the vertices in the priority queue sorted.

This results in the *time complexity* of this algorithm being O((E + V) log(V)).

### Conclusion

A graph is a very common data structure used in mathematics as well as computer science. Dijkstra's algorithm finds the shortest path from one vertex of a graph to the other vertices.

In this article, we went through what a graph is and mentioned different types of graphs with a couple of examples of their practical uses. We also explained what Dijkstra's algorithm is and went over an example of it and implemented it in Python. Finally, we explained the time complexity of Dijkstra's algorithm.

## 24. Shortest Paths

## A self learner’s guide to shortest path algorithms, with implementations in Python

### Getting Started

In graph theory, a path is a sequence of **distinct** vertices and edges connecting two nodes. There can be a plethora of paths that lead from one source node to a destination node. Consider the following example:

Consider the component (0, 1, 2, 3): we have two possible ways of getting from 0 to 3. Add in (3, 4, 5, 6) and we have 2 x 2 possible ways. Add in another 8 similar components, and we will have 2¹⁰ = 1024 possible paths.

The number of paths can grow exponentially in terms of the input, but only a subset of those paths minimizes the sum of edge weights, and those are called shortest paths.

Generating a set of all paths to find the shortest ones is, as we have demonstrated, very wasteful. In this article, I will expound on the most fundamental shortest paths algorithms. The aim is to ensure an understanding not only of how such algorithms work, but why they do, while avoiding convoluted mathematical proofs and emphasizing intuition instead, all of which will be bolstered with python code.

Prior knowledge of basic graph algorithms such as BFS and DFS is a bonus, but not requisite. If you know what an edge and a vertex are, you probably know enough.

Throughout this article, a graph G(V, E), with V representing the set of vertices in the graph, and E representing the set of edges in the graph, will be represented as an Adjacency List.

For example, the adjacency list for the above graph is represented as below:
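The original figure and its adjacency list were not preserved; as an illustration, the four-node component (0, 1, 2, 3) mentioned earlier, with two ways from 0 to 3, could be written as a Python dictionary:

```python
# A hypothetical adjacency list: from 0 you can reach 3 via 1 or via 2.
adj = {
    0: [1, 2],
    1: [0, 3],
    2: [0, 3],
    3: [1, 2],
}
```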

Breadth First Search (BFS) is a fundamental graph traversal algorithm. The way it works is, for every node, we scan all of its adjacent nodes, and store them so that we can scan each of them in turn in the next iteration.

### Implementation

This implementation in python, from MIT 6.006, while not the one you are likely to use in practice, will help us understand better what BFS is actually doing, and how we can leverage it to find shortest paths.
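The original listing was not preserved; below is a frontier-by-frontier sketch in the spirit of that lecture's version (variable names are mine):

```python
def bfs(adj, s):
    # parent doubles as the "already seen" set; dist holds each node's level.
    parent = {s: None}
    dist = {s: 0}
    frontier = [s]
    level = 1
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
        level += 1
    return dist, parent
```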

In every iteration of the while loop, we explore a new frontier of nodes by looping through the neighbors of each node in the last frontier.

The result is that we end up dividing the graph into levels, where **the first level is comprised of nodes that are exactly one edge away from the source**, the second of nodes that are exactly two edges away from the source, and so on and so forth. In the code above, the distance array *dist* holds this value for every node in the graph.

We can then surmise that after running BFS on a graph, we can figure out the way to reach any node from the source using the least number of edges. Well, that to me sounds like the shortest path to any vertex in the graph! (not quite…)

Along with the distance array, we also maintain an array (or hash table if you prefer) of parent pointers, conveniently named parent, in which we specify, for every discovered node v, the node u we discovered v from, i.e. its parent.

We are using that array to keep track of whether a node has been visited before, but that alone doesn’t require specifying the parent of every node. The real reason we are doing that, is to be able to backtrack to the source and construct the shortest path by following those parent pointers.

A more practical implementation of BFS would use a queue to store the nodes to explore next. But it abstracts away the notion of levels, making it harder to understand how BFS gives us the shortest path.
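For completeness, such a queue-based sketch might look like this (again, names are mine):

```python
from collections import deque

def bfs_queue(adj, s):
    parent = {s: None}
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                dist[v] = dist[u] + 1
                q.append(v)  # schedule v to be scanned later
    return dist, parent
```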

### Running Time

- V iterations are required to initialize parent and distance arrays
- The while loop will only run V times. Because we keep track of every node we see in the parent array, we only add unexplored nodes to be scanned in the next iteration.
- Per every iteration of the while loop, we pop a node from the queue and scan all of the nodes adjacent to it by an edge. That is equal to the number of outgoing edges of that node; summed over every node in the graph, the cost is O(total number of edges) = O(E).

Thus, the running time of BFS is **O(V + E)**.

### A Caveat

The caveat is, as stated before, that this is only the shortest path in terms of the number of edges, i.e. this would only qualify as a “real” shortest path in case the graph is either unweighted or all the weights are the same. Consider the following example where the shortest path from 0 to 2 is not the one with the least number of edges:

So how can we fix this?

Well, one way to do it is to transform the initial weighted graph into an unweighted one whilst keeping the specifications of the problem statement intact, by breaking the weighted edges into edges of weight 1 and linking them together with fake nodes.
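A sketch of that transformation (my names; the fake nodes are tuples (u, v, i) so they cannot collide with the original vertices, which is an assumption of this sketch):

```python
from collections import defaultdict

def to_unit_edges(adj):
    # adj maps u -> list of (v, weight) pairs, weights being positive integers.
    g = defaultdict(list)
    for u in adj:
        for v, w in adj[u]:
            prev = u
            # Break an edge of weight w into w unit edges via w - 1 fake nodes.
            for i in range(w - 1):
                fake = (u, v, i)
                g[prev].append(fake)
                prev = fake
            g[prev].append(v)
    return g
```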

What is the running time of this algorithm?

This is still BFS, so the running time should still be linear in terms of the edges and vertices. But how many of those do we have in our new graph?

- For every edge of weight **w**, we replace it by **w** edges of weight 1: **E' = O(W·E)**, W being the maximum edge weight.
- For every edge of weight w, we create w - 1 new vertices, and the old vertices have yet to go anywhere: **V' = O(W·E + V)**.

Thus, the running time is **O(V + W·E)**, and it's here that we begin to see the caveat with this approach: it depends on how large the weights are. This may work well for graphs with small weights, but it's seldom useful in practice. We would rather have an algorithm that can quickly find the shortest paths regardless of the value of the weights.

To that end, we need to introduce a vital technique that is the key to finding the shortest paths in a graph: edge relaxation.

Relaxing an edge (u, v) consists in simply checking whether we can find, from a source node s, **a shorter path to v through (u, v)**.

We do so by comparing the distance to v through the old path we know of, with that of the path formed by the shortest path to u, and the edge (u, v).

```
Relax(u, v, weight):
    if d[v] > d[u] + weight(u, v):
        d[v] = d[u] + weight(u, v)
        parent[v] = u
```

If indeed we found that we could reach v faster through the edge (u, v), then we update the following values:

- d[v]: the distance to the node v from a source s. we update this with the new value that we just compared it to.
- parent[v], the direct parent of v, which is now u. Since, like in BFS, we find the path by following parent pointers, this is akin to updating the shortest path to v.

### Template for shortest path algorithms

Using the technique we learned above, we can write a simple skeleton algorithm that computes shortest paths in a weighted graph, the running time of which does not depend on the values of the weights.

- Select edge (u, v) from the graph.
- Relax edge (u, v).
- Repeat 1 and 2, selecting edges in some order, until no edges can be relaxed (d[v] ≤ d[u] + weight(u, v) for all edges (u, v))

If no edges in the graph can be relaxed, that means there is no better way to reach any node through any of its adjacent edges, i.e. there is no shorter path to get to any node than the path we have at the moment.

Well, that's simple enough! However, as is often the case, the devil is in the details, and in this particular case, the devil insidiously lurks within the words **"some order"**. An unwise relaxation order will result in exponential time. A good order may even result in linear time. The efficiency of this algorithm hinges on the order in which you relax the edges, and each of the subsequent algorithms differs in that regard.

Since we already know how to explore the edges and vertices of a graph in a certain order, namely BFS, why don't we try that? Let us explore the graph in breadth-first order, scanning vertices and relaxing edges along the way.

This time, however, we can’t just be content with scanning every vertex once. Unlike when edges were unweighted, we can’t possibly know whether there could be a better path to reach a node, a path that goes through nodes lying multiple frontiers ahead and are yet to be explored, like in the example below.

Instead, if, when scanning a node u, an edge (u, v) can be relaxed, we relax it and then add v to the queue. But we don't want to add a node to the queue if it is already in it and scheduled to be scanned later; that would be wasteful. Therefore, for each node we keep track of whether it is in the queue at any given moment.

The implementation would go something like below (ignore the negative-cycle bookkeeping for now):
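The original listing (with its numbered lines) was not preserved; below is a sketch of the same idea, a queue-based Bellman-Ford (often called SPFA), with names of my choosing. The `times_pushed` counter is the negative-cycle bookkeeping discussed later:

```python
from collections import deque

def spfa(adj, source, n):
    # adj maps u -> list of (v, weight) pairs; n is the number of vertices.
    INF = float('inf')
    dist = [INF] * n
    parent = [None] * n
    in_queue = [False] * n
    times_pushed = [0] * n
    dist[source] = 0
    q = deque([source])
    in_queue[source] = True
    while q:
        u = q.popleft()
        in_queue[u] = False
        for v, w in adj.get(u, []):
            if dist[u] + w < dist[v]:      # relax the edge (u, v)
                dist[v] = dist[u] + w
                parent[v] = u
                if not in_queue[v]:        # don't schedule v twice
                    times_pushed[v] += 1
                    if times_pushed[v] > n - 1:
                        raise ValueError("negative cycle detected")
                    q.append(v)
                    in_queue[v] = True
    return dist, parent
```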

But why is this algorithm able to find the shortest path?

### Intuition

Similarly to how we did in BFS, we will try to analyze this algorithm by looking at the levels in the queue.

Level 0 consists only of the source node s.

Level 1, same as BFS, consists of all the nodes adjacent to the source node. Those are nodes within an edge of the source, i.e. through a path of length 1.

Level 2 will now consist of all the nodes adjacent to the nodes at level 1, whose edges **can be relaxed**. So these are all the nodes in the graph that can be reached through a path of length 2, faster than a path of length 1.

At level 1, all the shortest paths (from source s) of length 1 are computed correctly.

At level 2, all the shortest paths of length 2 are computed correctly.

At level V-1, all the shortest paths of length V-1 are computed correctly.

A path can only have V nodes at most, since all of the nodes in a path have to be distinct from one another, whence the maximum length of a path is V-1 edges. Thus, after V-1 levels, the algorithm finds all the shortest paths and terminates.

### Negative weight cycles

Up until this point I have purposefully neglected an important detail about weighted graphs. Consider the following graph:

Nodes 1, 2 and 3 form a cycle. What is particular about this cycle is that the sum of the weights of its edges is negative. Finding the shortest simple path in a graph with a negative cycle is an NP-hard problem, for which there is no known efficient algorithm, and it's easy to see why.

We have seen that the key for finding shortest paths is the order in which edges are relaxed, but no matter the order in which you relax edges 1–2, 2–3 and 3–1, you will never reach a point where no edges can be relaxed, for you can always decrease the distance by going through the negative edge one more time, by closing the cycle one more time.

Wherefore it behooves us to detect the presence of these cycles within our algorithm to keep it from running ad infinitum. But how?

- We have already made arrangements to ensure a node does not appear more than once during a level or phase.
- We also know that under regular circumstances there should be no more than V-1 levels.

Therefore, a node can only be pushed into the queue at most V-1 times during the entirety of the execution. We can then keep track of how many times each node was added to the queue, and if that number exceeds V-1, we have detected a negative cycle.

**note:** A consequence of this is that we cannot use this algorithm on **undirected graphs with negative edges**, because a single negative undirected edge would count as a negative cycle (since it's equivalent to 2 directed edges, (u, v) and (v, u)).

### Running time

- We know that the algorithm has V-1 phases, each corresponding to the V-1 levels we just spoke of. (For example in phase 1, I pop all the nodes in level 0(which is just the source), scan all their adjacent nodes and make the next level of nodes to be scanned.)
- Since we ensured no vertex appears in the queue more than once in the same level, there will be no more than V vertices in each level.
- In each phase, the worst case would be if we had all the vertices in a single level and had to scan all the edges in the graph, which translates to O(E).

Therefore, the total running time of the algorithm is **O(V.E)**.

This is quadratic time in case the graph is sparse (E = O(V), not very bushy) and cubic time in case the graph is dense (E = O(V²), very bushy graph, lots of edges coming out of each node). That can be pretty slow. Is there no way to speed this up?

Well, it turns out we can't do any better when it comes to graphs that contain negative weight edges. But who needs them anyway? It's not like they appear frequently in practice. If we had a graph with non-negative edge weights, is there a way to leverage that to come up with a faster algorithm?

Consider the following example:

If we posit that there are no negative edges in the graph, can we make a guess as to which one of these edges forms a shortest path?

It's the edge with minimum weight, the one that links **node C** to the **source**.

Why is that the case? Well, if there existed a shorter path to node C, then it would have to go through one of the nodes A to E, all of which have higher weights on their edges than the edge we picked, which had the minimum weight. **Because we have no negative weight edges, the cost of a path can never get smaller as we add in more edges**, so there is no hope of decreasing the total path weight as we progress further in the graph.

Thus, we can tell with certainty that (S,C) is the shortest path to node C, and as a result, we don’t have to go back to it anymore, and therein lies the essence of Dijkstra’s algorithm.

### Algorithm

Dijkstra adopts a greedy approach to edge relaxation. At each iteration, it selects the closest unvisited node to the source. It relaxes all of its outgoing edges, and then never checks it again, because it can be sure that the current value is the shortest path to that node.

It keeps repeating the process until it has exhausted every node on the graph, or, alternatively, if you have a destination node in mind, until it has reached that particular node.

Pseudo-code:

```
Dijkstra(adj, source):
    visited = {}
    Q = priority_queue()
    initialize Q with (node, distance) values,
        distance being 0 for the source and infinity for every other node
    while Q is not empty:
        u = extract_min(Q)          # deletes u from Q
        visited = visited ∪ {u}
        for each vertex v, weight in Adj[u]:
            if d[v] > d[u] + weight(u, v):
                d[v] = d[u] + weight(u, v)
                parent[v] = u
                Q.decrease_key(v, d[v])
```

The algorithm leverages a priority queue data structure instead of a regular queue like the previous ones in this article, since we need to pop the closest node at every iteration.

There is an interactive visualization of Dijkstra's algorithm, created by David Galles at the University of San Francisco, that you can play with online.

### Intuition

We have already explored part of the intuition as to why this algorithm manages to compute shortest paths. We can extend that reasoning beyond the first iteration:

For this particular state, the priority queue will return N. There exists no shorter path to node N, since any other path to N will have to go through the nodes that were already in the priority queue at the time (the way we explore nodes and relax all their adjacent edges makes it so), all of which are farther from the source than N.

As for the other nodes, while their distance values can decrease in future iterations, they can never become smaller than N's distance, because their best bet of decreasing their distance is to find paths that go through N.

### Running Time

We have the following operations:

- **O(V) inserts** into the priority queue (initialization step)
- **O(V) extract-min** operations (the queue starts with V nodes and keeps popping until it runs out)
- for every one of the extracted vertices, we do as many decrease-key operations as there are outgoing edges (in the worst case); summing that over all nodes amounts to **O(E) decrease-key** operations

The actual time complexity hinges on the data structure used to implement the priority queue.

**Using an unsorted array**, extracting the minimum would require a full pass through the vertices, incurring a cost of O(V). Decreasing a key’s value can be done in constant time. The result is **O(V² + E) = O(V²)**.

**Using a binary heap**, both operations would cost O(log(V)). The total running time is **O(V·log(V) + E·log(V)) = O(E·log(V))**.

Using a Fibonacci heap, you can get O(E + V·log(V)) running time, but that's too fancy a data structure for practical use.

As to which one is the better approach, it (clearly) depends on the value of E. The smallest value E can have is V - 1* (when the graph is just connected). For sparse graphs, E = O(V), making the binary heap the better option.

The worst case happens when the graph is dense and we have edges from every node to almost every other node. In that case, **E **= O(V²), and you’re better off using the array implementation to save cost over those decrease key operations.

*I'm ignoring the fact that E can be less than V - 1 when we have multiple disconnected components in the graph, since those won't be reachable from the source anyway. You can either run BFS from the start node to determine which nodes are connected to it and only put those in the priority queue, or populate the queue as you go in the for loop rather than initializing it from the start (which is the case for the Python implementation below).

### Implementation

This is an implementation that uses a heap as a priority queue, and is the most useful one in practice. But it slightly differs from the pseudocode:
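The original listing was not preserved; below is a sketch using Python's `heapq` module as the priority queue (names are mine):

```python
import heapq

def dijkstra(adj, source):
    # adj maps u -> list of (v, weight) pairs.
    dist = {source: 0}
    parent = {source: None}
    visited = set()
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue  # a stale, duplicate (node, distance) pair; skip it
        visited.add(u)
        for v, w in adj.get(u, []):
            if v not in visited and d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                parent[v] = u
                # Push a fresh pair instead of decreasing the key in place.
                heapq.heappush(pq, (dist[v], v))
    return dist, parent
```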

Notice that instead of decreasing the key of a node in place, we just push the new (node, distance) pair into the queue, trusting that the queue will return the pair with the smallest distance first. As for the duplicate larger ones that remain in the queue, they are filtered out by the visited check before a node is processed.

The reason I opted for this: for the decrease-key operation to be O(log(V)), we need to be able, from a node u, to quickly locate the (u, d[u]) pair inside the queue. Python's heapq implementation does not offer such functionality. You can either implement it by hand, use an external module, or get creative.

Since the number of nodes in the queue is no longer capped at V nodes, we can have at most O(E) nodes in the queue. The asymptotic running time is O(E.log(E)) = O(E.log(V²)) = O(E.log(V)), so it basically remains unaffected.

Dijkstra’s running time is pretty good, and for an arbitrary graph there is no better known asymptotic running time, though algorithms that run faster in practice do exist.

However, for the particular case of Directed Acyclic Graphs (DAGs), there is one last algorithm that is faster than Dijkstra’s, and that can even work with negative-weight edges!

The creative name in the title is courtesy of the fact that this algorithm lacks one, since no one really knows who first invented it.

Now, the primary instinct one should develop upon encountering a Directed Acyclic Graph (DAG) in the wild is to topologically sort it. This gives us a linear ordering such that if (u, v) is a directed edge from u to v, then u, the predecessor, will certainly appear before v, the successor.

In other words, **no node can come before its predecessor on any path. All paths lead forward.**

Therefore, by iterating through the nodes in topological order and relaxing all of each node’s outgoing edges, we can rest assured that once a node has been scanned for its neighbors, it will never need to be scanned again: its distance value can never be updated afterwards, because there is literally no way to get back to it from its successor nodes. That is the defining property of a topological order.

This is basically the same reasoning used to prove the correctness of Dijkstra’s greedy approach, but without the complicated explanations (the actual formal proof of Dijkstra’s is pretty gnarly…) or the fancy data structures. And the total running time is the same as BFS: **O(V + E)**.

**Implementation**

We first topologically sort the graph. The code below uses Depth First Search (DFS) for that purpose. I will not get into the details of this since it is not a shortest path algorithm.
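A sketch of what that DFS-based topological sort might look like (assuming the graph is a dict mapping each node to a list of (neighbor, weight) pairs; for deep graphs you would want an iterative DFS to avoid hitting Python’s recursion limit):

```python
def topological_sort(graph):
    """Return the nodes of a DAG in topological order via recursive DFS."""
    order = []
    visited = set()

    def dfs(u):
        visited.add(u)
        for v, _ in graph.get(u, []):
            if v not in visited:
                dfs(v)
        order.append(u)  # u is finished only after all its successors

    for node in graph:
        if node not in visited:
            dfs(node)
    return order[::-1]  # reversed finish order = topological order
```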

And now we can implement the shortest path algorithm:
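As a self-contained sketch (assuming the graph is a dict mapping each node to a list of (neighbor, weight) pairs and is already known to be acyclic; the topological sort is inlined so the example stands alone):

```python
def dag_shortest_paths(graph, source):
    """Single-source shortest paths on a DAG in O(V + E)."""
    # Topologically sort the graph (DFS finish order, reversed).
    order, visited = [], set()

    def dfs(u):
        visited.add(u)
        for v, _ in graph.get(u, []):
            if v not in visited:
                dfs(v)
        order.append(u)

    for node in graph:
        if node not in visited:
            dfs(node)
    order.reverse()

    # Relax every outgoing edge of each node, in topological order.
    dist = {source: 0}
    for u in order:
        if u not in dist:       # not reachable from the source, skip
            continue
        for v, w in graph.get(u, []):
            if v not in dist or dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

Because every node is settled before any of its successors, negative edge weights are handled correctly, which is exactly what Dijkstra’s greedy choice cannot do.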

Phew, it’s about damn time; this took longer than I expected. We have explored the most fundamental single-source shortest path algorithms. Here is a summary of the algorithms we have seen so far:

And below is a decision tree to help with choosing the best option for your use case:

A couple of notes to finish:

- All of the algorithms described in this article are designed to find the shortest path from **one particular source node**, hence why they are called **single-source shortest path algorithms**. There are other algorithms that can find all the shortest paths from all nodes of the graph, and that are more efficient than running one of the algorithms we have seen V times (once per source node). I might write an article on those next time.
- I actually lied about the **Bellman-Ford** algorithm. The algorithm presented in this article is actually called “**Shortest Path Faster**”, an improved version of Bellman-Ford that has the same asymptotic complexity but runs faster in practice. They share the same proof of correctness, but I find that proof easier to see with the Shortest Path Faster algorithm. That, coupled with its practical efficiency, is why I chose the variation over the original.


### Dijkstra's shortest path algorithm


**Introduction**

We want to find the path with the smallest total weight among the possible paths we can take.

**Dijkstra's shortest path algorithm**

Dijkstra's algorithm is an iterative algorithm that provides us with the shortest path from one particular starting node (**a** in our case) to all other nodes in the graph.

To keep track of the total cost from the start node to each destination we will make use of the **distance** instance variable in the **Vertex** class. The **distance** instance variable will contain the current total weight of the smallest weight path from the start to the vertex in question. The algorithm iterates once for every vertex in the graph; however, the order that we iterate over the vertices is controlled by a **priority queue** (actually, in the code, I used **heapq**).

The value that is used to determine the order of the objects in the priority queue is **distance**. When a vertex is first created **distance** is set to a very large number.

When the algorithm finishes the distances are set correctly as are the predecessor (**previous** in the code) links for each vertex in the graph.

**Code**

In the code, we create two classes: **Graph**, which holds the master list of vertices, and **Vertex**, which represents each vertex in the graph (see Graph data structure).

The source file is Dijkstra_shortest_path.py.

The function **dijkstra()** calculates the shortest path. The **shortest()** function constructs the shortest path starting from the target ('e') using predecessors.

**Output:**

Graph data:

    ( a , f, 14) ( a , c, 9) ( a , b, 7)
    ( c , d, 11) ( c , f, 2) ( c , a, 9) ( c , b, 10)
    ( b , d, 15) ( b , a, 7) ( b , c, 10)
    ( e , d, 6) ( e , f, 9)
    ( d , c, 11) ( d , e, 6) ( d , b, 15)
    ( f , a, 14) ( f , c, 2) ( f , e, 9)

Dijkstra's shortest path:

    updated : current = a next = f new_dist = 14
    updated : current = a next = c new_dist = 9
    updated : current = a next = b new_dist = 7
    updated : current = b next = d new_dist = 22
    not updated : current = b next = c new_dist = 9
    updated : current = c next = d new_dist = 20
    updated : current = c next = f new_dist = 11
    updated : current = f next = e new_dist = 20
    not updated : current = d next = e new_dist = 20

    The shortest path : ['a', 'c', 'f', 'e']

The steps to calculate the path are:

1. Assign to every node a tentative distance value: set it to zero for our initial node and to infinity for all other nodes. This initialization is done in the **Vertex** constructor:

        self.distance = sys.maxint

    For the starting node, the distance is set in **dijkstra()**:

        print '''Dijkstra's shortest path'''
        # Set the distance for the start node to zero
        start.set_distance(0)

2. Mark all nodes unvisited. This is also done in the **Vertex** constructor:

        self.visited = False

3. Set the initial node as current. Create a list of all the nodes, called the unvisited list. We do it with (distance, v) tuple pairs:

        def dijkstra(aGraph, start, target):
            print '''Dijkstra's shortest path'''
            # Set the distance for the start node to zero
            start.set_distance(0)

            # Put tuple pair into the priority queue
            unvisited_queue = [(v.get_distance(), v) for v in aGraph]
            heapq.heapify(unvisited_queue)

4. For the current node, consider all of its unvisited neighbors and calculate their tentative distances. Compare the newly calculated tentative distance to the currently assigned value and keep the smaller one. For example, if the current node A is marked with a distance of 6, and the edge connecting it with a neighbor B has length 2, then the distance to B (through A) will be 6 + 2 = 8. If B was previously marked with a distance greater than 8, change it to 8; otherwise, keep the current value.

        while len(unvisited_queue):
            # Pops a vertex with the smallest distance
            uv = heapq.heappop(unvisited_queue)
            current = uv[1]
            current.set_visited()

            for next in current.adjacent:
                # if visited, skip
                if next.visited:
                    continue
                new_dist = current.get_distance() + current.get_weight(next)
                if new_dist < next.get_distance():
                    next.set_distance(new_dist)
                    next.set_previous(current)
                    print 'updated : current = %s next = %s new_dist = %s' \
                        % (current.get_id(), next.get_id(), next.get_distance())
                else:
                    print 'not updated : current = %s next = %s new_dist = %s' \
                        % (current.get_id(), next.get_id(), next.get_distance())

5. When we are done considering all of the neighbors of the current node, mark the current node as visited and remove it from the unvisited set. For each newly visited node, we rebuild the heap: pop all items, refill the **unvisited_queue** with the vertices not yet visited, and then **heapify** it:

        # Rebuild heap
        # 1. Pop every item
        while len(unvisited_queue):
            heapq.heappop(unvisited_queue)
        # 2. Put all vertices not visited into the queue
        unvisited_queue = [(v.get_distance(), v) for v in aGraph if not v.visited]
        heapq.heapify(unvisited_queue)

6. A visited node will never be checked again:

        for next in current.adjacent:
            # if visited, skip
            if next.visited:
                continue

7. If there is no unvisited node, the algorithm has finished. Otherwise, we go back to step 4.

8. Gather the predecessors, starting from the target node ('e'). In the code, this is done in the **shortest()** function:

        def shortest(v, path):
            ''' make shortest path from v.previous'''
            if v.previous:
                path.append(v.previous.get_id())
                shortest(v.previous, path)
            return
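For comparison, the same algorithm fits in a few lines of Python 3 without the **Vertex** class (a sketch using a plain adjacency dict of node -> (neighbor, weight) pairs and pushing fresh queue entries instead of rebuilding the heap; the names are illustrative):

```python
import heapq

def dijkstra_with_path(graph, start, target):
    """Return the shortest path from start to target as a list of nodes."""
    dist = {start: 0}
    previous = {}                      # predecessor links, like Vertex.previous
    visited = set()
    queue = [(0, start)]
    while queue:
        d, current = heapq.heappop(queue)
        if current in visited:
            continue
        visited.add(current)
        for neighbor, weight in graph.get(current, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float('inf')):
                dist[neighbor] = new_dist
                previous[neighbor] = current
                heapq.heappush(queue, (new_dist, neighbor))
    # Walk the predecessor links back from the target, like shortest()
    path = [target]
    while path[-1] != start:
        path.append(previous[path[-1]])
    return path[::-1]

# The tutorial's undirected graph, as an adjacency dict
graph = {
    'a': [('f', 14), ('c', 9), ('b', 7)],
    'b': [('a', 7), ('c', 10), ('d', 15)],
    'c': [('a', 9), ('b', 10), ('d', 11), ('f', 2)],
    'd': [('b', 15), ('c', 11), ('e', 6)],
    'e': [('d', 6), ('f', 9)],
    'f': [('a', 14), ('c', 2), ('e', 9)],
}
print(dijkstra_with_path(graph, 'a', 'e'))  # ['a', 'c', 'f', 'e']
```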


- K Hong

## Shortest Paths

NetworkX ships ready-made implementations of these algorithms; its weighted shortest-path routines are listed below.

- `dijkstra_predecessor_and_distance(G, source)`: Compute weighted shortest path length and predecessors.
- `dijkstra_path(G, source, target[, weight])`: Returns the shortest weighted path from source to target in G.
- `dijkstra_path_length(G, source, target[, weight])`: Returns the shortest weighted path length in G from source to target.
- `single_source_dijkstra(G, source[, target, …])`: Find shortest weighted paths and lengths from a source node.
- `single_source_dijkstra_path(G, source[, …])`: Find shortest weighted paths in G from a source node.
- `single_source_dijkstra_path_length(G, source)`: Find shortest weighted path lengths in G from a source node.
- `multi_source_dijkstra(G, sources[, target, …])`: Find shortest weighted paths and lengths from a given set of source nodes.
- `multi_source_dijkstra_path(G, sources[, …])`: Find shortest weighted paths in G from a given set of source nodes.
- `multi_source_dijkstra_path_length(G, sources)`: Find shortest weighted path lengths in G from a given set of source nodes.
- `all_pairs_dijkstra(G[, cutoff, weight])`: Find shortest weighted paths and lengths between all nodes.
- `all_pairs_dijkstra_path(G[, cutoff, weight])`: Compute shortest paths between all nodes in a weighted graph.
- `all_pairs_dijkstra_path_length(G[, cutoff, …])`: Compute shortest path lengths between all nodes in a weighted graph.
- `bidirectional_dijkstra(G, source, target[, …])`: Dijkstra’s algorithm for shortest paths using bidirectional search.
- `bellman_ford_path(G, source, target[, weight])`: Returns the shortest path from source to target in a weighted graph G.
- `bellman_ford_path_length(G, source, target)`: Returns the shortest path length from source to target in a weighted graph.
- `single_source_bellman_ford(G, source[, …])`: Compute shortest paths and lengths in a weighted graph G.
- `single_source_bellman_ford_path(G, source[, …])`: Compute shortest path between source and all other reachable nodes for a weighted graph.
- `single_source_bellman_ford_path_length(G, source)`: Compute the shortest path length between source and all other reachable nodes for a weighted graph.
- `all_pairs_bellman_ford_path(G[, weight])`: Compute shortest paths between all nodes in a weighted graph.
- `all_pairs_bellman_ford_path_length(G[, weight])`: Compute shortest path lengths between all nodes in a weighted graph.
- `bellman_ford_predecessor_and_distance(G, source)`: Compute shortest path lengths and predecessors on shortest paths in weighted graphs.
- `negative_edge_cycle(G[, weight, heuristic])`: Returns True if there exists a negative edge cycle anywhere in G.
- `goldberg_radzik(G, source[, weight])`: Compute shortest path lengths and predecessors on shortest paths in weighted graphs.
- `johnson(G[, weight])`: Uses Johnson’s Algorithm to compute shortest paths.
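A quick usage sketch of a few of these routines on the tutorial’s example graph (assuming NetworkX is installed):

```python
import networkx as nx

# The weighted graph from the Dijkstra tutorial above
G = nx.Graph()
G.add_weighted_edges_from([
    ('a', 'b', 7), ('a', 'c', 9), ('a', 'f', 14),
    ('b', 'c', 10), ('b', 'd', 15),
    ('c', 'd', 11), ('c', 'f', 2),
    ('d', 'e', 6), ('e', 'f', 9),
])

print(nx.dijkstra_path(G, 'a', 'e'))         # ['a', 'c', 'f', 'e']
print(nx.dijkstra_path_length(G, 'a', 'e'))  # 20
print(nx.bellman_ford_path(G, 'a', 'e'))     # same path, via Bellman-Ford
```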

