For better or for worse, there's always more than one way to do something. Luckily for us, in the world of software and computer science, this is generally a Very Good Thing™.

And why is it good? Well, for one, I'm a big fan of options and having many of them to choose from. But really, it all comes down to different types of problems, some of which can seem similar to things we've seen before, and the various solutions that fit best to solve them. This is certainly the case for the seemingly simplest of problems: take sorting, for example. As we learned early on in this series, there are so many different methods of doing something as basic as sorting a series of numbers and putting them in order. In fact, the multiplicity of options is often *exactly* what makes a task that ought to be "basic" seem far more complicated.

Here's the thing, though: if we can manage to climb over the hump of over-complication and somehow make it over to the other side, then we can start to see that all of these various solutions arose from a need to solve similar, but *ever-so-slightly different*, problems. That was certainly the case with the origins of many of the sorting algorithms we know (and hopefully love!), and it's the case with graph traversal algorithms, too. Last week, we learned about one approach to the problem of walking through a graph: breadth-first search. Today, we'll flip this approach on its head, and look at a solution that is similar to, and yet also the inverse of, BFS.

So, without further ado: let's dive right into the deep end, shall we?

### A primer, before going deep

A helpful first step in understanding how any algorithm works and what it does is knowing what the algorithm *does not* do. In other words, when we're learning something new, it can be useful to compare the new thing that we're learning to the things that we already know well and feel fairly comfortable with.

This is particularly the case when we start getting into more complex algorithms, like graph traversal algorithms. So, let's start with a definition, and then see how depth-first search compares to the other graph traversal algorithm that we are already familiar with: breadth-first search.

The **depth-first search** algorithm allows us to determine whether two nodes, node *x* and node *y*, have a path between them. The DFS algorithm does this by looking at all of the children of the starting node, node *x*, until it reaches node *y*. It does this by recursively taking the same steps, again and again, in order to determine if such a path between two nodes even exists.
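As a rough sketch of that question in code (Python and the dictionary-of-adjacency-lists representation are our own assumptions here, not part of the definition above), the DFS path check might look something like this:

```python
def path_exists(graph, current, target, visited=None):
    """Recursively walk one path at a time, returning True as soon
    as `target` turns out to be reachable from `current`."""
    if visited is None:
        visited = set()
    if current == target:
        return True          # base case: we reached node y
    visited.add(current)
    for neighbor in graph.get(current, []):
        if neighbor not in visited and path_exists(graph, neighbor, target, visited):
            return True
    return False             # every path from here was a dead end
```

On a tiny directed graph like `{"x": ["a"], "a": ["y"], "y": []}`, `path_exists(graph, "x", "y")` answers `True`, while the reverse query answers `False`.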

Now, if we contrast DFS to what we know about BFS, or *breadth-first search*, we'll start to see that, while these two algorithms might *seem* similar, they are fundamentally doing two very distinct things. The striking difference between the two algorithms is the way they approach the problem of walking, or traversing, through the graph. As we discovered last week, the BFS algorithm traverses through a graph *one level at a time*, visiting all the children of any given vertex: the neighboring nodes that are equidistant from the "parent" node in the graph.

However, depth-first search takes a different approach: it traverses down one single *path* in a graph, until it can't traverse any further, checking one child node at a time.

The depth-first algorithm sticks with one path, following that path down a graph structure until it ends. The breadth-first search approach, however, evaluates all the possible paths from a given node equally, checking all potential vertices from one node together, and comparing them simultaneously.
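One way to make this contrast concrete: the two traversals can share almost all of their code, differing only in whether the next pending vertex comes off a stack (last in, first out) or a queue (first in, first out). The sketch below is our own illustration under that framing, with a made-up directed graph; none of it comes from the walkthrough itself.

```python
from collections import deque

graph = {  # a small directed graph, purely illustrative
    "a": ["c", "b"], "b": ["e", "d"], "c": ["d"],
    "d": ["e", "f"], "e": [], "f": ["g"], "g": ["c"],
}

def traverse(start, pop_next):
    """Generic traversal: the only difference between DFS and BFS
    is which pending vertex we expand next."""
    pending, visited, order = deque([start]), set(), []
    while pending:
        node = pop_next(pending)
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        pending.extend(graph[node])
    return order

dfs_order = traverse("a", lambda d: d.pop())      # LIFO: follow one path down
bfs_order = traverse("a", lambda d: d.popleft())  # FIFO: one level at a time
```

Both orders visit every vertex exactly once, but the DFS order plunges down a path before circling back, while the BFS order fans out from the start node.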

Like architecture and biology, in this case, the old adage rings true: form really *does* follow function. That is to say, the *way* that both of these algorithms are designed helps give us clues as to what their strengths are! Breadth-first search is crafted to help us determine one (of sometimes many) shortest path between two nodes in the graph. On the other hand, depth-first search is optimized not to tell us whether a path is the shortest, but rather to tell us whether the path *even exists!*

And, as we can probably imagine, different situations, problems, and graphs will lead us to choose one of these algorithms over another. But we'll come back to this later on. For now, let's just focus on getting to know depth-first search a little better.

Depth-first search is a particularly interesting algorithm because it's likely that we've all used some variation of it at some point in our lives, whether we realized it or not. The easiest way to reason about depth-first search in a graph is to simplify it into a much easier problem.

The DFS algorithm is much like solving a maze. If you've ever been to a real-life maze or found yourself solving one on paper, then you know that the trick to solving a maze centers around following a path until you can't follow it anymore, and then backtracking and retracing your steps until you find another possible path to follow.

At its core, that's all that the depth-first search algorithm really is: a method of getting out of a maze! And, if we envision every graph as a maze, then we can use DFS to help us "solve" and traverse through it.

Using this metaphor, when we employ DFS, all we're really doing is continuing to walk down a path of the graph until we reach a dead end. If and when we reach a dead end, we backtrack until we find another path that we haven't yet traversed or walked down, and repeat the process. Eventually, we'll be able to determine whether or not we can get out of the maze; that is to say, whether or not a path between the starting node and the ending node exists.

One interesting thing to note before we start putting all this DFS theory into practice: the process of backtracking at a dead end and then *repeating* the walk down one single path of a graph is actually just **recursion**! We're taking the same action again and again and, in programmatic terms, this would end up being a *recursive function call*, or a function that calls itself until it hits some sort of base case. As we'll see in a moment, recursion plays a big part in how DFS actually runs.

### Depth-first, in action

Exactly like what we saw in last week's exploration of BFS, we can start our traversal of a graph with DFS in a similar fashion: wherever we want!

When it comes to both breadth-first search and depth-first search, there are only two major points to keep in mind when initiating a graph traversal: first, we can choose any arbitrary node to start our traversal with, since there is no concept of a "root" node the way that there is in tree structures. And second, whatever we do, we want to ensure that we don't repeat any nodes; that is to say, once we "visit" a node, we don't want to visit it again. Similar to what we did with the breadth-first search algorithm, we'll mark every vertex we visit as "visited" in order to ensure that we don't repeat nodes in our traversal unnecessarily.

So, let's try to run a DFS algorithm on the directed graph above, which has seven nodes that we'll end up needing to check, or "visit", in the course of our graph traversal.

We can arbitrarily choose any node to start with, so let's choose node a as our starting "parent" node. Since we know that depth-first search is all about finding out whether a path exists between two nodes, we'll want to be sure that we can keep track of where we came from as we walk through our graph; in other words, we'll need to keep some kind of trail of "breadcrumbs" as we traverse.

For every node that we visit, we'll keep track of where we came from and use that both to *backtrack* when we need to, and also as an easy way to keep track of the path that we're constructing through the graph. When we choose node a as our "parent" node, we'll set a parent pointer reference, just like we did with our BFS algorithm. Since the "parent" vertex is the first one we're visiting in this algorithm, it doesn't have a "parent" pointer, since we're not coming from anywhere!

So, we'll set node a's parent pointer to NULL, and mark node a as "visited". A simple way to keep track of which node we're currently searching through is by employing a stack data structure. The moment that we check node a, we can push it right on top of our stack. Since our stack is empty to start with, node a is the only element that's actually *in* our stack. We'll mark it as "visited".
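In code, this bookkeeping could be as simple as a list used as a stack, a dictionary of parent pointers, and a set of visited nodes. This is a minimal sketch with names of our own choosing, with Python's `None` standing in for NULL:

```python
stack = []        # the node we're currently searching through sits on top
parent = {}       # "breadcrumb" trail: where we came from for each node
visited = set()

def begin_at(node):
    """Start a traversal: the first node has no parent to point back to."""
    parent[node] = None   # nothing came before the starting node
    visited.add(node)     # mark it as "visited"
    stack.append(node)    # push it onto the (previously empty) stack

begin_at("a")
```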

Next, we'll want to (recursively) visit every single node that is *reachable* from node a. Just as it doesn't matter *which* node we start with, it doesn't really matter *which* neighboring vertex we visit next, just as long as the vertex is reachable, and is one of the neighbors of a. For example, we could arbitrarily choose to visit node c next.

We'd push it onto the stack, which now contains two elements (a reference to node a as well as a reference to node c), and we'll visit the node that is currently on top of the stack. In the process, we'll set its parent pointer to the vertex that we *just* came from: node a.

Now that we've visited node c, there's only one thing left to do: lather, rinse, and repeat! Okay, okay, you can skip the first two. Really, all we need to do here is just repeat the process (suds optional, obviously).

For example, since we can choose *any* node that is reachable from node c, we could choose node d as the next node we visit. We'll add it to the top of the stack, mark it as "visited", and set its parent pointer.

From node d, we'll visit node e: add it to the stack, mark it as "visited", and finally, set its parent pointer to the node we just came from: node d.

But now we have a problem: we can't repeat this process because there's simply *nowhere to go* from node e!

We've gone as deep as we can down this particular path from the node that we started with, and we've hit a dead end; that is to say, we've reached a node with no reachable vertices!

Given our conundrum, let's pause for a moment and take a look at our stack of "visited" nodes, which has the following nodes on it: e, d, c, and a, in that order, from the top of the stack to the bottom. Since there is nowhere to go *from* node e, we effectively have no other node to visit, which means that we have no other node to add to the top of the stack. At least, given where we currently are, at node e. But node d, the second element in the stack, *might* have somewhere to go, right?

And this is exactly where the backtracking and the idea of "breadcrumbs" come into play, not to mention recursion! When we've gone as deep as possible down the graph, we can backtrack one step (one *vertex*) at a time, and check to see if there are any other paths that we could possibly take.

So, since we can't search through any paths from vertex e (since none exist), we'll pop vertex e off of the top of the stack once we're finished with it. This leaves node d at the top of the stack, so we'll repeat the same process again; that is to say, we'll check to see if any of node d's neighbors can be visited and if there is a path down the graph from that node.
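This pop-and-backtrack behavior can be sketched iteratively. The Python below is our own sketch, with the example graph's edges reconstructed from this walkthrough (an assumption, since the original figure isn't reproduced here): a node is pushed when we visit it and popped once all of its neighbors are exhausted.

```python
def dfs_with_backtracking(graph, start):
    """Go as deep as possible; pop (backtrack) whenever the node on
    top of the stack has no unvisited neighbors left."""
    stack, visited, order = [start], {start}, [start]
    while stack:
        node = stack[-1]
        # look for any neighbor we haven't seen yet
        next_node = next((n for n in graph[node] if n not in visited), None)
        if next_node is None:
            stack.pop()                  # dead end: backtrack one vertex
        else:
            visited.add(next_node)
            order.append(next_node)
            stack.append(next_node)      # go one level deeper
    return order

# edges as we read them from this walkthrough (our reconstruction)
graph = {
    "a": ["c", "b"], "b": ["e", "d"], "c": ["d"],
    "d": ["e", "f"], "e": [], "f": ["g"], "g": ["c"],
}
order = dfs_with_backtracking(graph, "a")
```

With those edges, the visit order comes out as a, c, d, e, f, g, b: down one path, backtracking at e and again after g, exactly as traced above.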

Once we backtrack from node e to d, we'll notice that there's only one direction for us to go; there is only one node to check, which is node f. We'll add it to the top of the stack, mark it as visited, and check to see if it has any children that we can visit.

We'll notice that, after we backtracked and changed which node we were checking, looking at, or "visiting", the top of the stack changed. We popped off some nodes, and added on others, but the main parent node remained the same. We repeated the same steps again and again, with each node that was added to the top of the stack, and those steps were the same things we checked for the parent node, vertex a, when we added it to the stack when we first started out! This is *recursion* coming into play.

From node f, we have no choice but to visit g, which is the only accessible node, the only one that is available for us to visit. So, we'll add g to the top of our stack, visit it, and check to see where we can go from there.

As it turns out, from node g, there is only one place for us to go: node c. However, since we were smart enough to keep track of the nodes that we visited, we already know that c has been visited and is part of this path; we don't want to visit it again! So, we've come to another dead end, which means that we can backtrack. We'll pop node g off the stack, and check to see if the next node has any other children we can traverse through. As it turns out, node f doesn't have any child nodes that we haven't already visited, nor do nodes d and c; so, we'll pop all of them off from the top of the stack.

Eventually, we'll find that we've backtracked our way all the way to our original "parent" node, node a. So, we'll repeat the process again: we'll check to see which of its children we can visit that we *haven't already visited* before. Since we've already visited nodes c and g, the last remaining option is to visit b.

Again, we'll do what we've done with every single node thus far: add node b to the top of the stack, mark it as "visited", and check to see if it has any children that we can traverse through that haven't been visited yet. However, node b's children are e and d, and we've visited both already, which means that we've now visited *all* of the nodes in this graph! In other words, our depth-first graph traversal is officially complete for this structure.

We'll notice that, with each node that we pushed onto and later popped off the stack, we repeated the same steps from within our depth-first search algorithm. Indeed, what we were *really* doing was recursively visiting nodes. Effectively, every time that we reached a new node, we took these steps:

- We added the node to the top of the "visited" vertices stack.
- We marked it as "visited".
- We checked to see if it had any children; if it did, we ensured that they hadn't been visited already, and then visited one of them. If not, we popped the node off the stack.

With every new node added to the stack, we repeated these steps from within the context of the previous node (or previous function call) on the stack. In other words, we *recursively* visited each node down the path, until we reached a dead end.
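The steps above translate almost directly into a recursive function, where the language's call stack replaces our explicit one. This is a hedged Python sketch (the function and variable names are our own, and the graph's edges are our reconstruction of the walkthrough):

```python
def dfs_visit(graph, node, visited=None, order=None):
    """Visit `node`, then recursively visit each of its unvisited
    children; returning from a call is the "pop"/backtrack step."""
    if visited is None:
        visited, order = set(), []
    visited.add(node)        # mark it as "visited"
    order.append(node)       # the "push onto the stack" step
    for child in graph[node]:
        if child not in visited:
            dfs_visit(graph, child, visited, order)
    return order
```

Run on the same reconstructed graph, the recursive version visits nodes in the same order as the manual trace.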

As it turns out, this recursive repetition of visiting vertices is the main characteristic of most implementations of the depth-first search algorithm!

### Real-life recursion and runtime

The recursion of the DFS algorithm stems from the fact that we don't actually finish checking a "parent" node until we reach a dead end, and inevitably pop off one of the "parent" node's children from the top of the stack.

We can think of the recursive aspect of DFS as a function call to "visit" a node within another, already-running function call to "visit" a node. For example, when we begin visiting node a, we are still *in the process of visiting* node a when we start visiting one of its children, like node c.

And yet, despite the recursion that is built into depth-first search, the runtime of this algorithm in real life isn't actually too terribly affected by the recursive aspect of this graph traversal technique. In fact, even with the recursion, the process of visiting a single vertex in the graph takes **constant time**. So, if checking a vertex once isn't the expensive part of this algorithm... then what is?

The answer lies in the edges; more specifically, the price of checking the outgoing edges from each vertex that we visit can turn out to be both pretty expensive and time-consuming. This is because some nodes could have only one neighboring vertex to check, and thus only one edge, while other nodes could have five, or ten, or many more edges to check! So, really, the runtime of checking every outgoing edge from one vertex to another depends solely upon the size/length of any given node's *adjacency linked list*, which amounts to **linear time**.
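To make the accounting concrete: with an adjacency list representation, the total work is one constant-time visit per vertex plus one check per entry in each vertex's list, which is where the linear bound comes from. A quick illustrative count, using the edges we reconstructed from this walkthrough (the exact graph is an assumption):

```python
graph = {
    "a": ["c", "b"], "b": ["e", "d"], "c": ["d"],
    "d": ["e", "f"], "e": [], "f": ["g"], "g": ["c"],
}

vertex_visits = len(graph)                              # |V| = 7 vertices
edge_checks = sum(len(adj) for adj in graph.values())   # |E| = 9 directed edges
total_work = vertex_visits + edge_checks                # O(|V| + |E|) = 16 units
```

Notice that `edge_checks` is exactly the combined length of every adjacency list, so a node with ten outgoing edges contributes ten times the work of a node with one.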

We'll recall from the basics of graph theory that a graph can have either undirected or directed edges. Just as undirected and directed graphs had slightly different runtimes for breadth-first search, based on whether each edge appeared once or twice in the adjacency list representation of a graph, it's a similar story here, too.

In fact, we could have applied DFS in the exact same way to the same graph we've been dealing with, even if it were undirected. The only major difference would have been the fact that, when considering which vertex to "visit" next while running DFS, each edge in the graph would have been considered twice.

Thus, the actual runtime of DFS is no different than that of BFS: they both take linear time, with the slight differentiation being the number of edges (the length of the adjacency linked list) of the graph, based on whether the graph is directed or undirected. For a *directed* graph, the runtime amounts to **O(|V| + |E|)**, while for an undirected graph, the runtime is calculated as **O(|V| + 2|E|)**, both of which result in *linear time*.

But wait: how does all of this theoretical recursion tie back into the actual implementation of this algorithm? We already know how graphs are represented using adjacency lists. We also know how to use those representations to make sense of other algorithms, like breadth-first search. So how can we make sense of the depth-first algorithm in a similar way?

Well, let's think about what would happen when we run DFS on the adjacency list representation of this same graph. The image below illustrates what that adjacency list might look like.

When we first visit our "parent" node a, we add it to our stack and mark it as visited. In the context of our adjacency list, we are marking our "visited" array, flagging the index of the vertex we just pushed onto our stack (index 0) and setting its "visited" status to TRUE.

Next, we'll take a look at the first item in node a's adjacency linked list. In this case, the first item in the list is a reference to the item at index 2, which is node c. We'll visit node c next, and, in the process, we'll put the rest of the work of iterating through node a's adjacency linked list "on hold". In other words, we're going to look up the node at index 2 next, rather than iterate through the rest of node a's adjacency linked list and look at whatever element happens to be at index 1.

Since the next step is to mark the node at index 2 as visited, we'll do exactly that. The vertex at index 2 is node c, so we'll add it to our stack, mark it as visited, and check the first neighbor in its adjacency linked list. We've already gone through these steps previously, so let's skip to the point where we hit the dead end and have to backtrack back *up* to the "parent" node; *that's* where things get interesting!

After we've traversed down all the way to check and visit node g, we hit a dead end, and backtrack back up to node a, located at index 0. It is only at this point that we pick up where we left off; that is to say, we'll continue now with the process of iterating through node a's adjacency linked list (*finally!*).

The next element in node a's adjacency linked list is a reference to index 1, which is a reference to vertex b. At this point, we'll also notice that the entire "visited" array is marked TRUE everywhere, with only one exception: vertex b. This means that once we check node b and mark it as "visited", we will have traversed through the whole graph and, voilà, we're done!
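The index-based bookkeeping described above might be sketched like this, with Python lists standing in for the adjacency linked lists and the "visited" array (the exact indices are our reading of the walkthrough, not a reproduction of the original image):

```python
labels = ["a", "b", "c", "d", "e", "f", "g"]   # indices 0..6
adjacency = [
    [2, 1],   # a -> c (index 2), then b (index 1)
    [4, 3],   # b -> e, d
    [3],      # c -> d
    [4, 5],   # d -> e, f
    [],       # e has no outgoing edges
    [6],      # f -> g
    [2],      # g -> c
]
visited = [False] * len(adjacency)             # the "visited" array

def dfs_from(index):
    visited[index] = True                      # flag this slot as TRUE
    for neighbor in adjacency[index]:
        if not visited[neighbor]:
            dfs_from(neighbor)                 # iterating this list resumes
                                               # only after the recursion returns

dfs_from(0)   # start at node a (index 0)
```

After the call returns, every slot of `visited` is `True`: the iteration of node a's list that was "on hold" resumes, reaches index 1 (node b), and the traversal is complete.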

That wasn't too terrible, was it?

The differences between depth-first search and breadth-first search can be subtle and tricky to notice at first! They both can be implemented on an adjacency list representation of a graph, they each result in the same runtime, and they both involve iterating through the adjacency list of every vertex within a graph. However, there are slight differences in their implementation that we can start to pick up on once we see each of these algorithms in action.

The important thing to remember about both of these algorithms is that neither one is necessarily *better* than the other. For example, depth-first search is great at determining whether a path exists between two nodes, and doesn't necessarily require a lot of memory, since the entire graph doesn't need to be initialized or instantiated in order to traverse through it. However, DFS isn't helpful in finding a shortest path between two nodes; indeed, we might end up inadvertently finding the longest path! In comparison, BFS is great at finding the shortest path between two nodes, but often requires us to store the entire graph as we search through it, level by level, which can cost a lot in space and memory.

Each solution has its benefits and drawbacks. But they are two different ways of solving a problem and, depending on what kind of problem we have, one of them might just end up being the perfect tool for the job.

### Resources

Depth-first search can be explained and implemented in a few different ways, and trying to understand all of them, at least when you're first learning the DFS algorithm, can feel overwhelming. However, once you're more comfortable and familiar with how it works, it's helpful to know the different implementations and idiosyncrasies of DFS. If you're looking to gain a deeper understanding of this algorithm, here are some good examples and implementations to help you get started.

- Depth-first Search (DFS) on Graphs Part 2, Sesh Venugopal
- Depth-First Search, Department of Computer Science, Harvard
- When is it practical to use DFS vs BFS?, StackOverflow
- Depth-First Search (DFS), Topological Sort, MIT OpenCourseWare
- Graph Traversals: Breadth First and Depth First, CollegeQuery
