Recently, I've been working through Dave Thomas's Code Kata problems in an effort to sharpen my programming skills. The early ones were fairly easy, and I got through them pretty quickly. That changed once I got to Kata 19, Word Chains.

The idea sounds simple enough. The puzzle is to write a program that will transform a starting word to an ending word. One letter is changed at a time, and each word in the chain must be a real word.

I initially coded a seemingly easy solution, but it only worked in a small subset of cases. I hadn't thought the problem through enough before jumping into code. In my original implementation, each step in the chain had to move a letter into its correct position in the ending word. This worked for a case like "cat" to "dog" (cat, cot, cog, dog), but not for a trickier case such as "the" to "end" (the, tie, die, did, aid, and, end), which requires more intermediate steps.

Clearly, I was on the wrong path.

Finally, it occurred to me that I would have to calculate all the possible word transformations, then find the shortest path. Uh-oh. Sounds like a job for a graph. Now, I haven't looked at graph theory since college. I spent a couple of days researching the topic, even watching some interesting MIT lectures on algorithms (which I highly recommend).

Finally, I set to coding my new solution. After loading the list of words from the dictionary file, a graph is constructed. The starting node is the starting word, then all possible edges are recursively calculated. In a classic case of over-engineering, I then implemented Dijkstra's algorithm to find the shortest path.

The solution was plenty fast, finding any word chain I could throw at it in well under a second. But as I thought about it more, I realized Dijkstra was a bit of overkill. Part of Dijkstra's algorithm involves measuring the weights of different edges to find the shortest path. However, in this case, all the edges are equal in weight (a weight of 1).

With all edge weights equal, it was essentially a breadth-first search. Realizing that, I wrote a new implementation to find the shortest path, using a simple breadth-first search.

(Image: Wikipedia)

After getting that working, I found another optimization. Due to the way breadth-first search traverses the graph, the graph doesn't need to be generated ahead of time. When a node (a word) is visited, its neighboring words can be generated right then, building the graph on the fly. This reduces memory usage, since only the portion of the graph the search actually reaches ever gets built.
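As a sketch of that on-the-fly idea (the method name getNextWords and the dictionary Set are my assumptions; the actual implementation may differ), generating a word's neighbors on demand might look like this:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class NeighborGenerator {
    private final Set<String> dictionary; // assumed: the word list loaded from the dictionary file

    public NeighborGenerator(Set<String> dictionary) {
        this.dictionary = dictionary;
    }

    // Generate every real word exactly one letter away from the given word.
    public List<String> getNextWords(String word) {
        List<String> nextWords = new ArrayList<>();
        char[] letters = word.toCharArray();
        for (int i = 0; i < letters.length; i++) {
            char original = letters[i];
            for (char c = 'a'; c <= 'z'; c++) {
                if (c == original) {
                    continue; // skip the word itself
                }
                letters[i] = c;
                String candidate = new String(letters);
                if (dictionary.contains(candidate)) {
                    nextWords.add(candidate);
                }
            }
            letters[i] = original; // restore before trying the next position
        }
        return nextWords;
    }
}
```

Because neighbors are computed only when a word is actually visited, no adjacency structure for the whole dictionary is ever materialized.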

The key to the breadth-first search is a Map that associates each node with its predecessor on the shortest path. This Map is also used to remember which words have already been visited:

Map<String, String> previous = new HashMap<>();
Queue<String> q = new LinkedList<>();
q.add(fromWord); // seed the search with the starting word

String word = null;
while (!q.isEmpty() && !(word = q.remove()).equals(toWord)) {
  List<String> nextWords = getNextWords(word);
  for (String nextWord : nextWords) {
    if (!previous.containsKey(nextWord)) {
      previous.put(nextWord, word);
      q.add(nextWord);
    }
  }
}

When the search completes, the shortest path can be constructed from the gathered predecessor data. Starting with the ending word, the path is built in reverse by looking up each word's predecessor. This continues until the starting word is reached:

List<String> path = new ArrayList<>();
String word = toWord;
while (!word.equals(fromWord)) {
  path.add(0, word);
  word = previous.get(word);
}

path.add(0, fromWord);

With that, the shortest word chain is calculated. This was a tough challenge for me, but it was interesting to re-learn all that graph theory that I had forgotten.
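Putting the pieces together, a self-contained version of the search might look like the sketch below. The class and method names are my own, and the code on GitHub is surely organized differently; this just illustrates the breadth-first search and path reconstruction as one unit:

```java
import java.util.*;

public class WordChain {
    // Find the shortest chain from fromWord to toWord, or an empty list if none exists.
    public static List<String> findChain(String fromWord, String toWord, Set<String> dictionary) {
        Map<String, String> previous = new HashMap<>();
        Queue<String> q = new LinkedList<>();
        q.add(fromWord);

        boolean found = false;
        while (!q.isEmpty()) {
            String word = q.remove();
            if (word.equals(toWord)) {
                found = true;
                break;
            }
            for (String nextWord : getNextWords(word, dictionary)) {
                if (!previous.containsKey(nextWord)) {
                    previous.put(nextWord, word); // record how we reached this word
                    q.add(nextWord);
                }
            }
        }
        if (!found) {
            return Collections.emptyList();
        }

        // Walk backward from the ending word to rebuild the path.
        List<String> path = new ArrayList<>();
        for (String w = toWord; w != null; w = previous.get(w)) {
            path.add(0, w);
        }
        return path;
    }

    // Generate every real word exactly one letter away from the given word.
    private static List<String> getNextWords(String word, Set<String> dictionary) {
        List<String> nextWords = new ArrayList<>();
        char[] letters = word.toCharArray();
        for (int i = 0; i < letters.length; i++) {
            char original = letters[i];
            for (char c = 'a'; c <= 'z'; c++) {
                if (c == original) continue;
                letters[i] = c;
                String candidate = new String(letters);
                if (dictionary.contains(candidate)) nextWords.add(candidate);
            }
            letters[i] = original;
        }
        return nextWords;
    }
}
```

For example, with a dictionary containing cat, cot, cog, and dog, findChain("cat", "dog", dictionary) yields the chain cat, cot, cog, dog.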

You can see the full Java solution on my GitHub page here: