A Simple Programming Puzzle Seen Through Three Different Lenses
By anders pearson 30 Oct 2007
The other day, I stumbled across Mark Nelson’s blog post describing a fairly simple NPR word puzzle: “Take the names of two U.S. States, mix them all together, then rearrange the letters to form the names of two other U.S. States. What states are these?”
Mark Nelson is a programmer, so his first instinct, of course, was to write a small program to solve it. Mine too. I immediately stopped reading his post, opened a new emacs buffer, and spent five minutes coming up with this:
:::python
states = [
    "alabama", "alaska", "arizona", "arkansas", "california", "colorado",
    "connecticut", "delaware", "florida", "georgia", "hawaii", "idaho",
    "illinois", "indiana", "iowa", "kansas", "kentucky", "louisiana",
    "maine", "maryland", "massachusetts", "michigan", "minnesota",
    "mississippi", "missouri", "montana", "nebraska", "nevada",
    "newhampshire", "newjersey", "newmexico", "newyork", "northcarolina",
    "northdakota", "ohio", "oklahoma", "oregon", "pennsylvania", "rhodeisland",
    "southcarolina", "southdakota", "tennessee", "texas", "utah", "vermont",
    "virginia", "washington", "westvirginia", "wisconsin", "wyoming"]

seen = dict()
for state1 in states:
    for state2 in states:
        if state1 == state2:
            continue
        # normalize the pair: sort the combined letters into a single key
        letters = list(state1 + state2)
        letters.sort()
        key = "".join(letters)
        if seen.has_key(key):
            (old1, old2) = seen[key]
            if old1 == state2 and old2 == state1:
                # same two states in the opposite order; keep looking
                continue
            else:
                print "found it: %s + %s, %s + %s" % (
                    state1, state2, old1, old2)
                exit(0)
        else:
            seen[key] = (state1, state2)
Sure enough, it gave me the answer (and, like Mark Nelson, I realized on seeing it that it probably would have been even more productive to just think about the puzzle for a minute instead of writing the code).
To me, that Python code is about as simple and direct as it gets. There’s nothing clever going on there. Just loop over the cartesian product of states x states, skip a couple of obvious cases, normalize each pair of state names by sorting their combined letters, and use the result as a dictionary key. As soon as you hit a combination that’s already in the dictionary, you have your answer.
To me, that approach would qualify as “naive” or “brute force”. I figured there might be a more optimized approach, but with only 50 states it wasn’t really worth optimizing any further. Running it took all of 18 milliseconds.
When I went back and read the rest of the blog post, my jaw dropped. His first pass, in C++, involved a quadruply nested loop and the STL, and took upwards of an hour to run. Looking at his code, he took the notion of a “brute force” approach to a level I hadn’t even considered: a quadruply nested loop examines every pair of state pairs, on the order of 50^4 combinations, where the dictionary approach only ever inserts the 50^2 pairs. His second and third passes improved the runtime and even simplified the code, but they still run on the order of a few seconds. That’s for a compiled, low-level language. My Python code, written without any particular thought toward efficiency and running in an interpreter (so starting up the Python process, parsing the code, and interpreting it line by line without compiler optimizations are all included in the running time), was beating it by a couple of orders of magnitude.
The key, of course, is the algorithm. Python, like Ruby, Perl, and other relatively high-level languages, supports a dictionary (or hashtable, if you prefer) data type at the syntax level. As a result, anyone programming in one of those languages quickly learns how to make the most of them and becomes familiar with a number of idioms, including the one I used: testing for uniqueness by keeping a ‘seen’ dictionary, inserting keys one by one, and looking for a collision. It’s dead simple, commonly used in scripting languages, and extremely efficient, since inserting into a hashtable is O(1) on average and tends to be one of the most finely tuned parts of a scripting language’s interpreter/VM.
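Stripped of the puzzle specifics, the idiom looks something like this (a minimal sketch of my own; the name first_duplicate is mine, not from either post):

:::python
def first_duplicate(items):
    # the 'seen' dictionary idiom: insert keys one by one,
    # and the first collision is the answer
    seen = dict()
    for item in items:
        if item in seen:
            return item
        seen[item] = True
    return None

Each membership test and insert is an average-case constant-time hash operation, so scanning the whole input stays linear in the number of items.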
There’s nothing Python-specific about the algorithm. In fact, in the comments on the post, Vince Huston submits a C++ solution that’s basically identical to my approach (and probably outperforms everyone else handily). If I were forced to solve the same problem in C++, I would probably have come up with something similar. I would not be at all surprised if Vince Huston has had some experience coding in scripting languages and Mark Nelson hasn’t.
The point isn’t that Mark Nelson is a bad programmer. Far from it. Looking around at the rest of his site, at articles like his explanation of the Byzantine Generals Problem (which was how I ended up on his site in the first place), and at the books he’s written, I’d guess that overall he has more breadth and depth to his understanding of algorithms and programming than I do.
My point is really just to repeat the tired, broken-record cry of advocates of higher-level languages: 1) using a faster, more appropriate algorithm will usually get you much further than low-level optimizations (using a compiled language, static types, fretting over clock cycles in an inner loop, etc.), and 2) learning different programming paradigms, languages, and idioms will improve your programming even if you end up going back to a lower-level language. In this case, some familiarity with dictionary idioms common in scripting languages helps immensely in producing a fast, simple solution.
Another comment on his post goes even further. Tom Moertel submits a solution implemented in Haskell. From a performance standpoint, it’s pretty much the same approach as mine, using a dictionary (in this case, Haskell’s Data.Map library) to do the comparisons on normalized strings. What makes it a really nice solution, though, is that he approaches it by starting with a “clusterBy” higher-order function that takes a signature function and a list and returns a list of groups clustered by the signature (my explanation is bad; his examples make it much clearer). Essentially, instead of directly solving the problem, he creates a general-purpose tool which can then trivially be applied to the specific problem at hand. clusterBy is the kind of function that I could see being useful for other problems. So not only does he end up with the solution to this problem, he also ends up with a useful tool to make future programming tasks simpler. Haskell’s smooth support for higher-order functions makes this a natural approach, and it seems to be the way that proficient Haskell programmers end up taking on most problems.
Python had the good taste to steal a few items from Haskell’s bag of tricks, though, so I was inspired to try a Python version. Here’s a rough equivalent of clusterBy:
:::python
def cluster_by(f, lst):
    # group the elements of lst by their signature under f
    transformed = [f(x) for x in lst]
    d = dict()
    for t, i in zip(transformed, lst):
        d.setdefault(t, []).append(i)
    return d.values()
Not as pretty as the Haskell, but it works essentially the same way.
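To make the behavior concrete (this example is mine, not Tom Moertel’s), here’s cluster_by grouping a few words by length:

:::python
words = ["apple", "fig", "kiwi", "plum", "pear"]
print cluster_by(len, words)
# one possible output (dict values come back in no particular order):
# [['fig'], ['kiwi', 'plum', 'pear'], ['apple']]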
Then, to solve the original problem, we need a normalize function:
:::python
def normalize(t):
    # signature function: all the letters of both names, sorted,
    # e.g. normalize(("iowa", "ohio")) == "ahiiooow"
    letters = list(t[0] + t[1])
    letters.sort()
    return "".join(letters)
It takes a tuple of two state names, sorts all the letters in them, and returns that as a string. The final piece is just to apply cluster_by with the normalize function to the cartesian product of states x states and find the clusters containing more than one distinct pair. Since each unordered pair shows up twice, once in each order, that means looking for clusters larger than two. (And since I was going Haskell-style, I decided to use list comprehensions instead of for loops as well):
:::python
clustered = cluster_by(normalize, [(x, y) for x in states for y in states])
print [x for x in clustered if len(x) > 2]
It runs in 25 milliseconds on my machine: slightly more overhead than the procedural approach, but close enough to not matter. And, IMO, it’s cleaner and lends itself more readily to further reuse of the cluster_by or normalize functions.
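As a quick illustration of that reuse (a sketch of my own, not from the post), the same cluster_by with a different signature function groups plain words into anagram sets:

:::python
def sorted_letters(word):
    return "".join(sorted(word))

words = ["pots", "stop", "tops", "spot", "opts", "taco", "coat", "pear"]
print [c for c in cluster_by(sorted_letters, words) if len(c) > 1]
# one possible output:
# [['pots', 'stop', 'tops', 'spot', 'opts'], ['taco', 'coat']]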
So first, knowing an idiom from a scripting language can produce better C++ code (see Vince Huston’s solution); then, taking a lesson from Haskell, I was able to come up with better Python code.
I’m waiting for a Prolog programmer to come along and smoke everyone next.
Tags: programming functional haskell python