thraxil.org:

A Gentle Introduction to Heka

Sysadmins know that monitoring, metrics, logging, and alerting are deceptively complicated problems. And complicated problems breed complicated tools.

I've used a lot of them. Individually, they all suck in different ways. At least in the open source world, no single tool will handle everything. The combination of multiple tools needed to cover all the territory sucks even more. Multiple services to run on every machine, multiple config formats, multiple open ports and data directories that have to be kept track of. An exponential explosion of things that can break or be misconfigured.

Things are looking up though. Over the last few years, a few key pieces have been falling into place. Graphite has emerged as a pretty good backbone to hang a lot of the other pieces off. If you can figure out how to turn whatever you're interested in into a single number, you can feed it into Graphite. Poll Graphite for that metric and you can send an alert if it passes a threshold. There are any number of tools out there to take your Graphite metrics and turn them into beautiful dashboards that you can put on a big screen monitor in your office so everyone can see the health of your whole system at a glance. It's not perfect, but it handles a lot of use-cases and makes life simpler.

That's only one piece of the puzzle though. Funneling everything down to a single numerical metric works for quite a few things, but there are still plenty of cases where you want actual log entries, preferably even parsed out into fields and tagged with timestamps. If you can feed it into Elasticsearch, Kibana will give you a pretty reasonable interface for looking through it. Even for the data making its way to Graphite, you still have to collect it from a variety of sources and grind it up and turn it into the right format.

What's left is basically a plumbing problem: collecting data from various logs and applications in different formats and routing it around, parsing some of it, filtering out noise at the source, aggregating and rolling up some of it, turning some into counters and rate variables, and getting the right bits out to the right endpoints.

There are a few options for this now. Riemann and Logstash will both handle this kind of plumbing. They have very different approaches, but can both fill that role.

Recently, Mozilla announced Heka, which is another player in that space. Heka is clearly heavily inspired by Logstash, but has its own flavor. It's a very young project, but so far I'm really liking what I'm seeing. It's written in Go and compiles to a small, fast binary that's painfully easy to deploy and is very light on resource usage. I tend to run small virtual servers without a lot of memory to spare, so I've been paying close attention.

Where Heka gets difficult is in configuring it. This is a problem it shares with the other similar "plumbing" tools. Since they mostly just connect other components together, it's hard, looking at one in isolation, to know exactly what you should do with it at first. You can easily download the right Heka binary for your system and run it. And you'll see... nothing. To see it do anything, you'll need to at the very least figure out how to get some kind of data into it and tell it how to output data to some other system that you can watch. To do that, you'll have to read quite a bit of the documentation. I've found Heka's current documentation particularly rough for this initial "but how do I actually use it?" use case. The example config file that you first encounter does a good job of giving you a sense of what Heka is capable of, but it isn't likely to work for your setup and you'll have to read a lot more to make sense of what it's doing.

So I figured it would be helpful to come up with a couple of very simple config files that might be more useful for someone just starting out with Heka.

Of course, first, you'll want to download the appropriate Heka binary for your system. Personally, for testing purposes, I just grabbed the right tarball and unpacked it to get the hekad binary. You can run it very straightforwardly with:

$ hekad -config=/path/to/config.toml

It will either complain about errors in your config file or it will run in the foreground.

Now let's make some config files. The format is TOML, which is super easy to get right, so at least there's that.

First up is pretty much a "hello world" equivalent config. We just want Heka to watch one log file and, when a line is appended to it, to print that line out to another file. This is the simplest Heka config I've been able to make that actually does something. Here it is:
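Something like the following (the exact parameter names here are my best reconstruction from Heka's early docs, so double-check them against the version you downloaded):

```toml
[LogfileInput]
logfile = "/tmp/input.log"
seekjournal = "/tmp/heka-seekjournal"

[FileOutput]
message_matcher = "Type =~ /.*/"
path = "/tmp/output"
```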

Save that to echo.toml and point hekad at it.

Then open two more terminals. In one, append a line or two to /tmp/input.log:

$ echo one >> /tmp/input.log
$ echo two >> /tmp/input.log
$ echo three >> /tmp/input.log

In the other terminal, run tail -f /tmp/output. You should see something like:

one

two

three

If you append more text to the input, you should see it show up in the output. There's an extra newline on each of them, but at least we're seeing something. We've successfully recreated a unix pipe.

There are a few things to note in the config. Obviously, there's a "LogfileInput" section that points it at a log file to watch and a "FileOutput" section that tells it to write output to a particular file. The "seekjournal" line tells Heka where it can store its own data for basic internal bookkeeping. That's only necessary because I'm running it as my regular user and Heka's default, /var/run/hekad/seekjournals/, isn't writeable by my user, so I just point it off at a temp file for now. Then, in the output section, this line:

message_matcher = "Type =~ /.*/"

is telling Heka to apply this output to any and all messages that it encounters. Remember that Heka's job is to handle lots of different inputs, apply various filters to them, and route them to the right outputs. So everything that isn't an input needs to be specific about which messages it would like to handle. Our simple output section just matches anything whatsoever. As your Heka configs get more complicated, this output section is a handy one to be able to paste in when you're debugging so you can see everything that Heka is doing.

In the next post, I'll show you a slightly more complicated config that will let Heka run as a replacement for statsd.

TAGS: sysadmin heka

A Quick Meditation on Types

This is valid Java:

But this is not valid Python:

It's pretty easy to pick them apart and understand exactly why each is what it is, but I think it's worth meditating on why it is somewhat surprising to most programmers.

Ricon East 2013 Talk Summaries

This week I attended Ricon East, a conference organized by Basho, makers of the Riak distributed database. Riak was featured prominently, but it was intended more as a conference on distributed systems in general, spanning academia and industry.

For once, I brought a laptop and actually took full notes of every talk I went to (there were two tracks going on, so I could unfortunately only be at half of them). Last weekend, I even wrote myself a little toy wiki in Go (with Riak as the backend) to take notes in and really get in the spirit.

I'm a fan of Richard Feynman's basic technique for learning, which can be summarized as "write down a summary of the topic as if you were explaining it to a new student". It's simple, but forces you to identify the areas that you don't really understand as well as you think. Then you can go off and learn those areas.

Since distributed systems are an area that I'm deeply interested in and trying to learn as much as I can about, I decided to use this as an excuse to apply the Feynman technique and write up summaries of all the talks I saw. The following summaries are explanations largely from memory, informed by my notes, of my understanding of the content of each talk. I'm sure I got some stuff wrong. A lot of the content was in areas that I'm not experienced with and only loosely understand. I was typing notes as fast as I could while trying to listen at the same time, and I probably misheard some things and mistyped others. If you were at the talks (or just know the subjects well) and have corrections, please let me know. As I find links to speakers' slide decks, I'll try to add those.

First though, a few words about the conference overall. This was one of the best run conferences I've been to in a long time. Everything mostly ran on time, there were only a few minor technical problems (though the projection screens in Stage 2 were pretty rough looking), the venue and catering were excellent, it was full but not overcrowded, and the talks were consistently well-executed with a good range of accessibility and depth. If my "summaries" seem long, it's because all of the talks were really that information dense. I'm leaving out a lot of detail.

Automatically Scalable Computation

by Dr. Margo Seltzer, Harvard SEAS

video, slides

Margo Seltzer and her grad students at Harvard have been working on a new technique for automatically parallelizing sequential code at the VM/runtime level.

The idea begins with looking at a running machine in terms of the full set of registers and memory as a single (very large) bit vector. If you know the full state of the machine at a given time, it should be completely deterministic (we are excluding all external IO from this model). An instruction executing is just a function on the input state that produces another state as the output and computation is just this process repeated. You can view computation as a trajectory through this large-dimensional state space. A naive approach to thinking about making that computation parallel is to pick N points on the trajectory and have each processor start at those points (this makes more sense with a diagram, sorry). Of course, that presupposes that you know what the final computation trajectory would look like, which is clearly not realistic (and if you did know it, you might as well jump straight to the end) as it would require having pre-computed every state transition in the entire state-space ahead of time (and having it all stored where the outputs can be retrieved).

Seltzer's approach is a variation on this naive version. The proposal is to say: instead of being able to have each processor pick up at exactly the points on the trajectory that we know the program will hit, what if we can come up with reasonable guesses that put us in close to the same area? If that's possible, then parallel processes could work to refine from there and speed up the computation. Indeed, it turns out that real world programs tend to follow predictable enough trajectories through the state space that this might be reasonable.

The overall system involves a few components. There's a VM, which takes the input state vector, knows how to find the instruction pointer in that vector, will look up the current instruction, run it, and generate an output state vector. Then there are components that look at pieces of input vectors and use different strategies to try to predict pieces of output vectors. Eg, some operate at a single bit level and make predictions like: if a bit was 1 last time, it will be 1 next time. Others work at a 32-bit word level and do, eg linear regression.

Aggregators combine those individual predictors and make larger predictions by running a given input vector though N instructions and adjusting the weights of each of the predictors with each round. Basically, very simple machine learning. There is a trajectory cache which maps input vectors to known output vectors.
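A toy sketch of those two layers (entirely my own illustration; the real system works on x86 state vectors, not Python lists):

```python
class LastValuePredictor:
    """Predicts that each bit keeps its previous value."""
    def predict(self, state_bits):
        return list(state_bits)

class WeightedAggregator:
    """Combines per-predictor guesses with a weighted vote and
    demotes predictors that guess wrong -- very simple machine learning."""
    def __init__(self, predictors):
        self.predictors = predictors
        self.weights = [1.0] * len(predictors)

    def predict(self, state_bits):
        # Weighted vote per bit: positive votes mean "predict 1".
        votes = [0.0] * len(state_bits)
        for w, p in zip(self.weights, self.predictors):
            for i, bit in enumerate(p.predict(state_bits)):
                votes[i] += w if bit else -w
        return [1 if v > 0 else 0 for v in votes]

    def update(self, state_bits, actual_bits):
        # Multiplicative weight update: halve a predictor's influence
        # for every fully-wrong output vector it produces.
        for j, p in enumerate(self.predictors):
            guess = p.predict(state_bits)
            wrong = sum(g != a for g, a in zip(guess, actual_bits))
            self.weights[j] *= 0.5 ** (wrong / len(actual_bits))
```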

A key insight here is that instead of caching the entire state vector each time, the system pays attention to which bits in the inputs are read and written to during the course of execution. So while a given state vector may be huge (since it encompasses all of the program's memory), in the course of running 1000 instructions from a given start state, only a small subset of that memory will be read or written and the vast majority of the memory and registers will remain whatever they were before. So the cache takes advantage of this and stores the masks along with the values of the bit vectors in those masks. On a lookup, it can see if the current state matches any cached mask+value pairs. If it does, it can change only the bits that the cache entry says get changed, leaving the rest of the vector intact. What that is analogous to is taking a linear transformation of the cached trajectory fragment and applying it to the input vector.
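The mask trick can be sketched the same way (again my own toy version, with {bit index: value} dicts standing in for the read and write masks):

```python
class TrajectoryCache:
    """Caches trajectory fragments keyed on only the bits a run read,
    applying only the bits it wrote."""
    def __init__(self):
        self.entries = []  # list of (read_bits, write_bits) dicts

    def store(self, read_bits, write_bits):
        # read_bits / write_bits: {index: value} observed during a run
        self.entries.append((read_bits, write_bits))

    def lookup(self, state):
        for read_bits, write_bits in self.entries:
            # Match only on the bits the cached run actually read.
            if all(state[i] == v for i, v in read_bits.items()):
                # Apply only the written bits, leaving the rest intact.
                new_state = list(state)
                for i, v in write_bits.items():
                    new_state[i] = v
                return new_state
        return None  # cache miss: fall back to actually executing
```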

Implementation of their system is still immature, but Seltzer shows that they are able to get pretty good speedups on real physics problems written in C and compiled with normal GCC. It's nowhere near linear speedup (with the number of additional CPUs) and is not as good as a programmer could do manually (all their examples are for embarrassingly parallel problems). A 10x speedup from 128 cores sounds bad, but she points out that at some point in the future your laptop will have that many cores and this kind of automatic parallelizing will be the only way that you'll be able to make use of most of that. She's also happy to note that there are still a large number of "stupid" things that the implementation does, so there's plenty of room for it to improve.

ZooKeeper for the Skeptical Architect

by Camille Fournier, VP of Technical Architecture, Rent the Runway

video, slides

Camille presented ZooKeeper from the perspective of an architect who is a ZooKeeper committer, has done large deployments of it at her previous employer (Goldman Sachs), left to start her own company, and that company doesn't use ZooKeeper. In other words, taking a very balanced engineering view of what ZooKeeper is appropriate for and where you might not want to use it.

ZooKeeper is a distributed consensus and distributed locking system that came out of Hadoop and is now safely part of the Apache project. In the last few years, it seems to have worked its way into many places: neo4j, Apache Kafka, Storm, Netflix, OpenStack; it's ubiquitous in cloud computing, SaaS, PaaS, and at any organization that deploys very large distributed systems. This makes the systems that don't use ZooKeeper notable: Riak, Cassandra, MongoDB, etc.

A few reasons that ZooKeeper isn't used there: 1) from a CAP theorem perspective, ZooKeeper favors consistency over availability. It requires a quorum of nodes and will not run at reduced capacity. That makes it an inappropriate match with systems like those mentioned above that favor availability via eventual consistency. 2) ZooKeeper has a very high operational complexity. It requires a hard minimum of three nodes, takes a fair amount of resources to run (Java...), and the ZooKeeper project strongly advises against colocating ZooKeeper nodes on servers that are running other services, or running them on virtual servers. Ie, you really should have at least three dedicated bare metal servers just for ZooKeeper to use it properly. If you are running a Hadoop cluster or a big SaaS operation, that's probably not a problem. But if you're a vendor like Basho trying to get people to use your product, adding a ZooKeeper dependency makes it a hard sell. She has also experienced first-hand the problem that an out-of-control client that doesn't understand ZooKeeper can write too much data to it, fill memory (ZooKeeper needs to keep all its data resident in memory), and take down the entire ZooKeeper cluster. Facebook and Google (Chubby is their equivalent to ZooKeeper) both deal with that problem by enforcing strict code review policies on any code that goes near the distributed locking. But at some level, you need to be able to trust any developers that are accessing it.

Why would you use ZooKeeper though? Why is it so popular? She divides the functionality into two main categories: distributed lock management, and service management/discovery. Distributed locking lets you do things like failover of nodes in a large system safely. If you have traditional database servers with master/slave replicas and a master goes offline, you need to promote one of the slaves to be the new master. If your system can't agree on which slave to promote and two try to claim the crown, it's a disaster. Likewise, if a slave gets promoted and then the original master comes back online because it was just a network partition and it doesn't realize that it's been replaced, it will be a disaster. ZooKeeper can be used to absolutely guarantee that only one of the database servers holds the "master" title at a given time. Service management/discovery would be, eg, the rest of your system being able to be notified that there is a new master that they should direct their writes to. Ie, having that happen automatically based on events in the system rather than someone manually changing a config file setting and doing rolling restarts on the cluster to get that change reflected. Once a distributed system scales past a certain point, doing that kind of thing automatically rather than manually becomes crucial.

Distributed locking is typically needed if you have components that are not designed with a distributed, eventual consistency or immutable data kind of mindset from the beginning (eg, scaling out traditional databases rather than something like Riak). Service management can often be achieved to acceptable levels with other techniques like load balancing proxies or DNS (see a blog post from Spotify on how they are using DNS instead of ZooKeeper), or by storing service information in a shared database. Those all have their own scaling issues and potential problems, but are often simpler than running ZooKeeper. Ultimately, she argues that you should run ZooKeeper if you have one of the above use cases as a requirement, you have the operational resources to justify it, and you have rapid growth and need the dynamic resource allocation.

Scaling Happiness Horizontally

by Mark Wunsch, Gilt Groupe

video, slides

Mark's talk was focused on Conway's Law, roughly "organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations" and its converse, that the software architecture at a development shop influences the communication structures of the organization in turn. Gilt's development team wanted to optimize for developer happiness and used this approach as their fundamental philosophy.

When most companies start out, they have one monolithic architecture, platform, framework, etc. That gets them quite a ways and then they begin to feel pain as they outgrow it. Eg, Twitter's beginnings on RoR, moving to JVM, Scala, etc. That pain manifests in development but also in a monolithic organizational structure. Eg, only one engineer might understand a crucial part of the system and they become a bottleneck for the rest of the organization. It’s a classic concurrency problem and managers become thread schedulers.

They view developer happiness as a dining philosophers problem. At any given time, a developer can either code or think. A developer who is only able to code and has no time to think burns out and is unhappy. A developer who only thinks and cannot code (due to organizational bottlenecks like the one above) gets frustrated and is unhappy. These are deadlock and livelock problems at an organizational level. Gilt sought to eliminate them the way a distributed systems programmer would: find the mutex and eliminate it. Make it a proper lock-free distributed system from the ground up.

What they ended up doing was breaking up into small autonomous teams and changing their architecture to "LOSA", "Lots Of Small Applications", basically SOA for every component in use. Teams then have "initiatives", "ingredients", and "KPIs". Initiatives are projects that they expect to work on for a while. Key Performance Indicators, or KPIs are metrics used to track the progress of those initiatives. They must be agreed on between the team and the management and simple to calculate. "Mission statements are an antipattern; they’re just a less useful version of a metric". Teams are otherwise autonomous. They self-select and pick initiatives to work on. Teams are organizationally flat and made up of "ingredients" like designer, architect, developer, ops, etc. "Job titles are an antipattern; they let people say 'that's not my job'".

There is a spectrum of communication styles from "hive minds" with tightly coupled, synchronous, informal communication to "pen pals" with loosely coupled, asynchronous, formal communication. A team should be a hive mind and teams should interact with other teams as pen pals. They apply CAP theorem to their organizational design and choose A over C. The result is that there is sometimes inconsistency between teams. Eg, it’s OK for their microsites to each have slightly different UI elements temporarily as one team advances that component faster than another team is keeping up with. Eventual consistency will bring things in line in a while and in the meantime, it’s better than deadlocking until the teams can achieve consistency. Other organizations like Apple, or small startups probably would value consistency over availability.

Gilt's preferred communication tool is code reviews. They use Gerrit. A code review gives them a permalink associated with every change along with a history of the communication and decision making process associated with that change.

Firefighting Riak At Scale

by Michajlo Matijkiw, Sr. Software Engineer at Comcast Interactive Media

slides

This talk was basically "we did a very large Riak deployment when Riak was very young, found a bunch of problems, solved them, and kept everything running".

Comcast wanted to use Riak to expose a simple, easy to use key/value store to their developers. They weren't necessarily interested in massive scale or performance. They built a small object wrapper around the Riak client that hid complexity like vector clocks, added monitoring and logging, and made it very simple to store objects at keys. It was an enterprise system, so they did enterprise capacity planning and load testing. Everything worked well until they started getting a lot of HTTP 412 errors. Those occur when you do a sequence of GETs and conditional PUTs based on ETags and a pre-condition fails. What they discovered was that overall latency in the cluster had gone way up. Not enough to trip their alarms, but enough that it was causing other problems. The analogy is "if you leave your apartment door unlocked while you run out quickly to switch your laundry, it's probably ok, but if you leave it unlocked for a week while you go on vacation, you'll come home to find everything gone". High latency was leaving a large window for other inconsistencies to show up. Machines were getting so overloaded that they were overheating the machines next to them, causing cascading failures that way.
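The ETag dance behind those 412s, in miniature (my own illustration of the mechanics, not Comcast's actual wrapper):

```python
class Store:
    """A single key with compare-and-set semantics via an ETag."""
    def __init__(self):
        self.value, self.etag = None, 0

    def get(self):
        # GET returns the value plus the current ETag.
        return self.value, self.etag

    def put_if_match(self, value, etag):
        # Conditional PUT: fails with 412 if someone wrote in between.
        if etag != self.etag:
            return 412  # Precondition Failed
        self.value, self.etag = value, self.etag + 1
        return 200

# The longer the latency between the GET and the PUT, the wider the
# window for a concurrent writer to bump the ETag and trigger a 412.
```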

When hardware failed, they figured out that they could abandon a bad node, then reintroduce it to the cluster as a new node and that worked as a quick fix, so they did that a lot. That became their ops solution to all problems. They started getting some failures that were not hardware failures, but corruption to the bitcask data stores. They decided to use their usual fix even though Basho kept telling them not to do that. When they rejoined the bad nodes, they found that not all the data was getting read-repaired. Their N-values weren't set correctly and the old nodes that they had been replacing hadn't been replicating data. It had been fine up until the point where they reached a threshold and all the replicas of some pieces were gone. They ended up having to do surgery on the bitcask files, which are append-only log structures. So they went in and replayed them up until the point of the corruption, getting back all the data from before that point, and then forcing the cluster to read-repair everything after that. Riak has since built in detection and correction for that situation.

Retrospective lessons: better monitoring of utilization and latency were needed. They needed to spend more time with clients showing them proper ways to use (and ways not to use) the system, and they needed to introduce proper choke points so they could throttle traffic to limit damage.

Bloom: Big Systems From Small Programs

by Neil Conway, Berkeley PhD candidate.

video, slides

This talk was a continuation of Neil's BashoTalk on CALM computing and of his professor's talk from the first Ricon, which was a big hit. This one reviewed the basic ideas and introduced the Bloom language that he's been working on, which implements them.

Distributed computing is the new normal (between mobile devices and cloud computing). You are all building distributed systems whether you want to admit it or not. But our tools are still built for traditional problems. The kinds of optimizations in distributed systems are new (global coordination, locality, etc) and compilers don't know about them. They can't detect deadlocks, replica divergence, or race conditions. We have no tools for debugging an entire system. We know how to build distributed systems, but we don’t know how to make developers productive and efficient at writing them. We can do better, but we need new approaches.

The two main approaches to date are: either enforce a global order at all nodes with distributed locks and consensus, allowing us to consider only one possible event ordering but paying a price in availability and latency; or ensure correct behavior for any event ordering with "weak consistency", which makes race conditions and deadlocks hard to reason about and is a major burden on developers.

His approach involves Bounded Join Semilattices, which are just a mathematical way of talking about objects that only grow over time. The core is a monotonic merge() function. The classic example is Sets with union as the merge() function. Eg, if you start with {a}, {b}, {c}, you can merge() those into {a,b}, {a,c}, {b,c}, and merge() again into {a,b,c}. The order in which any of those merges happened doesn't matter; you will end up at the same result eventually. Ie, merge() is commutative. You can also think about Integers with max(), Booleans with OR(), etc. These lattices form the basis of "convergent replicated data types" like sets, counters, graphs, and trees. If you can express your computation in terms of CRDTs, you can build systems that are easy to reason about mathematically and do not depend on the ordering of events. Neil calls this "disorderly data" and "disorderly computation".
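A minimal sketch of that example (mine, not Bloom code):

```python
def merge(a, b):
    """Join for the set-union semilattice: commutative, associative,
    and idempotent, so replicas converge regardless of merge order."""
    return a | b

r1, r2, r3 = {"a"}, {"b"}, {"c"}

# Any merge order converges to the same value.
assert merge(merge(r1, r2), r3) == merge(r3, merge(r2, r1)) == {"a", "b", "c"}
```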

CRDTs and asynchronous messaging form a core model that eliminates inconsistency. However, not everything can be expressed in terms of monotone functions. So they've been working on a language, Bloom, that has those primitives and has syntax to explicitly mark non-monotone parts, allowing the compiler to reason about the overall program and point out to the user which parts are safe and which are not (and would require some kind of synchronization).

One of the key techniques is to use immutable data wherever possible. Neil showed the example of a shopping cart model. There would be multiple SessionApps involved and a Client that adds and removes a number of items, possibly hitting a different SessionApp each time, eventually checking out. Instead of having a shared mutable state representing the cart, each SessionApp keeps a log of add and remove operations; adding entries to those logs and merging logs are monotonic. They can be completely inconsistent during most of the process, only synchronizing at the final checkout point, which is non-monotonic, but it's intuitive that that is the point where synchronization needs to happen.
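A sketch of that cart (my own simplification): each SessionApp's log is just a set of immutable (seq, op, item) entries, merging logs is a monotonic set union, and only the final checkout does the non-monotonic tally:

```python
def checkout(*session_logs):
    """Merge the per-SessionApp operation logs and tally the cart.

    The logs can be arbitrarily inconsistent across SessionApps right
    up until this point; the union is order-independent (monotonic),
    and only the tally below requires seeing everything at once.
    """
    merged = set().union(*session_logs)  # monotonic merge of op logs
    counts = {}
    for seq, op, item in merged:
        counts[item] = counts.get(item, 0) + (1 if op == "add" else -1)
    return {item for item, n in counts.items() if n > 0}
```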

Bloom is implemented as a Ruby DSL (but they really don’t care about Ruby) with async message passing actors added and rule triple syntax. Each triple is a left-hand side, an operation, and a right-hand side. The engine looks at the RHS values, and if they are true, sets the LHS values to true, using the operator to track how the two are related in terms of time and location. Eg, same location, same timestamp means that it is a traditional computation. Same location, next timestamp means that it’s some kind of persistence operation like storing data or deleting something. Different location, non-deterministic timestamp implies that the rule application represents communication between parts of the system. The result is concise high level programs where state update, asynchrony, and non-monotonicity are explicit in the syntax.

Just Open A Socket -- Connecting Applications To Distributed Systems

by Sean Cribbs, Software Engineer at Basho

video, slides

Sean Cribbs has written many of the client libraries for Riak and presented on techniques and lessons learned writing client libraries for distributed systems.

Riak supports streaming operations, which are very good for performance. The client can ask for a set of results and get individual notifications for each result as the cluster sends it back. This lets it act on each result ASAP instead of having to wait for the full set, which is a huge win. You would normally do that by setting up an iterator that runs over the results. The Ruby client does that with code blocks and uses libcurl for the HTTP communication. Each block is a closure that will execute on each response that comes in. That worked well until he started getting lots of errors, which he eventually tracked down to connection caching in libcurl, used to avoid setup/teardown overhead. That turned out to be implicit global state and made the blocks non-reentrant. The fix was to use multiple connections (which you ought to be doing when building a distributed system anyway) and pool them.

His second example involved a customer complaining that Riak 1.2 was performing 20% slower than the previous release on their queries. They were pretty sure that all the changes they had made would only improve performance and couldn't duplicate the problem. The customer was using the Python client and ProtocolBuffers instead of HTTP and making mapReduce requests over secondary indices. They were eventually able to duplicate the problem in test clusters when they used the same clients to access it. They did a DTrace and optimized a bunch of things but still didn't understand why it was slowing down so much. Eventually they noticed that it was making a LOT of calls to gen_tcp:send() with very small payloads, just a couple of bytes each, but each call was taking ~30ms. Essentially, since it was doing mapreduce over index data rather than the main data, the queries and responses were both very small and they were running into Nagle's algorithm. Nagle's algorithm is a low level optimization in the TCP/IP stack that tries to maximize network throughput by bunching up traffic to/from the same hosts into large enough packets to be efficient. In a nutshell, it sets a threshold of a few KB and collects smaller packets, holding off on sending them until the threshold is reached or a timeout expires. For most network traffic, that makes much more efficient use of the network and saves a lot of switching overhead that would be incurred by sending out every small packet as soon as it's ready. But in this case, they were hitting a pathological case and it was killing their performance. Their solution was to buffer the requests themselves so they could control the latency instead.
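One common fix for Nagle-induced latency is to disable the algorithm on the socket with TCP_NODELAY (the Riak client authors chose application-side buffering instead, which keeps the latency under their own control); in Python that looks like:

```python
import socket

# Create a TCP socket and turn off Nagle's algorithm so small
# writes go out immediately instead of being coalesced.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```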

Distributed systems fail in very interesting ways. Clients often have a hard time telling all the different types of failure apart. Sean recommends dividing it into two main classes: system related or network errors such as network failures, connection refused, etc. and unexpected results such as quorum failures, key not found, bad request, server side error. The first class can usually be dealt with by doing a retry, preferably with an exponential backoff. The second type can be handled with a retry only if the request is idempotent, but you need to be very careful to make sure that requests are really idempotent. A third class of failures are timeouts and those are the trickiest kind to deal with. He doesn't have a good solution for what to do then. Often a retry, but not always.
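The first two classes map to a retry loop with exponential backoff; a sketch in Python (my example, not any particular Riak client -- ConnectionError and ValueError stand in for the two failure classes):

```python
import time

def with_retries(request, retries=3, base_delay=0.1, idempotent=False):
    """Run request(), retrying per the failure-class rules above."""
    for attempt in range(retries):
        try:
            return request()
        except ConnectionError:
            # System/network error: always safe to retry, with backoff.
            time.sleep(base_delay * 2 ** attempt)
        except ValueError:
            # "Unexpected result": only retry if the request is idempotent.
            if not idempotent:
                raise
            time.sleep(base_delay * 2 ** attempt)
    return request()  # final attempt; let any error propagate
```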

Overall, he suggests opening multiple connections, keeping them isolated, connecting to as many peers as you can (rather than relying on intermediate load balancers), knowing the structure of your entire network stack well, and keeping in mind that clients are not passive observers of the system, but active participants whose presence changes the system.

Nobody ever got fired for picking Java: evaluating emerging programming languages for business-critical systems

by Alex Payne, co-founder at Breather, author of the O'Reilly Scala book

video, slides

Alex Payne wrote the O'Reilly Scala book, but considers himself a true polyglot programmer and also organizes the Emerging Languages Conference every year. That conference has seen 48 new languages presented since 2010. So the problem is that there is so much to build, so many languages, and no good way to choose what language is appropriate for a given task.

There are existing "crappy" solutions to the problem. You can make a totally arbitrary selection. You can make it a popularity contest (slightly better than the first option). Or you can do "Design By Hacker News" and pick the language and technologies that seem to have the most buzz online.

He presents, instead, an evidence-based approach to language selection. The first problem with that approach is that you need to avoid subjective criteria. He includes in that category: readability, terseness, productivity, agility, and mindshare. He does argue, though, that there are a few subjective criteria which have some value when evaluating emerging languages. Developer appeal: some programmers are just excited by novelty. Strategic novelty: you get to invent your own design patterns for the language. This promotes a sort of zen beginner's-mind mentality, which can be good for problem-solving. Homesteading: you may get to write the basic, fundamental libraries. This is fairly easy since you have examples from other languages to copy and improve upon. Malleable roadmap: emerging languages are often still in search of their ideal problem domain, and if you're an early adopter, you can help move a language in a direction that's useful to you. Finally, accessible community: you will probably be able to gain easy access to the small number of people involved in writing and driving the language.

He then sets out the actual objective criteria that can be used for language choice: performance (mostly), library breadth, available primitives (eg, Actors and CSP in languages like Erlang and Go), stability of syntax, security history, and development cycle (how often new versions of the language are released, how backwards compatibility is handled, etc). With those criteria, he recommends taking the very boring, MBA-like approach of setting up a weighted matrix and analyzing each option with it. Programmers are disappointed when he gives them that advice, but he doesn't know of a better approach.
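As a concrete illustration of that weighted matrix, here's a sketch using his criteria but with weights and scores that are entirely invented for the example:

```python
# The criteria come from the talk; the weights and the 1-5 scores below
# are made up purely to show the mechanics of a weighted matrix.
weights = {
    "performance": 0.30,
    "library_breadth": 0.20,
    "primitives": 0.15,
    "syntax_stability": 0.15,
    "security_history": 0.10,
    "release_cycle": 0.10,
}

candidates = {
    "Go":    {"performance": 4, "library_breadth": 3, "primitives": 5,
              "syntax_stability": 5, "security_history": 4, "release_cycle": 4},
    "Scala": {"performance": 4, "library_breadth": 4, "primitives": 4,
              "syntax_stability": 3, "security_history": 4, "release_cycle": 3},
}

def weighted_score(scores):
    # Sum of weight * score over every criterion.
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The boring part, as he says, is the honest work of deciding the weights before you look at the scores.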

Once a language is chosen, he does suggest three strategies for succeeding with an emerging language: First, acknowledge the risk, and hedge against it by also working in a more traditional, "safe" language. Eg, start up a new project in the new language, but keep your old code going alongside it. Next, involve yourself in the language community. You can help shape it, you will be on top of potentially dangerous language changes, and you will have more ready access to the language experts to call upon if you have trouble. Finally, play to your pick's strengths and use another language where it's weak. He points out that if you look at any large development organization like Google, Facebook, Apple, or even Microsoft, they are all polyglot environments. He doesn't know of any even moderately large organization that has managed to truly standardize on only one language for development.

High Availability with Riak and PostgreSQL

by Shawn Gravelle and Sam Townsend, developers from State Farm

Two developers from State Farm talking about rolling out a nationwide platform to allow all their customers to interact with all their services through all channels (mobile, web, phone, and at terminals in their offices). Their priorities are availability, scalability, performance, supportability, and cost, in that order. Their application layer requires three nines of uptime (about eight hours of downtime per year allowed) and the infrastructure layer itself should perform at four nines (52 minutes a year of downtime).

At a high level, they had a greenfield project. They planned three active sites, each getting a 100% replica of the data. All infrastructure would be virtualized, use automatic provisioning, and be oriented around services. Application and service development is Spring and Java, using PostgreSQL for relational data and Riak for non-relational, all running on top of RHEL.

For the Riak part, they had a seven-person team and needed to hook Riak into their existing monitoring, security, and testing setups. They needed to get replication latency between data centers low and consistent. Each site would have a customer-facing cluster (each business area getting its own full silo, starting at a five-node cluster and growing as needed), an internal-facing cluster with read-only replication and lower availability requirements, used for running batch analysis jobs that they didn't want impacting the customer-facing clusters, and a backup cluster that would receive a full replica set, then be shut down and have its filesystem snapshotted once per day for off-site backups. They wrapped Basho's client jar in their own, adding monitoring, security, and some data type restrictions; that wrapper became the single interface application developers would use to access the cluster.

They had very strict security requirements, which they addressed by running every node behind Apache so they could re-use their existing security infrastructure. Their custom jar also enforced strict logging requirements, ensuring that all operations were logged with all the relevant client info. The downside was that they only had per-node security granularity, so all of their internal applications could potentially access the data stored by other applications.

Their main challenges deploying Riak were that they did not have much experience using non-relational data storage and had to do a lot of internal education to teach application developers how to make best use of it, when to use relational datastores instead, etc. Eg, developers would contact them asking them to create buckets for them because they were used to the RDBMS model where they would define a DDL and hand it off to DBAs to create the table structure for them before they could use it. They had to explain to them that with Riak, you just start saving data in the structure you want and it is created on the fly. Developers also had a lot of doubts about performance characteristics which they addressed by doing heavy load testing and benchmarking ahead of time so they could preempt most of those. They maintain three full copies of the ENTIRE cluster setup that code has to pass through on a deployment pipeline: testing, load testing, and pre-production.

They used both PostgreSQL and Riak alongside each other to play to each database's strengths. Their background is heavily relational and PostgreSQL is very smart and has a lot of functionality that they find extremely useful like spatial data awareness. Riak lines up perfectly with their five priorities but tries to be "dumb" about the data that's stored in it (in a good way). PostgreSQL has issues with size, both in terms of scaling the entire database up and also with individual row sizes (if you go over about 8KB, it works but can negatively affect performance) and a lot of their data objects are large forms and documents that are acceptable to be opaque to the database but can grow quite large. PostgreSQL doesn't scale writes out as far as they'd like with a standard master-standby replication scheme, making it a single point of failure. Auto sharding tools exist for PostgreSQL but they aren't as useful for them since they are acting as a service provider and don't necessarily know much about the schema or access patterns of the application data. Riak is transactional internally but does not participate with the applications' transactions the way a RDBMS does. They developed a "pseudo-transactional" approach where they start a transaction in Postgres, write data to Riak, wait for it to succeed/fail and then commit or rollback the Postgres transaction. That works though it's not ideal in all situations.
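The pseudo-transactional pattern they described might look roughly like this sketch; the function is hypothetical, with any DB-API connection standing in for psycopg2 and a plain callable standing in for the Riak client call:

```python
def pseudo_transactional_write(pg_conn, sql, params, riak_write):
    """Sketch of the 'pseudo-transactional' pattern as described in the
    talk: open a Postgres transaction, do the Riak write, and only commit
    the relational side once the Riak side has succeeded.

    pg_conn is any DB-API connection; riak_write is a callable that
    raises on failure (a stand-in here for a real Riak client call).
    """
    cur = pg_conn.cursor()
    try:
        cur.execute(sql, params)   # relational part, not yet committed
        riak_write()               # non-relational part
    except Exception:
        pg_conn.rollback()         # Riak (or the SQL) failed: undo Postgres
        raise
    else:
        pg_conn.commit()           # both sides succeeded
```

Note this is not a real distributed transaction: a crash between the Riak write and the Postgres commit still leaves the two stores inconsistent, which is presumably part of why they called it not ideal in all situations.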

They highly recommend logging and automating everything. If you have 52 minutes a year of downtime allowed, you can't waste any of it doing things manually or risk human error during those short windows. However, be careful not to automate decision making. Eg, automate failover, but make recovery after the failover manually initiated; that helps prevent cascading failures when there are bugs in your automated processes. They found that planned outages were harder to deal with than unplanned ones. Postgres upgrades with replication involve locking both sites temporarily and dealing with all the traffic backing up; in an unplanned outage, it crashes, so there's no traffic to back up. Plus, in unplanned outages, people expect some data loss and rollback; in planned ones, they expect none.

How Do You Eat An Elephant?

by Theo Schlossnagle and Robert Treat, at OmniTI

video slides

Two consultants at OmniTI, who focus on web scalability and actually implement systems for their clients instead of just advising. Existing monitoring tools were not sufficient for the deployments they were doing so they decided to build their own to handle the large scale that they work at. In particular, they deal with multiple datacenters which don't necessarily have connections to each other so most fully centralized monitoring systems like Nagios don’t work. RRD backed monitoring like Munin (Graphite and its Whisper data store didn't exist at the time) typically roll old data out after a year but their clients are largely interested in looking at multi-year trends ("how have our traffic patterns changed over the last four years?") so that wasn't a good solution.

They picked Postgres to back their system because they had a Postgres contributor on staff and had experience scaling Postgres out to databases in the hundreds of terabyte range. They built it and it worked well, but it was hard to install, with quite a few components that had to be installed and configured correctly on each instance. Plus, once it was installed, their clients didn't necessarily understand how to make the best use of it: "when you download the tool, you don't get the philosophy." So they decided that the best solution was to "shove it down their throats as an SaaS".

They rebuilt it as a SaaS, still on Postgres and it was wildly successful. Which means it was a nightmare. Since anyone could hook up new systems to be monitored without going through them, growth was unpredictable and faster than they expected. They discovered that if you have monitoring as a service, it’s far more sensitive to downtime than just about any other system you could imagine. When all your clients are using it to track their uptime, if you go down, all of their system graphs have a gap where you were down, so everyone notices. The CEO comes in on Monday morning, looks at the traffic graphs for the weekend, sees the gap and panics. So the tolerance for downtime was now much lower than before.

They planned on storing data rolled up every five minutes for each metric, averaging about 50 metrics per server, and storing that data for up to ten years (their definition of "forever" for internet companies). That comes out to 50 million rows of data per server. They went through quite a few steps to get this manageable, and developed some clever tricks with Postgres array data types and schemas to let them mix real-time metric collection with batch-mode rollups of the data. Storage growth and performance were basically handled.
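The arithmetic behind that row count, for the curious (my reconstruction of their numbers):

```python
# One rollup per metric every five minutes, ~50 metrics per server,
# kept for ten years -- the numbers quoted in the talk.
samples_per_day = (24 * 60) // 5         # 288 five-minute rollups per day
rows_per_metric = samples_per_day * 365 * 10
rows_per_server = rows_per_metric * 50
print(rows_per_server)                   # 52,560,000, i.e. ~50 million
```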

The major issue became fault tolerance. PostgreSQL is rock solid, but the hardware it runs on would still fail sometimes and they wanted to be able to do updates. In a cluster as large as they were running, these failures happen regularly. PostgreSQL can do reasonably good fault tolerance with a replication setup but it fundamentally expects you to be able to pause traffic while it does some of the more sensitive failover operations (because PG's focus is on guaranteeing ACID and consistency rather than availability). With their incredibly low tolerance for downtime, this was becoming seriously difficult.

They decided that long term they needed to switch to a column store of some type that was simpler to deal with from a fault tolerance perspective. They ruled out commercial offerings like Oracle RAC and Teradata (would not fit with their pricing model and require dedicated DBAs), Hadoop (their data is heavily structured), Cassandra (column-store, which is good, but apparently Cassandra insists on being able to delete old data when it feels like it? I’m not too familiar with Cassandra, so I’m not sure what they were referring to), and took a close look at Riak. Riak was a very close fit for what they needed but at the time it didn't have a column-store backend (LevelDB wasn't implemented yet) and it had more overhead than they needed (all their data operations were append-only, so Riak's entire model for eventual consistency would be wasted on them) and eventually decided to just design their own simple column-store database for their service.

They built "Snowth" in eight programmer weeks over three months (and put four programmer-months more into it over the next three years), and managed to achieve a 93% reduction in storage usage and 50-60% reduction in IOPs and it no longer had a single point of failure. They ran the two systems in parallel for 14 months, slowly shifting more and more of the traffic to Snowth via feature flags as they gained confidence in it, eventually shutting off PostgreSQL.

Realtime Systems for Social Data Analysis

by Hilary Mason, chief data scientist at Bit.ly

video slides

Most of us are familiar with bit.ly and use it regularly. The basic business model is that one person takes a long link, puts it into Bitly to get a shorter version, then shares that link with their friends on various social networks. Bitly sees what links people are putting in to share, and then they can follow that link as it is shared and other people click on it. From this they can do basic demographics and learn a lot about what people share, how links propagate through social networks, and they can sell that insight to advertisers and marketers.

A typical link's traffic spikes quickly soon after being posted and then trails off over time. This pattern has caused bitly to be very focused on real-time or near real-time analysis and they've been doing more experiments with real-time visualization and summarizing of the data that comes through their system.

Hilary Mason's job at Bitly is to guide the analysis and guide their strategies for taking a stream of raw data and telling stories with it. The talk was peppered with fun tidbits of things she's learned about peoples' link sharing and consuming habits. Eg, the internet is more than half robots (which shouldn't surprise anyone), pictures of dogs were shared more than pictures of cats (the internet, apparently, is not made of cats), and people share links that make their identity look good (news, technology, science) but consume links about gossip, sports, and sex (every employee she's seen start there eventually goes through a phase of getting really depressed about humanity based on how much traffic articles about the Kardashians get).

On a technical level, she went through a few different real-time components of their stack. First is a simple classifier to determine the language of a link. Basic "is this English or Japanese?" kinds of questions. Originally, they used Google's service for this heavily until Google shut it off, but it was always less than perfect anyway. The standard machine learning approach to this problem is supervised learning: get a large corpus of text that's been labeled by language (these are readily available), train your classifier on it, and get a set of weights for a Bayesian classifier that can classify new content. This works well for standard text, but fails on short bits like tweets, social network status updates, and image sharing. They realized that they have access to more than just the content, though, and started feeding the browser request headers and server response headers in as well, paying close attention to character set encodings, etc., and using those to adjust the weights. They did something similar for detecting robots vs humans, but instead of using the content, they found that the best indicator was actually the spike-and-trail pattern of the traffic over time. Any link with a different pattern tends to be something other than a human sharing links with other humans (eg, people wrap JavaScript embed links or links to avatar images in Bitly, and those get automatically included all over the place).
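A toy sketch of that supervised approach: a naive Bayes classifier over character trigrams, with header-derived hints mixed in as extra features. The tiny training set and the feature names here are invented; Bitly's real models are certainly far more elaborate:

```python
import math
from collections import Counter, defaultdict

def trigrams(text):
    # Pad so word boundaries contribute features too.
    text = f"  {text.lower()} "
    return [text[i:i + 3] for i in range(len(text) - 2)]

class NaiveBayesLang:
    def __init__(self):
        self.counts = defaultdict(Counter)   # lang -> feature counts
        self.totals = Counter()              # lang -> total feature count

    def train(self, lang, text, extra_features=()):
        # Header hints (charsets, etc.) go into the same bag of features.
        feats = trigrams(text) + list(extra_features)
        self.counts[lang].update(feats)
        self.totals[lang] += len(feats)

    def classify(self, text, extra_features=()):
        feats = trigrams(text) + list(extra_features)
        vocab = len({f for c in self.counts.values() for f in c})
        best, best_lp = None, float("-inf")
        for lang in self.counts:
            # Sum of log P(feature|lang) with add-one smoothing.
            lp = sum(math.log((self.counts[lang][f] + 1) /
                              (self.totals[lang] + vocab))
                     for f in feats)
            if lp > best_lp:
                best, best_lp = lang, lp
        return best

clf = NaiveBayesLang()
clf.train("en", "the quick brown fox jumps over the lazy dog",
          extra_features=["hdr:charset=utf-8"])
clf.train("fr", "le renard brun rapide saute par-dessus le chien paresseux",
          extra_features=["hdr:charset=iso-8859-1"])
print(clf.classify("the lazy dog sleeps"))  # classified as "en"
```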

The other side of their experimentation has been around the visualization and real-time mining of different kinds of trends. Eg, supporting real-time search, which means "show me links mentioning term X that people are sharing right now", automatically identifying bursts from the traffic patterns (eg, on the day of the talk, the algorithms could easily tell that Angelina Jolie was in the news), and how to group articles together into related "stories". They built real-time search with the Zoie Solr plugin running alongside a Redis cluster that held scoring info. When a search comes in, they search Solr to find links that match the term, then query Redis for scoring info to sort the results. Links that hadn't been shared in the last 24 hours automatically fall out of those databases, so they stay a pretty steady size.
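The two-step lookup she described can be sketched like this, with plain dicts standing in for Solr and Redis (the data is invented):

```python
# Step 1 in their stack is Solr/Zoie (full-text match over recent links);
# step 2 is Redis (current sharing scores). Dicts stand in for both here.
def realtime_search(term, text_index, score_store, limit=10):
    # "Solr": find link ids whose text mentions the term.
    hits = [link_id for link_id, text in text_index.items()
            if term.lower() in text.lower()]
    # "Redis": rank hits by current sharing score; links that fell out
    # of the 24-hour window have no score and rank last.
    return sorted(hits,
                  key=lambda link_id: score_store.get(link_id, 0),
                  reverse=True)[:limit]

text_index = {
    "a1": "Angelina Jolie announces new film",
    "a2": "Jolie interviewed about the announcement",
    "a3": "unrelated sports story",
}
score_store = {"a1": 12, "a2": 97}  # hypothetical share counts

print(realtime_search("jolie", text_index, score_store))  # ['a2', 'a1']
```

The expiry she mentioned falls out of the design: if neither store has seen a link in 24 hours, it simply isn't in either index anymore.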

Bit.ly was originally written in PHP but that didn't last very long. Now, typically as new features are developed, she describes them as going through one phase of badly written Python, then a phase of much better Python, then, when they are proven, well-understood and obviously need to be faster, they get rewritten in C or Go.

Using Datomic with Riak

by Rich Hickey, creator of Clojure + Datomic

video

Datomic is the distributed database developed by Rich Hickey, the creator of the Clojure programming language. Datomic is not open source (boo!), but integrates tightly with a number of open source applications like Riak.

Datomic is a fairly radical new approach to databases and at some level aims to operate in a space that isn’t really exactly what other databases do. Hence, he views Datomic as complementing RDBMSs and distributed databases like Riak rather than competing with them. That idea is reinforced by Datomic's implementation which doesn't have its own storage model and instead uses Riak or any number of other databases as the underlying storage, treating them as opaque datastores in the same way that a regular database treats the filesystem as opaque block storage.

Datomic aims to be a sound model of information with time, bringing declarative programming to applications and reducing complexity. In Rich's view, where you typically see an impedance mismatch between databases and applications, the application is wrong: declarative systems like SQL are right. Datomic wants you to be able to treat the entire database as a single value that you can calculate on.

Datomic is built around a database of atomic facts (called "datoms"). You can assert and retract datoms. Datomic keeps all of those assertions and retractions and gives you a consistent view of the world according to those datoms. In this way, it is very analogous to an RDF triple store kind of view of the world, but hopefully not as painful. Datomic guarantees transactional writes (so it chooses C over A in CAP, and you must accept the availability constraints that choice leaves you with).

He defines some terms: “value”: something immutable (a magnitude, quantity, number, or immutable composite thereof); “identity”: a putative entity we associate with a series of causally related values (states) over time; “state”: the value of an identity at a moment in time; “time”: something relative (A is either before or after B); and “epochal time model”: an identity as a succession of states, where you apply functional transformations from value to value to move along in time. All data in Datomic is immutable and it is built around Okasaki’s Purely Functional Data Structures, particularly trees of immutable data.
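The epochal time model is easiest to see in code. Here is a toy persistent tree using path copying: each "transaction" returns a new root, and old roots keep describing old states. This illustrates the principle only, not Datomic's actual data structures:

```python
class Node:
    __slots__ = ("key", "value", "left", "right")
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value = key, value
        self.left, self.right = left, right

def insert(node, key, value):
    """Return a NEW root; the old root still describes the old state.

    Only the path from the root to the changed leaf is copied; every
    other subtree is shared between old and new versions.
    """
    if node is None:
        return Node(key, value)
    if key < node.key:
        return Node(node.key, node.value, insert(node.left, key, value), node.right)
    if key > node.key:
        return Node(node.key, node.value, node.left, insert(node.right, key, value))
    return Node(key, value, node.left, node.right)  # replace value

def lookup(node, key):
    while node is not None:
        if key == node.key:
            return node.value
        node = node.left if key < node.key else node.right
    return None

v1 = insert(None, "color", "red")
v2 = insert(v1, "size", "large")
v3 = insert(v2, "color", "blue")   # retraction + new assertion

# Every past state remains readable: identity = succession of values.
print(lookup(v1, "color"), lookup(v3, "color"))  # red blue
```

Because nothing is ever mutated in place, readers never need to coordinate with writers, which is exactly the property Datomic exploits below.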

The core components are a Transactor, which does transaction coordination and accepts writes, applying them serially in an append-only structure (in the style of LevelDB or BigTable), and an App Server, which runs on each node and does querying and indexing. App Servers all have access to the same underlying (pluggable) storage, so distributed storage like Riak is ideal, and the top-level nodes of the tree structure are stored in a distributed, consistent store, usually Zookeeper (particularly in deployments using Riak for the data storage). Essentially, the “values” go into Riak and the “identities” go into Zookeeper. Those top-level nodes are very few and very small, but consistency on them is crucial to holding the whole thing together, hence Zookeeper. Since the data is immutable, the App Servers can always do consistent reads against the distributed storage and never have to worry about coordination; only the writes have to be synchronized through the Transactor and Zookeeper. Further, because the data is immutable, they can use Riak in “R=1” mode, meaning that on a read, Riak doesn’t need to verify that the data exists on more than one node (how it would normally detect conflicts). That is the most efficient way possible to query Riak, avoiding all the overhead of vector clocks and siblings. Datomic automatically generates a few spanning indexes for the data and lets you specify additional ones; those end up stored in the same structure as the data, with a key or two at the top in Zookeeper and most of the index as immutable data in Riak.

Lessons Learned and Questions Raised from Building Distributed Systems

by Andy Gross, chief architect at Basho

video slides

This talk was the closing Keynote and is very hard to summarize; I highly recommend just looking up video of the talk and watching it. Definitely worth it.

Andy Gross is the chief architect at Basho and one of the current big names in distributed systems. He is a self-taught programmer (starting with a VIC-20), a college dropout, has written databases, filesystems, and CDNs, and according to legend, wrote the first version of Riak on a plane to Boston to interview at Basho. He considered the prototype a failure but they hired him anyway. His career advice is "BS or charm your way into jobs you aren’t qualified for, quit before it gets boring, and repeat".

Herb Sutter published a paper in 2005 called "The Free Lunch is Over" that pointed out the fundamental shift towards concurrency and distributed systems. Rising clock speeds had stopped giving us faster and faster single-core processors, and the clear direction was going to be more and more cores running in parallel. So we could no longer rely on software getting faster just by virtue of the underlying hardware clock speed going up. Instead it had to be able to take advantage of more and more parallel cores, and that required a fundamental change in how software was designed (unless Dr. Seltzer has her way). The software industry was not ready for what was going to happen to it and probably still isn't.

For part of the industry, though, it has meant a renaissance in distributed systems research. We are all writing distributed systems now whether we want to acknowledge it or not. Distributed systems are one of those areas where the key concepts keep recurring and being rediscovered. In the late '70s, Leslie Lamport dug up an (already old) Dijkstra paper on self-stabilizing systems and hailed it as Dijkstra's most important work. Programmers now are just rediscovering Lamport's work and finding it relevant and cutting edge when applied to modern technology. Andy put a hack in his browser so that if he tries to go to Reddit or Hacker News, it redirects him to a random article of Lamport’s.

He stressed the importance of abstractions as how we move computer science and engineering forward and wants to see more common abstractions for distributed systems.

He asks, “Where’s my libPaxos?” If Linux can have literally hundreds of toy filesystems implemented, why can’t we just import a standard library for distributed consensus? “Where’s my libARIES?” (for write-ahead logging). Why do we have to keep reimplementing these fundamental abstractions that recur over and over in distributed systems? Languages that support immutable data structures and have good concurrency primitives (Actors in Erlang/Scala/Rust, CSP in Go, STM in Clojure/Haskell) are a step forward but not nearly enough. These things should all be reusable primitives that we just have at our disposal so we can move on to the next layer of the problem. He’s worked on solving this by writing Riak Core, which is a reusable implementation of distributed consensus, gossip protocols, consistent hashing, etc. Riak itself is just a set of plugins on top of Riak Core, RiakCS is a different set of plugins on the same core, and Riak Core has a couple of large deployments at other organizations (Yahoo!, OpenMX, and Stackmob) but otherwise hasn’t seen much adoption.
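For a sense of what one of those reusable primitives looks like, here is a textbook consistent hashing ring in miniature (my sketch, not Riak Core's implementation):

```python
import bisect
import hashlib

def _hash(key):
    # Map any string to a point on a large integer ring.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=64):
        # Each physical node claims many points ("virtual nodes") on the
        # ring so keys spread evenly and rebalancing stays incremental.
        self.points = sorted((_hash(f"{n}:{i}"), n)
                             for n in nodes for i in range(vnodes))

    def owner(self, key):
        # Walk clockwise from the key's hash to the first claimed point.
        i = bisect.bisect(self.points, (_hash(key),))
        return self.points[i % len(self.points)][1]

ring = Ring(["riak1", "riak2", "riak3"])
print(ring.owner("user:42"))  # always the same node for the same key
```

The property that makes this worth packaging up: adding or removing a node moves only a fraction of the keys, instead of reshuffling everything.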

The other major deficiency is in testing tools for distributed systems. Unit tests are woefully insufficient. Distributed systems fail in more creative and interesting ways than other software, and traditional tools for testing, debugging, and monitoring aren't good enough. Many of the algorithms used in distributed systems can be proven correct but still prove to be buggy when implemented (insert relevant Knuth quote). Something in between formal verification and unit tests is needed. A good start in that direction is a tool called QuickCheck, which takes properties you state about your code and, like a fuzzer, generates random test cases for them, running millions of inputs covering the whole domain of every function rather than just a few discrete values, and automatically shrinking failures into minimal test cases. Run it overnight and you’ll find heisenbugs that normal unit tests wouldn't have found. Unfortunately "Quick"check is very complicated, requires a lot of training to use, takes a long time to set up for each codebase, and the value of the tests it creates decays quickly, so it takes a massive ongoing investment of time to keep the QuickCheck setup up to date and useful. There’s also a tool called “PULSE” which works with Erlang to take over the scheduler and run your code under pathological scheduling conditions, making sure that you get hit with every weird edge case around out-of-order message delivery and overloaded or deadlocked components.
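The property-based idea is simple enough to caricature in a few lines of Python. This toy stands in for the generate-and-shrink loop; real QuickCheck (and Python's Hypothesis) are vastly more capable:

```python
import random

def check_property(prop, gen, runs=1000, seed=0):
    """Return a shrunken counterexample, or None if no failure is found."""
    rng = random.Random(seed)
    for _ in range(runs):
        case = gen(rng)
        if not prop(case):
            return shrink(prop, case)
    return None

def shrink(prop, case):
    """Greedily drop elements while the property still fails."""
    changed = True
    while changed:
        changed = False
        for i in range(len(case)):
            smaller = case[:i] + case[i + 1:]
            if not prop(smaller):
                case, changed = smaller, True
                break
    return case

# A deliberately buggy sort that silently drops duplicates:
def buggy_sort(xs):
    return sorted(set(xs))

# Property: our "sort" should agree with the real one on every input.
prop = lambda xs: buggy_sort(xs) == sorted(xs)
gen = lambda rng: [rng.randint(0, 9) for _ in range(rng.randint(0, 8))]

counterexample = check_property(prop, gen)
print(counterexample)  # shrunk to a minimal failing case: a duplicate pair
```

The shrinking step is the part that makes this practical: instead of a sprawling random input, you get handed the smallest case that still breaks the property.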

He finished with a list of the remaining "Hard Problems" he sees in distributed systems:

My History with News Feeds

Back when RSS first came out and a couple major sites started supporting it, I cobbled together an "aggregator" that was just a Perl script that pulled down the dozen or so feeds that I knew of (Slashdot, memepool, and a couple others), did a tiny bit of munging to get them merged into one XML file that sat in my web root, then used an XSL transformation (via Apache Cocoon) to turn that into a single HTML page that had all the links from all those sites in one place.

I thought it was the coolest thing ever.

It was broken about 70% of the time because one feed or another in the group would have a stray ampersand or angle bracket in a headline making the entire thing invalid and breaking the transformation.

More feeds kept coming out and it got too cumbersome to do things that way, so I eventually built a database backed reader with a mod_perl frontend that had an interface a little more like the Reader style with a stream of news and a notion of read vs unread entries. 70% of the feeds were still invalid at any given time and couldn't be parsed, but at least they didn't take down my whole aggregator. I think I spent a lot of time sending email to sites letting them know that their feeds were invalid.

Then Mark Pilgrim released his Universal Feed Parser Python script along with his whole Postel's Law rant. I was way more into Perl at that point, but I knew a little Python and the promise of being able to parse even invalid feeds was quite enticing, so I rewrote the import side of my aggregator as a Python script using feedparser that then did a POST to the mod_perl app with the data. It worked great. Probably only 10% of the feeds were broken at any given time now.

The hourly cronjob that ran on my desktop pulling down and parsing all the feeds was a beast though. I would pretty much have to stop working for five minutes every hour while my machine chugged away. I put up with that for probably a year or two. It sucked, but I was hooked on having fresh news feeds all pulled into one place where I could quickly see only the new items. It was like having an IV drip of information heroin straight into my veins.

Then Bloglines came out. It had a snappier feeling interface than mine, seemed to do an even better job handling broken feeds, and most importantly, didn't drag my workstation down every hour. So I switched there without hesitation and was happy for years.

Reader came out and I stuck with Bloglines for quite a while but the Plumber kept showing up more and more frequently (Bloglines users know what I'm talking about), and there was an inevitable feeling that Google was just going to crush them sooner or later, so I eventually switched away. Most of the Reader interface I liked much less than Bloglines, but it was fast, and the keyboard commands for navigation were really nice. I've been there ever since.

I looked at my feed count the other week when they announced it was shutting down and it was well over 600. I occasionally try out standalone reader clients and mobile clients, but none of them work for my intense feed reading workflow.

A typical session for me: I'd open up Reader in my browser in the morning to a few hundred unread entries. I know 95% of them will be complete noise, so the goal is to get through them all and find the entries worth actually reading in as little time as possible. In Reader, hitting g, then a, puts you in a stream of all the entries from all the categories merged together. Then j, over and over again, advances, marking each entry as read along the way.

Each entry that comes up gets about half a second for me to decide whether it's noise or something potentially interesting. Noise just gets a j and I'm on to the next one. Potentially interesting, I open in a background tab and continue going through my feeds. I can make it through hundreds of entries in just a few minutes that way. Once I've made it through all my feeds and have no more unread items, I have a dozen to two dozen browser tabs going in the background. I work through each of those with browser keyboard commands (Ctrl-PgDn and Ctrl-w mostly). One pass to sift out the ones that, on second look, really aren't worth spending time on. Then a pass where I actually read any that look digestible in a couple minutes. Then a final pass where long articles I want to read later on my tablet get sent to Pocket, and what's left is generally programming-centric stuff that I want to actually do something with immediately. Eg, a new Python module that looks useful, where I'll dig through the code and changelog or even install it and write a small script to play with.

The whole process takes about an hour, and the end result is that I've got a half dozen articles sent to my tablet for subway reading, I've read about a dozen short to medium length posts, I've probably played with a bit of code, and I've also seen a few hundred headlines go past so I have a sense of what's going on even in areas that I didn't bother investigating deeply.

I have a smartphone and an Android tablet like a good little techie, but I've had zero interest in ever reading news feeds on them, even using the Google Reader Android app. Without my full browser tab setup, I could never figure out a way to get through my feeds at even a tenth the speed of my regular workflow. I'm much happier using the devices for reading the longer articles that I'd previously saved off.

When Google changed the behavior of the "share" button on Reader last year to integrate it with Google+ and everyone got upset, I didn't really share in the outrage. But then, I also never understood why an aggregator would have a "share" button in the first place. I certainly didn't use it.

The other week, Google announced they were shutting down Reader and I said, "well, guess I'm going to have to write my own again". So the next weekend, that's what I did. I've been on a bit of a kick for the last year, freeing myself from walled garden services. I shut down my Facebook account, put up my own photo gallery site and let my Flickr account lapse. I just no longer want to be dependent on services that are out of my control.

At this point, my aggregator is unpolished, but it fully supports my workflow for reading feeds. Actually, I already prefer it to Reader since everything non-essential to my workflow is stripped out. It's running at http://feeds.thraxil.org/ and is open to anyone to sign up, but I can make no guarantees about longevity. Honestly, at this point, you should only think about using it if you want to help me debug it.

If you've got any programming chops, I recommend writing your own. It's a fun problem that will expose you to more of the development landscape than a typical web app (eg, a task queue to take the heavy lifting out of the request/response cycle is pretty much a necessity) and you'll be able to reduce your own level of dependency on services controlled by other people with priorities that don't match your own.

New Year Python Meme 2012

Following Tarek's lead:

1. What is the coolest Python application, framework or library you have discovered in 2012?

I'm really tempted to go along with Tarek and pick PyZMQ, since 0MQ has been a pretty major mind expander for me this year, but I honestly haven't used anything from it besides the basic 0MQ bindings. Instead, I'll pick IPython Notebook, which I've really only just begun playing with (after the PyData NYC conference), but is looking pretty exciting. I'm seriously impressed at how much potential is packed into something so simple. Salt really needs mentioning too. Very simple and powerful.

Yeah, I really suck at picking only one thing to highlight.

2. What new programming technique did you learn in 2012?

I guess the biggest newish "technique" (rather than a language, library, or framework) for me in Python-land would be Behavior Driven Development. BDD with Lettuce was a big factor in the development of a major project at work. I'm hoping to streamline our Django/Lettuce/Selenium setup to make it easier for more of our projects to move in that direction.

3. What is the name of the open source project you contributed the most in 2012? What did you do?

Well, my biggest project lately, and the one I'm most proud of, is a distributed image and thumbnail server called Reticulum, which is written in Go, not Python. I did develop a prototype in Python/Django first before I settled on Go for the main implementation.

4. What was the Python blog or website you read the most in 2012?

I really have no idea. I have hundreds of blogs and aggregators in my RSS reader so it all just comes at me in one big firehose of information. By the time I've read and digested it, I've lost track of where it came in from.

5. What are the three top things you want to learn in 2013?

6. What are the top software, app or lib you wish someone would write in 2013?

I've got a big project at work starting up that is going to involve writing a ton of Netlogo. Don't get me wrong, Netlogo is a neat little platform and is very good for what it does and what we're using it for. But I would kill for a platform with the same capabilities for agent-based modelling that let me code in Python instead. Actually, years ago, I built a simulation framework in Python for our engineering department that came pretty close to being what I am imagining, but funding on that dried up, our priorities shifted, and it never quite made it out into the world. Maybe I'll revive it in a modernized form. Now I'm thinking about something architected like IPython Notebook (or built off it) with a Python engine and a web/websockets frontend that can make use of nice D3.js and SVG visualization.

Stupid Cellular Automata Tricks, Part 1

A few years back, I got really into Cellular Automata. I had a big project at work that involved implementing a bunch of simulations via CAs and generally found it to be an interesting topic.

I had the good fortune to spend some time on that project with Don Hopkins, who showed me a bunch of tricks and pointed me in some very fruitful directions (including recommending the book Cellular Automata Machines by Toffoli and Margolus as a primer).

The result is that I've got a bunch of tricks in and around Cellular Automata in my head. I don't think I invented any of these. I don't think Don even did (maybe he was just being modest). But they are things that I've never seen written down and probably should be. It recently occurred to me that I have a blog so I should, you know, do that.

I've got enough that I plan to do it as a small series of blog posts (if I can keep up the writing momentum over the next week or two).

Starting with a trick that's pretty simple but is easy to overlook.

It deals with standard two-dimensional CAs (think Game of Life) and is really just a clever way to avoid a common naïve implementation problem.

To review, there is a rectangular, tiled Grid made up of Cells, each of which has a particular State (such as "on" or "off" or a numerical value). At each generation, the new state of each cell is determined by applying a fixed set of Rules or a mathematical function using the states of the cells in its Neighborhood.

A typical neighborhood for CA is the Von Neumann Neighborhood which includes the cell itself (which we'll call "center" or just C) and the four cells that share sides with it, which we'll call "north", "south", "east", and "west" (or N, S, E, and W). There is also a Moore Neighborhood which expands the Von Neumann Neighborhood with Northwest, Southwest, Northeast, and Southeast.
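In code, a neighborhood is often represented as just a table of coordinate offsets. A quick Python sketch of the two neighborhoods described above (hypothetical names, mine rather than the post's):

```python
# Offset tables for the two standard neighborhoods, excluding the center cell
VON_NEUMANN = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # N, S, W, E
MOORE = VON_NEUMANN + [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # + diagonals

def neighborhood(grid, i, j, offsets):
    """States of the cells around (i, j); assumes (i, j) is an interior cell."""
    return [grid[i + di][j + dj] for di, dj in offsets]
```

A rule function then only needs the cell's own state plus whatever `neighborhood` returns.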

The obvious representation for a CA grid in most programming languages will be a two-dimensional array of whatever type the cell's state needs to be stored as.

The naïve programmer then writes something like the following (pseudocode):

/* NOTE: square grid in all examples to simplify */
grid[SIZE][SIZE] 
for j from 0 to SIZE - 1:
    for i from 0 to SIZE - 1:
        c = grid[i][j]
        /* get the neighborhood for the cell */
        n = grid[i - 1][j]
        s = grid[i + 1][j]
        e = grid[i][j + 1]
        w = grid[i][j - 1]
        /* and calculate the next state... */

And they run into a problem right away. For cells on the edges of the grid, one or more of the neighbors in their neighborhood doesn't exist. They probably get some kind of runtime error with an array out of bounds exception.

The standard solution here is not hard to imagine. Typically, you decide if you want the edges of the grid to "wrap" around like a torus or just have some default value. Then insert some bounds checking in the code to handle it:

grid[SIZE][SIZE]
for j from 0 to SIZE - 1:
    for i from 0 to SIZE - 1:
        c = grid[i][j]
        /* get the neighborhood for the cell */
        if i != 0:
            n = grid[i - 1][j]
        else:
            n = grid[SIZE - 1][j]
        if i != SIZE - 1:
            s = grid[i + 1][j]
        else:
            s = grid[0][j]
        if j != SIZE - 1:
            e = grid[i][j + 1]
        else:
            e = grid[i][0]
        if j != 0:
            w = grid[i][j - 1]
        else:
            w = grid[i][SIZE - 1]
        /* and calculate the next state... */

Not pretty, but it works and lets the programmer get on to exploring the fun world of Cellular Automata.

Now, the problem is that there is a lot of conditional logic happening in an inner loop. Each of those tests gets carried out on every cell in the grid, even though they only matter for the small fraction of cells on the edges (on a sufficiently large grid, interior cells vastly outnumber edge cells).

So here's the first trick.

What it boils down to is putting a "gutter" around the grid to ensure that every cell becomes one of the inner cells and doesn't require any testing.

That translates to expanding the two-dimensional array holding the grid so it has an extra row of cells around the entire border. Then, before running each generation, prepopulating those gutters with the wraparound data (or just defaults). The code then loops over only the inside part of the expanded grid and can always safely assume that every cell has a full set of neighbors.

The pseudocode becomes something like:

grid[SIZE + 2][SIZE + 2]
/* wrap gutters: copy the opposite inner edges into the border */
/* (corner cells don't matter for a Von Neumann neighborhood) */
for i from 0 to SIZE + 1:
    grid[i][0] = grid[i][SIZE]
    grid[i][SIZE + 1] = grid[i][1]
    grid[0][i] = grid[SIZE][i]
    grid[SIZE + 1][i] = grid[1][i]

/* inner cells are at indices 1 through SIZE */
for j from 1 to SIZE:
    for i from 1 to SIZE:
        c = grid[i][j]
        /* get the neighborhood for the cell */
        n = grid[i - 1][j]
        s = grid[i + 1][j]
        e = grid[i][j + 1]
        w = grid[i][j - 1]
        /* and calculate the next state... */

This isn't the most mind-blowing thing ever, but I've seen quite a few programmers starting out with CAs miss it.

I also like it as an example of a general heuristic for optimizing code: make a pass over the data up front to clean or prepare it, and avoid boundary checking altogether at the later, more expensive stage.

Let's take a moment to think about the complexity analysis though. In a strict, theoretical sense, none of this really matters. If you're dealing with an N by N grid, both pieces of code are O(N^2) because real computer scientists don't care about constants. If anything, from a theoretical perspective, the second approach is worse if you are interested in storage costs.

But in the real world, I can pretty much guarantee that the second approach will perform noticeably better. Sometimes constants matter.
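
To make the gutter trick concrete, here's a minimal runnable Python sketch (my own, not from the post): one Game of Life generation on a wrapping grid, using the Moore neighborhood, where the inner loop never has to bounds-check:

```python
SIZE = 4  # inner grid is SIZE x SIZE; the array is (SIZE + 2) x (SIZE + 2)

def wrap_gutters(grid):
    """Copy the opposite inner edges into the gutter border (torus wrap).
    Columns first, then rows, so the corners end up correct too."""
    for i in range(SIZE + 2):
        grid[i][0] = grid[i][SIZE]      # left gutter <- rightmost inner column
        grid[i][SIZE + 1] = grid[i][1]  # right gutter <- leftmost inner column
    for j in range(SIZE + 2):
        grid[0][j] = grid[SIZE][j]      # top gutter <- bottom inner row
        grid[SIZE + 1][j] = grid[1][j]  # bottom gutter <- top inner row

def step(grid):
    """One Life generation; the inner loop needs no conditionals at all."""
    wrap_gutters(grid)
    new = [[0] * (SIZE + 2) for _ in range(SIZE + 2)]
    for i in range(1, SIZE + 1):
        for j in range(1, SIZE + 1):
            live = sum(grid[i + di][j + dj]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            new[i][j] = 1 if live == 3 or (grid[i][j] and live == 2) else 0
    return new
```

The gutter cells of `new` are garbage after each step, but that's fine: `wrap_gutters` overwrites them before they're ever read.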

The next part of this series, when I get around to writing it, will be a bit more implementation specific (Python, in my case) but will reap some additional rewards from the addition of gutters to the grid representation.

How Django Deals With a Hacker News Frontpaging

A post of mine from last year just made it onto the front page of Hacker News. This site is generally pretty low traffic so it was quite a surge. Check out the munin graph of the network activity on the server:

network traffic

It spiked up to around 1Mbps for a while.

I run this blog on a fairly low-end 512MB Rackspace Cloud VPS and it's implemented in Django (behind Apache/mod_wsgi, and talking to a PostgreSQL database) without any real attention ever paid to optimization. I don't do any caching or static publishing, so every request is processed through Django, and I run about a dozen different dynamic Django apps on the same server.

So how did it handle the surge in traffic? Did the CPU melt?

Here's the CPU usage graph from the same time period:

CPU usage

Yeah... if you look real closely at the right there, you can kind of see that it increased a bit. But it was basically negligible. Memory usage and swap activity didn't show any changes either.

If my email inbox hadn't suddenly filled up with comment moderation emails, I'd probably never even have known anything was going on.

How I set up my Ubuntu Laptop

I finally got around to putting an SSD drive in my laptop this weekend. I have a nice Lenovo Thinkpad T500 that I've used for the last year and a half or so. It's pretty much just a sit on the couch and code or browse the web machine. I don't use it for anything really heavy duty or media related so I don't actually use much storage. That meant I could get away with putting a cheap 60GB SSD in it, which is more than enough storage for my needs on that machine.

First of all, yes, I think the SSD was a worthwhile improvement. For general web browsing and coding, it's not really noticeably faster. The bottleneck for web stuff is still the speed of the internet connection. Compiling is a little faster, but not by a lot. Starting applications is a little faster, but really, I tend to just leave applications open all the time so it's not saving me a lot there. Where the SSD is noticeable is booting and waking the laptop up from sleep. Waking it from sleep is now completely instantaneous. Booting takes 12 seconds to go from pushing the power button to having a usable desktop. I didn't measure it pre-SSD, but it used to be well over a minute. It's not a life changing improvement, but it does make me smile every time. If I traveled more with my laptop, I'd probably appreciate it even more.

Since switching to the SSD involved doing a completely fresh Ubuntu installation, this time around I took care to document everything I did to convert the laptop from a default Ubuntu 11.04 to how I prefer to have it set up. I thought it might be interesting to share my setup here.

I use the wonderful XMonad tiling window manager instead of Gnome or Unity. I've been using a tiling window manager of some sort since 2001 or so and I doubt I'll ever go back. Once you get over the learning curve, tiling window managers just get out of your way and feel far more efficient to use. Previously, I'd set up XMonad to work alongside Ubuntu's Gnome. So gnome-panel was running and it was a proper Gnome session, but XMonad replaced Metacity as the window manager. When Ubuntu 11.04 came out and they switched to Unity, it pretty much broke that setup. I actually gave Unity a try for a couple months since it seemed to be trying to take on some of the usability advantages of tiling window managers. It wasn't terrible, but it was far from great. When I put in the SSD, I decided to give up on the Unity experiment and just go to a full XMonad setup, using xmobar and trayer to handle the things that gnome-panel used to do for me.

I've also been taking advantage of Ubuntu One to keep my various config files synchronized between machines. I've played around with putting my config files in Git and had some experiences with that, but at the moment I'm liking the automatic synchronization approach of Ubuntu One better. I rolled my own version with lsyncd for a while, which worked fine, but since Ubuntu One comes set up by default, it seems like the easiest approach now. I'm sure you could do the same thing with Dropbox, if you prefer, but Ubuntu One has worked well enough for me that I haven't bothered looking into that. In the future, I'll probably work out some hybrid solution with Ubuntu One and git so I can have things automatically synchronized and versioned, but I'm keeping it simple for now.

First of all, once the basic Ubuntu 11.04 desktop install is complete, I install the packages I need. This is a matter of opening up a terminal and running:

$ sudo apt-get install aptitude thunderbird xmonad emacs xmobar trayer dwm-tools htop keychain chromium-browser git-core magit python-setuptools erlang-mode postgresql
$ sudo apt-get build-dep python-psycopg2

Then I log in to Ubuntu One by running

$ u1sdtool -c

It prompts me for my username/password, which I've set up before, and then syncs everything I have in my account to an "Ubuntu One" directory. This takes a little while, but when it's finished, the important stuff in there is a pair of directories called 'configs' and 'emacs'. The 'emacs' directory contains all my emacs libraries. I'm not going to include any of that here since many volumes of blog posts and articles have been written about how to set up emacs. My setup has built up over many years and is very personal to my usage. In 'configs' I have:

bashrc  
emacs
gitconfig   
Xmodmap
xmobarrc  
xmonad.desktop  
xmonad.hs  
xmonad.start

These config files are the core of my configuration. They live in Ubuntu One but I symlink them out to their proper locations. It's important to note that these are just config files and don't contain anything sensitive like passwords or ssh keys. I'll go through them.

First, Xmodmap gets symlinked to ~/.Xmodmap. It has one simple, but very important purpose: erase the stupidity of most keyboards placing a useless CapsLock key in one of the most convenient spots. We simply replace it with a Control (essential for an emacs user):

keycode 66 = Control_L
clear Lock
add Control = Control_L

I set up ssh keys with:

$ ssh-keygen -t rsa

Then I distribute them to the machines I need to log in to with

$ ssh-copy-id remote.machine

Then, so I don't have to enter my passphrase every single time I ssh somewhere, at the end of my ~/.bashrc I add:

keychain --quiet --nolock ~/.ssh/id_rsa 2>/dev/null
. ~/.keychain/${HOSTNAME}-sh

An additional step that I can't really automate, but that is still necessary for me for development is to copy the contents of my ~/.ssh/id_rsa.pub up to my github account.

My ~/.gitconfig just contains my full name and email address. It's what gets set up when you run:

$ git config --global user.name "Name"
$ git config --global user.email "email@domain.com"

That pretty much gets my terminal working the way I want. The final major section is my XMonad setup, which is a bit trickier.

/usr/share/xsessions/xmonad.desktop gives me the option of logging in to an XMonad session instead of Unity/Gnome:

[Desktop Entry]
Encoding=UTF-8
Name=XMonad
Comment=Lightweight tiling window manager
Exec=xmonad.start
Icon=xmonad.png
Type=XSession

Logging in runs /usr/local/bin/xmonad.start, which sets up the xmobar panel, various utilities, and finally kicks off XMonad itself:

#!/bin/bash
xrdb -merge .Xresources
trayer --edge top --align right --SetDockType true --SetPartialStrut true --expand true --width 15 --height 12 --transparent true --tint 0x000000 &
gnome-screensaver 
gnome-settings-daemon
if [ -x /usr/bin/gnome-power-manager ] ; then
   sleep 1
   gnome-power-manager&
fi
if [ -x /usr/bin/nm-applet ] ; then
   nm-applet --sm-disable &
fi
exec xmonad

Xmobar gets its configuration from .xmobarrc:

Config { font = "-misc-fixed-*-*-*-*-13-*-*-*-*-*-*-*"
       , bgColor = "black"
       , fgColor = "grey"
       , position = TopW L 85
       , commands = [ Run Network "wlan0" ["-L","0","-H","32","--normal","green","--high","red"] 10
                    , Run Cpu ["-L","15","-H","50","--normal","green","--high","red"] 10
                    , Run Memory ["-t","Mem: <usedratio>%"] 10
                    , Run Swap [] 10
                    , Run Date "%a %b %_d %Y %H:%M:%S" "date" 10
                    , Run StdinReader
                    ]
       , sepChar = "%"
       , alignSep = "}{"
       , template = "%StdinReader% }{ %cpu% | %memory% * %swap% | %wlan0% | %date%"
       }

And XMonad itself is configured with a Haskell program in ~/.xmonad/xmonad.hs:

import XMonad
import XMonad.Hooks.DynamicLog
import XMonad.Hooks.ManageDocks
import XMonad.Util.Run(spawnPipe)
import XMonad.Util.EZConfig(additionalKeys)
import System.IO

myManageHook = composeAll
    [ className =? "Gimp"      --> doFloat
    , className =? "Vncviewer" --> doFloat
    ]

main = do
    xmproc <- spawnPipe "xmobar"
    xmonad $ defaultConfig
        { manageHook = manageDocks <+> myManageHook -- make sure to include myManageHook definition from above
                        <+> manageHook defaultConfig
        , layoutHook = avoidStruts  $  layoutHook defaultConfig
        , logHook = dynamicLogWithPP $ xmobarPP
                        { ppOutput = hPutStrLn xmproc
                        , ppTitle = xmobarColor "green" "" . shorten 50
                        }
        , modMask = mod4Mask     -- Rebind Mod to the Windows key
        , terminal = "xterm -r"

        } `additionalKeys`
        [ ((controlMask, xK_Print), spawn "sleep 0.2; scrot -s")
        , ((0, xK_Print), spawn "scrot")
        , ((mod4Mask, xK_p), sendMessage NextLayout)
        , ((mod4Mask, xK_space), spawn "exe=`dmenu_path | dmenu` && eval \"exec $exe\"")
        ]

It's a pretty minimal setup (I have a more complicated one I use on my desktops with dual-monitor setups). It switches the default mod key to the otherwise unused "Windows" key on the keyboard. It sets my default terminal to xterm with reverse video set (I prefer white text on black background for eye fatigue reasons). It also switches the default keybindings of Mod+Space and Mod+p, which is a personal preference.

That's pretty much it. From there I can do everything I need to do on my laptop.

TAGS: linux ubuntu

A Boggling Return to C

My first real programming class, back in high school, was taught in C. I found it difficult and I remember spending the whole year thinking that I really had no aptitude for programming. I think I got an OK grade in the end, but I felt like I was really struggling compared to the other students in the class. It wasn't until a year or two later, taking CS classes in college and finding them easy, that I figured a few things out. First, that C programming class was incredibly intensive. By the end of the school year, I'd implemented, from scratch, a 3D wireframe graphics engine. Second, I felt like I was struggling compared to my friends in the class, because my friends were complete prodigies who had already been coding in other languages for years.

I haven't really coded in C for the last decade, though. Aside from a couple classes that happened to involve a bit of C, college CS classes were getting on the Java bandwagon, and by the time I graduated, I'd discovered Perl as a great language to just get stuff done. Since then it's been Perl, Python, and JavaScript, with excursions into other high-level functional languages for fun (ML, Haskell, Lisp, Erlang, etc).

Recently, I had the urge to get back to my roots and relearn C. I'm not sure exactly why. I think I've just had a few too many moments lately where I've felt like software I was using or working on was slower and more bloated feeling than it ought to be. I know writing in C won't automatically fix that, but I've had the urge to write small, efficient programs. To make my own data structures and libraries where every clock cycle and byte of memory used are essential to the problem being solved.

Any time I try to learn a new language, I like to start with a toy program or two. Something simple enough to get my head around, but meaty enough to not be completely trivial.

For my first little toy project this time, I've dusted off my copy of K&R and written a program that, given a Boggle board, will solve it, finding all the English words that appear.

I implemented this before in Python about a year ago for fun. You can see that program here.

The C version is functional but will probably make more experienced C programmers cringe. C syntax is pretty straightforward and not hard to pick back up, but I've got very little experience with the standard libraries available and I'm still re-familiarizing myself with the idioms of developing in that language.

I also took a fundamentally different approach this time, since I've had a year of background thinking on the most efficient way to do this.

Both versions rely on the standard unix 'words' file (/usr/share/dict/words on most Linux boxen) as a master dictionary. They also both expect a board to be defined in a textfile in a very basic manner, eg:

tntb
gdar
elte
sish

I'm not going to go over the rules of Boggle, since you can google it.

The approach I used in Python was to first come up with a reduced alphabet consisting only of the letters that actually show up on the board. In the example case above: abdeghilnrst. Then I go through each line of the words file, skipping it if there's an apostrophe, a capital letter (no proper nouns in Boggle), or any letter in the word that does not appear in the restricted alphabet. The ones that are left are potentially findable words. Then I do a fairly straightforward search at each position of the board, going neighbor to neighbor to see if that word appears. Then at the end, I take all the words that were found and sort them by length.

That's a really rough, fairly brute-force approach. I won't apologize. That's kind of how I like to do things in Python. It probably took me an hour to write that code and it runs fast enough that I could use it to cheat and get really high scores on online Boggle games (start a new game, quickly type the board into a text file, run the Python script, which took a couple seconds, then spend the next minute or two typing in the words it found, starting with the longest, highest-scoring ones first). Basically a total success as far as I'm concerned. Python absolutely excels at this approach to problem solving, where programmer-time matters a lot and there's a certain performance threshold of "good enough" that, once reached, you don't gain by exceeding. With Python and a couple of basic built-in data structures like lists and hash tables, there are few problems you can't tackle by just taking a very straightforward approach and adding a few quick filters to reduce the search space drastically. Sophistication is possible, but rarely necessary.
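
A hypothetical Python sketch of that filtering step (not the author's actual code; the real version reads /usr/share/dict/words, while this takes any iterable of words for clarity):

```python
def candidate_words(board_rows, dictionary):
    """Reduce a dictionary to words that could possibly appear on the board."""
    # the restricted alphabet: only letters that actually appear on the board
    alphabet = set("".join(board_rows))
    survivors = []
    for word in dictionary:
        # skip possessives and proper nouns (no apostrophes or capitals in Boggle)
        if "'" in word or word != word.lower():
            continue
        # skip any word using a letter that isn't on the board at all
        if set(word) <= alphabet:
            survivors.append(word)
    return survivors
```

Only the survivors then need the (comparatively expensive) neighbor-to-neighbor board search.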

But C is different to me. Since C usually requires more lines of code and more care to get a program working, but offers far more control and efficiency, it calls for a different approach: it's worth taking the time to get things right to begin with and avoid wasting cycles and memory.

I've had a few ideas in my head lately for things that I want to build that I think will actually form a nice foundation for other work. Things that would really benefit from being small, fast, efficient and "right" from the ground up. This is what's got me interested in getting back into C.

Back to Boggle.

The C version takes a completely different approach, that I thought up recently and seems elegant. It's based on a Prefix Tree, or "Trie" data structure.

What I do is first go through the words file (again throwing away words that have apostrophes or capital letters), building a Trie out of it. The result is a Trie structure representing every word in the English language. Checking to see if some word exists is O(n), where n is the number of letters in the word, and n will always be very small in this application. Next, we go through each position on the board as a starting point and then walk all the possible word paths from that point, checking against the master word Trie.

Now, the nice thing about using the Trie instead of just a plain hash table lookup or binary search or something, is that in addition to being able to tell if a given word exists, we can also find out whether there are any words that start with a given prefix just as quickly. That means that as we check every possible path through the board, we can eliminate subpaths very quickly when the Trie tells us that there can't possibly be any valid words that start with that prefix. Eg, starting at the 't' in the upper left corner of the example board and looking at all the paths that can originate from there, we can pretty much instantly eliminate them all since there are no words in the English language that start with 'tn', 'tg', or 'td'. This is the sort of reasoning that a human mind can do quickly, and a Trie is how we can get a computer to be about as clever in this case. This rapid pruning of the search space pays dividends quickly.
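
The real implementation is in C, but the idea is easy to sketch in Python with nested dicts as Trie nodes (hypothetical code of mine, not the post's); the point is that a prefix check costs no more than a word lookup:

```python
END = "$"  # marker key for the end of a word; assumes "$" is never a letter

def make_trie(words):
    """Build a Trie as nested dicts: each node maps a letter to a child node."""
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node[END] = True
    return root

def has_prefix(trie, prefix):
    """True if any word starts with prefix -- this is what prunes board paths."""
    node = trie
    for ch in prefix:
        if ch not in node:
            return False
        node = node[ch]
    return True

def is_word(trie, word):
    """True if word itself is in the dictionary."""
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return END in node
```

The board search calls `has_prefix` on each partial path and abandons the whole subtree of continuations as soon as it returns False.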

The code obviously isn't perfect. I've actually completely ignored the case of 'Qu', which Boggle puts as a single tile. It wouldn't be hard to make my code handle that one, but I left it out as it wouldn't fundamentally change the approach; it just adds a bit of extra complexity that would make it less clear.

One enhancement I could possibly make would be to do the same filtering step I did in the Python version: come up with the restricted alphabet of only letters that appear on the board first and skip any words in the English dictionary that contain any other letters. That would make the Trie construction step faster, and the resulting smaller Trie would use less memory and be faster to search. But building the Trie of the entire dictionary already takes only 82 milliseconds on my laptop, and the second part of the program, walking all the subpaths through the board and checking for matches, takes about 20 milliseconds for a 5x5 board. By way of comparison, the Python program takes 1.5 seconds to run, so that's about a 15X speedup.

The other enhancement which I might do once I've thought about it a bit more is to serialize a built Trie to a file on disk that could be read in quickly and completely skip the step of building the same Trie every time the program is run.

Writing the C version took me a few evenings, with a lot of stupid mistakes and sidetracks, but it was fun. I like looking at the code and knowing that any inefficiency in it is because I didn't design it well enough, not because I'm using a massive library or framework that has a thousand features that I don't actually need for this particular problem.

Gear Review: 2000 Parker Fly Stealth

2000 Parker Fly Stealth

Ever since running across a thread in the Parker Guitar forums last spring showing off a rare "Stealth" model, I knew I had to have one. Fewer than one hundred were made, back in 2000, so I expected a long search and to pay through the nose. This August, though, my search ended much sooner, and for a lot less money, than I had expected, and I acquired my most lusted-after guitar.

My previously reviewed NiteFly-M totally sold me on Parker Guitars. Nothing else plays like them. As I said, pretty much the only downside I could come up with was that it's just not made to easily take thick strings and tune down low as I like to do for playing metal. The floating bridge is very delicately balanced and designed for .009's or .010's. I suspect this is one of the main reasons Parkers aren't particularly well known among guitarists in the heavier genres.

The Stealth appears to have been a brief attempt by Parker to break into metal. It's a Parker Fly Deluxe but with a few changes. First, and probably most importantly, it has a fixed bridge. That makes it more stable, helps sustain, and makes it easier to set up for heavier strings. That was the main feature that I wanted (needed?) from a Parker.

Next, the body and neck are both basswood. Regular Parker Fly Deluxes have a poplar body and basswood neck. Switching to basswood makes the Stealth a little lighter (4.5lbs) and gives it a thicker tone with more bass response.

The frets are the extra large size that Parker normally only uses on the NiteFlys. Extra jumbo frets are a standard feature on metal oriented guitars.

They also added the clever Sperzel "D" Thing tuner to the guitar. The "D" Thing lets you drop the low E string down a step to D accurately with a quick flip of a switch. Very handy if you switch back and forth between standard and drop tunings (actually not my thing, but it doesn't hurt).

The finish on the guitar is also unique. It's a textured grey and black splotchy sort of thing. Kind of like a high tech granite. It's mildly glossy but with the texture and pattern, it just never looks dirty or dusty.

Otherwise, it's the standard set of Parker Fly Deluxe features. Carbon-fiber reinforcement, glass-epoxy carbon-fiber fretboard, stainless steel frets, Sperzel locking tuners, custom DiMarzio pickups (Flys have such thin bodies that normal pickup routes would go clean through, so DiMarzio has to make custom thinner ones for them), piezo pickup in the bridge, stereo output, and the most amazing thin contoured body and neck seamlessly joined. Seriously, I describe it as like playing a razor blade:

Razor Blade

Pretty much the first thing I did when I got it was to take a deep breath and put some Elixir .012 strings on it and tune it to C standard. In the past, I've had to file the nut slots on my Schecter to take strings that thick, so I was a bit nervous that I'd have to do the same to the Parker (marring an otherwise pristine condition guitar). Thankfully, the .012's went on perfectly. .012's in C standard just about exactly match the overall string tension of .010's in E standard, which the guitar had originally been set up with, so I didn't even need to adjust the truss rod. Intonation only required the tiniest tweaks and it was done.

The result has been just as amazing as I'd hoped. I now have a guitar with the unmatched playability of a Parker that has serious amounts of brutal, low end crunch. The DiMarzios, which have such a cutting sound in the NiteFly, stay cutting in the Stealth but gain a bit of force from the thick strings and basswood body. Those smooth stainless steel frets and that unbelievably thin neck make playing it just totally effortless.

It's a shame that Parker didn't manage to win over the metal players and apparently gave up for a while (although one of their recent models is an Adam Dutkiewicz (of Killswitch Engage) signature model, so I guess they're trying again).