
Over at Scalarium we constantly find ourselves adding new statistics to track specific parts of the system. I thought it'd be a good idea to share some of them, and how we're using Redis to store them.

Yesterday I was looking for a way to track the time it takes for an EC2 instance to boot up. Booting up in this case means how long the instance takes to go from state "pending" to "running" on EC2. Depending on utilization and availability zone this can take anywhere from 30 seconds to a full 30 minutes (us-east, I'm looking at you). I want to get a feel for how long it takes on average.

We poll the EC2 API every so many seconds, so we'll never get an exact number, but that's fine. It actually makes the tracking easier, because the polling intervals are pretty much fixed, and all I need to do is store the interval and increment a number.

Sounds like a job for a sorted set. We could achieve similar results with a hash structure too, but let's look at the sorted set nonetheless, because it's pre-sorted, which suits me well in this case. For every instance that's been booted up I simply store the interval and increment the number of instances.

In terms of a sorted set, my interval will be the member in the sorted set, and the number of instances falling into that particular interval will be the score, the value that determines the member's rank. The advantage here is that the set is automatically sorted by the number of instances in each interval, so that e.g. the interval with the most instances always comes first.

We don't need anything to get started, we just increment the score for the particular interval (or member), in this case 60 seconds. Redis starts from zero automatically. I'll use the Redis Ruby library for brevity.

redis.zincrby('instance_startup_time', 1, 60)

Another instance took 120 seconds to boot up, so we'll increment the score for that interval too.

redis.zincrby('instance_startup_time', 1, 120)

After some time we have added some good numbers to this sorted set, and we can start keeping an eye on the top five.

redis.zrevrange('instance_startup_time', 0, 4, :with_scores => true)
# => ["160", "22", "60", "21", "90", "10", "120", "10", "40", "5"]

The default sort order in a sorted set is ascending, hence we fetch a reverse range (using the zrevrange command) to get the five intervals with the highest scores, i.e. the intervals into which the most instances fall.

To get the number of instances for a particular interval, we can use the zscore command.

redis.zscore('instance_startup_time', 60)
# => 21

To find the rank in the sorted set for a particular interval, e.g. to find out if it falls into the top five intervals, use zrevrank.

redis.zrevrank('instance_startup_time', 160)
# => 0

Now we want to find the intervals into which a particular number of instances fall, say everything from 10 to 20 instances. We can use zrangebyscore for this purpose.

redis.zrangebyscore('instance_startup_time', 10, 20, :with_scores => true)
# => ["120", "10", "90", "10"] 

Note that Redis has some nifty special values: you can e.g. ask for every interval with at least 10 instances by using +inf as the maximum, which is useful when you don't know the highest score in the sorted set.

redis.zrangebyscore('instance_startup_time', 10, '+inf', :with_scores => true)
# => ["120", "10", "90", "10", "60", "21", "160", "22"]

Now suppose you want to sort the set by the interval itself, e.g. to display the numbers in a table. You can use the sort command to sort the set by its members, but unfortunately there doesn't seem to be a way to fetch the scores in the same call.

redis.sort('instance_startup_time')
# => ["20", "40", "60", "90", "120", "160"]

To make up for this you could iterate over the members and fetch their scores in one go using the multi command.

members = redis.sort('instance_startup_time')
# Queue one zscore call per member; multi returns the scores in order.
scores = redis.multi do
  members.each do |member|
    redis.zscore('instance_startup_time', member)
  end
end
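
Since the set stores the full distribution of intervals and counts, we can also get at the average boot time I was after in the first place. Here's a minimal sketch (the weighting happens on the client, there's no Redis command for it), relying on the flat member/score arrays the examples above return:

pairs = redis.zrange('instance_startup_time', 0, -1, :with_scores => true)

# Weight each interval by the number of instances that fell into it.
total_time = instances = 0
pairs.each_slice(2) do |interval, count|
  total_time += interval.to_i * count.to_i
  instances  += count.to_i
end

average = total_time.to_f / instances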

So far we've stored all numbers in one big sorted set, which will grow over time, making the statistics very broad and less informative. Suppose we want to store daily metrics and then run the numbers weekly and monthly. We just use a different key, derived from the current date.

today = Date.today.strftime("%Y%m%d")
redis.zincrby("instance_startup_time:#{today}", 1, 60)

Suppose we've collected data over the last two days. Thanks to zunionstore we can add the two sets together. If you have data for all days of the week, you can use zunionstore the same way to accumulate that data and store it under a different key.

redis.zunionstore('instance_startup_time:week48',
                  ['instance_startup_time:20101129', 'instance_startup_time:20101130'])

This will create a union of the sorted sets for the two consecutive days. The neat part is that it aggregates the scores of the elements in the sets. So if on one day 12 instances took 60 seconds to start and on the second day 15 did, Redis sums the scores, giving 27 for the 60-second interval. Neat, huh? What you get is a weekly aggregate of the collected data, and of course it's just as easy to create monthly data.
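
For example, here's a minimal sketch of a monthly rollup, assuming the daily instance_startup_time:YYYYMMDD keys used above:

require 'date'

# Build the list of daily keys for November 2010 and union them
# into a single monthly sorted set.
days = (Date.new(2010, 11, 1)..Date.new(2010, 11, 30)).map do |day|
  "instance_startup_time:#{day.strftime('%Y%m%d')}"
end

redis.zunionstore('instance_startup_time:201011', days)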

Instead of summing up the scores you could also store the maximum or minimum across all the sets.

redis.zunionstore('instance_startup_time:week48',
                  ['instance_startup_time:20101129', 'instance_startup_time:20101130'],
                  :aggregate => 'max')

Of course you could save the extra union and just update counters for days, weeks and months in one go, but that wouldn't give me much material to highlight the awesomeness of sorted set unions now, would it?
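
If you'd rather go that route, here's a quick sketch of what it could look like, updating all three counters in one multi block (the key scheme is just an example):

require 'date'

today = Date.today
redis.multi do
  # Daily, weekly and monthly counters, all bumped in one transaction.
  redis.zincrby("instance_startup_time:#{today.strftime('%Y%m%d')}", 1, 60)
  redis.zincrby("instance_startup_time:week#{today.strftime('%W')}", 1, 60)
  redis.zincrby("instance_startup_time:#{today.strftime('%Y%m')}", 1, 60)
end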

You could achieve a similar data structure using hashes, but sorted sets give you some neat things for free that you'd otherwise have to implement manually on top of hashes. Sorted sets are pretty neat whenever you need a weighted counter, e.g. for download statistics, clicks, or views, pre-sorted by the number of hits (the scores) for each element.
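
A click counter, for example, boils down to the same two commands we've been using all along (key and member names made up for illustration):

# One hit on a page: bump its score by one.
redis.zincrby('clicks', 1, '/pricing')

# The ten most clicked pages, with their counts.
redis.zrevrange('clicks', 0, 9, :with_scores => true)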

Tags: redis

The awesome dudes at Basho released Riak 0.13, and with it the first version of Riak Search, yesterday. This is all kinds of exciting, and I'll tell you why. Riak Search is (way down below) based on Lucene, both the library and the query interface. It mimics the Solr web API for querying and indexing. Just like you'd expect from something coming out of Basho, you can add and remove nodes at any time, scaling up and down as you go. I saw an introduction to the basics back at Berlin Buzzwords, and it was already shaping up to be nothing but impressive. But enough with all the praise, why's this stuff exciting?

  • The key/value model is quite restrictive when it comes to fetching data by, well, anything other than a key. Keeping reverse lookup indexes was one way to do it, but the consistency model of Riak made it hard if not impossible to maintain a consistent list of interesting entries in an atomic way.

    Riak Search fills this gap (and not only for Riak, the key/value store, but for any key/value store if you will) by offering something that scales up and down in the same way as Riak, so you don't have to resort to e.g. Redis to maintain reverse lookup indexes.

    Run queries in any way you can think of, fetch ranges, groups, you name it, with hardly any extra work on your part. It even integrates directly with Riak through pre-commit hooks.

  • It's based on proven technology (Lucene, that is). Rather than competing with something built entirely from scratch, it takes what's been worked on and constantly improved for quite a while now, and raises it onto a new foundation that makes it scale much more nicely, the foundation being Riak Core, Riak KV, Bitcask, and some new components developed at Basho.

  • It uses existing interfaces. Imagine just pointing your search indexing library to a new endpoint, and there you go. Just the thought of that makes me teary. Reindex your data, reconfigure your clients to point to the new endpoint, boom, there's your nicely scalable search index (see the sketch after this list).

  • Scaling Solr used to be awkward. Version 1.5 will include some heavy improvements, but I believe the word shard came up at some point. Imagine a Solr-style search index where you can add and remove nodes at any time, the index rebalancing itself without requiring manual intervention.

    Sound good? Yeah, Riak Search can do that too.
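
To make that concrete, here's a hypothetical query against the Solr-style interface, in Ruby for consistency with the above. The host, port, and index name are assumptions on my part; the /solr/<index>/select path simply follows the Solr convention the API mimics:

require 'net/http'
require 'uri'

# Query a (made-up) 'books' index on a local Riak node, using the
# familiar Solr select syntax an existing client library would send.
uri = URI.parse('http://localhost:8098/solr/books/select?q=title:riak')
puts Net::HTTP.get(uri)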

Remember though, it's just a first release, one that will be improved over time. I for one am just happy they finally released it; I almost crapped my pants, it's that exciting to have something like Riak Search around. And I say that with all honesty and no fanboyism whatsoever. Having used Solr quite a lot in the past, I'm well aware of its strengths and weaknesses, and of the sweet spot Riak Search hits.

I urge you to play with it. Installing it and feeding it with data could not be easier. Well done, Basho!

Update: From reading all this you may get the impression that Riak Search builds heavily on a Lucene foundation. That's not the case. When I say it builds on top of Lucene, I actually mean that it reuses Lucene's analyzers and query parsing. Both can be replaced with custom (Erlang) implementations. That's the only part of Lucene actually used by Riak Search, because why reinvent the wheel?

Tags: riak, search, fulltext

I had the honor of speaking at JAOO, sorry, GOTO, this year. Being among so many great speakers, like James Gosling, Rich Hickey, Martin Fowler, Tim Bray, Michael Nygard, and Dan Ingalls (maker of several Smalltalk versions), made me feel nothing but humble, but not in a bad way. I talked about CouchDB, and if you care for it, check out my slides. This is my takeaway from the conference.

Be Humble

My point here is not to make myself look like someone who's unimportant, though I'm not that important either. I'm humble, that's all. At the speaker dinner on Wednesday night I sat at a table with John Allspaw (Flickr/Etsy), Tom Preston-Werner (GitHub), Andy Gross (Basho), and Mike Malone (SimpleGeo). I knew some of these guys before, and had talked to them in one way or another, but this time was different. First of all, they're an incredibly smart bunch. Smarter than I'll probably ever be. Which is not a bad thing, because if anything it's a motivation to constantly improve myself, to never stop learning.

They shared stories from all the places they've worked, not gossip stories, but more stories on problems they solved and how they solved them. That just fascinated me. I could've sat there for hours, just listening to stories from how they did and do operations, how they handled certain problems, and all that at a scale that's usually way out of my league. I'm usually not a quiet person, but it's times like these where I can just sit and listen.

The problem I realized at some point, though, is that in Germany this culture of sharing simply doesn't exist. People don't talk much about operations, how they solve specific problems, the really interesting stuff. People talk about tools, languages, Amazon Web Services, all that stuff, but not about how they go about solving real-life problems, at any scale. It's sort of sad, and I'm trying to come up with ideas on how to change that. Maybe it does happen, just outside of my usual circles. Other people from around here agree with me though, so I guess I'm not the only one thinking this way.

I just felt lucky to be able to hear what they had to say. I love hearing these stories. There's a lot to gain from them, sometimes even more than from just reading books (which you should still do, of course). In a group like that I much prefer being the humblest in the bunch, and to just listen, observe, and learn. I love getting new ideas, new motivation and energy out of them. That motivation, together with a very specific track, led to another realization.

Get Shit Done!

Every day there was one track at JAOO dealing with Scrum, Agile, Kanban, Devops, Lean, Continuous Something, you name it. I have a rather specific opinion on these topics, which I won't go into right here. I just find the amount of talk on the subjects ridiculous.

Which brings me right to the subject. Instead of talking about agile processes, or whatever kind of process, just get shit done. The secret to being a great coder, operations guy, or even writer is not to talk about becoming one, it's to just start writing. Or, as Tom Preston-Werner put it: Innovate, Execute, Iterate.

Talking about process won't get you anywhere. Pick what works for you and move on. If it doesn't work, reconsider specifically what doesn't, and improve. Don't blame the process. If shit doesn't get done, you have only yourself to blame. This realization is not exactly new, but it blows my mind how much time people spend talking about getting things done, instead of actually doing them. So here's the only tip I'll give you: get shit done. Working in a startup, which I just so happen to do, this is the only thing that matters.

My personal takeaway from JAOO/GOTO, even though it's not directly related to the conference itself but to the stuff I experienced around it: be humble, and get shit done.