It's true, I did write my diploma thesis using Vim. Old-school with LaTeX and C++. When I came to the Mac five years ago I was still using the now pretty much dead Carbon version of MacVim. And well, it just didn't feel right. I'm very comfortable on the command line, but on the Mac I wanted something that integrated well with the rest of the system, something that behaved like a real Mac application.

Of course, like a lot of people, I found TextMate. Who could resist the cool stuff as seen on the first Rails screencasts? I sure couldn't. I've been using it a good three and a half years now, and I still like it.

But recently the new Cocoa version of MacVim scratched my itch. It was Jamis Buck's blog entry that eventually pushed me over the edge, and had me trying out working with Vim for the last week or so. And holy crap, a lot has happened in the world of Vim since I left it for good. Omni completion, a really good Rails script package, and lots of other cool stuff.

So I gave it a go, and it was like Jamis said, it kinda felt like coming back home. I spent most of my university days with Vim, actually the first years using the old-school Solaris vi. So it basically felt like I never left.

I got pretty fluent with it pretty quickly, and started looking for a nice set of scripts that would fit my workflow. I found a few, among them rails.vim, NERD tree, and taglist.

It all felt pretty good in the beginning; rails.vim especially is an amazing package. But using it for a week made me realize one thing: I hadn't dived deep enough into the Rails bundles for TextMate. A lot of what rails.vim offers, the TextMate bundle has too. What is seriously cool in Vim is the completion, but I just don't use it that much, and it can be frickin' slow if you're on a big project.

And that's my main gripe: it all didn't feel very speedy. It took one to two seconds for a normal file to load, what with all the BufRead action going on for Ruby and Rails project files. I didn't mind it that much in the beginning, but it got really annoying. Plus, a lot of the plugins, like NERD tree or taglist, felt kinda bolted on.

So here I am working in TextMate, still loving Vim for its flexibility and simple effectiveness, promising myself to delve deeper into what the bundles offer. It was a great week, and I'm glad that Vim gets the love it deserves.

One issue that drove me back to Vim was the fact that there's no news on what's happening in TextMate development, and what will be in 2.0. What the week in Vim made me realize was that TextMate could use stuff like split-screen editing, the ability to handle bigger files without hogging memory and CPU, and maybe some really good SCM integration.

My biggest gripe though was that file types didn't stick; switching from Rails to RSpec and Shoulda and back just seemed to confuse TextMate. I was made aware that there actually is a "fix" for that problem, but that just isn't a full solution. It helps right now, but I can only hope that TextMate 2 integrates something like TabMate, just maybe not with modelines but with metadata.

That's what it's been. And who needs to work when there are so many nice projects to work on, eh? Well actually, I did work, but in my free time I also worked on some other things: a new one, and some existing projects.

I started wrapping up Macistrano for a first release. It's looking good so far, but I still need some new icons, so that it won't look like you have two instances of CCMenu running in your menu bar. If you're wondering what it is: It's a small desktop frontend written in RubyCocoa (so yes, it's Mac only) to monitor deployments running on Webistrano. Webistrano in turn is an enhanced web frontend for Capistrano. So as you can see, there's lots of meta involved here. I basically built a tool to access a tool that runs a tool. Neat stuff, eh? If your deployment is finished in Webistrano, Macistrano will notify you through the convenience of your desktop. You can also fire off deployments with it.

Speaking of Webistrano, I had good fun working on it too. Mainly some stuff that I wanted to see in it, like code preview for recipes, syntax checking, and versioning. But something that really scratched my itch was import of existing Capfiles, so I just had to try implementing it. As Jonathan will no doubt confirm, it's one of the first questions that pops up when you tell people about Webistrano: can it import my Capfile? Fret no more, it might just do that in the near future. It was a nice experience, because you definitely have to have a look at the Capistrano source to find out how it actually stores the configuration internally.

Then there's ActiveMessaging, a small library I have a certain knack for. I wanted to see support for storing messages in case the message broker is down. JMS has something similar, so why can't ActiveMessaging? I built the initial code in February, but didn't get around to actually finishing it until recently. What it does is save your message through ActiveRecord if there's an error that indicates your broker is down. That way your application most likely won't be harmed if your messaging infrastructure has a problem. To recover them you can simply run a daemon that will regularly poll the storage and try to deliver the messages. The code didn't make it to the trunk of ActiveMessaging yet, but you can have a go at it on GitHub.
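The idea can be sketched in a few lines of Ruby. All class and method names below are made up for illustration; the real code on GitHub hooks into ActiveMessaging and ActiveRecord, but the shape is the same: catch the broker error, park the message, replay it later.

```ruby
# Raised (in this sketch) when the message broker is unreachable.
class BrokerDownError < StandardError; end

# Stands in for the ActiveRecord-backed message storage.
class StoredMessage
  @@messages = []

  def self.store(destination, body)
    @@messages << [destination, body]
  end

  def self.pending
    @@messages
  end
end

class ResilientPublisher
  def initialize(broker)
    @broker = broker
  end

  # Try to deliver; if the broker looks down, park the message instead
  # of letting the error bubble up into the application.
  def publish(destination, body)
    @broker.call(destination, body)
  rescue BrokerDownError
    StoredMessage.store(destination, body)
  end

  # What the recovery daemon would do on each poll: replay stored
  # messages and keep only the ones that still fail.
  def redeliver
    StoredMessage.pending.delete_if do |destination, body|
      begin
        @broker.call(destination, body)
        true
      rescue BrokerDownError
        false
      end
    end
  end
end
```

The application never sees the broker outage; the worst case is that messages arrive once the recovery daemon gets them through.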

I also read "The Ruby Programming Language", a highly recommended book. A detailed review will follow. But first I'm off to my well-deserved honeymoon.

While working with git-svn over the last week I ran into some minor things that weren't really problems, but still kept me mulling them over every time they happened.

After finishing work on a remote branch I did the usual chain of commands:

$ git checkout master
$ git svn rebase
$ git merge my_branch
$ git svn dcommit

Now, while this works beautifully I had two different experiences of how git svn dcommit would put the changes from the branch into the Subversion repository. On one occasion it would just beautifully commit every single commit I made on my local branch. On the other it committed all the changes at once, in one single Subversion commit, using the message "Merged from 'my_branch'".

While it's all no big deal, I couldn't put my finger on why it works that way. Either the man page isn't fully clear on the matter, or I just didn't fully understand it. I dug a little deeper through the internets and found out that git svn dcommit will only commit your merged changes as a whole when you did git svn rebase and there were actually changes coming in from the Subversion repository.

If no one else committed while you were working, everything's fine. Knowing that difference can work out as an advantage, especially if not all of your local commits were clean.

Other than that you can just do the whole procedure from your branch.

$ git checkout my_branch
$ git svn rebase
$ git svn dcommit
$ git checkout master
$ git svn rebase
$ git branch -d my_branch

By the way, you can change the message "Merged from 'my_branch'" by calling git like so:

$ git svn dcommit -e

After more than two days of removing deprecation warnings, adding plugins, fixing some custom additions, going through the whole application, it's finally done. We're running Rails 2.0. Nothing more gratifying than seeing this, well except for the application running without problems:

Picture 1

There were some minor annoyances, but all in all it was straight-forward work. One thing was that acts_as_ferret 0.4.0 does not work with Rails 2.0.2, but the upgrade to 0.4.3 doesn't go without pain either. In 0.4.1 index versioning was introduced, which will happily start indexing your data when you first access the index.

Be sure to have the exception notifier plugin ready; it will help you find some of the bugs you might have overlooked.

Rails 2.1, we're ready for you!

It's a classic. You want to return random rows from a table, say a collection of random users in your social network. Easy, MySQL's ORDER BY RAND() to the rescue. After all, everybody's doing it. At least on my last search on that topic, all the PHP kids did it.


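The query in question looked something like this (a reconstruction of the snippet, assuming a users table):

```sql
SELECT * FROM users ORDER BY RAND() LIMIT 10;
```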
There. Does what it's supposed to.

Your social network keeps growing and growing, and after about a year and 50,000 new users you notice a slow-down on the page where you show random users. You're thinking of caching it, but what's the point? It wouldn't be random.

You break it down with EXPLAIN and realize with horror that your fancy query doesn't use the nice index you placed on the table ages ago. And why would it? It's calling RAND() for every row in your table, there's just no way around it. So what to do?

One alternative is to fetch random IDs first and then join the IDs found with the USERS table to fetch the real data. How do we do that? Why, using ORDER BY RAND(), of course. Wait, didn't I just say you're not supposed to use it? Well, I did say that, but the difference is that we'll run the ORDER BY RAND() on the best-indexed column there is, the primary key. Let's break it down and get our hands dirty:


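That two-step approach could look roughly like this (table and column names assumed):

```sql
-- Step 1: order only the primary key by RAND(). This still touches every
-- row, but it can be satisfied from the index instead of the full rows.
SELECT id FROM users ORDER BY RAND() LIMIT 10;

-- Step 2, folded into a single statement: join the randomly picked IDs
-- back against the table to fetch the real data.
SELECT u.*
FROM users u
INNER JOIN (SELECT id FROM users ORDER BY RAND() LIMIT 10) AS random_ids
        ON u.id = random_ids.id;
```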
And with a little bit of thinking we got ourselves a nice and fast way to fetch random data. Most likely there are other ways out there (sometimes I do miss Oracle's ROWID), but this one worked fairly well for me. It probably won't scale forever though, so be prepared to get back to it every once in a while.

Jenifer Altman, a very talented Polaroid and Hasselblad (did I mention I want one of these?) shooter, is working on a project to celebrate and honour the art of Polaroid photography before it completely dies (I still have high hopes that's not gonna happen) within the next year. The project is titled "For The Love of Light", and I was invited to take part in it, which I'm rather thrilled about. Around mid-July the project will eventually be turned into a book which will be available to the public. The list of artists includes awesome photographers from 10 different countries, and the result will, no doubt, be stunning.

If you want to be up-to-date about the book, there's a mailing list over at the project's website.


The hardest part for me (as for everyone else involved I'm sure) will be to pick two Polaroids that can state the love for the most unique kind of photograph.

In other news, I got hitched in Sydney. Getting married in Australia was easier than we first thought, so we did it. A small ceremony in the park, summer, sun, beach, good friends. What's not to like? It was definitely worth it. Both the wedding, and the two weeks in Sydney.

just married

I ran across a weird bug the other day that seems to have been fixed in Ruby 1.8.5. It's nonetheless quite an interesting one. When you use a hash as a method parameter, and that hash happens to contain the key :do, and you call the method without parentheses, older Rubys choke on it:

def my_method(opts)
end

my_method :do => "commit"

It works when you put parentheses around the parameter:

my_method(:do => "commit")

Putting it in front of other entries doesn't work though. Ruby seems to think I want to start a block where it's not allowed. Putting the do into a string works just fine, of course.

Funny stuff. No mention in the Ruby changelogs, but it does work in later versions.

For a recent project I had the pleasure of working with Paypal, especially with the Instant Payment Notification API. I hadn't heard a lot about it before I tried to marry it with Rails, but what I'd heard made me assume it wouldn't be a piece of cake. And I was right.

I'd love to share some code with you, but Vasillis Dimos beat me to it. He wrote two posts on Paypal IPN and Rails, one dealing with the basics and the other about mocking IPN, which you really need to do to test your code. Really.

Personally I did the testing a little differently, since all my payment handling logic was in the model. I didn't use ActiveMerchant either, but just the Paypal gem. But in general things are similar. Outside of the US and the UK you're pretty much out of choices for payments, since there's no Web Payments Pro available here, so IPN is (sadly) the way to go. It's a real PITA and here's why:

  • Paypal needs to reach your development machine from the outside. For testing with mocks this is not an issue of course, but when you need to test against the Paypal sandbox (which is painfully slow) and, god forbid, the real Paypal, there's no way around it.
  • The payment flow is unnatural. You have to handle the payment outside of the user's page flow. You have to rely solely on the stuff you get from Paypal, no session, no cookie, no user. It takes a lot of care to handle all that and there still might be a hole in your code that could be exploited.
  • IPNs might come in late, sometimes only after the user has already gotten back to your site. Now you want to present him with a nice success message, but that's not gonna happen then. That's a rare case though. The IPNs do come in slower from the sandbox, that's for sure. It's up to you how to handle that. You can act in favor of the user, or you can just make him wait till everything has fallen into place.
  • In rare cases you won't get an IPN from Paypal, for whatever reason. I've seen this happen. Be prepared to create the successful payment by hand or have something like a small GUI at hand to do it.
  • For subscriptions, six different notification types need to be handled. And they're even spread out over two different fields in the notification.

Some advice on how to get it right:

  • Log everything. Store the IPNs in the database, in the log files, wherever. Just log them. They're your proof of things that happened. Just storing them with their raw post data should do, while keeping the most important fields separately in different columns.
  • Use mocks. It's not hard, but it's totally worth it. When you want to test all events that Paypal might send you, which is a lot for subscriptions, it's a painful development cycle. And some events aren't even fully testable by hand.
  • Decide on a strategy to handle fraud. While your IPN URL is not really public (nothing should link here, and it's hopefully transmitted to Paypal encrypted), it's not exactly safe to just accept everything.
  • Don't return errors in your IPN handler. Paypal will try it again.
  • Store a pending payment and make it a full one when the corresponding IPN arrives.
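The "log everything" and "pending payment" advice could be sketched roughly like this. All class names and fields here are made up for illustration, and a real handler would of course also verify the notification back with Paypal before trusting it:

```ruby
# Keeps every notification we ever received, raw post data included,
# as proof of what happened.
class IpnRecord
  attr_reader :raw_post, :params
  @@all = []

  def initialize(raw_post, params)
    @raw_post, @params = raw_post, params
    @@all << self
  end

  def self.all
    @@all
  end
end

# A payment is created as pending when the user starts checkout, and
# only completed once the matching IPN arrives.
class Payment
  attr_reader :txn_id, :status

  def initialize(txn_id)
    @txn_id, @status = txn_id, :pending
  end

  def complete!
    @status = :completed
  end
end

class IpnHandler
  def initialize(payments)
    @payments = payments # pending payments indexed by transaction id
  end

  # Always store the IPN first, then act on it. Returning normally even
  # for unknown notifications keeps Paypal from retrying forever.
  def handle(raw_post, params)
    IpnRecord.new(raw_post, params)
    payment = @payments[params["txn_id"]]
    payment.complete! if payment && params["payment_status"] == "Completed"
  end
end
```

An unknown transaction id just gets logged and ignored, which is exactly the case where you'd later create the payment by hand from the stored record.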

All that said, it was an experience, and while not always pleasant, at least I learned something. But Paypal is far from being a pleasant way to handle payments if you want to make it secure, protect the integrity of your application, and prevent fraudulent users from abusing your services, all of which should be your primary interests.

I just spent the last hour banging my head on my desk trying to get any kind of date type (whether java.util.Date or a simple timestamp) from the current time and a timezone identifier (something along the lines of Etc/GMT+12). You'd think this is an easy task. Obviously the GregorianCalendar takes a timezone as a constructor argument, so it really should be.

It is until you've called getTime() on the calendar object and wonder why you're still getting your local time. And on further inspection you realize that GregorianCalendar doesn't even care about the timezone object you've just given it. Only setting it through setTimeZone() makes it recognize that you actually want it to use a different timezone for date and time calculation.

This should work now, right? Of course it should, but getTime() and getTimeInMillis() still return the local time and don't mind that you don't want them to. Only if you use a date formatter like SimpleDateFormat will Java remotely start to understand what you really want. But it stays awfully quiet about the fact that you then can't get a simple timestamp anymore.
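A small self-contained example of the dance, using nothing but plain java.util and java.text: the calendar's timestamp is an absolute instant, and only a formatter with an explicitly set timezone will render it as wall-clock time in that zone.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class TimezoneDemo {

    // The only reliable way to get wall-clock time in a target zone:
    // format the instant with a formatter that had its timezone set.
    public static String format(long epochMillis, String zoneId) {
        SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd HH:mm");
        // Without this call the formatter silently uses the local timezone.
        formatter.setTimeZone(TimeZone.getTimeZone(zoneId));
        return formatter.format(new Date(epochMillis));
    }

    public static void main(String[] args) {
        TimeZone target = TimeZone.getTimeZone("Etc/GMT+12");
        GregorianCalendar cal = new GregorianCalendar(target);
        // getTimeInMillis() is an epoch offset, identical no matter which
        // timezone the calendar was constructed with.
        long now = cal.getTimeInMillis();
        System.out.println(format(now, "Etc/GMT+12"));
        System.out.println(format(now, "UTC"));
    }
}
```

Note the POSIX-style sign inversion on top of everything: Etc/GMT+12 is actually twelve hours behind UTC.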

There's an article on ONJava with more detail on this.

And there I thought timezone handling in Rails would be complicated.

I'm currently working with a proprietary framework. Which is not bad per se. Compared to others I've worked with it's a nice framework to work with. It uses Spring heavily which is a plus and makes working with it quite flexible.

All that said there's one thing that bugs me about it, and that bugs me about Java in general. The overuse of final. Not for constants, mind you, but for methods and classes.

Why on earth would someone want to impose this restriction on developers? And by someone I don't especially mean the framework creator, but also the Java creators. Is it really worth sacrificing flexibility and extensibility to ensure that nobody overrides your methods or extends your classes to customize them? Do they really think people are so stupid that they can't decide for themselves what to do with a framework? It's just beyond me. What are classes, inheritance and all that object-oriented mumbo jumbo for anyway?

If something is so important that it shouldn't be overridden, then please, document it. Don't leave the developer asking why he has to go and reimplement everything himself just to have a certain part of your framework's functionality available to him.

First up: RailsConf Europe 2008 will be in Berlin again. Woot!

As I wrote yesterday, Marcel Molina and Michael Koziarski did a little Best Practices session for starters. Other than that, day two was only okay.

Ola Bini repeated in his JRuby talk pretty much what Charles Nutter and Tom Enebo said on the first day, plus some shameless ThoughtWorks plugging.

I did enjoy the talk on Selenium by Till Vollmer. It's been on my list for a while, and it does look quite interesting. The question that pops up in my head as a friend of continuous integration is of course how to automate this. But I'll just have to read up on that.

Ben Nolan (creator of Behaviour.js) showed some neat tricks using functional programming with JavaScript. He brought up some ideas and showed code, which I very much appreciated. Nothing new for the JavaScript cracks really, but still interesting.

Jay Fields talked about the presenter pattern in Rails. I bet a lot of people thought after the talk: wtf? To sum up, his findings on the presenter pattern in Rails were rather negative and probably not what a lot of people expected. I found his talk to be a change from the others. It's not always the success stories that make you push hard, but also the downfalls, even if they're small ones. He put up all the details on his blog. Definitely worth checking out.

In all I would've wished for more detail in the presentations. A lot of the presenters spent too much time introducing things, presenting theory, and so on. More code please, people! When people come to the RailsConf I take it for granted they know Rails enough to get down and dirty immediately.

As DHH wrote on his blog, I too was quite impressed by the engagement of Sun in Ruby and Rails. Craig McClanahan (of Struts fame) talked about it and said he can't imagine going back to Java for web development after having worked with Rails. Amen to that.

I got some nice ideas and things to look into out of it, but I had hoped for more. Still, I'm looking forward to next year.

Day one of the RailsConf Europe is over (for me anyway), and so here's my summary of what I've seen and heard today.

It all really started yesterday with Dave Thomas' keynote on "The Art of Rails". The talk was inspiring. It wasn't really any new stuff, basically a nice speech with visuals about what the Pragmatic Programmers have already written about. The comparison to art sounds far-fetched to a lot of people, and it might even be. Still, there's a lot to learn from art that can be applied to software development. Casper Fabricius published a nice summary.

The morning keynote by David Heinemeier Hansson was okay. It wasn't great. It pretty much summed up all the neat new features of Rails 2.0. There's another nice summary over at the blog of Casper Fabricius.

My session schedule started out with Benjamin Krause's "Caching in Multi-Language Environments." He actually patched the REST routing in Rails to support the content language as a parameter for a resource URI, e.g. /movies/. Neat stuff. He also implemented a language-based fragment cache using memcached. Both will be available later this week on his blog.

Next up was Dr. Nic's talk on meta-programming with Ruby and Rails. My word, I love his Australian accent. His talk was highly entertaining, I was laughing a lot. But it was also inspiring. He's very encouraging about trying out the meta-programming features of Ruby and doing some weird, funny and useful stuff with it. He already put up his slides for your viewing pleasure.

The afternoon was filled with the wondrous joys of JRuby and Rubinius, presented by their respective maintainers Charles Nutter, Thomas E. Enebo and Evan Phoenix. I'm hooked on both now. Especially Rubinius impressed me a lot.

Roy Fielding's talk on REST was something I was really looking forward to, but it turned out to be more of a summary of his dissertation. The part on REST was good, but he spent an awful lot of time telling the history and theories behind REST.

The smaller diamond-sponsor keynotes by Jonathan Siegel of ELC Tech and Craig McClanahan were short, but pretty good I'd say.

In all, the day was pretty good, and I'm looking forward to tomorrow.

I can safely say that "Bratwurst On Rails" was a success. A lot of people showed up at the Kalkscheune, ate Bratwurst and had a good time.

Some statistics:

  • ca. 400 guests
  • 800 Bratwursts
  • 140 chicken sausages
  • 125 vegetarian sausages
  • 1100 bread rolls
  • 150 cupcakes (courtesy of Cupcake Berlin)
  • 100 brownies (courtesy of Misses & Marbles)

Thank you to all the people who helped organise and run the event, and thanks to our sponsors. Without you, the event wouldn't have been possible.

The first pictures are showing up on Flickr, so keep an eye on the "bratwurst on rails" tag. Here are some of the photos my girlfriend snapped yesterday.

Bratwurst On Rails 2007

I'm still a little bit exhausted, but it was all worth it.

Says Steven Frank (of Panic Software, of Transmit fame):

A good bug, I mean a really good, pound-your-head-on-the-desk-for-a-week bug, is exactly like a magic trick in that something impossible appears to be happening.

Spot on, Steven. If you had one of these yourself, you know how true it is.

Several friends tried out Ruby and Rails over the last few months. Apart from the fact that most of them like it but have to get used to the different syntax, there's one question that popped up several times, and that I've already discussed with several long-time Rails users: what IDE are you using?

The answer I give them is always: I use TextMate. I know, I know, it's not an IDE you're saying. I'm well aware of that and I didn't imply it would be. That statement just implies that I don't feel the need to use one.

I'm aware that NetBeans seems to be the king of the hill right now, when it comes to Ruby and Rails support for a Java-style IDE. The latter is what bothers me about it though. I don't need all the fancy assistants, the dialogs to generate code, and I can live without the code completion. I know it's something that's missing when you're starting with Rails and come from a code-completed background, but the conventional approach makes it easier to get used to the way you deal with the framework. That's my experience at least.

The weird thing about that? When I work with Java, I do use all that stuff. A powerful IDE takes the pain out of Java. And that's the reason why I don't fancy one for Rails development: there is no pain. If there is one, it's very different from the pain of Java development, and it's not the IDE that could help me then. I work fluently with TextMate and the command line, so I have no urge to switch from typing to clicking to get things done.

I love seeing how the community is pushing the Java IDEs to be usable for Rails development, but right now it's just not for me.

MarsEdit 2.0 has been released recently. It's been my blog editor of choice for more than two years now, and the UI facelift it got was long overdue. No more drawers, just like Apple Mail, and best of all, Flickr integration.

MarsEdit 2.0 Flickr Browser

Since I use SimpleLog (which I can highly recommend, by the way), which doesn't support file uploads, I use two options: Skitch (for which I have two invites left, if you're up for it) and Flickr. The former is useless for anything other than dumping a snapshot, but that you can do pretty quickly. Flickr on the other hand is my personal dump of photos, so when posting something about these (like last week, for example), the integration comes in handy.

By the way, while writing your posts in MarsEdit you can get a perfectly usable preview of how they will look in your blog. I had already edited my template accordingly before MarsEdit 2.0 came out, but Daniel Jalkut (author of MarsEdit) posted a blog entry on how to do that.

MarsEdit is among my favourite tools on the Mac, and I highly recommend giving it a go.

Okay, maybe that statement is a slight exaggeration. I started getting into photography about a year ago, mainly thanks to my girlfriend's diploma thesis. I started with a simple point & shoot, but it soon got annoying to be held back by its restrictions, especially when there's a Nikon D80 in the same household.

While it's still fun to shoot with it from time to time, there's something I enjoy more: photographing with film. In March I got a Polaroid which kind of started that new obsession. Polaroid photos have a quality and a uniqueness that can't be found in digital photography.

mix and match recently on the red carpet

It's an expensive hobby, but it's good fun. Recently I got an SX-70, a single-lens reflex Polaroid, from a friend in Boston. That one definitely takes Polaroid photography to the next level. It's thirty years old, but is still good for excellent shots. The one on the right was taken with it.

The other affinity I have now is an analogue SLR, a BX20 made by Praktica. Almost twenty years old and one of their last products before the wall came down. My sister had one lying around unused. So there was a nice opportunity to get going with an SLR and I took it. I'm quite glad I did. That's what comes out when using an expired Lucky Film (from China) with a 50mm lens:

the devil rides a beach cruiser

I'm quite hooked on film right now. There's nothing like the excitement of watching a Polaroid develop, or of picking up prints after development. Sure, you get a lot of weird looks from people, especially with a Polaroid, but who cares.

Since I started freelancing I've worked less and less in a Java environment. Which is quite a good thing for me, since I get to do what I enjoy. I worked in a J2EE environment for three years. Though I never got to feel the pain of using entity beans, I still started loathing it over time. It just felt too heavy-weight. Testing is a pain, deployment takes ages, and it can just get frustrating what with all the waiting and the complexity involved.

I'm not going to discuss the pros and cons of J2EE here. It has its setting and that won't change for a while. There are some things that can make Java if not worthwhile, then at least a little bit fun. One of them is the Spring Framework which I've been using on several projects now, and which impresses me every time.

The other one is IntelliJ IDEA. I wrote about my switch from Eclipse a while ago. I'm still not looking back. I'm currently using IDEA 7.0 Milestone 2, and the integration of Spring and Hibernate impresses me every time I run across a new neat feature or just use it. It's been well worth its money till now.

There are some downsides though. I only have to look at my memory meter to see them. IDEA easily takes up 400 MB real memory. Throw in Tomcat (though a lightweight joy compared to JBoss) with another 150, maybe Oracle running on Parallels and it's not fun anymore. Swapping galore. Thanks to Hibernate I can work with MySQL most of the time, otherwise it would be a pain. It's a little shocking to see that 2 GB of memory are almost not enough for Java development on the Mac anymore.

Memory usage is most likely the reason why I'm still not very fond of using a full-blown IDE to develop with Rails. TextMate is still my number one choice, and I don't see that changing soon. Though I have yet to try out the Ruby and Rails plug-in for IDEA.