I'm not religious about anything, but if there's one thing I've adopted and applied rigorously over the last few years, it's testing. Out of that habit I've developed some principles that drive how I write tests.

  • You don't have to avoid stubbing and mocking everything, but you can try

    Reducing the number of collaborators will result in smaller methods, which are in turn easier to test. In the end, stubbing things out is better than needing complex dependencies in place just to run a test suite.

  • Test everything that's important

    And if it's not important, why not just throw it out? If you can't throw it out, then it's probably important. So make sure you write tests for it. Simple as that.

  • If you don't know what else to do, write a test

    Stop thinking about how the code should look, and start thinking about what you want it to do. Some people call that working test-first; I just use it to find out how I want the code to work from the outside. When you put every aspect of the method in question into test cases, the way it should work will take shape naturally.

  • The testing framework is not your biggest problem

    While I prefer the style of RSpec and Shoulda for writing tests, the question of which framework you use is usually not the problem. The point is to write tests; choose whichever one makes that easiest for you. You can write tests with all of them.

  • Don't DRY out your tests

    Tests are about readability. They're supposed to tell the reader what the code is supposed to do, without having to browse up and down in the test file to find methods that are used to create objects, or worse yet, through fixtures. Don't try too hard to pull common setup code for scenarios into separate methods; it's just not that important in tests. Create your scenarios where you need them, or use a tool like factory_girl, where you collect object scenarios in a single file (there's a short sketch after this list).

    Don't put too much code into separate methods for reuse in multiple test cases. If you do, make sure you name them in a way that will ensure readability, and think twice before you move code away from the point where other developers will look first.

  • Keep your tests small

    Instead of testing everything in one test method, create a separate test for every aspect of the method under test. The tests are easier to read and errors are easier to find.
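
To make the factory_girl point above concrete, here's a minimal sketch using its classic API. The User model and its attributes are made up for the example:

Factory.define :user do |u|
  u.name  "quentin"
  u.email "quentin@example.com"
end

class UsersControllerTest < ActionController::TestCase
  def test_show_renders_the_user_page
    # Build the scenario right where it's needed, no fixtures involved
    user = Factory(:user)
    get :show, :id => user.id
    assert_response :success
  end
end

The factory lives in a single file, and each test still reads top to bottom without jumping around.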

Writing good tests is an art in itself, but trying your best to learn it and constantly improve is, in my opinion, the best thing you can do for yourself.

For Bratwurst on Rails I implemented a small application to allow for easy signup. We could've used something like upcoming or wevent, but first we wanted to give people an opportunity to tell us what they'd like to eat, and second, we needed room for our sponsors.

As you might imagine, the application doesn't do a lot. It comes with a flashy design, two simple controllers and one model. Nonetheless, I wouldn't have felt comfortable putting the application out in the open without some testing. While I'm sure I didn't test every aspect of it, the suite covers enough of the functionality to ensure it really works.

[Screenshot: Bratwurst tests]

Testing is the part of Rails that has really grown on me, and it's what drew me to Rails in the first place. The integration of the different types of tests, the fixtures, the ease of writing tests: all that should be reason enough to write tests even for the smallest projects.

If you're just getting started with Rails or haven't looked into testing yet, I suggest you do so soon. It can save a lot of trouble. I don't want to say time, because in the beginning that might not be true. It takes a while to get into testing and to get so comfortable with it that it's only natural to start writing a test for your controllers before you even think about opening the browser. But in the long run it's definitely worth it.

The funky Growl notification is courtesy of autotest, by the way. A shame it took me so long, but I only just started using it, and I already can't remember what it was like without it. Notification icons and notification code can be found on the blog of internaut.

Tags: rails, testing

I recently started picking up RSpec on a current project. Not too far along the way I found myself wanting to use it to test a SOAP web service written with Action Web Service.

Given that these map to controllers, it's pretty easy to do. You can basically use the same tools available in Rails' functional tests. All you need to do is include the correct helper, the one that defines those methods, in your specification.

require 'action_web_service/test_invoke'

And that's it. From then on you can test your web services like controllers. Given that you have a Service::SearchApi that is implemented by Service::SearchController, you can just do this:

describe Service::SearchController do
  it "should find users with a valid input" do
    users = invoke :find_users, "quentin"
    users.should have(2).items
    users.first.should be_instance_of(User)
  end
end

There. It's that easy.

Tags: rspec, testing

When using CruiseControl.rb for continuous integration and RSpec for testing, the defaults of CruiseControl.rb don't play that nicely with RSpec. Thankfully, that can be remedied pretty easily.

By default CruiseControl.rb runs its own set of Rake tasks, which invoke a couple of Rails' tasks, including db:test:purge, db:migrate and test. You do want the first ones, but you'd rather have CruiseControl.rb run your specs instead of the (most likely non-existent) Test::Unit tests.

Before running its default tasks, CruiseControl.rb checks whether you have a specific task configured in your project settings, and whether you have a task called cruise in your project's Rakefile. You can use either, but I just declared a cruise task in the Rakefile, and I was good to go.

That task can do pretty much the same as the original CruiseControl.rb implementation, and can even be shorter since it can be tailored to your project, but it invokes spec instead of test. One caveat is to set the correct environment before running db:migrate. I split out a tiny prepare task which does just that, and which can do a couple of other things if necessary, like copying a testing version of your database.yml.

desc "Task to do some preparations for CruiseControl"
task :prepare do
  RAILS_ENV = 'test'
end

desc "Task for CruiseControl.rb"
task :cruise => [:prepare, "db:migrate", "spec"] do
end

Simple as that. The spec task will automatically invoke db:test:purge, so you don't need to add it yourself.

For a new project I wanted to try some new things, the moment just seemed right, so let me just give you a quick round-up.

  • Machinist - I liked factory_girl, but after looking into Machinist, factory_girl suddenly seemed tedious by comparison. It took me a while to replace the fixtures with Machinist, but it was totally worth it.

  • resource_controller - Way to DRY out your controllers. It abstracts away a lot of the tasks you repeat in RESTful controllers, but in a way that doesn't feel like it's totally out of your hands. Just the right amount of abstraction.

  • Cucumber - When I first saw the PeepCode on RSpec user stories I was a little bummed, but that was mainly because the PeepCode itself didn't really show the power of stories for integration tests. Quite the opposite: it used the stories to work directly with the model, and to test validations. Not really what I fancied; I already had a tool for that.

    But Cucumber, where have you been all my life? I started working with it today, and after just a few hours it already felt natural to put the things you expect from your application on the user level into sentences, and to write or reuse the according steps (there's a small sketch of a feature file after this list). If you haven't already, do give it a go. It's been the missing tool for integration testing in my toolbox, and I'm in love with it already.

    It integrates nicely with a lot of things. For me, Webrat is sufficient right now, but if you fancy it, use Selenium, Celerity, Mechanize or whatnot.
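
Since the sentences are the whole point, here's a rough sketch of such a feature file, using Webrat's generic steps. The application, fields, and messages are all hypothetical:

Feature: Signup
  In order to attend the conference
  As a visitor
  I want to sign up with my name

  Scenario: Successful signup
    Given I am on the signup page
    When I fill in "Name" with "Quentin"
    And I press "Sign up"
    Then I should see "Thanks for signing up"

Each step maps to a few lines of Ruby, either generated (webrat_steps.rb) or written by you, so the plain-text sentences stay readable while the steps drive the full application stack.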

In other news, I gave acts_as_solr a new home. It's not fancy yet, but at least there's an up-to-date RDoc available.

It's test-awareness month over at RailsTips. If you still need reasons, motivation or general tips on testing, head over there immediately. If you're a fan of new year's resolutions, this is your chance. John Nunemaker is spot-on with this series.

So this month's motto is: Test Or Die!

Tags: testing

Pat Maddox recently published a blog post on mocking called "You Probably Don't Get Mocks." I've wanted to write something about my experiences with mocks for a while now, so here's a good reason to finally do so. I'm a recovering mock addict, if you will, so this is my retrospective of the things I've learned over the last 18 months, and how my testing workflow changed along the way.

When you dive into RSpec, mocking is put in your face right from the start. For some it might even be the thing that makes BDD what it is: testing how your code behaves. I followed that path for about a year. I wouldn't say religiously, but I did use mocking wherever it made sense. The project started out with no tests at all, so mocking actually came in handy for some of the bigger controllers, in the beginning, that is. It also made the whole test suite rather speedy. When I finished, the project had about 2200 specs which took less than two minutes. A good run, though certainly with room to improve performance.

But towards the end some things changed, and my testing workflow changed with them. Pat might argue that I probably don't get mocks, but the tests felt brittle. That's not all the mocks' fault; the code was still very brittle in some areas, so the tests had to do quite a lot to get a decent setup. It just didn't feel right to use mocks for controller tests.

These days I use Cucumber to do full-stack testing, so on that point I agree with Pat: if you have a decent acceptance test suite, the brittleness mocks and stubs bring into your tests gets less annoying. But all that got me thinking. If I need integration tests to ensure that my mocks still do what they're supposed to, something isn't right. On said project I only had a good suite of controller and unit tests (yeah, they're specs, I know, I'm not religious about the name; in the end they all test the code), so I had to rely on those tests to give good and reliable results, at least on that level.

Write Code That Doesn't Suck

When I watched Yehuda Katz's talk at RubyConf 2008, it hit me. Who cares what your controller does on the inside, how it gets things done? Does it really matter? Is that a defined business value? No, because what matters is what comes out in the end, what the user sees. I also watched the talk called "Testing Heresies" that Pat mentions, and here I agree with Pat: it was not at all convincing. But for me, Yehuda's talk hit right home.

It was pretty much that realization that made me testing-framework agnostic. It just doesn't matter to me what you're using to write tests, as long as you write them, and as long as they're a reliable base for working with your code. Sure, I appreciate RSpec's syntax and how it tries to embrace BDD to the fullest, but in the end, it just doesn't matter. If you don't care about all the gory internal details of a test, then it doesn't matter whether you use a fancy should syntax or plain old Test::Unit tests.

With Cucumber, the question for me is: if I have a decent acceptance test suite, what do I need big tests on the controller level for? The acceptance test suite should cover most of what the controller does anyway. I started using resource_controller a while ago, and it takes away a lot of the "complexity" you usually have in your controllers. It reduces controllers to what models in the Java world used to be: dumb classes. That in turn reduces the amount of testing code for controllers. While that feels right to me, you could still argue that mocks and stubs make sense for those cases where you still write tests on the controller level. Sure, but even in those cases I'd rather rely on real data, even if it slows down my test suite. The slowdown can't be that bad, and if it is, your controllers are simply doing too much.

Mocks Don't Fix Slow Tests

Jon Dahl recently published a post on slow tests being a bug. Interestingly, he didn't mention mocks or stubs as a measure to speed up tests. Maybe he forgot, but to me that's pretty telling. Speed used to be a compelling reason for me to use mocks, but these days I'd rather have a slower test suite that's more reliable. Using mocks to speed up your tests is just wrong: you exchange reliability for speed. Call me paranoid, but that doesn't seem right to me.

The isolation of controller and model is just a no-brainer for me. Whether you like it or not, the controller doesn't work without the model. Even stubbing out the model doesn't hide that fact. I embrace it, and by keeping my controllers as small as possible, using a tiny abstraction like resource_controller and factories for test data, I just don't care about using mocks anymore. It gives me a much better sense of safety. Whether you get mocks or not, that's what it boils down to. If you feel like mocks give you a false sense of security, stop using them; if you're comfortable with them, for the love of BDD, keep using them, but don't expect me to use them when I'm writing tests.

All that said, there is still one good reason for me to use mocks and stubs: stubbing out external services. So yeah, if you want to say that I probably don't get mocks, feel free. Personally I just feel a lot more comfortable without them, and to me that's way more important. Keep your controller tests as small as possible, and the whole reasoning for mocks and stubs vanishes. Mocks dumb down your tests, and that's exactly what should not happen under any circumstances. If anything needs to know all the facts about a piece of code under test, it should be the tests themselves.
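
To make that one exception concrete, here's a sketch using RSpec's own stubbing. GeocodingService is a hypothetical wrapper around an external HTTP API:

describe UsersController, "POST create" do
  it "creates the user without talking to the real geocoder" do
    # Stub only the network-bound collaborator; the controller and
    # model still run against real data.
    GeocodingService.stub!(:locate).and_return([52.52, 13.40])
    post :create, :user => { :name => "quentin", :city => "Berlin" }
    response.should be_redirect
  end
end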

Tags: testing, bdd, mocking

This post has been lying in my drafts folder for a while now, and since I'm trying out new approaches to shrink oversized controllers, it's about time to put this out, and get ready for describing alternatives.

One of the basic memes of Rails is "Skinny controller, fat model." And even though most examples, especially the ones using the newer REST features in Rails advertise it, there are still heaps of controllers out there that have grown too big, in both the sense of lines of code and the number of actions. I've had my fair share of super-sized controllers over the last months, and it's never a pleasant experience to work with them.

If you don't believe me, look at the Redmine code. Whether it's laziness or a lack of knowing better doesn't really matter. Fact is, if a controller keeps growing, you'll have a hard time adding new features or fixing bugs. Error handling will become more and more cumbersome the more code you stuff into a single action.

And if it's pretty big already, chances are that people will throw in more and more code over time. Why not? There's so much code in there already, what difference does it make? Broken windows are in full effect here. If the controller is full of garbled and untested code already, people will add more code like that. Why should they bother writing tests for it or refactoring it anyway? Sounds stupid, but that's just the way it is.

On a current project I refactored several of these actions (and by several, I mean a lot) down to merely four to ten lines of code each. The project is not using RESTful Rails, but that wouldn't make much of a difference anyway. I've found a few practices that worked out pretty well for me, and that would very likely also help to make a controller RESTful. But that wasn't really the main objective on my current project. Whether they still hold up when fully using REST, I'll leave up to you to decide or, even better, update.

I'm not going to dissect real production code in this article, since most of the stuff is graspable without it; where it helps, I'll sketch small, made-up examples along the way. If you've seen the sort of code I'm talking about, you'll understand.

It's actually just a few simple steps, but they can be both frustrating and exhausting, even when you take one step at a time (which you should, really).

Understand what it does

It's too obvious, isn't it? But still, a beast consisting of 160 lines of code should be approached carefully. Understand what each line does, and more importantly, why it does it. In an ideal world you could just read the code and understand what it does, but we both know that's wishful thinking. If you're lucky you can just ask someone who knows their way around. But oftentimes you're out of luck, and you just have to gather as much information as you can from the code, or from playing around with the application itself.

Don't just look at the code, run it from the application, look at the parameters coming in from the views. Annotate the code if it helps you, it might help others as well.

Look at the view code too. That isn't always a pleasant experience either, but you will find hidden fields and parameters set and handed through in the weirdest ways for no apparent reason.

Test the hell out of it

Most likely the action at hand doesn't have any tests available to ensure your refactoring will work out well; otherwise you likely wouldn't be in your current position. If tests do exist, they might've been abandoned a long time ago, and it's not even safe to say whether they still test the right things. If you have a big controller with a good test suite in place, even better. Check whether it covers all aspects of the code about to be refactored.

If not, take this opportunity to write as many tests for it as possible. Test even the simplest features, with as much detail as you can, or as the available time allows. You don't want even those features to break, do you?

I easily end up with 50 new test cases for a bigger action during such a run. Resist the temptation to refactor while you write tests. Mark parts of the code if you get ideas about what to do with them, and get back to them in the refactoring phase.

Avoid testing too much functionality at once in a single test case. It doesn't have to be one assertion per test, but keep each test small and focused on a specific aspect of the method in question.
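
As a quick sketch of what that means in a functional test (all names are made up), instead of one catch-all test case:

def test_create
  post :create, :user => { :name => "quentin" }
  assert_response :redirect
  assert User.find_by_name("quentin")
  assert_equal "Welcome!", flash[:notice]
end

write one test per aspect, so a failure points straight at what broke:

def test_create_redirects
  post :create, :user => { :name => "quentin" }
  assert_response :redirect
end

def test_create_saves_the_user
  post :create, :user => { :name => "quentin" }
  assert User.find_by_name("quentin")
end

def test_create_sets_the_flash
  post :create, :user => { :name => "quentin" }
  assert_equal "Welcome!", flash[:notice]
end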

Basically it's now or never. This is the chance to improve test coverage, so do it. You'll be happy you invested the time: it will give you a better understanding of the code, and it will ensure that the code still works.

It’s a painful process, but it also helps you to really understand what the code does.

Avoid complex fixtures

I don't use a lot of fixtures anymore in my functional tests. They're not just a pain in the ass, they're also hard to set up, especially for complex actions, and they kill test performance. Try using mocks and stubs instead. If you test your action line by line, you can easily identify the methods that need to be stubbed or mocked. If you prefer the scenario way, use something like factory_girl to set up objects for your tests. I'm a fan of stubbing and mocking, but too much of it will clutter your test code, and using too much of it tends to be a sign of bad design. So I've returned to using scenarios based on the task at hand, even if they hit the database.

If you turn to mocking/stubbing initially, make sure you untangle the potential mess afterwards. Even though the database can make your tests slower, in the end you want to test the whole thing.

You also want to stub out external collaborators, like web services, Amazon's S3 and the like. They don't belong in your controllers anyway, but moving them somewhere else might just open another can of worms (think asynchronous messaging), and introducing that level of complexity is just not what you need right now. Though you might want to consider it eventually.
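
For example, with Mocha you could keep a functional test from ever hitting S3. The Attachment model and its store_on_s3 method are hypothetical:

def test_create_does_not_touch_s3
  # Make sure no attachment created during the request talks to the
  # real S3; the stub applies to every instance of the model.
  Attachment.any_instance.stubs(:store_on_s3).returns(true)
  post :create, :attachment => { :name => "logo.png" }
  assert_response :redirect
end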

Move blocks into methods

I'm not speaking of blocks in terms of procs and lambdas, but in the sense of conditional branching and the like. Longer if/else clauses are usually good candidates for getting code out of a long method and into a new one, and you can usually go ahead and do just that. Once you've moved things out into methods, it's a lot easier to move them into modules or into the model, unless the code depends on parameters you can't or don't want to reach for in your model.

Resist the temptation to look for similar code to consolidate in the same or other controllers just yet. Make a note, and wait until you have tests in place for all the affected code. Then start making the code DRYer by moving it into a place common to all the classes that require it.
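
Here's a condensed sketch of the extraction step. The names are made up, and the branches are kept artificially small:

class OrdersController < ApplicationController
  def create
    @order = Order.new(params[:order])
    apply_shipping(@order)
    if @order.save
      redirect_to order_path(@order)
    else
      render :action => 'new'
    end
  end

  private

  # Extracted from a former if/else block inside create. It still
  # reads from params, so it stays in the controller for now; once
  # that dependency is gone, it can move into the model.
  def apply_shipping(order)
    order.shipping_cost = params[:express] ? 10.0 : 4.0
  end
end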

Break out new controllers and new actions

The same rule that applies to adding new actions to controllers also applies to working on existing ones: having lots of actions in one controller usually means that it's doing more than it's supposed to. More and more actions usually mean there's code that isn't really the controller's responsibility, speaking solely in terms of concerns. If the controller responsible for logins also takes care of a user's messages, it breaks the separation of concerns. Move that stuff out into a new controller.

If you can, it's also feasible to break out new actions. That's a good option when you have an action that responds differently based on input parameters or on the HTTP method, an old Rails classic. It has the advantage that things like error handling get a lot simpler. Big actions that do different things all at once tend to need a complex setup for catching errors: several variables are assigned along the way, and at the end there's a long statement that checks whether an error occurred anywhere in the process. If you separate the different tasks into smaller actions, you'll end up with much simpler error handling code, since each action can focus on one thing, and one thing only.

The same goes for all classes, really, although with a model it's not always easy to break out a new class. But what you can do is break code out into modules and include them.
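
As a sketch, consider the old classic of one action switching on the HTTP method (all names hypothetical):

def search
  if request.post?
    @results = Product.search(params[:query])
    render :action => 'results'
  else
    render :action => 'form'
  end
end

Split in two, each action has one job, and error handling can live where it belongs:

# GET: just renders the search form
def form
end

# POST: runs the actual search
def search
  @results = Product.search(params[:query])
  render :action => 'results'
end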

Extract filters

Filters are a nice fit for parts of the code where objects are fetched and redirects are sent in case something isn't right, especially since the latter always requires stepping out of your action as soon as possible, before running any further logic. Moving that code out into methods, checking their return values and returning based on that will make your code look pretty awkward. Filters are also nice for setting pre-conditions for an action, pre-fetching data and the like. Whatever helps your controller actions do their thing with less code, and doesn't fit into the model, try fitting it into a filter.

Try to keep them small though. It's too easy to just break out filters instead of moving code into the model, and sure, that will slightly improve the code. But what you really want is a small and focused controller with a couple of lines of code in each action, and a few supporting filters around them.
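
A sketch of the typical fetch-and-bail filter; Project is a hypothetical model:

class ProjectsController < ApplicationController
  before_filter :find_project, :only => [:show, :edit, :update]

  def show
    # @project is guaranteed to be set here
  end

  private

  # Redirecting inside a before_filter halts the chain, so the
  # action never runs for an unknown id.
  def find_project
    @project = Project.find_by_id(params[:id])
    unless @project
      flash[:error] = "Project not found"
      redirect_to projects_path
    end
  end
end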

Move code into the model

This is where it gets tricky, but now you can move all that business logic where it belongs. To get code out of the controller and into the model, you have to make sure it doesn't rely on things that are only available in the controller: params, session, cookies and the flash are the usual candidates here.

But there's always a way to work around that. Oftentimes you'll find code that assigns an error message or something similar to the flash. When it's dealing with error messages, that kind of stuff is usually easier to handle with validations in the model. I've seen it a lot, and it's just not the controller's responsibility to do all that work.

If your controller code deals heavily with data from the params hash, you can usually just hand that hash over to the model, given of course that you've properly cleaned it up first in a before_filter, or ensured that proper validations are in place.

You'll usually find lots of hand-constructed finders in controllers. Go after those too. Either use named scopes if you can, or create new finders in the model. It's already a lot easier on the eye when all that hand-crafted finder code is out of the way and tucked neatly into a model.
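
A sketch of that move with a hypothetical Post model, using named_scope (available since Rails 2.1). Before, in the controller:

@posts = Post.find(:all,
  :conditions => ["published_at <= ?", Time.now],
  :order => "published_at DESC")

After, tucked into the model:

class Post < ActiveRecord::Base
  named_scope :published, lambda {
    { :conditions => ["published_at <= ?", Time.now],
      :order      => "published_at DESC" }
  }
end

And the controller shrinks to:

@posts = Post.published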

Code that checks model objects for errors, validity, etc. belongs in validations or callbacks in your model, just like any other code that's none of the controller's business. That business is basically to mediate between the view and the model, and to do everything required to get that task done quickly and without hassle. A lot of times you'll find controllers setting arbitrary instance variables based on the state of the model. Rings a bell? Why should the controller store the state of the model? It just shouldn't. That's what the model is for, right?

When you're done moving code into the model, move the corresponding test code from functional to unit tests. Tests that used to exercise the business logic from the controller's perspective can now do so from a unit test. That way your functional tests can focus solely on what your web layer does.

Over time you'll get an eye for code that belongs in the model, code that could be moved into the view, and code that needs to stay in the controller. It takes practice, but the more the better. Working with legacy code is oftentimes an opportunity, not a punishment.

Know when to stop

Now that you have a skinny and tested controller, why not just keep going? It’s easy to fall into the refactoring trap. It’s just such a nice feeling of accomplishment. If you look at it that way, you could just keep refactoring your application’s code forever. But who will build the new features? Who will fix the bugs?

Avoid refactoring for the sake of it. Refactoring is an important part of the development life-cycle, but you need to find a balance between your role as a programmer, where you add new features and write tests for them, and the refactoring part.

So I could say "rinse and repeat", but when you're done with the controller at hand, leave it be. Get a coffee, and bask in the glorious feeling of having just done your code, and the developers to come, a big favor. Unless of course you have a lot of time. In that case, for the love of code, keep going. But that's usually a luxury. What you can do instead is plan in time for more refactoring when you're adding features to a controller that's another mess. Clean it up first, then get going with the new code. When you see code that could use a refactoring while working on something else, make a note (and by note I don't mean TODO, FIXME, and the like; those get lost in the code and are never looked at again), and get cracking on it later.

This was just a beginning, though. There are still things that surprise me when I work with legacy Rails code, and they want to be dealt with in their own specific ways. As I mentioned earlier, I'm still trying out new things, and I'm going to keep posting about them.

Please, share your experiences on the matter. I'm pretty sure I'm not alone with the pain of oversized controllers.

Sometimes it's easy to forget just how important testing has become in the development lifecycle. I recently had to remind myself and others that there are no reasonable excuses not to write tests. I'd go as far as saying that you're jeopardizing the quality of your software when you skip them, whether it's because you had no time, were pushed by management, or were just plain lazy.

To some extent I blame Rails for the lack of proper testing in projects using it. It's just too easy to add a new feature here, a line of code there, and hit reload. Works, right? After all, lots of projects have been developed that way.

But for me, there are no excuses. Even though I didn't learn anything about testing at university, the advantages became crystal clear as soon as I had first contact with JUnit. The ease of writing tests for your applications was the killer feature of Rails for me. It's what attracted me to it in the first place. Writing tests for the model, the controllers, and the whole app? Come on, how painful was (is?) that in the Java world (where I came from, by the way)?

Don't get me wrong, I've fallen into the trap of not writing tests for an easy two or three lines of code every now and then. I pretty much agree with Jay Fields when he says that 100% test coverage is just not something you should try to achieve at all costs. But when you find yourself writing action after action, method after method, without writing a single test, it's time to step back and look at the repercussions.

I recently wrote a method that required a slightly more complex setup of objects than usual. It took me a full day to test all the possibilities, and to write the code (about 10 lines). I used factory_girl to build the test setup (pretty awesome by the way), wrote down the required collaborators, and what the method was supposed to do.

One day might seem like a long time, and it usually doesn't take that long, but that piece of code was crucial for ensuring data consistency. Without data consistency your application becomes fragile: you end up with a database full of dangling references, or your application just ends up in an invalid state and blows up. Worst case scenario of course, but depending on the part of your application, it might just happen.

I didn't want to take that risk, so I took the time, and when I finally finished I felt much better having those tests in place. I also got a taste of factory_girl and Shoulda along the way, so it was worth it.

Ensuring your code works isn't the only benefit you get. Sooner or later, code needs to be refactored, and in my experience that's a lot harder with untested code. Not only because you don't know if the code still works after you're done, but because untested code usually ends up as a tangled mess, glued together bit by bit over time into something you just don't want to touch anymore. It just doesn't feel right to change anything. And it shouldn't. That's what testing is about: it should make you feel uncomfortable to not have any tests in place for the code you're working on.

With tests in place, refactoring and taking care of legacy code is a piece of cake. You can focus on the task at hand, and stop worrying about whether the code will still work. You'll know immediately.

Why should that matter to you? After all, you're building fresh and shiny Rails applications. The plain truth is that every line of code turns into legacy code as soon as you write it, check it in, and put it into production. Someone will have to take care of it at some point, if not you then someone else. If it's you, you're likely to ask yourself "What the heck was I thinking when I wrote that code?" Look at the tests, and you'll know. If you know a better way to do it, you can just rewrite it and rest assured that it still works.

It's sometimes hard to explain the business benefit to someone in a management position. Sometimes it makes me wonder why there's still a need to argue about it at all. But there are still millions of projects working their way onto the surface without a decent test suite.

If you're working on one of them, I advise you to step back for a moment and ask yourself why you're not writing test cases. I bet you can't find any reasonable explanation, because there simply is none. I know, I know, there are always some parts which are harder to test, but there are always ways to get tests into place first and untangle the code afterwards. See Michael Feathers' most excellent "Working Effectively with Legacy Code" for more detail on that topic.

It doesn't matter what framework you use. RSpec, Test::Unit, Shoulda, xUnit: anything works that helps you ensure your application does what it should. As long as you don't use any of them, there's a good chance it just might not work at all.

Tags: testing

I don't use RSpec a lot anymore these days. I much prefer Shoulda; heck, I even started using Rails integration tests again (with Shoulda, of course), because sometimes the additional abstraction of Cucumber is just too much. Anyway, there were some things I liked about RSpec that weren't related to the features of the testing DSL itself, but to the tool RSpec: it has a neat formatter that outputs the ten slowest-running tests, and I found the colored list of full test names very helpful too.

So I scratched my itch last weekend and brought that goodness to Shoulda. Our test suite is starting to get a bit slow, and the profiling has already served us well in finding the slowest tests. I like the rinse-and-repeat approach of squeezing some valuable dozens of seconds out of a test suite with that technique.

So without further ado, I give you shoulda-addons, my little patch set that brings both test profiling and a colored list of full test names to your Shoulda test suite. I'm sure it'd work with plain Test::Unit or MiniTest without much effort, but for now it's made for Shoulda, and it looks like this:

[Screenshot: colored shoulda-addons test output, 2009-10-19]

While adding the profiling was pretty straightforward, getting the colored output was rather messy, and I'm not proud of it, especially considering that Test::Unit and MiniTest go different routes when outputting the little dots, F's and E's.

The package is up on GitHub, can be installed from Gemcutter via gem install shoulda-addons, and should work with Ruby 1.8 and 1.9. I also tested it with Mocha included, so let me know if something doesn't work for you.

Mocking is a great part of RSpec, and from the documentation it looks insanely easy. What had me frustrated on a current project was the fact that the mocks and stubs wouldn't always do what I expected them to. No errors when methods weren't invoked, and, the worst part, mocks wouldn't be cleaned up between examples, which resulted in rather weird errors. They only occurred when the specs were run as a whole with rake spec, not when I ran them through TextMate.

I was going insane, because no one on the mailing list seemed to have these problems, same for friends working with RSpec. Then I had another look at the RSpec configuration.

Turns out, the reason for all of this was that Mocha was being used for mocking. After switching the configuration back to RSpec's internal mocking implementation, everything worked like a charm.

So what you want in your spec_helper.rb isn't this:

Spec::Runner.configure do |config|
  config.mock_with :mocha
end

but rather

Spec::Runner.configure do |config|
  config.mock_with :rspec
end

or no mention of mock_with at all, which will result in the default implementation being used, and that is, you guessed it, RSpec's own.