The folks over at Pivotal Labs wrote a small piece on a neat tool called XRay. It hooks into your Ruby code to provide Java-like signal handlers that dump the current stack trace into whatever log file you see fit. Under Passenger, that'll be your Apache error log.

Say what you want about Java and its enterprisey bloatedness, but some of its features come in quite handy, especially when they let you look into your processes without immediately having to turn to tools like NewRelic or FiveRuns.

Just add the following line to your application:

require "xray/thread_dump_signal_handler"

From then on you can use kill -QUIT <pid> to get a trace dump in your log file. Neat! The website says you need to patch Ruby, but it worked for me with both the Ruby that ships with Leopard and Ruby Enterprise Edition.
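To give you an idea of what's going on under the hood, here's a minimal sketch of what a handler like this installs (an assumption based on the tool's description, not XRay's actual code): trap SIGQUIT and dump every live thread's backtrace to stderr, which under Passenger ends up in Apache's error log.

```ruby
# Sketch: trap SIGQUIT and dump all thread backtraces, roughly what
# xray/thread_dump_signal_handler does for you.
Signal.trap("QUIT") do
  Thread.list.each do |thread|
    $stderr.puts "== Thread #{thread.object_id} =="
    (thread.backtrace || []).each { |line| $stderr.puts "  #{line}" }
  end
end
```

The real library adds niceties like nicer formatting, but the mechanism is just a signal trap.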

Tags: ruby, monitoring

On a recent project we ran into a situation where we needed a more advanced way of parallelizing Capistrano tasks than the parallel method it already sports. To jog your memory: parallel can run arbitrary shell commands in parallel on different servers. So if you want your web servers to restart their processes while you restart your background daemons, you can do it like this:

parallel do |session|
  session.when 'in?(:web)', 'script/spin'
  session.when 'in?(:daemon)', 'script/daemons restart'
end

You can even use it to run tasks in parallel on the same hosts, but it looks ugly, and it only works for shell commands. So we came up with a neat extension that lets you run arbitrary blocks of code in parallel. Obviously you'd usually call tasks in those blocks, but who knows. Let's just have a look at an example.

parallelize do |session|
  session.run { deploy.restart }
  session.run { daemons.restart }
end

Neat! We're aware that the name might not be perfect, suggestions welcome, but for now it'll do. What it does is run each block in a different thread. Due to some internal Capistrano limitations it also opens a new SSH session for each thread. That's a bit of a bummer, but we'd have to rework a lot of Capistrano's command code to get it to work with multiple threads. It also means that you should usually limit the number of tasks you run at any one time. Thankfully you can do that by setting the parallelize_thread_count variable, which defaults to ten concurrent threads. parallelize will run all the blocks in chunks of that size, and will only start the next chunk when all blocks in the current one have finished successfully.
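The chunking behavior boils down to something like this simplified sketch (my own illustration, not the extension's actual code): run each block in its own thread, a chunk at a time, and only start the next chunk once the current one has fully finished.

```ruby
# Sketch of chunked parallel execution: each block runs in its own
# thread, chunk_size blocks at a time.
def parallelize_sketch(blocks, chunk_size = 10)
  blocks.each_slice(chunk_size) do |chunk|
    threads = chunk.map { |blk| Thread.new(&blk) }
    threads.each(&:join) # wait for the whole chunk before moving on
  end
end
```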

You can also hand in the chunk size directly, since it sometimes just doesn't make sense to run as many tasks in one go as possible, especially when they're run on just one host. It might take longer to run them all in parallel than to run two batches one after the other. So it's easy to specify the chunk size for specific calls:

parallelize(5) do |session|
  session.run { deploy.restart }
  session.run { daemons.restart }
end

If one of the tasks in the specified blocks causes a rollback or raises an error, then parallelize will run the rollback on all the other threads and on the main thread. Now, in general I wouldn't recommend running too many tasks in parallel that require big rollback procedures, but just in case you're into that sort of thing, knock yourself out.
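The failure semantics can be sketched like this (an assumption about the behavior described above, not the gem's implementation): errors from worker threads are collected, and once everything has been joined, the first error is re-raised on the main thread so Capistrano's normal rollback machinery can kick in.

```ruby
# Sketch: run a chunk of blocks in threads, collect any errors, and
# re-raise on the main thread after all threads have been joined.
def run_chunk(blocks)
  errors = Queue.new
  threads = blocks.map do |blk|
    Thread.new do
      begin
        blk.call
      rescue StandardError => e
        errors << e
      end
    end
  end
  threads.each(&:join)
  raise errors.pop unless errors.empty?
end
```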

We could see a considerable improvement in deployment speed, especially during the time-critical tasks. The project is up on GitHub, and it might just be moved into the capistrano-ext module in the future. Install using

gem install mattmatt-cap-ext-parallelize -s http://gems.github.com

And then in your Capfile, insert this line:

require 'cap_ext_parallelize'

Let us know if you have any problems.

Pat Maddox recently published a blog post on mocking called "You Probably Don't Get Mocks." I've wanted to write about my experiences with mocks for a while now, so here's a good reason to finally do so. I'm a recovering mock addict, if you will, so consider this a retrospective of what I learned over the last 18 months, and how my testing workflow changed along the way.

When you dive into RSpec, mocking is put in your face right from the start. For some it might even be the thing that makes BDD what it is: testing how your code behaves. I followed that path for about a year. I wouldn't say religiously, but I did use it wherever it made sense. The project started out with no tests at all, so mocking actually came in handy for some of the bigger controllers, in the beginning at least. It also made the whole test suite rather speedy. When I finished with the project, it had about 2200 specs which took less than two minutes. A good run still, certainly with room to improve performance.

But towards the end some things changed, and so did my testing workflow. Pat might argue that I probably don't get mocks, but the tests felt brittle. That's not all the mocks' fault: the code was still very brittle in some areas, so the tests had to do quite a lot to get a decent setup. It just didn't feel right to use mocks for controller tests.

These days I use Cucumber to do full-stack testing, so on one point I agree with Pat: if you have a decent acceptance test suite, the brittleness mocks and stubs bring into your tests gets less annoying. But all in all, that got me thinking. If I need integration tests to ensure that my mocks still do what they're supposed to, something isn't right. On said project I only had a good suite of controller and unit tests (yeah, they're specs, I know, I'm not religious about the name; in the end they all test the code), so I had to rely on those tests to give good and reliable results, at least on that level.

Write Code That Doesn't Suck

When I watched Yehuda Katz's talk at RubyConf 2008, it hit me. Who cares what your controller does on the inside, how it gets things done? Does it really matter? Is that a defined business value? No, because what matters is what comes out in the end, what the user sees. I also watched the talk called "Testing Heresies" that Pat mentions, and here I agree with Pat: it was not at all convincing. But Yehuda's talk hit right home for me.

It was pretty much that realization that made me testing-framework agnostic. It just doesn't matter to me what you use to write tests, as long as you write them, and as long as they're a reliable base for working with your code. Sure, I appreciate RSpec's syntax and how it tries to embrace BDD to the fullest, but in the end, it just doesn't matter. If you don't care about all the gory internal details of a test, then it doesn't matter whether you use a fancy should syntax or plain old Test::Unit tests.

With Cucumber, the question for me is: if I have a decent acceptance test suite, what do I need extensive testing on the controller level for? The acceptance suite should cover most of what the controller does anyway. I started using resource_controller a while ago, and it takes away a lot of the "complexity" you usually have in your controllers. It reduces controllers to what models in the Java world used to be, dumb classes, and therefore reduces the amount of testing code for controllers. While that feels right to me, you could still argue that mocks and stubs make sense for those cases where you do write tests on the controller level. Sure, but even then I'd rather rely on real data, even if that slows down my test suite. The slowdown can't be that bad, and if it is, your controllers are simply doing too much.

Mocks Don't Fix Slow Tests

Jon Dahl recently published a post on slow tests being a bug. Interestingly, he didn't mention mocks or stubs as a measure to speed up tests. Maybe he forgot, but to me that's pretty interesting. That used to be a compelling reason to use mocks for me, but these days I'd rather have a slower test suite that's more reliable. Using mocks to speed up your tests is just wrong: you exchange reliability for speed. Call me paranoid, but that doesn't seem right to me.

The isolation of controller and model is just a no-brainer to me. Whether you like it or not, the controller doesn't work without the model. Even stubbing out the model doesn't hide that fact. I embrace it, and by keeping my controllers as small as possible, using a tiny abstraction like resource_controller and factories for test data, I just don't care about using mocks anymore. It gives me a much better sense of safety. Whether you get mocks or not, that's what it boils down to. If you feel like mocks give you a false sense of security, stop using them; if you're comfortable with them, for the love of BDD, keep using them, but don't expect me to use them when I'm writing tests.

All that said, there is still one good reason for me to use mocks and stubs: stubbing out external services. So yeah, if you want to say that I probably don't get mocks, feel free. Personally I just feel a lot more comfortable without them, and to me that's way more important. Keep your controller tests as small as possible, and the whole reasoning for mocks and stubs vanishes. Mocks dumb down your tests, and that's exactly what should not happen under any circumstances. If anything needs to know all the facts about a piece of code under test, it should be the tests themselves.
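To make that one exception concrete, here's a small sketch using Minitest::Mock from Ruby's standard distribution (the Geocoder class is a hypothetical service wrapper of my own, not a real library): the stub keeps the test from ever touching the network.

```ruby
require "minitest/mock"

# Hypothetical wrapper around an external service; in real code
# locate would make an HTTP request.
class Geocoder
  def locate(address)
    raise "network call -- stub me in tests"
  end
end

# In a test, swap in a mock so nothing hits the network.
geocoder = Minitest::Mock.new
geocoder.expect(:locate, [52.52, 13.41], ["Berlin"])

lat, lng = geocoder.locate("Berlin")
geocoder.verify # raises if the expected call never happened
```

That's the kind of boundary where a stub adds reliability instead of taking it away.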

Tags: testing, bdd, mocking