```ruby
describe User do
  it "should be in any roles assigned to it" do
    user = User.new
    user.assign_role("assigned role")
    user.should be_in_role("assigned role")
  end

  it "should NOT be in any roles not assigned to it" do
    user = User.new
    user.should_not be_in_role("unassigned role")
  end
end
```
Aside from the test names being strings, you can see that the actual assertions take the form ‘variable.should be_some_value’, which reads rather more naturally than the equivalent C#:
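For comparison, the equivalent checks in NUnit-style C# might look something like this (an illustrative sketch – the User class and test here are my own stand-ins, not the post's original code):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Minimal stand-in for the User class described above (assumed shape).
public class User
{
    private readonly List<string> roles = new List<string>();
    public void AssignRole(string role) { roles.Add(role); }
    public bool IsInRole(string role) { return roles.Contains(role); }
}

[TestFixture]
public class Access_Tests
{
    [Test]
    public void TestAddUser()
    {
        var user = new User();
        user.AssignRole("assigned role");

        // First style: compare the values explicitly.
        Assert.AreEqual(true, user.IsInRole("assigned role"));

        // Second style: nicer to read, but on failure NUnit can only
        // report "Expected: True  But was: False".
        Assert.IsTrue(user.IsInRole("assigned role"));
    }
}
```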
Admittedly, the second example is nicer to read – the trouble is that you’re checking a true/false result, so the feedback you get from NUnit isn’t great:
```
Test 'Access_Tests.TestAddUser' failed:
  Expected: True
  But was:  False
```
Fortunately, there are a few tools coming out for .NET now which address this situation in a more Ruby-like way. The one I’ve been using recently is Shouldly, an open source project on GitHub.
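A few typical Shouldly assertions, as a sketch of the style (the values and names here are my own, not the post's originals):

```csharp
using Shouldly;

public class ShouldlyExamples
{
    public void Examples()
    {
        int answer = 42;
        answer.ShouldBe(42);            // rather than Assert.AreEqual(42, answer)
        answer.ShouldBeGreaterThan(40); // rather than Assert.Greater(answer, 40)

        string name = "Robot";
        name.ShouldContain("Rob");      // rather than Assert.IsTrue(name.Contains("Rob"))
    }
}
```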
And so on. Not bad, not bad – though on its own I wasn’t sure it was worth learning another API for. However, the real beauty of Shouldly is what you get when an assertion fails. Instead of an NUnit-style
```
Expected: 3
But was:  2
```
– which gives you a clue, but isn’t terribly helpful – you get:
```
map.IndexOfValue("boo")
  should be 3
  but was 2
```
See that small but important difference? The variable name is there in the failure message, which makes working out what went wrong a fair bit easier – particularly when a test fails that you haven’t seen for a while.
Even more useful is what you get when checking calls in Rhino Mocks. Instead of calling
```csharp
fileMover.AssertWasCalled(fm => fm.MoveFile("test.txt", "processed\\test.txt"));
```
and getting a rather ugly and unhelpful
```
Rhino.Mocks.Exceptions.ExpectationViolationException :
IFileMover.MoveFile("test.txt", "processed\test.txt"); Expected #1, Actual #0.
```
With Shouldly, you call
```csharp
fileMover.ShouldHaveBeenCalled(fm => fm.MoveFile("test.txt", "processed\\test.txt"));
```
and end up with
```
0: MoveFile("test1.txt", "unprocessed\test1.txt")
1: MoveFile("test2.txt", "unprocessed\test2.txt")
```
As you can see, it’s not only much more obvious what’s happening, but you actually get a list of all of the calls that were made on the mock object, including parameters! That’s about half my unit test debugging gone right there. Sweet!
Shouldly isn’t at 1.0 yet, and still has some missing functionality, but I’d already find it hard to work without it. And it’s open source, so get yourself on GitHub, fork it, and see if you can make my life even easier!
…Or ‘Extract and Override’, as it’s otherwise known
Not that it’s specific to CRM, or any particular type of development – indeed, Roy states that he always tries to use this technique first and only defers to other methods of testing if it isn’t possible. I had certain reservations when I first read about it, but having given it a try over the last few weeks I am now a convert.
The purpose of this technique is to insert a ‘seam’ into your code – a place where you can replace dependencies on other classes & services with fake versions, so you can test one class at a time. Previously, I’d always done this using constructor injection, whereby you pass the dependencies into the class constructor:
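A constructor-injected robot might look something like this (a sketch – the interfaces and their members are assumed names, not the post's original code):

```csharp
// Sketch of constructor injection: the interface members are assumed.
public interface IWeaponSystem { void Fire(string target); }
public interface ICommunication { void Broadcast(string message); }
public interface IMoodManager { string CurrentMood { get; } }

public class Robot
{
    private readonly IWeaponSystem weapons;
    private readonly ICommunication comms;
    private readonly IMoodManager moods;

    // Dependencies are passed in, so tests can supply fakes.
    public Robot(IWeaponSystem weapons, ICommunication comms, IMoodManager moods)
    {
        this.weapons = weapons;
        this.comms = comms;
        this.moods = moods;
    }

    public void KillHumans()
    {
        // The mood logic we want to test lives here.
        if (moods.CurrentMood == "Murderous")
        {
            comms.Broadcast("Exterminate!");
            weapons.Fire("nearest human");
        }
    }
}
```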
This way, when we want to test the robot mood logic in KillHumans(), we can create fake versions of IWeaponSystem, ICommunication and IMoodManager, and just test the Robot logic. In the actual system, the real versions of these things would be supplied by a Service Locator/IoC container or similar.
This is all very well, but it doesn’t work with CRM, because you don’t control the creation of your classes. Basically, you get passed a service and have to use that for all of your communication with CRM. To stay with the robot example, it would look something like this:
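A sketch of that situation – one service handed to you and used for every external call (in CRM terms this would be the organisation service's Execute method; the names and shapes here are assumed):

```csharp
// Single external service, CRM-style: you don't get to construct it,
// and everything goes through one Execute call. (Assumed shape.)
public class RobotService
{
    public object Execute(string request, string argument = null)
    {
        // ...talks to the outside world...
        return null;
    }
}

public class Robot
{
    private readonly RobotService service;

    public Robot(RobotService service) { this.service = service; }

    public void KillHumans()
    {
        // Every call funnels through Execute, so faking it means
        // coordinating several calls with different return values.
        var mood = (string)service.Execute("GetMood");
        if (mood == "Murderous")
        {
            service.Execute("Broadcast", "Exterminate!");
            service.Execute("Fire", "nearest human");
        }
    }
}
```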
Because all communication goes through RobotService.Execute(), it’s very difficult to fake – especially when it’s called multiple times and you need a different return value from each call.
To get around this, we can use Extract and Override. This works by putting the calls to the external services into their own methods, and then overriding those methods while we’re testing. This way, we don’t have to create fake services at all – just a single fake version of the class we’re testing. The code in Robot would now look something like this:
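Something like the following – a sketch, with RobotService standing in for the external CRM-style service described above (method and request names are assumed):

```csharp
// Assumed shape of the single external service described above.
public class RobotService
{
    public object Execute(string request, string argument = null) { return null; }
}

public class Robot
{
    private readonly RobotService service;

    public Robot(RobotService service) { this.service = service; }

    public void KillHumans()
    {
        // The mood logic stays here, where we can test it...
        if (GetMood() == "Murderous")
        {
            Broadcast("Exterminate!");
            Fire("nearest human");
        }
    }

    // ...while each external call is extracted into a protected
    // virtual method, ready to be overridden by a test subclass.
    protected virtual string GetMood()
    {
        return (string)service.Execute("GetMood");
    }

    protected virtual void Broadcast(string message)
    {
        service.Execute("Broadcast", message);
    }

    protected virtual void Fire(string target)
    {
        service.Execute("Fire", target);
    }
}
```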
Notice that the extra methods are ‘virtual’, so we can override them. They are also ‘protected’, so they’re accessible from classes which inherit from Robot. So now, instead of creating fake services, we create a testable version of the robot class, like this:
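Assuming a Robot class shaped as just described – with protected virtual GetMood(), Broadcast() and Fire() methods wrapping the service calls (assumed names) – the testable subclass might look like this:

```csharp
// Test subclass: override the extracted methods to control what comes
// back and record what goes in – no fake services needed.
public class TestableRobot : Robot
{
    public string MoodToReturn;
    public string BroadcastMessage;
    public string FiredAt;

    public TestableRobot() : base(null) { } // the real service is never touched

    protected override string GetMood() { return MoodToReturn; }
    protected override void Broadcast(string message) { BroadcastMessage = message; }
    protected override void Fire(string target) { FiredAt = target; }
}
```

In a test you would then set MoodToReturn, call KillHumans(), and assert against BroadcastMessage and FiredAt.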
We now have full control over what gets returned from those methods, and we can record what got sent into them. As such, we can happily test the mood logic in KillHumans() without worrying about those dependencies.
Now, this isn’t limited to the CRM situation described – you can use this technique any time you have a dependency on another service, as an alternative to Dependency Injection. You can read a bit more about it in one of Roy’s blog posts, which probably explains it a bit better! Although sadly without the use of robots.
I’ve been getting more into Test Driven Development (TDD) recently, so I’ve been building up my unit testing toolset as I go along. I’ve got the basics covered – the industry-standard NUnit as my testing framework & test runner, and Castle Windsor if I need a bit of IoC – those are the main things.
But I’m at the stage now where I know the basics, so I’m trying to improve my code and my tests – because, as anyone into TDD will tell you, if your tests are no good, they may as well not be there. So the next obvious step was a code coverage tool. At its most basic, this is a tool you use to find out how much of your application code actually gets exercised by your unit tests, which can be a useful indicator of where you need more tests – although even 100% coverage doesn’t tell you whether your tests are any good.
Open Source Tools
Being Scottish by blood, I’m a bit of a skinflint, so like most of my other tools I wanted something open source, or at least free. This actually proved more difficult than I first imagined – most of the code coverage tools seem to be commercial, often part of a larger package. Some also require that you alter your code to use them, by adding attributes or method calls – definitely not something I wanted to do. The main suitable one I came across was PartCover, an open source project hosted on SourceForge. Thankfully, it was easy to install and set up, and did everything I wanted.
There are two ways of running PartCover – a console application, which you would use with your build process to generate coverage information as XML, and a graphical browser, which lets you run the tool as you like and browse the results. I should point out I haven’t tried the console part, or using the generated XML reports, which you would probably want to do in a larger-scale development environment along with Continuous Integration etc.
Either way, the first thing you have to do is configure the tool – you need to define what executable it will run, any arguments, and any rules you want to define about what assemblies & classes to check (or not):
You can see here that I’m running the NUnit console runner – though you don’t actually have to use PartCover with unit testing at all. If you want, you can just get it to fire up your application, use the app manually, and PartCover will tell you what code was run – which I imagine could come in quite handy in itself in certain situations. But as I want to analyse my unit test coverage, I get it to run NUnit, and pass in my previously created NUnit project as an argument.
I also have some rules set up. You can decide to include and exclude any combination of assemblies and classes using a certain format with wildcards – here I’ve included anything in the ‘ContactsClient’ and ‘ContactsDomain’ assemblies, apart from any classes ending in ‘Tests’ and the ‘ContactsClient.Properties’ class. It can be useful to exclude things like the UI if you’re not testing that, or maybe some generated code that you can’t test – although you shouldn’t use this to sweep things under the carpet that you just don’t want to face!
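PartCover’s filter rules take the form ‘+[assembly]class’ to include and ‘-[assembly]class’ to exclude, with ‘*’ as a wildcard; the rules described above would look roughly like this (my reconstruction, not the actual configuration from the screenshot):

```
+[ContactsClient]*
+[ContactsDomain]*
-[*]*Tests
-[ContactsClient]ContactsClient.Properties*
```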
With that done, just click ‘Start’ and you’re away – NUnit console should spring into life, run all of your tests, and you’ll be presented with the results:
As you can see, you get a tree-view style display containing the results for all of your assemblies, classes and methods, colour-coded to flag up low coverage. But that’s not all! Select ‘View Coverage details’, as I’ve done here, and you can actually see the lines of code which have been run, and those which have been missed. In my example above, I’ve tested the normal path through the switch statement, but neglected the error conditions – it’s exactly this type of thing that code coverage tools help you to identify, enabling you to improve both your tests and your code.
At this point, I feel I have to point out a potential issue:
Warning! Trying to get 100% test coverage can be addictive!
If you’re anything like me, you may well find yourself spending many an hour trying to chase that elusive last few percent. This may or may not be a problem, depending on your point of view – some people are in favour of aiming for 100% coverage, while others think it’s a waste of time. I like the position in Roy Osherove’s book: test anything with logic in it – which doesn’t include simple getters/setters and the like.
But 100’s such a nice, round number, and I only need a couple more tests to get there…