describe User do
  it "should be in any roles assigned to it" do
    user = User.new
    user.assign_role("assigned role")
    user.should be_in_role("assigned role")
  end

  it "should NOT be in any roles not assigned to it" do
    user = User.new
    user.should_not be_in_role("unassigned role")
  end
end
Aside from the test name being a string, you can see the actual assertions in the format ‘variable.should be_some_value’, which is rather more readable than the equivalent might be in C#:
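In classic NUnit, equivalent assertions would look something like this (a reconstruction for illustration, not the original examples):

```csharp
// First style: the classic Assert methods
Assert.IsTrue(user.IsInRole("assigned role"));

// Second style: the newer constraint-based syntax
Assert.That(user.IsInRole("assigned role"), Is.True);
```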
Admittedly, the second example is nicer to read – the trouble with that is that you’re checking a true/false result, so the feedback you get from NUnit isn’t great:
Test 'Access_Tests.TestAddUser' failed:
  Expected: True
  But was: False
Fortunately, there are a few tools coming out for .NET now which address this situation in a more Ruby-like way. The one I’ve been using recently is Shouldly, an open source project on GitHub.
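Assertions read like this – a sketch of Shouldly’s extension-method style (the exact set of methods has changed over versions):

```csharp
using Shouldly;

// instead of Assert.AreEqual(3, userCount):
userCount.ShouldBe(3);

// instead of Assert.IsTrue(user.IsInRole("admin")):
user.IsInRole("admin").ShouldBe(true);

// instead of Assert.Contains(user, users):
users.ShouldContain(user);
```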
And so on. Not bad, not bad – though I’m not sure that alone is worth learning another API for. However, the real beauty of Shouldly is what you get when an assertion fails. Instead of an NUnit-style
  Expected: 3
  But was: 2
– which gives you a clue, but isn’t terribly helpful – you get:
should be 3
but was 2
See that small but important difference? The variable name is there in the failure message, which makes working out what went wrong a fair bit easier – particularly when a test fails that you haven’t seen for a while.
Even more useful is what you get when checking calls in Rhino Mocks. Instead of calling
fileMover.AssertWasCalled(fm => fm.MoveFile("test.txt", "processed\\test.txt"));
and getting a rather ugly and unhelpful
Rhino.Mocks.Exceptions.ExpectationViolationException : IFileMover.MoveFile("test.txt", "processed\test.txt"); Expected #1, Actual #0.
With Shouldly, you call
fileMover.ShouldHaveBeenCalled(fm => fm.MoveFile("test.txt", "processed\\test.txt"));
and end up with
0: MoveFile("test1.txt", "unprocessed\test1.txt")
1: MoveFile("test2.txt", "unprocessed\test2.txt")
As you can see, it’s not only much more obvious what’s happening, but you actually get a list of all of the calls that were made on the mock object, including parameters! That’s about half my unit test debugging gone right there. Sweet!
Shouldly isn’t at 1.0 yet, and still has some missing functionality, but I’d already find it hard to work without it. And it’s open source, so get yourself on GitHub, fork it, and see if you can make my life even easier!
Perhaps strangely for a developer, I much prefer reading from actual, physical books & magazines than from a computer screen. This is a bit problematic these days, as with the rise of the Internet, nobody seems to be printing programming magazines any more (in the UK, at least).
As such, I was most pleased to hear about MagCloud, an online service which offers printed magazines in a FUBU (For Us By Us) style. Basically, anyone can upload a magazine in PDF format, and for a charge of around $0.20 a page, MagCloud will print it out and send it to you. This works out at around £5-6 for a magazine of 40 pages, which is a little expensive but not too bad. (There’s a delivery charge of $2-3 too, so you’re better off getting a couple at a time). While the authors can make money from this (by adding on an extra charge per page), none of the ones I was looking at did so.
The two I checked out are both well written and informative:
- Hacker Monthly, a general programming magazine, and
- Rails Magazine, about Ruby/Rails.
I couldn’t find any specific .Net ones when I looked, but the service is pretty new, so I’m sure one will appear at some point. So if anyone fancies starting a .Net mag, here’s your chance!
Now, many of these are available as PDFs for you to download as well, so you could quite easily print them off yourself and save a few quid. Personally, I’m too lazy/disorganised for that, and I like to have a proper magazine and not a scruffy collection of stapled sheets, but that’s just me.
Check it out. What have you got to lose?!
One of my favourite ways of keeping up with development is listening to podcasts, which I usually do in my car – a stereo with a USB port comes in very handy here. (Note: only do this while alone! The dulcet tones of Scott Hanselman might increase your knowledge, but they’re unlikely to impress the chicks..)
Anyway, audio tips and sexism aside, here are all of the podcasts I’ve been listening to and what I think of them, in order of greatness:
Herding Code
Technology: Mainly .Net
Sound Quality: Average
Hosted by four smart and fairly amusing guys, the beauty of this podcast is that you always get to hear all sides of the argument on any given topic. And unlike some of the other podcasts, the topics are nearly always interesting – I rarely skip an episode. They get some top guests too.
.Net Rocks
Sound Quality: Excellent
Probably the best-known and longest-running podcast around, .Net Rocks is perhaps the most professionally produced show out there – you could easily mistake it for a radio show. The presenters are funny and knowledgeable, get good guests, and always ask pertinent questions (although you don’t get the same level of discussion as with Herding Code).
Elegant Code Cast
Technology: Mainly .Net
Sound Quality: Poor – Average
Run by some of the ElegantCode bloggers, this podcast is similar to Herding Code in its style, usually with multiple presenters leading to interesting discussions. It seems to have died since David Starr left for the PluralCast (see below), but I’m hoping it might get resurrected.
Technology: Mainly .Net
Sound Quality: Average
Although this only lasted a short while, the few podcasts it created covered some really useful subjects, in good detail. And being Alt.Net (if you’re aware of that movement), there are also some interesting debates and disagreements in there. Well worth a listen.
The PluralCast
Sound Quality: Good
Having recently taken over from the Elegant Code Cast, the PluralCast has made a good start, covering some interesting subjects. Although it’s run by a commercial company (PluralSight Training), the shows don’t feel like they’re trying to sell you anything. The interviews are sometimes a little one-sided, and the shows don’t always flow as well as they might, but it’s definitely one to try out.
HanselMinutes
Sound Quality: Excellent
Alongside .Net Rocks, HanselMinutes is the other well-produced, long-running podcast. Scott is currently a Senior Program Manager at Microsoft, which means he gets some great guests and inside scoops, but sometimes the interviews are again a little one-sided. Scott’s a really smart guy and knowledgeable developer, and I feel bad for only giving this a 6, but the subject is often web development (which I’m not really into), and I often skip shows because I’m not that interested in the guest or subject.
Software Engineering Radio
Sound Quality: Good
I’ve really only just started listening to this, so it’s a little early to give my opinion – but when did that ever stop a reviewer?! As you might imagine from the title, it covers general software engineering in all languages, often with clever academic people being interviewed by other clever academic people. OK, perhaps I undersell it there – it’s not all academic – and it is good to break out of .Net every now and again.
Sound Quality: Average
A show run by a couple of Microsoft evangelists, this has some good content, but the shows are fairly infrequent and I often skip them. The subjects and interviews can be a little Microsoft-heavy.
Sound Quality: Good
Run (although not hosted) by The Pragmatic Programmers, this podcast consists of interviews with the authors of various books on the Prag Pub label. It’s kind of commercial in that respect, but interesting nonetheless.
Sound Quality: Good
Again, I feel bad about giving this such a low rating, as it’s not a bad podcast – I just don’t like it personally. I only listened to the first few episodes, but the topics are usually web-based and often covered fairly simplistically, and I didn’t really like the presenter. I think I’ll give it another try soon, as everyone else seems to rate it.
So there you go, plenty to keep you occupied on your lonely commute there! Obviously those ratings are just my opinion, so don’t take my advice – go and try them for yourself!
Handy for Unit Tests – and Much More!
I recently found myself having to type some code in repeatedly while unit testing. This sort of thing:
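It was a template along these lines – a three-part test name (method under test, scenario, expected behaviour), with the class under test and its members being illustrative rather than the original code:

```csharp
[Test]
public void AssignRole_RoleNotPreviouslyAssigned_UserIsInRole()
{
    // Arrange
    var user = new User();

    // Act
    user.AssignRole("admin");

    // Assert
    Assert.IsTrue(user.IsInRole("admin"));
}
```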
You can see here that I’ve been using Roy Osherove’s naming convention, which I like a lot. I’m also using the AAA (arrange-act-assert) layout for the test code, which I like to comment for increased readability. Although I’ve done something similar many times before, I must have seen something about Code Snippets recently, as I thought it might be a good idea to give them a try.
Code snippets are a part of Visual Studio (2008 certainly; I’m not sure about earlier versions). They are little code templates that you can get via IntelliSense – for example, when you type ‘if’ and then hit Tab twice, you get the ‘if’ template, with the section highlighted ready for you to put your condition in.
As it turns out, Visual Studio doesn’t ship with a snippet editor, but there is a very good open source one on CodePlex, which is linked to from MSDN. Using this, I can put in my unit test template and define the ‘tokens’ that I want to replace (in this case, ‘method name’, ‘state under test’ & ‘expected behaviour’):
I’ve also put in the $selected$$end$ bit, which defines where the cursor goes when you’re done. You can see in this one that I’ve put in a #region for the method name as well – the snippet editor’s smart enough to realise this is the same token as the one in the method name. Here I’ve defined the shortcut as ‘ter’ (for ‘Test with Region’), because I don’t like typing.
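Under the hood, a snippet is just an XML file, so you can hand-roll one without the editor if you like. A sketch of what my ‘ter’ snippet would look like (the literal IDs and defaults here are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Test with Region</Title>
      <Shortcut>ter</Shortcut>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>methodname</ID>
          <ToolTip>Name of the method under test</ToolTip>
          <Default>MethodName</Default>
        </Literal>
        <Literal>
          <ID>state</ID>
          <Default>StateUnderTest</Default>
        </Literal>
        <Literal>
          <ID>behaviour</ID>
          <Default>ExpectedBehaviour</Default>
        </Literal>
      </Declarations>
      <Code Language="CSharp">
        <![CDATA[#region $methodname$ tests
[Test]
public void $methodname$_$state$_$behaviour$()
{
    // Arrange

    // Act

    // Assert
    $selected$$end$
}
#endregion]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```

Note that $methodname$ appears both in the #region and the method name, which is how the editor knows they’re the same token.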
So save that, and Bob’s your uncle! Go back into Visual Studio, type ‘ter’ (tab-tab), and as if by magic, your very own unit test template appears, awaiting your input!
It’s the type of thing you wish you’d found out about years ago, and I’m sure over the next few days I’ll make about a hundred of these things for increasingly pointless bits of code. But it’s another productivity increase, and they all add up, so if I find a few more of these I’ll be able to write a day’s worth of code in about two keystrokes!
Now I wonder what else Visual Studio does that I don’t know about..
As part of my learning drive, I decided to read some of the books I’ve been hearing about. The first one I decided to tackle was the often-quoted Mythical Man-Month (TMMM) by Frederick Brooks, originally written in 1975 based on Brooks’ experience managing development of systems including IBM’s Operating System/360. This was a time when mainframes roamed the earth, assembler was still the language of choice for many, and memory cost around $100,000 a meg. Although I thought the book would be somewhat outdated, with its small size (at 300-odd pages) I figured I might actually be able to finish it this side of Christmas!
Well, some of it is indeed out of date – there is much talk of assembler, 5-foot tall system manuals, batch programming, lack of system space/time, and other problems associated with the huge expense of computer systems back then. Some is also applicable only to really big systems – at its peak, the OS/360 team employed over 1000 people, and over 5000 man-years went into its development & maintenance! And I thought our ACME work management system was big..
But the surprising thing is how relevant most of the book still is today – indeed, there are still active online discussions regarding many of the points. I would say of the original 15 chapters, at least 10 of them are still totally applicable today, as they concentrate on the human side of software management. More than that, they contain lessons that many of the IT managers and developers I’ve worked with would do well to learn. In fact, in some ways it’s quite disheartening to realise that the things we still struggle with today are the same as those we struggled with back then – in many ways, it seems we’re no better off now than we were more than 30 years ago. It’s not so much that there are no solutions to the problems, more that there are no easy solutions – software development is hard. And it’s also an essentially different beast to the kind of things we’ve had to manage before, such as building projects, and yet many people still approach it using the same techniques and expect the same results. Of course, it’s still a young industry, so perhaps if a few more people read this book and those like it, in another 30 years, we might be starting to get somewhere..
Anyway, below I’ve discussed a few of the more pertinent points raised in the book. I must warn you, this is a rather lengthy post – but then it is a book full of good ideas. I mean, how many books can you name that are still so well known nearly 35 years after they were published, or ones that have a law named after them for that matter?! So I hope you have a comfy chair.
As most developers would agree, software is inherently difficult to estimate – but it’s not always clear why. Early on in the book, Brooks gives some of the reasons behind this – as with points throughout the text, backed up by impartial research where possible:
- Developers are Optimists – we tend to imagine things going well. While this may happen for an individual piece of work, the chances of everything going well on a project are negligible, but instead of taking this into account we tend to add up all of the ‘best-case scenario’ estimates and arrive at an unrealistic overall figure.
- Developers Consider Isolated Development – on seeing some new application or website, you’ll often hear a dev say ‘that’s easy, I could code that in a weekend!’. I’ve heard this many times, even from experienced developers. The fact is, knocking up a quick program that works on a single machine is generally easy – it’s turning this into a polished, tested, documented product that works in a full-scale environment that takes the time.
- Men and Months are not Interchangeable – certain tasks can be split up amongst people and retain the same efficiency. The sort of tasks this applies to are those with no communication required and no overlap between the different partitions – sowing a field, for example. Writing software is not such a task. As more people get involved, more communication is required, and efficiency drops – in fact, at a certain point, adding more people can start to increase the time taken to complete the overall task. This is often not taken into account, and projects are planned with a certain number of ‘man-months’ divided between an arbitrary number of workers, with the end result being missed deadlines.
- Productivity doesn’t increase Linearly – as studies into large-scale developments have shown, the rate of work decreases as systems grow. The data show an exponent of around 1.5 – effort grows roughly with (lines of code)^1.5 – so, for example, doubling the size of a system takes about 2.8 times (2^1.5) the effort, not twice. This isn’t something most people take into account, but it can make a huge difference for larger developments.
- Not all Time is Used – in further studies, when time was recorded at a low level, it was found that only about half of a developer’s time was spent on project development, with the other half occupied by meetings, unrelated work, personal items, and so on.
While most people are aware of the general problems, if not the specific causes, I for one always seem to end up on projects with unreasonable deadlines. I think this is a combination of the factors above, along with what Brooks alludes to in the book – that it can be very difficult to explain to a customer why it’s going to take such a seemingly long time to create what appears to be a simple application. Until we have a more solid foundation for software estimation (which doesn’t seem forthcoming), software managers need to trust their experience and give realistic dates, and not bow to client pressure and give ridiculously optimistic dates that aren’t going to help anybody in the long run.
Related to the difficulties in estimation is the often-cited problem of software projects going way, way, waaaaaay past their initial due date – usually with enormous increases in budget to boot. As well as estimation issues, Brooks gives several other factors which contribute to this problem – although he later points out that most of these only really apply to waterfall-style development, which has been superseded by incremental development in many cases (if only someone would tell my manager..)
One such factor, which is extremely common if my experiences are anything to go by, is that the deadlines are often set by the customer. This is likened to the cooking of an omelet – the customer might say he wants his omelet in two minutes. And after two minutes, if the omelet is not cooked, he can either wait – or eat it raw. The cook may try to help by turning up the heat, but you’ll just end up with an omelet which is burned on the outside and raw on the inside. Sadly, I’ve worked on a couple of projects where the customer ended up with a burned/raw application because ‘it has to be finished by X!’
The book also suggests that most projects don’t give enough time to testing, which can be a fatal mistake. A lot of issues are not found until the testing phase, by which time the deadline has nearly arrived – slippage here comes as a surprise to everyone, as things seemed to be going well, and once the wheels of enterprise have been put into motion, late delivery can have serious financial implications. Brooks even gives his breakdown of project times (again, this only applies to waterfall) – 1/3 planning, 1/6 coding, 1/4 component test, and 1/4 system test.
Also discussed is what happens when it’s realised that the deadline is slipping. Often, if it gets near the end of a project and there are schedule problems, more resource gets allocated to the project – but what with the man-month interchangeability problem, and the time it takes to get new people up to speed, such actions will only ever make things worse. It’s the old ‘using petrol to put out a fire’ theme, which is where we get Brooks’ Law:
Adding manpower to a late software project makes it later.
Conceptual integrity within a system is about keeping the way it works coherent – following a core set of principles that apply throughout the program, from a user’s perspective. Brooks states that conceptual integrity is the most important aspect of systems design – it’s more important to have a coherent model than extra functionality. This is difficult to achieve in software, which is usually designed and nearly always built by multiple people.
Cathedrals created over many generations have similar issues – often different parts are of different styles, and while each may be magnificent, the overall effect is jarring. Reims is the counter-example, where several generations have stuck to the original concept, sacrificing some of their own ideas to keep the overall design ideas intact. Conceptual Integrity can be achieved by having a single (or very few) Architects, who share a single philosophy for how the system works.
When Brooks talks of an architect, he doesn’t mean a technical architect as I would think of one, but more a ‘user experience’ manager responsible for the UI and the user’s mental model of the software. The architect works on behalf of the user, making the most of the available systems to get the most benefit for the customer. This is an interesting concept and not something I’ve seen much of – typically, on the medium-sized business applications I’ve seen, there is no-one really considering the user experience. The BAs will collect and define the user requirements, but have little input into the actual interface of the system, which is usually left up to the developers.
I would think this may be more of an issue for larger systems, but Brooks states he’d have one architect role defined in teams as small as four, so it’s definitely something I’ll try to look further into in future projects.
Second System Effect
Another of the many phrases you’ve probably heard that comes from this book is the ‘second system effect’. The idea is that the second system anyone designs is the most dangerous and most likely to fail. In the first, ‘he knows he doesn’t know what he’s doing, so he does it carefully and with great restraint’. But all of those good ideas he has are saved up, and all piled into the second system.
The thing that surprised me the most in TMMM was the recognition back in ’75 of the need to adapt to requirements change as the project progresses. Although development was all done in a waterfall fashion, the language used to describe dealing with change sounds like something straight out of an agile book:
‘The first step is to accept the fact of change as a way of life, rather than an untoward and annoying exception. […] both the actual need and the user’s perception of that need will change as systems are built, tested and used.’
Brooks gives advice on designing for change, including modularisation, well-defined interfaces, and good documentation of these. Another piece of advice is to always Throw One Away – with new types of system, you are still working out the best way of tackling problems, so you should plan to use the first as a prototype and not use it at all in production.
The book explains that we need to be prepared for change, because as well as technical and business changes, the users will not know exactly what they want until they try the system – this is one of the main driving forces behind agile/iterative development, which Brooks now recognises as a better way of developing software. In a recent look back at the original articles, Brooks states:
‘The waterfall model, which was the way most people thought about software projects in 1975, unfortunately got enshrined into [the DoD specification] for all military software. This ensured its survival well past the time when most thoughtful practitioners had recognized its inadequacy and abandoned it. Fortunately, the DoD has since begun to see the light.’
Unlike the maintenance of mechanical or electrical systems, software ‘maintenance’ usually involves adding extra functionality. The problem is, this new functionality has a good chance (a reported 20-50%) of introducing new bugs – ‘two steps forward, one step back’. In addition, bug fixes sometimes solve the local problem but go on to create further problems in other parts of the system.
A study of OS releases showed that as time goes on, structure gets broken and entropy increases in the system, and more and more time is spent fixing the defects introduced by the releases. Eventually, adding new functionality is not worth the effort and the system must be replaced. As Brooks succinctly puts it,
‘Program maintenance is an entropy-increasing process, and even its most skillful execution merely delays the subsidence of the system into unfixable obsolescence.’
Most projects don’t suffer a catastrophic failure; they just get slowly later as time goes on. ‘How does a project get to be a year late? …One day at a time.’
Brooks suggests one way to battle this is to have very specific, unambiguous milestones. People don’t like to give bad news, and if a milestone has a fuzzy meaning, it’s all too easy to fool yourself and your manager that things are going OK. Studies show that people gradually reduce over-estimates as time goes on, but under-estimates stay low until around three weeks before the deadline.
Brooks talks about how small delays often get ignored, but as soon as you start accepting these delays, the whole project can slip. You need to stay on top of things – the team needs to have ‘hustle’, as baseball coaches would say.
PERT charts (apparently much like today’s Gantt charts) can be useful for identifying the slippages that really are problematic – the ones on the critical path.
No Silver Bullet (1986)
This being the anniversary edition, there are a couple of extra sections, including this well-known essay from 1986. The basic precept is this: there will be no order-of-magnitude (10x) increase in development productivity within ten years. One of the core concepts here is that there are two kinds of complexity in a system – essential and accidental (or incidental). Essential complexity is core to the problem itself; incidental complexity is caused by our way of programming it.
The central argument is that unless the incidental complexity accounts for over 9/10 of the overall work, even shrinking it to zero wouldn’t increase productivity tenfold – if incidental work is a fraction f of the total, removing it all speeds you up by only 1/(1-f), so a 10x gain needs f to be at least 9/10. Brooks thinks it accounts for far less than that, as the biggest problems (low-level languages, batch debugging, etc.) have already been removed, so improvements will be fairly small from here on out. As such, we need to focus on attacking the essential complexity of software. To do this, Brooks suggests:
- Buying instead of building software
- Rapid prototyping in planned iterations to help get the requirements right
- Growing systems organically, adding functionality as they are used
These are all forward-looking views for ’86, and they have largely been borne out in the intervening years. The last two points relate to the problem of capturing and identifying requirements properly, which Brooks sees as the most difficult area – and therefore the one where we can make the most improvements:
‘The hardest single part of building a software system is deciding precisely what to build.’
30 Years On
At the end of the book, Brooks looks back on the assumptions and ideas presented all those years ago. Most have proved correct; a few were based on waterfall development, and while they hold within that approach, iterative development has rendered them largely redundant.
The book ends on something of a high, with Brooks describing how when he finished college in the 50s, he could read all computer journals and conference proceedings. As time has gone on, he’s had to kiss goodbye to sub-discipline after sub-discipline, because there’s so much to know. So much opportunity for learning, research, and thought – ‘What a marvellous predicament! Not only is the end not in sight, the pace is not slackening. We have many future joys.’