Archive

Posts Tagged ‘Development’

Nice up those Assertions with Shouldly!

October 19, 2010 1 comment

One thing I noticed while having a play with Ruby was that the syntax of their testing tools reads much more like natural language and is easier to follow – here’s an example from an RSpec tutorial:

describe User do
  it "should be in any roles assigned to it" do
    user = User.new
    user.assign_role("assigned role")
    user.should be_in_role("assigned role")
  end

  it "should NOT be in any roles not assigned to it" do
    user.should_not be_in_role("unassigned role")
  end
end

Aside from the test name being a string, you can see the actual assertions take the form ‘variable.should be_some_value’, which is rather more readable than the equivalent in C#:

Assert.Contains(user, unassignedRoles);

or

Assert.That(unassignedRoles.Contains(user));

Admittedly, the second example is nicer to read – the trouble is that you’re asserting on a true/false result, so the feedback you get from NUnit isn’t great:

Test ‘Access_Tests.TestAddUser’ failed:  Expected: True  But was:  False
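
You can work around that by supplying the failure message yourself – a rough sketch, reusing the variables from the example above – but hand-writing a message for every assertion soon gets tedious:

// Hand-written failure message - works, but you have to maintain one per assertion
Assert.That(unassignedRoles.Contains(user),
    "Expected unassignedRoles to contain the user, but it didn't");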

Fortunately, there are a few tools coming out for .Net now which address this situation in a bit more of a Ruby-like way.  The one I’ve been using recently is Shouldly, an open source project on GitHub.

Using Shouldly (which is basically just a wrapper around NUnit and Rhino Mocks), you can go wild with should-style assertions:

age.ShouldBe(3);

family.ShouldContain("mum");

greetingMessage.ShouldStartWith("Sup y'all");

And so on.  Not bad, not bad, not sure if it’s worth learning another API for though.  However, the real beauty of Shouldly is what you get when an assertion fails.  Instead of an NUnit-style

Expected: 3

But was:  2

– which gives you a clue, but isn’t terribly helpful – you get:

age

should be    3

but was    2

See that small but important difference?  The variable name is there in the failure message, which makes working out what went wrong a fair bit easier – particularly when a test fails that you haven’t seen for a while.
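
Here’s a rough sketch of that in a full test – the Person class and its GetAge method are made up purely for illustration:

using System;
using NUnit.Framework;
using Shouldly;

[TestFixture]
public class Person_Tests
{
    [Test]
    public void GetAge_PersonBornThreeYearsAgo_ReturnsThree()
    {
        // Arrange - Person is a hypothetical class, just to give the assertion something to work on
        var person = new Person(DateTime.Today.AddYears(-3));

        // Act
        var age = person.GetAge();

        // Assert - if GetAge() were off by one, the failure would read along the lines of:
        //   age should be 3 but was 2
        age.ShouldBe(3);
    }
}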

Even more useful is what you get when checking calls in Rhino Mocks.  Instead of calling

fileMover.AssertWasCalled(fm => fm.MoveFile("test.txt", "processed\\test.txt"));

and getting a rather ugly and unhelpful

Rhino.Mocks.Exceptions.ExpectationViolationException : IFileMover.MoveFile("test.txt", "processed\test.txt"); Expected #1, Actual #0.

With Shouldly, you call

fileMover.ShouldHaveBeenCalled(fm => fm.MoveFile("test.txt", "processed\\test.txt"));

and end up with

*Expecting*

MoveFile("test.txt", "processed\test.txt")

*Recorded*

0: MoveFile("test1.txt", "unprocessed\test1.txt")

1: MoveFile("test2.txt", "unprocessed\test2.txt")

As you can see, it’s not only much more obvious what’s happening, but you actually get a list of all of the calls that were made on the mock object, including parameters!  That’s about half my unit test debugging gone right there.  Sweet!
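
Putting it all together, a test that uses a Rhino Mocks mock with Shouldly’s verification might look something like this – IFileMover and FileProcessor are stand-ins for whatever you’re actually testing:

using NUnit.Framework;
using Rhino.Mocks;
using Shouldly;

[TestFixture]
public class FileProcessor_Tests
{
    [Test]
    public void Process_ValidFile_MovesFileToProcessedFolder()
    {
        // Arrange - IFileMover and FileProcessor are hypothetical types for this sketch
        var fileMover = MockRepository.GenerateMock<IFileMover>();
        var processor = new FileProcessor(fileMover);

        // Act
        processor.Process("test.txt");

        // Assert - on failure, Shouldly lists every call actually recorded on the mock
        fileMover.ShouldHaveBeenCalled(fm => fm.MoveFile("test.txt", "processed\\test.txt"));
    }
}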

Shouldly isn’t at version 1.0 yet, and still has some missing functionality, but I’d already find it hard to work without it.  And it’s open source, so get yourself on GitHub, fork the repo, and see if you can make my life even easier!

Programming Magazines – on Paper!

July 19, 2010 3 comments

Perhaps strangely for a developer, I much prefer reading from actual, physical books & magazines rather than a computer screen.  This is a bit problematic these days because, what with the rise of the Internet, nobody seems to be printing programming magazines any more (in the UK at least).

As such, I was most pleased to hear about MagCloud, an online service which offers printed magazines in a FUBU (For Us By Us) style.  Basically, anyone can upload a magazine in PDF format, and for a charge of around $0.20 a page, MagCloud will print it out and send it to you.  This works out at around £5-6 for a magazine of 40 pages, which is a little expensive but not too bad.  (There’s a delivery charge of $2-3 too, so you’re better off getting a couple at a time).  While the authors can make money from this (by adding on an extra charge per page), none of the ones I was looking at did so.

The two I checked out are well written and informative:

Hacker Monthly, a general programming magazine, and Rails Magazine, about Ruby/Rails.  I couldn’t find any specific .Net ones when I looked, but it’s pretty new so I’m sure one will appear at some point.  So if anyone fancies starting a .Net mag, here’s your chance!

Now, many of these are available as PDFs for you to download as well, so you could quite easily print them off yourself and save a few quid.  Personally, I’m too lazy/disorganised for that, and I like to have a proper magazine and not a scruffy collection of stapled sheets, but that’s just me.

Check it out.  What have you got to lose?!


Podcast Rundown

One of my favourite ways of keeping up with development is listening to podcasts, which I usually do in my car – a stereo with a USB port comes in very handy here.  (Note:  Only do this while alone!  The dulcet tones of Scott Hanselman might increase your knowledge, but they’re unlikely to impress the chicks..)

Anyway, audio tips and sexism aside, here are all of the podcasts I’ve been listening to and what I think of them, in order of greatness:

Herding Code

Rating: 10/10

Technology: Mainly .Net

Sound Quality: Average

Hosted by four smart and fairly amusing guys, this podcast’s great strength is that you always get to hear all sides of the argument on any given topic.  And unlike some of the other podcasts, the topics are nearly always interesting – I rarely skip an episode.  They get some top guests too.

.Net Rocks!

Rating: 9/10

Technology: .Net

Sound Quality: Excellent

Probably the best-known and longest-running podcast out there, .Net Rocks is also perhaps the most professionally produced – you could easily mistake it for a radio show.  The presenters are funny and knowledgeable, get good guests and always ask pertinent questions (although you don’t get the same level of discussion as with Herding Code).

ElegantCode Cast

Rating: 8/10

Technology: Mainly .Net

Sound Quality: Poor – Average

Run by some of the ElegantCode bloggers, this podcast is similar to Herding Code in its style, usually with multiple presenters leading to interesting discussions.  It seems to have died since David Starr left for the PluralCast (see below), but I’m hoping it might get resurrected.

Alt.Net Podcast

Rating: 7/10

Technology: Mainly .Net

Sound Quality: Average

Although this only lasted a short while, the few episodes it produced covered some really useful subjects, in good detail.  And being Alt.Net (if you’re aware of that movement), there are also some interesting debates and disagreements in there.  Well worth a listen.

Pluralcast

Rating: 7/10

Technology: .Net

Sound Quality: Good

Having recently taken over from the Elegant Code Cast, the Pluralcast has made a good start, covering some interesting subjects.  Although run by a commercial company (PluralSight Training), the shows don’t feel like they’re trying to sell you anything.  The interviews are sometimes a little one-sided, and the shows don’t always flow as well as they might, but it’s definitely one to try out.

HanselMinutes

Rating: 6/10

Technology: .Net

Sound Quality: Excellent

Alongside .Net Rocks, HanselMinutes is the other well-produced, long-running podcast.  Scott is currently a Senior Program Manager at Microsoft, which means he gets some great guests and inside scoops, but sometimes the interviews are again a little one-sided.  Scott’s a really smart guy and knowledgeable developer, and I feel bad for only giving a 6, but the subject is often web development (which I’m not really into), and I often skip shows because I’m not that interested in the guest/subject.

Software Engineering Radio

Rating: 6/10

Technology: All

Sound Quality: Good

I’ve really only just started listening to this so it’s a little early to give my opinion – but when did that ever stop a reviewer?!  So, as you might imagine from the title, it covers general software engineering in all languages, often with clever academic people being interviewed by other clever academic people.  Ok, perhaps I undersell it there, it’s not all academic – and it is good to break out of .Net every now and again.

The Thirsty Developer

Rating: 5/10

Technology: .Net

Sound Quality: Average

A show run by a couple of Microsoft evangelists, this has some good content, but the episodes are fairly infrequent and I often skip them.  The subjects and interviews can be a little Microsoft-heavy.

The Pragmatic Bookshelf

Rating: 5/10

Technology: All

Sound Quality: Good

Run (although not hosted) by The Pragmatic Programmers, this podcast consists of interviews with the authors of various books on the Prag Pub label.  It’s kind of commercial in that respect, but interesting nonetheless.

Polymorphic Podcast

Rating: 4/10

Technology: .Net

Sound Quality: Good

Again, I feel bad about giving this such a low rating, as it’s not a bad podcast – I just don’t like it personally.  I only listened to the first few episodes, but the topics were usually web-based, often covered fairly simplistically, and I didn’t really warm to the presenter.  I think I’ll give it another try soon, as everyone else seems to rate it.

So there you go – plenty to keep you occupied on your lonely commute!  Obviously those ratings are just my opinion, so don’t take my advice – go and try them for yourself!


Visual Studio Code Snippets

October 1, 2009 3 comments

Handy for Unit Tests – and Much More!

I recently found myself typing the same code in repeatedly while unit testing.  This sort of thing:

Typical Unit Test Layout

You can see here that I’ve been using Roy Osherove’s naming convention, which I like a lot.  I’m also using the AAA (arrange-act-assert) layout for the test code, which I like to comment for increased readability.  Although I’ve typed this sort of thing out by hand many times, I’d recently seen something about Code Snippets, so I thought it might be a good idea to give them a try.
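
For anyone who hasn’t seen it, the layout is roughly this – a made-up example, but with the same MethodName_StateUnderTest_ExpectedBehaviour naming and commented AAA sections (it just sits inside an ordinary [TestFixture] class):

[Test]
public void Deposit_PositiveAmount_IncreasesBalance()
{
    // Arrange
    var account = new Account(50m);   // Account is a made-up class for this example

    // Act
    account.Deposit(25m);

    // Assert
    Assert.AreEqual(75m, account.Balance);
}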

Code snippets are a part of Visual Studio (2008 certainly, not sure about before that).  They are little code templates that you can get via IntelliSense, for example when you type ‘if’:

If Snippet in IntelliSense

and then hit ‘Tab’ twice, you get the ‘If template’, with the section highlighted ready for you to put your condition in:

If Template
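
If memory serves, the built-in ‘if’ snippet expands to something like this, with the condition selected ready to be overtyped:

if (true)   // 'true' is highlighted, waiting to be replaced with your condition
{

}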

As it turns out, Visual Studio doesn’t ship with a snippet editor, but there is a very good open source one on CodePlex, which is linked to from MSDN.  Using this, I can put in my unit test template, and define the ‘tokens’ that I want to replace (in this case, ‘method name’, ‘state under test’ & ‘expected behaviour’):

Unit Test in Snippet Editor

I’ve also put in the $selected$$end$ bit, which defines where the cursor goes when you’re done.  You can see in this one that I’ve put in a #region for the method name as well – the snippet editor’s smart enough to realise this is the same token as the one in the method name.  Here I’ve defined the shortcut as ‘ter’ (for ‘Test with Region’), because I don’t like typing.

So save that, and Bob’s your uncle!  Go back into Visual Studio, type ‘ter’ (tab-tab), and as if by magic, your very own unit test template appears, awaiting your input!

Unit Test Template
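
The expanded template comes out roughly like this – my reconstruction, so the exact placeholder text and region name depend on what you put into the snippet – with each token highlighted ready to tab through and replace:

#region MethodName tests

[Test]
public void MethodName_StateUnderTest_ExpectedBehaviour()
{
    // Arrange

    // Act

    // Assert
}

#endregion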

It’s the type of thing you wish you’d found out about years ago, and I’m sure over the next few days I’ll make about a hundred of these things for increasingly pointless bits of code.  But it’s another productivity increase, and they all add up, so if I find a few more of these I’ll be able to write a day’s worth of code in about two keystrokes!

Now I wonder what else Visual Studio does that I don’t know about..

The Mythical Man-Month

September 4, 2009 Leave a comment

As part of my learning drive, I decided to read some of the books I’ve been hearing about. The first one I decided to tackle was the often-quoted Mythical Man-Month (TMMM) by Frederick Brooks, originally written in 1975 and based on Brooks’ experience managing development of systems including IBM’s Operating System/360. This was a time when mainframes roamed the earth, assembler was still the language of choice for many, and memory cost around $100,000 a Meg. Although I thought the book would be somewhat outdated, with its small size (at 300-odd pages) I figured I might actually be able to finish it this side of Christmas!

The Mythical Man-Month Cover

Well, some of it is indeed out of date – there is much talk of assembler, 5-foot tall system manuals, batch programming, lack of system space/time, and other problems associated with the huge expense of computer systems back then.  Some is also applicable only to really big systems – at its peak, the OS/360 team employed over 1000 people, and over 5000 man-years went into its development & maintenance!  And I thought our ACME work management system was big..

But the surprising thing is how relevant most of the book still is today – indeed, there are still active online discussions regarding many of the points. I would say of the original 15 chapters, at least 10 of them are still totally applicable today, as they concentrate on the human side of software management. More than that, they contain lessons that many of the IT managers and developers I’ve worked with would do well to learn. In fact, in some ways it’s quite disheartening to realise that the things we still struggle with today are the same as those we struggled with back then – in many ways, it seems we’re no better off now than we were more than 30 years ago. It’s not so much that there are no solutions to the problems, more that there are no easy solutions – software development is hard. And it’s also an essentially different beast to the kind of things we’ve had to manage before, such as building projects, and yet many people still approach it using the same techniques and expect the same results. Of course, it’s still a young industry, so perhaps if a few more people read this book and those like it, in another 30 years, we might be starting to get somewhere..

Anyway, below I’ve discussed a few of the more pertinent points raised in the book. I must warn you, this is a rather lengthy post – but then it is a book full of good ideas. I mean, how many books can you name that are still so well known nearly 35 years after they were published, or ones that have a law named after them for that matter?! So I hope you have a comfy chair.

Estimation

As most developers would agree, software is inherently difficult to estimate – but it’s not always clear why. Early on in the book, Brooks gives some of the reasons behind this – as with points throughout the text, backed up by impartial research where possible:

  • Developers are Optimists – we tend to imagine things going well. While this may happen for an individual piece of work, the chances of everything going well on a project are negligible, but instead of taking this into account we tend to add up all of the ‘best-case scenario’ estimates and arrive at an unrealistic overall figure.
  • Developers Consider Isolated Development – on seeing some new application or website, you’ll often hear a dev say ‘that’s easy, I could code that in a weekend!’. I’ve heard this many times, even from experienced developers. The fact is, knocking up a quick program that works on a single machine is generally easy – it’s turning this into a polished, tested, documented product that works in a full-scale environment that takes the time.
  • Men and Months are not Interchangeable – certain tasks can be split up amongst people and retain the same efficiency.  The sort of tasks this applies to are those with no communication required and no overlap between the different partitions – sowing a field, for example.  Writing software is not such a task.  As more people get involved, more communication is required, and efficiency drops – in fact at a certain point, adding more people can start to increase the time taken to complete the overall task.  This is often not taken into account, and projects are planned with a certain number of ‘man-months’ divided between an arbitrary number of workers, with the end result being missed deadlines.
  • Productivity doesn’t increase Linearly – as studies into large-scale developments have shown, the rate of work decreases as systems grow.  Data show an exponent of around 1.5, so as system complexity increases (as measured in lines of code), it takes increasingly more effort to add extra functionality.  This isn’t something most people take into account, but it can make a huge difference for larger developments (the quick numbers after this list show how fast it bites).
  • Not all Time is Used – in further studies, when time was recorded at a low level, it was found that only about half of a developer’s time was spent on project development, with the other half occupied by meetings, unrelated work, personal items, and so on.
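
    To put some rough numbers on the productivity point – my own back-of-the-envelope figures, not the book’s – if effort grows with size to the power 1.5, then

    effort(2 × size) / effort(size) = 2^1.5 ≈ 2.8
    effort(10 × size) / effort(size) = 10^1.5 ≈ 31.6

    – so a system ten times the size needs roughly thirty times the effort, not ten.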

    While most people are aware of the general problems, if not the specific causes, I for one always seem to end up on projects with unreasonable deadlines. I think this is a combination of the factors above, along with what Brooks alludes to in the book – that it can be very difficult to explain to a customer why it’s going to take such a seemingly long time to create what appears to be a simple application. Until we have a more solid foundation for software estimation (which doesn’t seem forthcoming), software managers need to trust their experience and give realistic dates, and not bow to client pressure and give ridiculously optimistic dates that aren’t going to help anybody in the long run.

    Project Overrun

    Related to the difficulties in estimation is the often-cited problem of software projects going way, way, waaaaaay past their initial due date – usually with enormous increases in budget to boot.  As well as estimation issues, Brooks gives several other factors which contribute to this problem – although he later points out that most of these only really apply to waterfall-style development, which has been superseded by incremental development in many cases (if only someone would tell my manager..)

    One such factor, which is extremely common if my experiences are anything to go by, is that the deadlines are often set by the customer. This is likened to the cooking of an omelet – the customer might say he wants his omelet in two minutes. And after two minutes, if the omelet is not cooked, he can either wait – or eat it raw. The cook may try to help by turning up the heat, but you’ll just end up with an omelet which is burned on the outside and raw on the inside. Sadly, I’ve worked on a couple of projects where the customer ended up with a burned/raw application because ‘it has to be finished by X!’

    The book also suggests that most projects don’t give enough time to testing, which can be a fatal mistake. A lot of issues are not found until the testing phase, by which time the deadline has nearly arrived – slippage here comes as a surprise to everyone, as things seemed to be going well, and once the wheels of enterprise have been put into motion, late delivery can have serious financial implications. Brooks even gives his breakdown of project times (again, this only applies to waterfall) – 1/3 planning, 1/6 coding, 1/4 component test, and 1/4 system test.

    Also discussed is what happens when it’s realised that the deadline is slipping. Often, if it gets near the end of a project and there are schedule problems, more resource gets allocated to the project – but what with the man-month interchangeability problem, and the time it takes to get new people up to speed, such actions will only ever make things worse. It’s the old ‘using petrol to put out a fire’ theme, which is where we get Brooks’ Law:

    Adding manpower to a late software project makes it later.

    Conceptual Integrity

    Conceptual integrity within a system is about keeping the way it works coherent – following a core set of principles that apply throughout the program, from a user’s perspective. Brooks states that conceptual integrity is the most important aspect of systems design – it’s more important to have a coherent model than extra functionality. This is difficult to achieve in software, which is usually designed and nearly always built by multiple people.

    Cathedrals created over many generations have similar issues – often different parts are of different styles, and while each may be magnificent, the overall effect is jarring. Reims is the counter-example, where several generations have stuck to the original concept, sacrificing some of their own ideas to keep the overall design ideas intact. Conceptual Integrity can be achieved by having a single (or very few) Architects, who share a single philosophy for how the system works.

    When Brooks talks of an architect, he doesn’t mean a technical architect as I would think of one, more a ‘user experience’ manager responsible for the UI and the user’s mental model of the software.  The architect works on behalf of the user, making the most of the available systems to get the most benefit for the customer.  This is an interesting concept and not something I’ve seen much of – typically, on the medium-sized business applications I’ve seen, there is no-one really considering the user experience.  The BAs will collect and define the user requirements, but will have little input into the actual interface of the system, which is usually left up to the developers.

    I would think this may be more of an issue for larger systems, but Brooks states he’d have one architect role defined in teams as small as four, so it’s definitely something I’ll try to look further into in future projects.

    Second System Effect

    Another of the many phrases you’ve probably heard that comes from this book is the ‘second system effect’. The idea is that the second system anyone designs is the most dangerous and most likely to fail. In the first, ‘he knows he doesn’t know what he’s doing, so he does it carefully and with great restraint’. But all of those good ideas he has are saved up, and all piled into the second system.

    Accepting Change

    The thing that surprised me the most in TMMM was the recognition back in ’75 of the need to adapt to requirements change as the project progresses. Although development was all done in a waterfall fashion, the language used to describe dealing with change sounds like something straight out of an agile book:

    ‘The first step is to accept the fact of change as a way of life, rather than an untoward and annoying exception. […] both the actual need and the user’s perception of that need will change as systems are built, tested and used.’

    Brooks gives advice on designing for change, including modularisation, well-defined interfaces, and good documentation of these. Another piece of advice is to always Throw One Away – with new types of system, you are still working out the best way of tackling problems, so you should plan to use the first as a prototype and not use it at all in production.

    The book explains that we need to be prepared for change, because as well as technical and business changes, the users will not know exactly what they want until they try the system – this is one of the main driving forces behind agile/iterative development, which Brooks now recognises as a better way of developing software. In a recent look back at the original articles, Brooks states:

    ‘The waterfall model, which was the way most people thought about software projects in 1975, unfortunately got enshrined into [the DoD specification] for all military software.  This ensured its survival well past the time when most thoughtful practitioners had recognized its inadequacy and abandoned it.  Fortunately, the DoD has since begun to see the light.’

    Software Maintenance

    When software is maintained, unlike maintenance of things such as mechanical or electrical systems, usually this ‘maintenance’ involves adding extra functionality. The problem is, this new functionality has a good chance (a reported 20-50%) of introducing new bugs – ‘Two steps forward, one step back’. In addition, bug fixes that are applied sometimes fix the local problem, but go on to create further problems in other parts of the system.

    A study of OS releases showed that as time goes on, structure gets broken and entropy increases in the system, and more and more time is spent fixing the defects introduced by the releases. Eventually, adding new functionality is not worth the effort and the system must be replaced. As Brooks succinctly puts it,

    ‘Program maintenance is an entropy-increasing process, and even its most skillful execution merely delays the subsidence of the system into unfixable obsolescence.’

    Project Slippage

    Most projects don’t suffer a catastrophic failure, they get slowly later as time goes on. ‘How does a project get to be a year late? …One day at a time.’

    Brooks suggests one way to battle this is to have very specific, unambiguous milestones.  People don’t like to give bad news, and if something has a fuzzy meaning, it’s all too easy to fool yourself and your manager that things are going OK.  Studies show that people gradually reduce over-estimates as time goes on, but underestimates stay low until around three weeks before the deadline.

    Brooks talks about how small delays often get ignored, but as soon as you start accepting these delays, the whole project can slip. You need to stay on top of things – the team needs to have ‘hustle’, as baseball coaches would say.

    PERT charts (apparently much like Gantt charts today) can be useful for identifying the slippages that really are problematic, as they’re on the critical path.

    No Silver Bullet (1986)

    This being the anniversary edition, there are an extra couple of sections, including this well-known essay from 1986. The basic precept is this: there will be no magnitude (10x) increase in development productivity in the next 10 years. One of the core concepts here is that there are two kinds of complexity in a system – essential and accidental (or incidental). Essential is core to the problem and incidental is the complexity caused by our way of programming it.

    The central argument is that unless the incidental complexity accounts for over 9/10 of the overall complexity, even shrinking it to zero wouldn’t increase productivity tenfold.  Brooks thinks it accounts for way less than that, as the biggest problems (low-level languages, batch debugging, etc) have now been removed, so improvements will be fairly small from here on out.  As such, we need to focus on attacking the essential complexity of software.  To do this, Brooks suggests:

    • Buying instead of building software
    • Rapid prototyping in planned iterations to help get the requirements right
    • Growing systems organically, adding functionality as they are used

    These are all forward-looking views for ’86 which have largely been borne out in the intervening years.  The last two points are related to the problems of capturing and identifying requirements properly, which Brooks sees as the most difficult area, and the one where we can make the most improvements:

    ‘The hardest single part of building a software system is deciding precisely what to build.’

    30 Years On

    At the end of the book, Brooks looks back on the assumptions and ideas presented all those years ago.  Most have proved to be correct, although a few are based on Waterfall development, and while they hold true for that approach, iterative development has rendered some of those ideas redundant.

    The book ends on something of a high, with Brooks describing how when he finished college in the 50s, he could read all computer journals and conference proceedings. As time has gone on, he’s had to kiss goodbye to sub-discipline after sub-discipline, because there’s so much to know. So much opportunity for learning, research, and thought – ‘What a marvellous predicament! Not only is the end not in sight, the pace is not slackening. We have many future joys.’

    Amen.