Thursday, January 24, 2008

Writing unit tests for an MVC Framework controller sucks

Over the Christmas holidays I had a lot of fun writing a web application using the Castle Project's Monorail. One of the joys of doing Monorail development is the outstanding support for unit testing that it provides. That's not really surprising when you read some of the Castle Project guys' blogs: they are all thoroughly Test Infected, and indeed one of the main driving forces behind adopting any framework that allows a separation of concerns is the ease with which such a framework can support Test Driven Development.

During the last couple of weeks I've been embarking on a new commercial project. I've decided to use the new Microsoft MVC Framework, simply because it's much easier to justify using a tool when Microsoft are behind it. Looking to the future, it's highly likely that most professional MS shops will have MVC Framework skills to support such an application, whereas Monorail skills are inevitably going to be niche.

Monorail and the MVC Framework are quite similar, and having come from a Monorail project, I've found it pretty easy getting up to speed with the Microsoft offering. Some things are nicer: there's the typically good tool support, especially IntelliSense in the views, and the strongly typed view data is an improvement over NVelocity (I haven't tried Brail). The routing engine is very nicely thought out and much better than the afterthought bolted onto Monorail.

The one thing that's really surprised me, though, is the lack of thought that has gone into support for unit testing controllers. Monorail supplies a BaseControllerTest class that you can inherit from for your controller tests; it allows you to mock or access all the controller infrastructure. For example, say I want to test that my controller calls Response.Redirect passing a specific URL:

  • Make my controller test inherit from BaseControllerTest
  • Override the virtual method BuildResponse and provide a mock response
  • Pass my controller to the PrepareController method in BaseControllerTest. This sets up a mock infrastructure for the controller and at some point will call my overridden BuildResponse method.
  • In the test itself simply set an expectation for Response.Redirect to be called with the expected URL.
[TestFixture]
public class PosterLoginControllerTests : BaseControllerTest
{
    MockRepository mocks;
    PosterLoginController posterLoginController;

    // ...

    protected override IMockResponse BuildResponse()
    {
        return mocks.CreateMock<IMockResponse>();
    }

    [SetUp]
    public void SetUp()
    {
        // ...

        PrepareController(posterLoginController);
    }

    [Test]
    public void IndexShouldRedirectLoggedInPosterToAddJobTest()
    {
        string expectedRedirectUrl = "/Job/JobForm.rails";

        using (mocks.Record())
        {
            // ...

            Response.Redirect(expectedRedirectUrl);
        }

        using (mocks.Playback())
        {
            posterLoginController.Index();
        }
    }
}

Now you can't do anything like this with the MVC Framework. There's no BaseControllerTest, and it's impossible to provide mocks for all the hooks in the Controller class because it isn't completely decoupled from the ASP.NET infrastructure. The only way to successfully test a controller is to write a test subclass that inherits from the controller under test and overrides all the important methods that deal with the infrastructure. Because you have to write a new one for each controller, it's a real overhead with lots of duplicated effort. Here's an example that I've had to write for my current project:

public class TestUserController : UserController
{
    string actualViewName;

    public string ActualViewName
    {
        get { return actualViewName; }
    }

    object actualViewData;

    public object ActualViewData
    {
        get { return actualViewData; }
    }

    Dictionary<string, object> redirectValues = new Dictionary<string, object>();

    public Dictionary<string, object> RedirectValues
    {
        get { return redirectValues; }
    }

    public TestUserController(
        IUserRepository userRepository,
        IRepository<Role> roleRepository,
        IRepository<InstructionType> instructionTypeRepository,
        IAuthorisationBuilder authorisationBuilder)
        : base(
        userRepository,
        roleRepository,
        instructionTypeRepository,
        authorisationBuilder)
    {

    }

    protected override void RenderView(string viewName, string masterName, object viewData)
    {
        this.actualViewName = viewName;
        this.actualViewData = viewData;
    }

    protected override void RedirectToAction(object values)
    {
        PropertyInfo[] properties = values.GetType().GetProperties();
        foreach (PropertyInfo property in properties)
        {
            redirectValues.Add(property.Name, property.GetValue(values, null));
        }
    }

    NameValueCollection form = new NameValueCollection();

    public override NameValueCollection Form
    {
        get
        {
            return form;
        }
    }
}

As you can see, it's quite extensive and not something that I relish repeating for each controller. Phil Haack describes injecting mocks into the controller, but in an update to the post he says that it's now broken in the CTP and basically admits that the only way of doing it is to write test-specific subclasses.
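To give a flavour of how the subclass gets used, here's the kind of test it enables, written with NUnit and Rhino Mocks. The Index action and the "Index" view name are hypothetical, and the dynamic mocks are only there to satisfy the constructor; the point is that the assertion runs against the value captured by the RenderView override rather than against any real HTTP infrastructure.

[Test]
public void IndexShouldRenderTheIndexView()
{
    // Dynamic mocks just satisfy the constructor; no expectations are set.
    MockRepository mocks = new MockRepository();
    TestUserController controller = new TestUserController(
        mocks.DynamicMock<IUserRepository>(),
        mocks.DynamicMock<IRepository<Role>>(),
        mocks.DynamicMock<IRepository<InstructionType>>(),
        mocks.DynamicMock<IAuthorisationBuilder>());
    mocks.ReplayAll();

    // Index is a hypothetical action on UserController.
    controller.Index();

    // The overridden RenderView captured the view name instead of rendering anything.
    Assert.AreEqual("Index", controller.ActualViewName);
}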

The more I do IoC and dependency injection (which works well with the MVC Framework, BTW), the more I wish framework builders did the same. It would make all these issues go away. I guess it's a request too far as long as IoC containers remain a niche thing; supporting them is one thing, requiring them quite another.
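For what it's worth, the wiring I have in mind is nothing exotic. This is only a sketch using Windsor's component registration; UserRepository is a made-up concrete class standing in for whatever implements IUserRepository.

using Castle.Windsor;

// A sketch only: register the controller and its dependencies with Windsor,
// then let the container do the constructor injection.
IWindsorContainer container = new WindsorContainer();
container.AddComponent("user.repository", typeof(IUserRepository), typeof(UserRepository));
// (the other repositories and the authorisation builder would be registered in the same way)
container.AddComponent("user.controller", typeof(UserController));

// Resolving the controller gives us a fully wired-up instance; a controller
// factory could do exactly this for each request.
UserController controller = container.Resolve<UserController>();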

Update

John Rayner from Conchango recently pointed out a post on Ayende's blog showing how you can mock controllers and use extension methods to provide mocks of protected methods on the controller. It's a neat trick and gets over the need to build test subclasses of your controller. But it still doesn't address the main point, which is that we shouldn't have to resort to these tricks; the controller should be testable.

Friday, January 11, 2008

DDD6 Feedback

I got the feedback from my DDD6 talk today. Ben's already blogged about his, so I feel honor bound to do the same. Me, a copycat blogger? Surely not :-)

I was very pleased with it; most people gave me 4 or 5 (unless that's out of 20?). Some of the comments were very kind:

"best session by miles"

"This one was great. I've already started looking at Spring.NET and Windsor."

"Really enjoyed this session. I got the impression that Mike was a 'doer' rather than an academic."

Some of the other comments pointed out failings in my presentation that I need to work on if I'm going to do more in the future:

"A great topic because it was more about a concept rather than being "here is a product and this is how you might use it". There was however, too much jumping about between PP and code giving me the impression that maybe the session wasn't rehearsed."

"The presenter would have benefitted from using two screens. One to show the powerpoint slides and one to show the code demo."

"Buy a proper laptop :-)"

I definitely agree about 'jumping about between PP and code'. I think I must have looked like a yo-yo because of my compulsion to stand up when addressing the slides and the need to sit down when showing the code. As for the laptop, I now have a VGA cable.

For some people I just didn't present it at the correct level:

"I am completely new to the subject and as such did not find this enough of an introduction."

I think this is the trickiest thing when doing technical presentations: do you assume your audience know nothing at all about the subject, or do you assume they are experts? When I've done workshops and presentations for my clients, I usually have a good idea of the competence of the audience, but at DDD you just have to take a guess. In fact it's worse than that, because you can guarantee there will be both complete newbies and battle-hardened experts on your chosen topic. With the IoC container talk I tried to aim it at myself about two years ago: someone who'd tried TDD, knew a bit about patterns, but hadn't really grokked the full power of IoC and hadn't yet encountered IoC containers.

All in all, it was a very satisfying day and a great opportunity to share ideas. I think it's wonderful that there's a thing like DDD where total nobodies like me can get a chance to rant about their favorite obsessions. Long may it continue.

You have to stay up late for ALT.NET

I was very excited to hear that we would be having our very own ALT.NET conference here in the UK. I waited with mounting anticipation to spend all day talking about ReSharper. But I was thwarted; unfortunately they could only book a room for ten people. It was full 0.05 seconds after the registration opened. I waited all night with my finger poised above my mouse, ready to click, but in a moment of weakness I had to answer a call of nature. By the time I got back Ben had made his announcement and the room was full.

Update

Sometimes it pays to write sarcastic blog posts. Alan Dean, one of the organisers of ALT.NET, very kindly got in touch and managed to squeeze me in. Thanks Alan!

Thursday, January 10, 2008

Rob Conery on choosing your data access method

A while back I asked 'Why are you still hand coding your data access layer?' I briefly covered a few of the choices available for automating data access, but Rob Conery does a much better job in this blog post: ASP.NET MVC: Choosing Your Data Access Method. He weighs up LINQ to SQL, SubSonic (his own creation) and NHibernate. I've gone with NHibernate via the Castle Project's ActiveRecord for Jobvertizer, but there's a good chance that I'll be using LINQ to SQL for my next commercial project. It'll be interesting to note the differences.

More on source repository hosting: What about Continuous Integration?

My previous post 'Source repository hosting: Why do it yourself?' talked about the possibility of using an off-site source repository for hosting commercial code. I got a great comment from Dave Verwer:

"This is exactly the decision I made when I started my company about 18 months ago and I have never looked back. I use CVSDude (who also do SVN, of course) and their service has been outstanding while I have been with them."

Well, if it's good enough for Dave... But how do you do continuous integration when your source control is off site? Continuous integration tools like Cruise Control rely on knowing when a source control update has taken place in order to kick off a build. Apparently there is a way; here's an email reply I got back from Mark Bathie at CVSDude:

"Your can use our post-commit call back facility to call a URL on your
server, which passes variables relating to the last checkin (variables
detailed in our specification). Your CGI script will these variables and
perform whatever tasks are required i.e. updating Cruise Control, etc."
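I haven't tried this yet, but to make the idea concrete, here's the sort of thing I imagine sitting at the receiving end: a small ASP.NET handler that accepts the callback and asks the build server to kick off a build. The handler, the query string variable names and the force-build URL are all guesses on my part rather than anything from CVSDude's or CruiseControl's documentation.

using System.Net;
using System.Web;

// Hypothetical sketch: an IHttpHandler for CVSDude's post-commit callback to hit.
public class CommitCallbackHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // The query string variable names are made up; CVSDude's specification
        // defines the real ones.
        string repository = context.Request.QueryString["repository"];
        string revision = context.Request.QueryString["revision"];

        // Ask the build server to force a build. This URL is an assumption and
        // would depend entirely on how CruiseControl is set up.
        string forceBuildUrl = "http://buildserver/ccnet/ForceBuild.aspx?project=" + repository;
        using (WebClient client = new WebClient())
        {
            client.DownloadString(forceBuildUrl);
        }

        context.Response.Write("Build forced for " + repository + " at revision " + revision);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}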

I'm keen to try it out now. I'm kicking off a new project in the next few days, so CVSDude, say hello to your new customer :)

Wednesday, January 09, 2008

Source repository hosting: Why do it yourself?

My last post was about a little open source Monorail project I've been playing with. One of the cool things I discovered while looking for somewhere to put it was Google Code project hosting. I really like Subversion, which is one of the two options for source control on Google Code, the other being CVS. With the TortoiseSVN client, which integrates with Windows Explorer, handling source is a really nice experience.

tortoise_svn

All this made me think that source code hosting might make sense for commercial projects too. Most people no longer host their own web servers, so why host your own source repository? Of course you've got to trust your hosting company if you're asking them to look after the company's crown jewels, but imagine the savings and lack of hassle. Quite a few projects I've worked on have had 'issues' with Source Safe, and often there's quite an overhead in managing the source repository itself. Now that's probably got something to do with the fact that Source Safe is a crap product. But why do something you don't have to? I don't have any recommendations for Subversion hosting companies, but there seem to be plenty out there. It's something I'm certainly going to look into next time I'm responsible for putting in place a source control strategy.

Tuesday, January 08, 2008

Building my first Monorail application

Jobvertizer Front Page

If you've read my blog you'll know I'm a big fan of the Castle Project. I even spoke at DDD6 in November about the Castle Project's IoC container, Windsor. The Castle Project is probably best known for its Rails-alike MVC framework for ASP.NET, Monorail. Indeed, Microsoft have recently focused an awful lot of attention on Monorail by announcing their own MVC framework. Apparently they even invited Hammett, the leader of the Castle Project, up to Redmond to pick his brains.

I've been itching to have a go at building an application based on Monorail for months now, and with no work to distract me since the end of November I've been able to indulge myself. I've taken an example that I've worked on commercially, a job board, and started building a complete application from scratch with the Castle Project stack. I'm using Monorail as the application framework, ActiveRecord for data access and Windsor as the IoC container. The project's called Jobvertizer and you can download the code from the Google Code project here. I need to put together some instructions for deploying it, but I probably won't do that until I get around to putting it on a public server, hopefully not too far in the future.

Jobvertizer Solution

I've really enjoyed playing with Monorail. The philosophy behind it resonates with my own beliefs about how code should be written. There is a full suite of unit tests and everything is abstracted, which allows you to insert your own behavior into the framework at will. Because it's written for TDD there are mock versions of many of the core classes ready to be used, and even abstract test base classes for testing controllers and view components. That's really nice. Probably the main thing that's struck me, though, is how easy web development is when your framework's model is sympathetic to the HTTP protocol. What do I mean by that? Well, I've been doing Web Forms for many years now, and even with much experience, I still find it hard to grok sometimes. The whole event-driven model simply doesn't mesh with the stateless request/response pattern of HTTP. When your framework explicitly divides handling the request from rendering the response, everything suddenly becomes much clearer.
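To make that concrete, here's the shape of a typical Monorail action. The controller, the action and the repository method are illustrative names rather than code lifted from Jobvertizer.

using Castle.MonoRail.Framework;

// Illustrative only: the action handles the request and puts data into the
// PropertyBag; Monorail then renders the response from the matching view
// (Views/Job/Show.vm by convention).
public class JobController : SmartDispatcherController
{
    readonly IRepository<Job> jobRepository;

    public JobController(IRepository<Job> jobRepository)
    {
        this.jobRepository = jobRepository;
    }

    public void Show(int id)
    {
        // GetById is a made-up repository method, not Monorail API.
        PropertyBag["job"] = jobRepository.GetById(id);
    }
}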

The amount of code I've had to write for this application is only a fraction of what a similar Web Forms project would have required. It almost seems that at every step there's a convenient shortcut. I especially like the data binding and form helpers; they make it a cinch to rapidly develop CRUD interfaces. Using ActiveRecord and NHibernate means that I've rarely even opened SQL Management Studio. I haven't had to write a single line of T-SQL or create a single table. Being able to take all the data access stuff for granted is a major productivity win. I've also used scaffolding for much of the admin interface, and while this wouldn't be suitable for an end-user-facing application, as a quick win to provide admin access to lookup tables, it's fantastic.
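Continuing the illustrative JobController above (so again a sketch, not an excerpt from the real code), this is roughly what I mean: [DataBind] populates the Job instance from the posted form fields carrying the "job." prefix, which in turn come from FormHelper calls along the lines of $Form.TextField("job.Title") in the view.

// A sketch of a typical create/update action: no manual Request.Form plumbing.
public void Save([DataBind("job")] Job job)
{
    // jobRepository.Save is an illustrative repository method, not Monorail API.
    jobRepository.Save(job);
    RedirectToAction("list");
}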

What is there not to like? Well, I've gone along with ActiveRecord (with a few changes, like separating the model from the repository) and its "data objects", which are simply collections of properties with a one-to-one mapping to the database tables. Indeed, I've used the shortcut of generating my schema from my model. But I've noticed that not being able to write a fully encapsulated domain model with enforced business behavior does introduce more bugs than I would have liked. Not being sure that a model object I was playing with was properly formed has tripped me up several times.
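For anyone who hasn't seen Castle ActiveRecord, the kind of "data object" I'm describing looks roughly like this; a cut-down, illustrative class rather than one of Jobvertizer's real models.

using Castle.ActiveRecord;

// Illustrative only: each property maps one-to-one onto a column, and the
// schema can be generated straight from these attributes.
[ActiveRecord("Job")]
public class Job
{
    int id;
    string title;

    [PrimaryKey]
    public int Id
    {
        get { return id; }
        set { id = value; }
    }

    [Property]
    public string Title
    {
        get { return title; }
        set { title = value; }
    }
}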

I'm also not sure about the profusion of string literals and dictionary-based property bags. I'm very much a strongly typed kind of guy and it scares me when I know the application is relying on my ability to spell and remember what I've called things. This same unease extends to the NVelocity view engine. It's fine, it works, I haven't found any real gotchas and I've been able to do pretty much everything I wanted, but I'm so used to IntelliSense and a compiler checking everything for me that I can't help feeling nervous.

Finally there's the documentation. If you're a dyed-in-the-wool MS developer like me, you're probably not that used to working entirely in an open source framework. It's a very different experience from working with an MS toolkit. Firstly, it evolves rapidly. I find I'm constantly reading stuff about Monorail that's out of date. Stuff gets added and changed from month to month, and I've found the best way of finding out what's really going on is to look at the latest source. Of course that's fine if you know roughly what you're looking for, but when you need advice about basic approaches, it's awkward if the documentation you're reading shows a much older way of doing things. Secondly, the documentation that is there tends to be pretty basic: maybe a general outline of how to go about things, but very little in depth. But I must say that having the code to look at, especially the unit tests, makes a fantastic difference. Of course I've been using Reflector for years, but it's not quite the same. Microsoft's recent announcement that it's publishing the source code for the .NET Framework libraries is an effort to compete with this obvious benefit of open source software.

I'll be continuing to develop this application and learn more about Monorail as time allows. I'd like to tart it up a bit and get it publicly deployed next. Stay tuned!