Is Classic WebForms More Mature Than ASP.NET MVC?

In my last post about ASP.NET MVC, Jeff Gonzalez referred to WebForms as “Classic ASP.NET”.  I had to take notice because when ASP.NET came out, we spoke about “Classic ASP”.  The parallel alone is interesting.

His comment goes on to argue that ASP.NET MVC isn’t mature and not as “rich” as WebForms, and it also mentions the risks of using beta software for enterprise development.

My comments here relate to the risks of using new, unproven software.  SQL Server 2008 is still not widely adopted even though it is fully supported by Microsoft, because folks are very leery about upgrading something as critical as a database server.  The same goes for server operating systems.  A hoster I like very much, Rackspace, still promotes and sells many, many Windows Server 2003 licenses (including some for my clients).

Software tools, libraries, and frameworks are a bit different, however.  First, when you start a new project, it is wise to consider which versions of your libraries will be supported when the project goes into production.  To some extent, it pays to avoid being a version behind before the software even launches.  That is the case with ASP.NET MVC for me.  I know where the framework is going, and I trust the reliability of the proven ASP.NET pipeline of handlers and modules.  I also trust the stability of IIS, on which it runs.  The new paradigm only changes what happens once ASP.NET hands a request to a handler; that is where the development change lives.  For me, this mitigates the risk around scalability and robustness because the foundation is already proven by 8 years of production use.
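The pipeline point can be made concrete.  In ASP.NET MVC, routing is registered at application startup, and a matched request is dispatched to MvcHandler, which is an ordinary IHttpHandler riding the same pipeline WebForms rides.  A minimal sketch of the RC-era registration (the route pattern and defaults are illustrative, not from any particular project):

```csharp
// Global.asax.cs -- a sketch of ASP.NET MVC RC route registration.
// Route names and defaults here are illustrative.
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Each matched route hands the request to MvcHandler,
        // a plain IHttpHandler running in the proven ASP.NET pipeline.
        routes.MapRoute(
            "Default",                         // route name
            "{controller}/{action}/{id}",      // URL pattern
            new { controller = "Home", action = "Index", id = "" });
    }

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
    }
}
```

Everything below the routing layer is the same handler/module machinery that has been in production since ASP.NET 1.0.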

The other risk is around bugs.  When your enterprise application goes into production, you want to be able to depend on your chosen libraries.  If your test feedback cycle is long and expensive, there is no easy way to accomplish this.  If, however, you have test automation, you can have high confidence that the software works as intended every step of the way.  Regardless of whether the libraries in question come from a huge software vendor like Microsoft or a single open-source developer, I consider high test-automation coverage a must to mitigate not only the risk of local software defects but also the risk of defects within the libraries on which your software depends.

Another consideration around maturity is documentation, and this is nothing to sneeze at.  With 6 books on ASP.NET MVC slated to come out this year, the information will be there, but it is not there now, and many teams will need training on how to build pages with the new version of ASP.NET.  This is definitely something to consider.  Also consider that many projects successfully use log4net, NHibernate, StructureMap, and other open-source libraries without full books written on the topic.  In those cases, local documentation is sufficient.  As always, you have to decide for yourself given your immediate constraints.

ASP.NET MVC RC Released!

The new version of ASP.NET is very, very close.  Scott Guthrie just announced that the RC is publicly available.  I’m pumped about this release not only because of my book, but also because this release makes delivering with ASP.NET sooooooooooooo much easier than WebForms.  One piece of functionality that is not talked about much is the ability for a partial view to use a layout (master page).  That’s right, folks.  You can have a partial that displays only a snippet of a shopping cart, and that partial can use a master page.  The resulting markup then ends up inside the main view, which may also be using a master page.  All in all, we just use ".aspx" views.  We don’t even bother with ".ascx" views since they provide no additional advantage, and they don’t work with master pages.
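To sketch what that looks like, here is a hedged example (all file and placeholder names are invented): a partial implemented as a full ".aspx" view so it can reference its own master page.

```aspx
<%-- ShoppingCartSnippet.aspx -- a partial written as a full .aspx view
     so it can use a master page; names here are illustrative. --%>
<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Snippet.Master"
    Inherits="System.Web.Mvc.ViewPage" %>
<asp:Content ContentPlaceHolderID="SnippetContent" runat="server">
    <div class="cart-summary">3 items in your cart</div>
</asp:Content>
```

The parent view then pulls it in with something like `<% Html.RenderPartial("ShoppingCartSnippet"); %>`, and the snippet’s markup, wrapped by its own master page, lands inside the main view.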

Great job to the ASP.NET MVC team. 

Prediction:  ASP.NET MVC will supersede WebForms for new project development within 2 years’ time.  WebForms isn’t going away by any stretch, but for new projects, I think MVC is going to edge it out, even if just 51% to 49% of new ASP.NET projects.

Practical Agile Is Not Just Notecards, Flowers, and Fairies

In January, I had the privilege of participating in a panel at Agile Austin with executives leading other software organizations around Austin.  David Anderson also dropped in while he was in Austin.

Throughout the question-and-answer session, one theme emerged: the audience wasn’t clear on what agile is and what it is not.  So many folks have attended Scrum training that I think I can comment on what is being taught (caveat: each Scrum trainer does it a bit differently).

The most popular agile training is Scrum training.  In Scrum training, there is a lot of talk about the development team, the product owner, and the scrum master.  There isn’t, however, much talk about management.  The scrum master is not a manager; the role is intentionally defined as a facilitator who must facilitate without direct authority.  Every organization has management, yet Scrum doesn’t really speak to it, and I think that is a big gap.  Extreme Programming doesn’t speak to management either, but then it doesn’t pretend to be a project management methodology.

Agile is all about delivering frequently and refining as you go.  Agile doesn’t prescribe a set of dogmatic practices that you must adhere to.  Every agile team will have a unique process because each has different business goals and different people.  The team needs process.  Larger projects need larger processes.  Whole Foods is a very agile organization, and it has tons of process to make it all work seamlessly.  I have encountered some who feel that agile teams have to be light on process.  I believe the team needs just enough process, and that’s probably a bit more process than the average “Certified ScrumMaster” would think.

By now, we have all heard stories about the “agile” teams who adopted scrum and then subsequently burned through a year’s worth of sprints and still delivered nothing.  The magic fairy dust of scrum is wearing off, and the industry is starting to realize that in any given project, success depends on the good and passionate work of the people involved.

All in all, it doesn’t matter where good ideas come from:  waterfall, scrum, extreme programming, lean, FDD, or the next methodology that comes along.  Good ideas are good ideas, and I strive to find them wherever I can.

Practical agile software delivery is hard.  HARD.  It requires confronting problems head-on, even when those problems are organizational.  Short iterations expose the problems.  Solving the problems makes them go away.  The absence of problems leads to effectiveness.  Rinse, repeat.  A project will succeed or fail based on the people involved, not the “official methodology” used in the status report.

Points For Stories and the Perplexing Nature of Estimating Software

For four years, I’ve been using the point system for estimating software effort.  This post is an attempt to convey all the variables involved in estimating software effort.  I’ll also touch on measuring the effectiveness of a software project.

First, as a manager, I want to know if my software organization is being effective.  Furthermore, I want to be able to measure each team in my organization and do some comparison between them.  What is the metric by which I should measure them?  Also, how do I measure and communicate improvement or degradation in effectiveness?

Many people have wrestled with this, and I am just one of them.  I reached the conclusion that points cannot be used as a metric of effectiveness, and I’ll illustrate why.

A “point”, as used in software estimation, is a measure of effort.  It is a relative measure of effort, and its context is entirely within a single development team.  One team’s point is not the same size as another team’s point.  Because a point is relative within a single team, it can only be used to infer information about that team.

It is tempting to use a point as a measure of value delivered.  As a manager, I want to know how much value is being delivered because my goal is to maximize the value delivered within a given iteration.  I’m still left with the problem of how to measure value.  This post will not answer that question since that is a very holistic topic and not isolated to software development.

Sometimes I would like a point to be a measure of value.  If it were, then I could just mandate more points delivered.  As a manager, I can do that: I can mandate that the team deliver more points per sprint.  The team, in self-defense against a non-actionable request, will deliver more points in the sprint. . . but I will see the project progressing at the same rate.  Because points are relative and not rooted in anything concrete, their size is controlled by the team using them.

So I’m left without a measure of value.  All I have is estimates of effort.  

If I accept that, I still want some way of knowing whether the team is speeding up or slowing down over time.  This industry is plagued with teams that slow down.  As defect lists grow, the team seems to slow to a crawl.  If I’m rigorous about automation, I can keep my teams from suffering that fate (I’ve proven that).  However, I still want to know how much my teams are speeding up over time.  I know from past experience that agile engineering practices cause software development to accelerate over time rather than slow down; but how can I quantify that?

I could turn to charting points delivered per iteration.  If I measure that over many months, shouldn’t I see an upward trend?  That would seem only logical, right?  As the team finds better ways to do things. . . as the team forms solid standards around the project and refactors common functionality into standard components within the software. . . shouldn’t the team be able to deliver more software each iteration?  The answer is YES, but oddly enough, the number of points stays mostly flat over time.

How can the above be true if the team is actually speeding up?  Because a point is a measure of effort.  As time goes by, the team finds ways to make delivering similar functionality easier.  Because the standards and componentization make things easier, the effort required to deliver a similar feature actually decreases, so the team estimates it lower than before.  At the end of the iteration, the team has delivered more, but my graph of points stays flat.
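A hypothetical set of numbers (invented purely for illustration) shows the flat graph:

```csharp
// Invented numbers: the same kind of feature, estimated early and late in a project.

// Early iterations: a typical feature feels like 3 points,
// and the team finishes 10 of them per iteration.
double earlyVelocity = 10 * 3.0;    // 30 points

// Later iterations: standards and shared components have halved the real effort,
// so the same kind of feature now feels like 1.5 points --
// and the team finishes 20 of them.
double lateVelocity = 20 * 1.5;     // still 30 points

// Twice the functionality delivered; identical points-per-iteration graph.
```

The velocity chart shows 30 points in both cases, even though the later team is delivering twice as much.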

This can be very frustrating, since I’m still grasping for my metric.  I need a graph.  I need a report.  I need statistics about how well my software teams are doing.  But if a point is a measure of effort, and I have finally accepted this, then the graph will stay flat over time.

The whole point of this post is that worrying about points delivered is futile in the big picture.  Because velocity is so consistent, it is a great predictor, but it is not a number you can drive upward.  Points remaining to the next milestone is a better measure because it is at least expressed in the same effort units.  When the effort per unit of value decreases, the points remaining to the next milestone also decrease.

This whole scenario is a bit frustrating, but if you, dear reader, have found the magic metric for measuring project effectiveness, please let me know in the comments.

Mitch Fincher blogs about the Agile Boot Camp, part 1

Mitch Fincher was a student in our Agile Boot Camp, part 1, taught by Matt Hinze, CodeCampServer and MvcContrib committer.  Mitch gives a good student’s perspective on the curriculum.  For lots of folks, all the buzzwords sound like:

“Blah, Blah, Blah, Dependency Injection, Blah, Blah, Blah, Program to Abstractions, Blah, Blah, Blah, RhinoMocks, Blah, Blah, Blah, StructureMap, Blah, Blah, Blah, Inversion of Control.”

I think this is pretty common, and I think Matt did a good job of explaining how to use these concepts when coming from a place where none are used.

Party with Palermo – March 1, 2009 – RSVP Now

I put up the website last week, and 25 people have already found it and RSVPed, even before this announcement.  They must have been using Google Alerts to notify them by email whenever a website popped up on the Internet with their chosen keywords.

If you always want to be kept up-to-date with all things surrounding Party with Palermo, please sign up for my email newsletter.


Party with Palermo: Alt.Net/MVP Summit 2009 Edition

March 1, 2009 – Seattle, WA – 7:00PM – 10:00PM

731 Westlake Ave N, Seattle, WA 98109
Ph: 206-223-0300

Cover charge is 1 business card.  This will get you in the door and register you for the grand prize drawings.

  • Free to attend
  • Free finger food
  • Free drink
  • Free swag

Jimmy Bogard Spawns His AutoMapper OOM (object-object mapper)

Although all we have to go on is his Twitter announcement, Jimmy Bogard has put the project up on CodePlex.  It is version 0.1, so prepare to bleed if you want to use it (bleeding edge), but this library tries to fill a gap amid the object-relational mapper explosion.  It is an object-object mapping library.  It is very opinionated and is intended to be used with domain objects on one side of the mapping and DTOs on the other.  It is heavy on convention, and the DTOs will have to conform a bit.  I’ve used it, and the library is solid.  Here’s a clip from the website:

A convention-based object-object mapper.
AutoMapper uses a fluent configuration API to define an object-object mapping strategy. AutoMapper uses a convention-based matching algorithm to match up source to destination values. Currently, AutoMapper is geared towards model projection scenarios to flatten complex object models to DTOs and other simple objects, whose design is better suited for serialization, communication, messaging, or simply an anti-corruption layer between the domain and application layer.
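If memory serves from my experimentation, basic usage looks roughly like this (the domain types and DTO here are invented for the example; the static `Mapper` API is from the early releases and may change):

```csharp
using AutoMapper;

// Illustrative domain objects -- invented for this example.
public class Customer
{
    public string Name { get; set; }
}

public class Order
{
    public Customer Customer { get; set; }
}

// The DTO conforms to the flattening convention:
// Order.Customer.Name matches OrderDto.CustomerName.
public class OrderDto
{
    public string CustomerName { get; set; }
}

// Configure the mapping once at startup, then map anywhere.
Mapper.CreateMap<Order, OrderDto>();

var order = new Order { Customer = new Customer { Name = "Jimmy" } };
OrderDto dto = Mapper.Map<Order, OrderDto>(order);
// dto.CustomerName is now "Jimmy" -- no hand-written mapping code.
```

The convention does the heavy lifting; you only configure the exceptions.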

UPDATE:  Jimmy has blogged about the highlights of AutoMapper.

Giving my "TDD, DI, and SoC with ASP.NET MVC" talk at Houston TechFest this Saturday

I’m giving my “TDD, DI, and SoC with ASP.NET MVC” talk at Houston TechFest this Saturday.  This will be the 6th time I’ve given this talk.  I gave it the first time at Tech Ed 2008, when I pinch-hit for Phil Haack after he found out he wasn’t going to make it.

This talk has had great reviews from several user groups, such as ADNUG, TRINUG, NWANUG, SDNUG, and WINUG, and it is code-based, not PowerPoint-based.  One lucky audience member will also get to come up on stage and write some code during the presentation.

I will be using the almost-RC CodeCampServer code base as a platform to demonstrate these techniques.  Come by and say hi, and if you aren’t already registered, sign up for the Houston TechFest.

Title: TDD, DI, and SoC with ASP.NET MVC

Description: Spelled out, it is test-driven development, dependency injection, and separation of concerns with Active Server Pages .Net Model View Controller.  This talk will dive into how to design a presentation layer using ASP.NET MVC.  In today’s industry, TDD, DI, and SoC are proven concepts that lead to more maintainable applications.  Along with demonstrating how to use these techniques with ASP.NET MVC, we will discuss just what concerns should be separated.  This talk provides a unique perspective on separation of concerns and uses TDD and DI to make it happen.  MvcContrib is used in all the demos.

Presenter: Jeffrey Palermo is a software management consultant and the CTO of Headspring Systems in Austin, TX. Jeffrey specializes in Agile coaching and helps companies double the productivity of software teams. Jeffrey is an MCSD.Net , Microsoft MVP, Certified Scrummaster, Austin .Net User Group leader, INETA speaker, Christian, husband, father, motorcyclist, Eagle Scout, U.S. Army Veteran, and Texas A&M University graduate.

Subscribe to Jeffrey’s blog feed here:

UPDATE:  An unforeseen event has occurred, and I’ll have to miss TechFest.  Jimmy Bogard has agreed to give this talk for me.  He is very capable and is an expert on ASP.NET MVC.  He’s also a co-author with me on ASP.NET MVC in Action.

Headspring Expanding Again – all positions apply within

I previously posted a specific position, but this is a general call for interest.  I have so many positions opening up at Headspring Systems that I can’t list them all here.  In general, we do C#, .Net, Windows apps, web apps, services, etc.  I prefer contract-to-hire since it allows us both to try each other out.

I’m looking for analysts, testers, and programmers who want to be a part of this expanding company.  Now is the time to join and define your role.  We do 100% agile development using a lot of techniques discussed in Alt.Net circles.  We stay on the leading edge without bleeding (edge), and we give our people top-of-the-line hardware to work with.  I’m looking for people in Austin, TX; all work is done in Austin, face-to-face with clients.

We do custom software delivery, agile coaching, and agile developer training.  Please send a cover letter with your desired position and a resume to jobs [at]  If hired, you would report to me directly.

Here is our current stack of techniques/tools, but it evolves daily:

  • .Net 3.5/C#
  • SQL Server 2005
  • NHibernate
  • AutoMapper
  • Castle
  • DevExpress reports
  • Gallio
  • MvcContrib
  • Naak
  • NAnt
  • NBehave
  • NCover
  • NDepend
  • NUnit
  • ReSharper
  • RhinoMocks
  • StructureMap
  • Subversion
  • Tarantino
  • WatiN