Code the Town! Investing in the next generation of programmers in Austin, TX

Austin, TX is a hot-bed for technology.  You can find a user group for just about any technology and purpose meeting almost any day of the week.

And now there is a group at the intersection of giving back to the community and helping the next generation of programmers.  Code the Town is a group that does just that.  Clear Measure and other companies are sponsors of the group.  The official description is:

“This is a group for anyone interested in volunteering to teach Hour of Code https://hourofcode.com/us in the Austin and surrounding area school districts. The goal is to get community volunteers to give the age appropriate Hour of Code to every student at every grade level. We want to have our own community prepare students for a technology-based workforce. We also want to build a community of professionals and students that have a passion for coding and teaching. We want to begin the Hour of Code in the high schools first. High school students would then be prepared to teach the younger students.  Once this group has momentum, it will be able to form motivated teams and use software projects done for local non-profit organizations to not only reinvest in our community but also to help our youth gain experience in software engineering.  Whether you are a student, parent, educator, or software professional, please join our Meet Up! This will be fun! And it will have a profound impact on the next generation.”

The long-term vision is to create a sustainable community of professionals, educators, parents, and students that continually gives back to local community organizations through computers and technology while pulling the next generation of students into computer programming.
It all starts with some volunteers to teach students the basics of computer programming.  In the 1990s, the web changed the world.  Now we have hand-held smartphones and other devices (TVs, bathroom scales, etc.) that are connected to computer systems via the internet.  In the next decade, almost every machine will be connected to computer systems, and robotics will merge mechanical engineering with computer science.  Those who know how to write computer code will have a big advantage in a workforce where the divide between those who build and create and those who service what is created may grow even wider than it already is.
Code the Town will focus on introducing students to computer programming and then pull them together with their parents, their teachers, and willing community professionals to work on real software projects for local non-profits.  In this fashion, everyone gets something.  Everyone gives something, and everyone benefits.  If you are interested in this vision, please come to the first meeting of Code the Town by signing up for the Meetup group.

Agile Coaching: The daily stand-up meeting

I’m starting a new blog series based on my experiences doing agile coaching at clients.  Along with agile projects in .NET, my company also offers agile coaching and training.  Right now, the agile coaching practice consists of me, but I’m actively working on finding people to expand that practice.  I started doing this in March 2007, and since then, I’ve seen some of the same patterns repeated in very different businesses.  In this series of posts, entitled “Agile Coaching”, I’ll talk about some of the common solutions to the common problems I’m finding.  This first installment is about the daily stand-up meeting.

The daily stand-up meeting

At several clients, I’ve seen developers who aren’t co-located.  Many organizations value individual offices, and what I observed is that sometimes developers won’t communicate much day-to-day.  Perhaps there is a weekly development meeting where folks report on status.  I attended one of these, and one developer reported spending the entire last week on a single blocking issue.  A whole week!  I immediately recommended instituting a daily stand-up meeting.  This would give the development team a daily sync-up opportunity.  There are plenty of other things to improve, but a daily stand-up meeting is low-hanging fruit.  It is easy to implement and returns immediate gains.

What is it?

Every morning (I like 0830 or 0900), gather the development team in the same area.  That area could be a hallway, a meeting room or whatever space is available for standing.  No chairs allowed.  The meeting should be over in under 10 minutes.  The agenda:

  • What I accomplished yesterday
  • What I plan to accomplish today
  • What issues are blocking progress

Every person in the development team reports on the three items to the rest of the team.  This is not a report to management or the coach/scrummaster/project manager.  This is so each person has a clear understanding of what is going on.  When issues are exposed early, others can help resolve them quickly.  I recommend this practice be used in every software organization. 

Objects have dependencies. Methods don’t

For more counter-intuitive banter, subscribe to my feed:  http://feeds.feedburner.com/jeffreypalermo

There is quite a bit of talk lately about unit testing with the “extract and override” (or “inherit and override”) technique for stubbing out a method.  Methods don’t have dependencies, and methods aren’t dependencies.

Objects are dependencies, and objects have dependencies.  One object will depend on another object.  Objects are cohesive and present a contract of behavior so other objects know what behavior they can reliably leverage.  Because an object is cohesive, everything necessary to perform all its behavior can be passed into the object’s constructor at the beginning of the object’s life.  Then, that object can perform all of its operations.

In order to test a particular object, I might have to pass some fake objects (stubs, mocks, etc.) into the constructor, but the object under test doesn’t care as long as the type is satisfied.  If I can’t test an object by passing fakes into the constructor (and I can compromise sometimes by using setter injection), then the object is less than cohesive, and I should search the design of the object for an extra concern that is trying to get out.  Once I find the extra concern, I would separate it into its own object (separation of concerns) and make it an explicit dependency by requiring that it be passed into the constructor (dependency injection).
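Here is a minimal sketch of that shape (all type names here are hypothetical, purely for illustration):

public interface IPaymentGateway
{
    void Charge(decimal amount);
}

// A cohesive object: everything it needs arrives through the constructor.
public class OrderProcessor
{
    private readonly IPaymentGateway _gateway;

    public OrderProcessor(IPaymentGateway gateway)
    {
        _gateway = gateway;
    }

    public void Process(decimal orderTotal)
    {
        _gateway.Charge(orderTotal);
    }
}

// In a unit test, a fake satisfies the same contract.  The object under
// test neither knows nor cares that it is talking to a stub.
public class FakePaymentGateway : IPaymentGateway
{
    public decimal AmountCharged;

    public void Charge(decimal amount)
    {
        AmountCharged = amount;
    }
}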

Michael Feathers outlines this extract-and-override testing technique in his book, “Working Effectively with Legacy Code”, and I’ve used the technique many times to break nasty dependencies in legacy code in order to get it under test.  This is necessary because legacy code can’t be safely refactored until it has a safety net of test coverage.
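For reference, the technique looks roughly like this (hypothetical names): the hard-wired dependency is extracted into a virtual method, and a testing subclass overrides that seam.

public class InvoiceService
{
    public decimal CalculateLateFee(int invoiceId)
    {
        decimal balance = GetBalance(invoiceId); // the extracted seam
        return balance * 0.05m;
    }

    // The nasty dependency (imagine a direct database call) is extracted
    // into a protected virtual method so a subclass can replace it.
    protected virtual decimal GetBalance(int invoiceId)
    {
        throw new System.NotImplementedException("hits the database in production");
    }
}

// The testing subclass overrides the seam so the fee logic can be
// exercised in a unit test without the real dependency.
public class TestableInvoiceService : InvoiceService
{
    protected override decimal GetBalance(int invoiceId)
    {
        return 100m;
    }
}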

If your unit test has to know that another method on the same object is being called, then the unit test knows too much about the implementation of the object.  The unit test should be doing black-box testing on the object under test.  In other words, the unit test shouldn’t care what code is written or whether 1 method or 10,000 methods inside the object are actually called.  The unit test merely sets up explicit object dependencies and tests state, interaction, or both.

New drop of ASP.NET MVC Framework now available

My RSS feed:  http://feeds.feedburner.com/jeffreypalermo

You can download it here.  It’s public and announced at the MIX conference: a new release of the ASP.NET MVC Framework.  The license allows you to “go live” and use it in production.  You’ll have to uninstall the December CTP first.  The new install will only drop the assemblies in C:\Program Files\Microsoft ASP.NET MVC Preview 2\Assemblies, and then you’ll have to copy them over to your application and drop them in the “bin” folder.

We’ll be upgrading MvcContrib and CodeCampServer to work with the new bits soon.

From looking at the bits, here are some notable changes:

  • Constructor of Route now takes more information (GOOD)
  • There are IDictionary arguments for many things, and that is a GOOD thing.  Before, anonymous types were required.
    I like the way MonoRail handles this by allowing querystring syntax like: RedirectToAction("foo", new string[] {"orderid=" + theOrder.Id});
  • [ControllerAction] is gone.  Public methods are actions by default. (GOOD)
  • RenderView() methods are still protected and only 1 of them is virtual (would like IViewEngine to be used more explicitly)
  • 6 (SIX) members of the Controller class are still marked internal (I’d like to be able to extend them)
  • RouteValueDictionary is just a wrapper for the anonymous type.  (I think we can work towards a better API)
  • Seven properties don’t have setters, such as IPrincipal User {get;} (I’d like to see setters)
  • I still see SEALED classes.  In an extensible framework, sealed is your enemy (I’ll be making note of the sealed classes I’d like to extend)
  • ViewContext is not usable in unit test scenario because of its dependency on HttpContextBase. (It can be refactored to help testability)
    Still can’t mock out the IViewEngine’s RenderView method and have it work in a unit test.  I’m told the team is tackling this next.
  • ComponentController is a welcome addition to enable nested controllers, but, sadly, none of the members are virtual.
  • Lots of view helpers (GOOD)
  • Routing is a separate assembly (GOOD)
  • System.Web.Mvc.dll can be deployed in the bin instead of the GAC (GOOD)
  • We can go live with this drop (and it appears to me to be stable enough for small applications)
  • The team is committed to roughly six weeks between drops (GOOD).  Release early.  Release often.

How to write stories _before_ the project kicks off

Recently, I was coaching a client who was embarking on a new project.  The project team was not formed, but key stakeholders were planning what the project needed to be.  In Agile (specifically our Lean/Scrum/XP mix), we have iteration planning, release planning, and project planning.  This client is doing project planning, and the question came up about how to write stories for the project.  After all, we have to know the entire scope of the project in order to plan it, right?  Historical trends tell us that even if we think we know the full scope of the project, we’ll be wrong, so I helped this client identify the key components of the project to formulate the high-level project scope.

If you like this post, you can subscribe to my feed at http://feeds.feedburner.com/jeffreypalermo

After we understood what the high-level requirements of the new software would be, we did a prioritization and created a backlog that we hoped would turn into a short-term first release.  With this rough release plan in mind, we did a small story-writing workshop.  We first wrote some stories together and then broke off into pairs to complete the remaining stories.  We concentrated on only the first release because that release contained the highest priority items (from the prioritized backlog).  We knew we were working on the highest value items first. 

Here is an example of a story (adapted from CodeCampServer as a realistic example):

"As an organizer I want to have a place to enter maps and driving
directions so that the attendees can find the event."

First, we have the persona of the organizer.  The organizer is in charge of the event and is doing the planning.  Often it’s helpful to give him a name.  We’ll call him Oscar the organizer.  Then, we can discuss what Oscar cares about to get a feel for how he’ll use the system. 

Your first thought is that this isn’t enough detail to formulate a release plan, and you’re right, but it’s a placeholder for a conversation with the team that’s going to make it happen.  I think a requirements document is also necessary.  Writing stories doesn’t take away the need to think about some level of detail while planning the overall project.  After all, when estimating the release, it’s necessary to know if driving directions merely means a link to Google Maps or if it means an interactive mapping system built into the application. 

Be careful about adding too much detail to the story because then people won’t read it.  If every story is a full page, it’s too long.  Stories can act as headers in a requirements document, however, if you prefer to organize it that way.

Ok, buddy, but how will we know what the acceptance conditions are?  You need to know the high-level acceptance conditions, but I caution against laboring too much on small details of each story before the project has even kicked off.  You’ll paralyze yourself with analysis.  Define just enough detail to move to the next step, release planning.  By the time you move through iteration planning, and the team has asked all their questions, all the details will be fleshed out, and the team can turn the story into working software. 

Finally, get help from someone with Agile experience.  I’ve seen many people who have studied Lean/Scrum/XP struggle to move from use cases to stories with only academic knowledge.  There is no replacement for apprenticeship.

Introducing the SmartBag for ASP.NET MVC. . . and soliciting feedback

To get objects from an ASP.NET MVC controller to a view, you put the objects in a dictionary of sorts called ViewData.  ViewData is sort of a misnomer because it’s not meant to contain data at all.  It’s meant to contain objects.   By the way, if you like this post, you can subscribe to my RSS at http://feeds.feedburner.com/jeffreypalermo.

There are some challenges with ViewData as it stands now:

  • A key is required for every object, both in the controller and view.
    ViewData.Add("conference", conference);
  • A cast is required to pull an object out by key.
    (ScheduledConference)ViewData["conference"]
  • The ViewPage<T> solution discards the valuable flexibility of the object bag being passed to the view.
    <%=ViewData.DaysUntilStart.ToString() %> where ViewData is of type ScheduledConference

Facts (or my strong opinions):

  • Repeated keys from controller and view increase the chance of typos and runtime errors.
  • Casting every extraction of an object in the view is annoying.
  • Strong-typing ViewPage only works for trivial scenarios.
    – For instance, suppose once logged in, every view will need the currently logged-in user.  Perhaps the user name is displayed at the top right of the screen in the layout (master page).  Since the layout shares the ViewData with the page, we immediately have the need for a flexible container that supports multiple objects.  A strongly typed ViewPage<T> won’t work without an elaborate hierarchy of presentation objects that are themselves flexible object containers able to support everything needed.  Once you get there, you are almost back to the initial dictionary.

My proposed draft solution, the SmartBag:

  • The SmartBag is currently only in CodeCampServer, but when it has proven its worth, I’ll move it into MvcContrib.  It’s used in the ConferenceController and the ShowSchedule view.  You can check out CodeCampServer at http://codecampserver.org.
public class SmartBag : Hashtable
{
    public T Get<T>()
    {
        return (T)Get(typeof(T));
    }

    public object Get(Type type)
    {
        if (!ContainsKey(type))
        {
            string message = string.Format("No object exists that is of type '{0}'.", type);
            throw new ArgumentException(message);
        }

        return this[type];
    }

    public void Add(object anObject)
    {
        Type type = anObject.GetType();
        if (ContainsKey(type))
        {
            string message = string.Format("You can only add one default object for type '{0}'.", type);
            throw new ArgumentException(message);
        }

        Add(type, anObject);
    }

    public bool Contains<T>()
    {
        return ContainsKey(typeof(T));
    }
}
  • The SmartBag implements IDictionary through Hashtable, so it’s flexible, just like ViewData, but it has a powerful convention: if you are passing one object of a given type to the view, we don’t need a key because the SmartBag can use the type as the key.  If you really need a Top conference and a Bottom conference, then you are back to using your keys, but for 80% of cases (a statistic I pulled out of my behind), this works.
  • If we introduce a layer supertype for our views, then we can just use it:

    public class ViewBase : ViewPage<SmartBag>
    {
    }

  • Then, all my views inherit from ViewBase (and, through it, ViewPage<SmartBag>).  A usage sketch follows below.
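For illustration, here is roughly how a controller and a view might use it (the surrounding names are hypothetical; the SmartBag calls are the ones defined above):

// Controller side: no string key needed; the object's type is the key.
var bag = new SmartBag();
bag.Add(scheduledConference);    // stored under typeof(ScheduledConference)
RenderView("ShowSchedule", bag);

// View side (a view inheriting from ViewBase): no cast, no magic string.
ScheduledConference conference = ViewData.Get<ScheduledConference>();
Response.Write(conference.DaysUntilStart);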

I’d appreciate feedback on this solution.  To me, it seems to be a good foundation and convention, but I’m sure it can be improved.  If this solution doesn’t prove to be worthless, I’ll port it to MvcContrib so it can be used by MvcContrib clients.

Easily extend post-commit hook in Subversion using NAnt – email anyone?

There are several scripts floating around the Net regarding publishing an email to the development list every time someone commits a revision to Subversion.

In other cases, you may want to post the commit log entry directly to your project tracking system, such as trac, Rally, Gemini, Mingle, etc.  Anything under the sun can be coded into a post-commit action with Subversion.

Here’s how it works, and how to send post-commit email from Subversion while running on Windows:

In your subversion repository, there is a hooks folder.  This contains some template files for you.  On a Windows system, you’ll want a file called post-commit.bat.  This batch command will be run immediately after Subversion commits a transactional revision to the repository.  You can do work on other events, but I’ll focus on post-commit for this post.

Here is my post-commit.bat file that will call NAnt (which is placed inside the hooks folder for simplicity):


pushd .
cd DRIVELETTER:\svn\repositories\repositoryname\hooks
nant\nant.exe -buildfile:postcommitemail.build -D:path.repository=%1 -D:revision=%2 > lastpostcommitrun.txt
popd


Notice how we are merely calling nant.exe.  The rest of the interesting work is done by NAnt.  Now that we are within NAnt, we can script out any action we might need.  In this case, I’m going to use some SVN command-line tools to build up a simple email that will send developers what was included in the last commit:


<?xml version="1.0" encoding="utf-8"?>
<project name="commit" default="build" xmlns="http://nant.sf.net/release/0.85/nant.xsd">
    <target name="build">

        <exec program="svnlook" commandline="author ${path.repository} -r ${revision}" output="author.txt"/>        
        <exec program="svnlook" commandline="info ${path.repository} -r ${revision}" output="message.txt"/>
        <echo message=" " file="message.txt" append="true"/>
        <exec program="svnlook" commandline="changed ${path.repository} -r ${revision}" output="message.txt" append="true" />
        <exec program="svnlook" commandline="diff ${path.repository} -r ${revision} --no-diff-deleted --no-diff-added" output="message.txt" append="true"/>
        
        <loadfile file="author.txt" property="author"/>
        <loadfile file="message.txt" property="message"/>
        <mail tolist="jeffrey@mydomain.com"
            from="build@mydomain.com"
            subject="SVN ${path.repository} - ${author}"
            message="${message}"
            mailhost="smtp.mydomain.com"/>
    </target>

</project>


That’s all it takes.  I can write anything I need to in order to perform custom actions when things happen with my Subversion repository.  In this case, I’m sending email to a list, and the email contains the last commit.

Tips for immediately using R# 3.0.2 with VS 2008

There is a bit of confusion out there because R# 3.0.2 doesn’t immediately support code analysis and IntelliSense for the new C# 3.0 features like LINQ queries.  What I’ve heard is that in Q1 2008, we’ll have an R# that supports all the new stuff.  While it’s a shame that it’s not ready to go immediately (because VS 2008 RTM is available as of this week), we can still use R# 3.0.2 with VS 2008 to get most of the value and ease of use we’ve come to expect from a productivity add-in.  Here are the steps:

First, the big problem is that when we start using new features in VS 2008, R# doesn’t cooperate because it doesn’t know about the new syntax that’s now available.  The on-the-fly code analysis goes haywire as depicted in the screenshot below:

Not only does the code analysis not work properly, but the R# IntelliSense doesn’t show the proper options for extension methods.  We’ll go to ReSharper > Options and change two settings.  First, let’s turn off R# code analysis:

Next, we’ll tell R# to let VS give us IntelliSense, since VS knows the full set of options:

Now, if we look back at the code, all the red goes away, and our intellisense window has full support, like the “where” keyword in the LINQ query below:

With these two changes, we get rid of the immediate annoyances, but we can continue to leverage all the other great features R# gives us, like file/type navigation, NAnt/NUnit support, etc.

If you have any more tips that would be helpful during the interim for using R# 3.0.2 with VS 2008 until R# 4 comes out, please post a comment.

In software consulting, low cost consulting can be real. . .

. . .or it can be a lie.  In my last post, Evan commented that low cost only goes so far.  I see where he is coming from.  Let me illustrate.  If a consultant puts together a system very quickly and doesn’t pay attention to the quality and structure of the system, he can deliver quickly in the short term.  The client initially pays less money, and he is happy.  What actually happened was that the client received a lemon.  The software meets the initial requirements, but it is handicapped by intolerance to change (which is inevitable).  There are probably cobwebs lurking in every corner.  The same consultant will probably be asked to add features at a later time “because of the great job he did”, but the second revision will cost significantly more because he will be faced with the poor decisions made in the past.  The second project is unlikely to be as inexpensive as the first, and it is very likely to be bug-ridden and unstable.  The initial software delivered was a pig with lipstick on it.  I compare it to a bridge that works great at first but starts to crack and shake as the years go by.

The above is not providing the lowest cost to the client.  It is front-loading cost savings only to defer the actual bill to a later time.  Credit.  Technical debt.  The first release incurred technical debt the client will ultimately have to pay back.

This is not my model for consulting, and in this post, I hope to educate you on real low cost consulting.

What is real low-cost consulting, and how do we maximize ROI for the client?

First, we want repeat clients.  We want to serve the client so well that they come back for the next project.  That doesn’t happen if all we deliver is technical debt.  Here is the plan.  First, we start the project with very high standards.  No bugs.  We don’t plan for bugs.  We don’t allocate time at the end of the project for bug fixes.  In fact, we give a warranty for the software.  If there is a bug found later, it’s covered under warranty.  We keep the software bug-free at all times.  When a bug is found (because, yes, it happens), it becomes the highest priority.  We fix bugs right away.  If a requirement/use case/story is found to be flawed or lacking analysis, then that is a separate issue that’s taken up with the client to find an acceptable course of action.  We are able to achieve this by building quality in.  All the fundamental principles come into play: separation of concerns, cohesion, depending on abstractions, etc.  We use extensive automated testing throughout the project, along with continuous integration, to provide rapid feedback that everything that works continues to work.  After all, after the first feature is added, that feature is in maintenance mode from then on.  By concentrating on maintainability, the software serves the client well, both now and in the future.

But doesn’t that end up costing MORE?

No.  On the surface, it might seem so since our standards are so much higher, but the end result is a lower cost and, consequently, a higher ROI for the client.  Let’s examine how that works.  First, we ensure we are building the right thing.  That eliminates the waste of building the wrong thing.  Next, we start testing immediately, not at the end of the project.  That means bugs are found immediately, not right before release.  When bugs are found and fixed immediately, there are never more than a handful of outstanding bugs, and many days there are none.  The software always works as expected.  Next, the architecture is of the highest quality, and it evolves over time.  With refactoring, the software is always well-suited to the problem.  The code depends on abstractions and is therefore resilient to change.  New types can be added to the system for new features, and working code doesn’t need to be modified (the open/closed principle; see the sketch below).  There are so many interlocking practices that go into achieving this that it is hard to list them all.  We keep the build fast and clean so it provides valuable feedback quickly.  With quick feedback, we can course-correct to ensure we are always on the right track.
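To make the open/closed point concrete, here is a minimal sketch (all names are hypothetical): a new requirement arrives as a new type, and the code that already works is never edited.

using System.Collections.Generic;

public interface IDiscountRule
{
    decimal Apply(decimal price);
}

// Working code depends only on the abstraction and never changes.
public class PricingService
{
    private readonly IEnumerable<IDiscountRule> _rules;

    public PricingService(IEnumerable<IDiscountRule> rules)
    {
        _rules = rules;
    }

    public decimal Price(decimal basePrice)
    {
        decimal price = basePrice;
        foreach (IDiscountRule rule in _rules)
            price = rule.Apply(price);
        return price;
    }
}

// A new feature becomes a new type; PricingService is untouched.
public class HolidayDiscount : IDiscountRule
{
    public decimal Apply(decimal price)
    {
        return price * 0.9m;
    }
}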

End result

The end result is a system delivered at the lowest cost possible, not just for the first release, but when the client asks us back for a follow-on release (which they do), we are able to repeat the process and deliver a 2nd, 3rd revision with the same results as the first.

But why doesn’t everyone do it that way?

It’s hard.  Very hard.  It’s hard to get the quality people who can make it happen, and it’s hard to find management with the experience to make it happen.

I don’t claim to have all the answers (because I don’t), and we are constantly improving over time, but I can say that delivering the highest in quality also ends up costing the least. 

altnetconf – Scott Guthrie announces ASP.NET MVC framework at Alt.Net Conf


Note:  Much more MVC information coming.  Subscribe to my feed: http://feeds.feedburner.com/jeffreypalermo

Scott Guthrie proposed a topic at the Alt.Net Conference today, and the topic was an overview of the MVC Framework his team is working on.  His topic is actually the first timeslot of the conference at 9:30am tomorrow morning.  Just about everyone showed interest, so I wouldn’t be surprised to see most of the folks just listening.

Scott and I had supper after the opening, and I received a personal demo of the prototype.  First, here are some of the goals:

  • Natively support TDD model for controllers.
  • Provide ASPX (without viewstate or postbacks) as a view engine
  • Provide a hook for other view engines from MonoRail, etc.
  • Support IoC containers for controller creation and DI on the controllers
  • Provide complete control over URLs and navigation
  • Be pluggable throughout
  • Separation of concerns
  • Integrate nicely within ASP.NET
  • Support static as well as dynamic languages

I’m sure I missed some of the goals, and more people will blog their takeaways since this all is public information.

The first question might be: Is webforms going away?  Do I have to rewrite my web applications?  Some people might wish, but no.  Both models will be supported and even be supported within the same web application.  I, for one, after seeing this, think it is very good, and my company will be recommending it to our clients.

We might get a public CTP by the end of the year, and it will be released in a similar fashion as ASP.NET AJAX was, as an add-on after the Visual Studio 2008 release some time next year.

URLs

The default URL scheme will look something like this:

/<RouteName>/<Action>/<Param1>/<Param2>

where RouteName is configured to map to SomeController.  Multiple routes can map to the same controller for the purpose of providing more URLs (think SEO).
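For example (a hypothetical mapping), a request for /conference/edit/5 would be handled by whatever controller the “conference” route name maps to, invoking its Edit action with 5 as the first parameter.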

The developer can completely override the URL processing also by providing an implementation of an interface. 

Controllers

Controllers will inherit from a base class by default, but it doesn’t hinder unit testing, and it’s not even required.  I’ll probably use the option of implementing the IController interface instead and creating a controller factory to support controller creation using my IoC container of choice (currently StructureMap).  In this manner, I implement an interface with one method that accepts IHttpContext (yep, we have an interface now), and RouteData, a simple DTO that includes the action and parameters for the web request (parsed from the URL scheme you are using).
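Based purely on that description, here is a sketch of what such a controller might look like (the member names, the repository type, and the exact interface shape are my assumptions, not the actual CTP API):

// Hypothetical sketch only; the real interface members may differ.
public class ConferenceController : IController
{
    // Dependencies arrive via the constructor so an IoC container
    // (StructureMap, in my case) can build the controller.
    private readonly IConferenceRepository _repository;

    public ConferenceController(IConferenceRepository repository)
    {
        _repository = repository;
    }

    // The single method described above: it receives the abstracted
    // HTTP context and the route data parsed from the URL.
    public void Execute(IHttpContext httpContext, RouteData routeData)
    {
        // dispatch on routeData.Action, load data, render a view, etc.
    }
}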

Views

Like I said before, NVelocity, Brail, etc. can be bolted on as view engines, and ASPX is provided as a view engine (the only thing that has changed is that the code-behind will inherit from ViewPage as opposed to the current Page).  Controllers can either use IViewEngine (I think that’s the name) to request a view by name (key) or use a helper method on the optional controller base class, RenderView(string, viewData).  The default model uses a DTO for the objects passed to the view, so it is similar to MonoRail’s property bag, except it’s a strongly-typed DTO (using generics for that), so when you rename stuff with ReSharper, you don’t have to worry about any string literals lying around.
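As an illustration of that RenderView style (the controller, action, and DTO names here are hypothetical):

// Hypothetical sketch: the optional base class provides RenderView,
// and the view data is a strongly-typed DTO rather than a loose bag.
public class ScheduleData
{
    public int DaysUntilStart;
}

public class ScheduleController : Controller
{
    public void ShowSchedule()
    {
        var data = new ScheduleData { DaysUntilStart = 30 };
        RenderView("ShowSchedule", data); // view name as key, DTO as data
    }
}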

My impressions

I first saw the early prototype in March of this year, and I was encouraged.  I was able to give some early feedback, which has already been incorporated into the product.  I’m not one to promote a Microsoft offering just because it’s there (I have never recommended the use of MSTest over NUnit, for instance), but I will say this: as soon as I can get my hands on a build, I will be building something with it.  I am very encouraged by this, and I think they are going in the right direction.  While they have chosen a model to use for demos, they have broken down the walls.  Interfaces abound, and none of it is sealed.  I will start by swapping out the controller factory so I can get my IoC container in the mix, and it’s easy to do.  For testing, there is no coupling of the controller.  The views are decoupled.  The HttpContext is decoupled with the new IHttpContext interface.  The actions are simple public methods with an attribute attached to them ([ControllerAction], I think).

Isn’t it just like MonoRail?

Someone using MonoRail for more serious projects than mine can comment more intelligently, but here goes.  MonoRail is MVC.  This is MVC, so yes, it’s very similar, but different.  This gives us a controller that executes before a view ever comes into play, and it simplifies ASPX as a view engine by getting rid of viewstate and server-side postbacks with the event lifecycle.  That’s about it.  MonoRail is much more.  MonoRail has tight integration with Windsor, ActiveRecord, and several view engines.  MonoRail is more than just the MVC part.  I wouldn’t be surprised if MonoRail were refactored to take advantage of the ASP.NET MVC HttpHandler just as a means to reduce the codebase a bit.  I think it would be a very easy move, and it would probably encourage MonoRail adoption (even beyond its current popularity).


[tags:  altnetconf, asp.net mvc]