Ron Jacobs discusses NHibernate on the MS Patterns and Practices Arcast – level 200

The latest Arcast from Ron Jacobs is about NHibernate, an open-source object-relational mapping tool.  This podcast was particularly interesting to me because my team is using NHibernate for a project, and we are likely to use it for most data access going forward.  The data access tier is the target of many debates.  My team has weighed the pros and cons, assessed our security and performance requirements, and we have decided that using NHibernate to automate the persistence and loading of our business objects is the direction we want to go.  It has saved so many developer hours of writing boring CRUD sprocs and SQL statements.  We use an XML file to map the properties of our business object to the database table, and we’re done.  We have a test-bed of integration tests to ensure that the mapping is correct.
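
To give an idea of what one of these mapping files looks like, here is a minimal sketch.  The Customer class, assembly name, and column names are hypothetical, and the schema namespace varies by NHibernate version, but the shape is the same:

<?xml version="1.0" encoding="utf-8" ?>
<!-- Customer.hbm.xml: maps the hypothetical Customer class to the Customer table -->
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.0">
  <class name="MyApp.Core.Customer, MyApp.Core" table="Customer">
    <id name="Id" column="CustomerId" type="Int32">
      <generator class="identity" />
    </id>
    <property name="Name" column="Name" type="String" length="100" />
    <property name="Email" column="Email" type="String" length="200" />
  </class>
</hibernate-mapping>

The business object itself is just a plain class whose properties match the mapping:

namespace MyApp.Core
{
    public class Customer
    {
        private int _id;
        private string _name;
        private string _email;

        public int Id { get { return _id; } set { _id = value; } }
        public string Name { get { return _name; } set { _name = value; } }
        public string Email { get { return _email; } set { _email = value; } }
    }
}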

Ron brought up an argument that some inside Microsoft have on OR mappers.  I’m paraphrasing, but this is the idea:  Developers may shoot themselves in the foot if Microsoft provides an OR mapper and endorses it. 

I’ll let that sink in.  I can’t remember a development tool that someone hasn’t managed to abuse.  Hmm.  That doesn’t seem to be a very strong argument.  You obviously don’t give an M1 Abrams tank to a novice, but in the hands of a trained professional soldier, it can be very effective.

Another point discussed was that you no longer have complete control over performance with NHibernate.  That’s true because you would be trusting the component to generate and run the SQL for you.  You obviously don’t use the exact same tool for every job, and it was mentioned that the Amazon.com(s) of the world would need more control over data access than most enterprise applications.  Most internal enterprise applications only have a few hundred users (if that).  What performance do they really need?  Now, NHibernate is NOT slow, but if you need to go 400 MPH instead of a measly 397 MPH, then you have some pretty heavy traffic and strict performance requirements.  For the rest of the applications out there, NHibernate probably offers more speed than required.
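
For context on what “generate and run the SQL for you” looks like in code, here is a minimal sketch using the hypothetical Customer mapping above.  The repository class and the connection configuration are assumptions; only the NHibernate calls themselves are real:

using NHibernate;
using NHibernate.Cfg;
using MyApp.Core;

public class CustomerRepository
{
    //Built once per application; reads the hibernate settings and *.hbm.xml mappings.
    private static readonly ISessionFactory _factory =
        new Configuration().Configure().BuildSessionFactory();

    public Customer GetById(int id)
    {
        using (ISession session = _factory.OpenSession())
        {
            //NHibernate generates and runs the SELECT for us.
            return (Customer) session.Load(typeof(Customer), id);
        }
    }

    public void Save(Customer customer)
    {
        using (ISession session = _factory.OpenSession())
        {
            ITransaction tx = session.BeginTransaction();
            //NHibernate generates the INSERT or UPDATE from the mapping file.
            session.SaveOrUpdate(customer);
            tx.Commit();
        }
    }
}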

Another topic of great interest to me:  The cancelled ObjectSpaces project.  It was mentioned that it was cancelled because of the DLinq project that was in development, and Microsoft didn’t want to have two models for OR mapping.  I’m not sure about the details of this, but it was mentioned on the show.

All in all, I like podcasts from Ron Jacobs.  He’s a great personality for a radio show, and he pulls in some interesting topics.

How to explicitly fail a FIT test while using the new DoFixture – level 300

The FitLibrary has recently been ported to .Net, and I love using the DoFixture.  One problem is that the normal DoRow and DoCell methods of the Fixture class don’t run in the DoFixture.  With other fixtures, I’m used to hooking into these places in order to spread logic throughout my FIT test.  DoFixture uses two new methods: 
System.Collections.IEnumerable ParameterCells(CellRange theCells)
and
System.Collections.IEnumerable MethodCells(CellRange theCells)

This is how the DoFixture implements its special behavior.  For example:
|MyDoFixture|
|Run System|
|Make Sure File|taxes.txt|Was Saved|

This is a simple fixture that makes sure the system saves a file “taxes.txt”.  Here, I’m not using the built-in Check method.  I’m implementing one myself.  Either the file was saved or it wasn’t.

My underlying fixture will look like this:

using fit;
using fit.exception;

public class MyDoFixture : DoFixture
{
    public MyDoFixture()
    {
    }

    public void RunSystem()
    {
        //hook into the system
    }

    public void MakeSureFileWasSaved()
    {
        //hook into the system to make sure the file was saved.  If it wasn't...
        throw new fit.exception.FitFailureException("File should have been saved.");
    }
}

This works, but will throw an exception instead of failing the test.  I need to register a FIT failure and not an exception.  FIT treats these two differently.  In order to fail a FIT test, I have to use the Wrong(Parse theParse) method.  I have to have the current Parse object in order to call this method.  The Parse object is the current cell/row/table.  The FIT table is composed of a Parse hierarchy.  I don’t really like the way it’s set up, but that’s not the point of this post.

In order to call the Wrong(…) method to fail the test correctly with a reason to output to the screen, I must obtain the current Parse object.  Because the DoFixture doesn’t fire the virtual DoCell or DoRow method, I can’t use these.  Instead, I’ll override a DoFixture method that fires for every row and modify my fixture like this:

using fit;
using fit.exception;

public class MyDoFixture : DoFixture
{
    private Parse _currentParse;

    public MyDoFixture()
    {
    }

    public void RunSystem()
    {
        //hook into the system
    }

    public void MakeSureFileWasSaved()
    {
        //hook into the system to make sure the file was saved.  If it wasn't...
        this.Wrong(_currentParse, "File wasn't saved.");
    }

    protected override System.Collections.IEnumerable MethodCells(CellRange theCells)
    {
        //remember the current row's cells so a failure can be marked on the right cell
        _currentParse = theCells.Cells;
        return base.MethodCells(theCells);
    }
}

With this code, I have my fixture maintain the current Parse (I’m tracking the current row) so that when a line fails, I can properly fail the test with an appropriate message.  This is a much better solution than throwing an exception.  Throwing an exception means that the test environment blew up and needs to be fixed.  Test failures mean the code broke.  An exception means the FitNesse server broke.

I’m booked for Tech Ed. Fly in Saturday morning for “Party with Palermo” 2006 – level 100

It’s done.  I’m booked for Tech Ed 2006.  I attended Tech Ed 2005 where I threw a pre-conference party dubbed “Party with Palermo”, and I’m doing it again this year.  Last year we held it at the Peabody restaurant the day before the conference started.  I haven’t determined this year’s location yet, but it will be the evening before the conference starts (Saturday), so plan now to attend and fly in at least by Saturday morning.  I’m arriving in Boston at 1PM on Saturday, and I’m staying at the Hilton Boston Logan:

Hilton Boston Logan

85 Terminal Road

Boston, MA 02128

Phone: (617) 568-6700

Fax: (617) 568-6800

This hotel is $145/night with the Microsoft rate, so it’s the closest/cheapest combination, and that’s why I went with it.  I’d love to know who else is booked at this hotel.

If you will be at Tech Ed and would like to help plan Party with Palermo 2006, let me know.  I’ll have an RSVP as the date draws near so that we will know how many people to expect.
Go to the Tech Ed site to see hotels and rates. . . and book at the Hilton Boston Logan. 🙂

If you haven’t been to Tech Ed, I’d highly recommend it as a chance for some 1st-rate developer (and IT pro) training.  Other benefits are huge as well.  I personally love the chances to network with other people in the industry around the country and around the world.  I met so many great people last year, and Party with Palermo served as a great ice-breaker for introductions.  Even if your employer won’t pay for it, I recommend paying for it yourself.  I consider it an investment in my career.

Benefits of FIT tests for systems without a UI – level 200

Sam Gentile reminds us of the value of FIT when requirements might be vague _and_ complicated.  His team is using it to help the customer understand what they want.  His post reminded me of the value I’m already starting to take for granted.

My team is still early on in our FIT implementation.  We have a few components covered so far, but recently, we created a new service that runs all the time and does stuff.  Do you love the vague description?  It responds to a queue:  one component drops off some work, and the service picks it up and does its thing.  We have this implemented as a Windows service, so it would be very difficult to test by manual means.  The tester would have no UI at all, and they would have to arrange for work to be dropped off, wait a bit, and check that the work was completed. 

FIT enables a whole new dimension to testing this service.  Our tester has been creating his test cases even while we’ve been creating the FIT fixture for it.  We hashed out what the fixture would look like on the whiteboard, and then started.  We have our first draft of the FIT fixture working, and it’s a big eye-opener to be able to test a component (that has no UI) with a GUI.  These are automated tests as well (that’s FIT’s nature), and we’ll run them forevermore.  Once we have these tests passing, they will serve as a regression test suite for this component.  When a bug arises, we’ll create a FIT test to isolate the bug and then fix it. 
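
As a rough idea of what a test for this kind of service can look like with the DoFixture, here is a sketch.  The fixture name, method wording, and work item are all hypothetical; the point is that the drop-off, wait, and verification steps read as plain language:

|QueueServiceFixture|
|Drop Off Work Item|invoice-42|
|Wait Up To|30|Seconds For Processing|
|Make Sure Work Item|invoice-42|Was Completed|

Behind the table, the fixture methods would drop a message on the queue, poll until the work shows up as done (or a timeout passes), and fail the test with an appropriate message if it never does.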

We’re using the FitNesse wiki to organize our FIT tests.  We’ve set up a special CC.Net build that detects wiki changes and commits those to Subversion.  Our CC.Net build for the component pushes the latest bits to the FitNesse wiki so the FIT tests are always run against the current build.  Read how we integrated FIT into our build here.  Read how we versioned the wiki here.  Mike Roberts has done something similar as well.
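
For anyone curious about the wiki-to-Subversion piece, here is a rough sketch of what that kind of CC.Net project can look like.  The paths, project name, and trigger interval are hypothetical, and element names may vary by CC.Net version:

<project name="FitNesseWikiCommit">
  <triggers>
    <!-- poll the wiki directory for changes every five minutes -->
    <intervalTrigger seconds="300" />
  </triggers>
  <sourcecontrol type="filesystem">
    <repositoryRoot>C:\FitNesse\FitNesseRoot</repositoryRoot>
  </sourcecontrol>
  <tasks>
    <!-- commit the changed wiki pages back to Subversion
         (brand-new pages would also need an svn add step) -->
    <exec>
      <executable>svn.exe</executable>
      <baseDirectory>C:\FitNesse\FitNesseRoot</baseDirectory>
      <buildArgs>commit --message "Automated commit of FitNesse wiki changes"</buildArgs>
    </exec>
  </tasks>
</project>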
 
For this work, we’re using the newly-ported FitLibrary’s DoFixture.  The DoFixture allows a very intuitive layout of the test.  For more information, I’d recommend reading the FIT book by Ward Cunningham and Rick Mugridge.

MCPD upgrade exams ready. Time to upgrade that “old” MCSD for .Net – level 200

Matt is already on his way to MCPD, which is Microsoft Certified Professional Developer.  For those who have the MCSD.Net certification, the upgrade requires two exams.  I hadn’t been keeping up with when the upgrade exams would be available, but they are now, so I’d better hustle!

Here’s the MS information on the topic:  http://www.microsoft.com/learning/mcp/mcpd/entapp/

RAD kills. . . software – level 200

I figured the title would get someone’s attention.  I’d like to share this piece of art that I created. 

By now, everyone has been educated about the dangers of cigarettes and tobacco.  Cigarettes can be addictive and even deadly.  While they may be soothing in the short-term, they ultimately cause irreparable damage. 

I’d like to compare this to the RAD style of development.  RAD is quick and satisfies in the short term.  Sometimes it’s amazing how many features can be created just by some dragging and dropping.  You could whip out an entire system in a week.  It’s almost too good to be true.

But what about next month, and the month after that?  The attempts at enhancing that system are futile.  You desperately try to get some productivity out of the RAD approach, and you may be able to slap on a new feature quickly, but when you have to modify or enhance an older feature, you can’t get the same satisfaction.  It takes forever to debug through to see what’s really happening.

RAD is quick, but it incurs huge technical debt.  RAD is like consumer credit.  You can have it now, but will you really have time to clean it up later?  If you buy that big-screen HDTV now, will you really have an extra $3000 lying around later?  RAD can be very tempting, but in the end, it leads only to death. . . of your software.  At some point, the only recourse will be a total rewrite.  At that point, will you use RAD again for the rewrite or use another _sustainable_ process?

Orthogonal to RAD is sustainable development:  Creating interfaces at key areas of your system to ensure flexibility.  Adequately testing your software to ensure you don’t break old features when you introduce new ones.  Ensuring your software never becomes legacy code, etc.

RAD isn’t the answer.  Creating sustainable software takes work and is not for the faint-hearted.  It takes good software (dare I say?) engineers – critical thinkers – disciplined craftsmen.

I hope you have enjoyed the satire of this post, but satire can’t exist without a grain of truth.

Palermo’s rules for Pairing (pair programming, that is) – level 200

Palermo’s rules for Pairing:
Pair programming is a great way to get real-time code review and collaboration.  Chances are that the end goal will be reached more quickly by working with a partner when writing software.  This might sound counter-intuitive at first if you think that 2 people splitting the work would accomplish the goal faster.  What actually happens very often is that, without communication, work is duplicated, or people code in slightly different directions.  I won’t belabor the benefits of pair programming too much, but when pairing, here are my rules:

  • Talk while you code.  Explain the thought process that is leading you to type what you are typing.
  • Play the TDD game.  Have one person write a test, then pass the keyboard so that the other person makes the test pass.  The person who gets the test to pass decides if he wants to write a test next or pass the keyboard.
  • If both programmers aren’t on the same page for the task, whiteboard it to form a common understanding.
  • Commit the code to source control after each task is completed.
  • When working in a team of more than 2, rotate pairs so that knowledge flows.
  • If one of the programmers is a clear novice on the task, have him drive first until he’s no longer the novice.
  • Don’t steal the keyboard.  If you know of a better way to accomplish a task, explain your thought process so that your partner understands the better way.
  • Defer driving.  If you are completely comfortable with the code that is about to be written, pass the keyboard to your partner.  Writing the code will benefit him more than you.
  • Communicate.  It’s tough, and there will be disagreements.  No one has overruling authority.  If you haven’t convinced the other of your viewpoint, you haven’t communicated it effectively.  If it’s important, then everyone on the team must understand why.
  • Communicate.
  • Communicate.

A Microsoft provider _is_ a singleton – always – level 200

I’ve done quite a bit with Whidbey and .Net 2.0 since Beta 1 hit in mid-2004.  I was one of the early adopters who submitted bug reports, and I’ve tech edited several .Net 2.0 books.  I’m mostly impressed with .Net 2.0, but there are some aspects that I’m disappointed with (you can read about past ones on my blog).

I’ve been wrapping some of my existing code with the built-in providers in ASP.NET.  I’m finished with the MembershipProvider, but all I needed was 4 of the methods, and the abstract class has over 20.  What a waste.  I’m in the process of wrapping my code with the SiteMapProvider, and that one looks more civilized.

Microsoft has touted its Provider pattern as a way to configure application behavior.  They’ve touted it as a custom mix of Strategy and Plugin.  It’s true that if you change the configuration file, a different provider gets hooked up, and the behavior of the application changes.

What’s not widely known is that _every_ provider is a Singleton.  There is no getting around it.  The biggest implication of this is that where before only a few had to worry about writing thread-safe code, now even the hobbyist has to be aware of threads when creating a provider.  There is no way to configure them away from being singletons, either. 

Using them with ASP.NET is the biggest concern because every request runs on a different thread.  Handlers are per-thread, but IHttpModule(s) have AppDomain scope just like providers.  What this means is that providers are not pluggable components. . . they are services and must be treated as such.  They are entities that will serve multiple customers.  Each customer doesn’t get his own instance of the provider, but one instance is shared for the life of the application.  I’d prefer to have one instance per use. 
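
To make the implication concrete, here is a minimal sketch of a hypothetical provider (not one of the built-in ones) showing the kind of thread-awareness I’m talking about.  Because one instance serves every request, any instance state has to be guarded:

using System;
using System.Collections.Specialized;
using System.Configuration.Provider;

//Hypothetical provider used only to illustrate the shared-instance issue.
public class AuditLogProvider : ProviderBase
{
    //One instance serves the whole AppDomain, so these fields are shared
    //across every ASP.NET request thread.
    private int _entriesWritten;
    private readonly object _padlock = new object();

    public override void Initialize(string name, NameValueCollection config)
    {
        //Called once, when the provider instance is created for the application.
        base.Initialize(name, config);
    }

    public void Write(string message)
    {
        //Without the lock, concurrent requests could interleave and corrupt
        //the shared state.
        lock (_padlock)
        {
            _entriesWritten++;
            //...write the message to the audit store...
        }
    }
}

Contrast that with a per-use component, where each caller would get its own instance and the lock would be unnecessary.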

Early on in the Beta cycle, Rob Howard had the same thoughts as me:
“This is a disastor waiting to happen. We you have an API that is funtionally wrapped up into a package a static methods (I know the Role Provider class isn’t static, but it’s access is) you have a service. . . ” 4/16/2004

You can verify my facts here by reading this article on MSDN (search for “thread”).

Austin .Net User Group hosting an MSDN Code Camp March 4th – level 000

What: Austin MSDN Code Camp – register at www.adnug.org

When: March 4, 2006 – 8:30 AM – 5PM

Where: St. Edwards professional learning center, Austin, TX

NOTE:  3 Code Camp speakers are from CodeBetter.com

An MSDN Code Camp is a free local conference for developers and by developers.  We, as a community, are the speakers as well as the attendees.  See the Code Camp Manifesto for more information:  http://www.bostondotnet.org/codecamp/default.aspx/CodeCamp/CodeCampManifesto.html

Austin Code Camp is looking for ways to make its first MSDN Code Camp the best it can be.  The secret is you!  This is an Austin developer-community-based event that requires both speakers and attendees.  The goal of the Code Camp is to provide an intensive developer-to-developer learning experience that is fun and technically stimulating.  The primary focus is on delivering programming information and sample code that can be used immediately.

The event is free, and all slides, manuals, and demo code are provided free!  Every attendee will receive a free lunch courtesy of Microsoft, a free t-shirt, a free programming book from Wrox, a free 1-hour gift certificate to CyberJocks, and other goodies.  CyberJocks will even have an XBox gaming room set up at lunch.  This event is open to all, and it is going to be great!

This one-day camp is hosted at the St. Edwards professional learning center.  The Code Camp will contain 1- to 1.5-hour sessions on topics the local community values; these sessions depend on the speakers and attendees.

Sessions up for voting now:

  • Jeremy Miller – Code Smells
  • Scott Bellware – Test-Driven Development Techniques
  • Jeffrey Palermo – Pragmatic ASP.NET 2.0 (which features do I need?)
  • Jeremy Miller – Dependency Injection – What is it, and what is it good for?
  • Scott Bellware – Domain-Driven Design Essentials
  • Cody Powell – Writing Maintainable, Multi-threaded Code
  • Scott Bellware – C# 3.0 and LINQ
  • Terry Meyer – Intro to CSS for .Net Developers
  • J Sawyer – Introduction to Windows Workflow Foundation
  • Eric Pollitt – Error Management in SQL Server 2005
  • Steve Donie – CruiseControl.NET – basic to advanced
  • J Sawyer – Building Activities for Windows Workflow Foundation
  • Blake Caraway – Smart smart-client architecture
  • Joe Celko – Trees in SQL
  • Ray Houston – Expanding ASP.NET
  • Chad Myers – Designing Extensible Applications
  • Anil Desai – Automating Development and Testing Through Virtualization
  • Paul Jones – Fault Management – Performance Monitoring