Today at lunch, I had a conversation with Kevin Hurwitz about developer tendencies. Kevin related a project he consulted on. He came in after over a million dollars had already been spent by the client, and he found the following:
Not a single feature had been delivered (save a few half-finished screens)
The team had spent all the time creating an application framework that, when finished, would make the application practically create itself.
The development manager was the one who instigated the framework frenzy, and was eventually fired.
Clients typically engage us for actual software. A client isn’t interested in software that can be used to create software (frameworks). They have a problem, and a software system can help them solve the problem.
I, along with Kevin, have encountered the common mindset of falling from application mode into framework mode. I find it’s common with really smart programmers who take pride in their technical ability. Creating frameworks is fun because the programmer is the customer of the framework. Writing software for yourself can be more fun than writing software for someone else. Writing a framework is writing software for yourself. On a client’s dime, this is typically not acceptable.
The customer needs to see the software come to life before his eyes. Every week, the customer needs to see new features working.
Playing devil’s advocate: _If_ it were possible to create the magic framework that would make the application practically write itself, communication would be key. It’s a bet, a strategy, an approach, and even if you had successfully pulled it off time and time again for many clients, it should still be discussed with the client beforehand. Ultimately, the framework becomes the property of the paying client, so the client ought to know what is happening, and the client needs to be ok with it.
Back to reality: Frameworks are better harvested from several applications that ended up using similar patterns. Writing a framework before an application is BDUF (big design up front). There are so many assumptions that go into the framework that some of them are bound to be wrong.
The real, repeatable satisfaction comes from elsewhere. Even for programmers who prefer framework development, a client-centered process can bring joy to working on the client’s application. Delivering new builds of the software frequently will please the client, and that level of customer service brings satisfaction back the other way. There is nothing better than an elated client who loves what he sees unfolding before him.
What about all the common junk that the application needs? Chances are that there are already libraries that satisfy your needs: data access, logging, authentication, caching, indexing, builds, UI controls, etc. There are both commercial and open-source offerings that make great buy vs. build decisions. For the needs that don’t have offerings yet, do it first; then, when the exact same need comes up again, it will be easy to create a library that satisfies this common need, now that you know _exactly_ what the need is. Then, the library or framework can be created _without assumptions_ because you have experience to back up the uses.
We go to a doctor because we think we have a medical need and we need an expert to consult with and to provide a solution to what is ailing us. The doctor listens and asks questions by which to formulate some possible solutions to the problem we present. The doctor doesn’t expect us to even be able to talk in the language of medicine.
In software, we can’t expect our clients to know what they need. Much as a patient’s depth of understanding may be “I need to get rid of this infection” or “Give me some pills. I’m in pain”, our clients know what is causing them pain in the business sense, so they describe the problem from that experiential perspective. Some clients might go so far as to ask for some “pills”; that is, they might ask for a specific software system that they assume will solve the problem.
It is our job to listen and ask pointed questions to determine need. Being software experts, we must also know how to ask the right questions to flesh out that need. If the client asks for something specific, and we deliver it, but that delivery doesn’t solve the pain, it’s our fault. If I ask my doctor for a specific pill, he prescribes it, and I’m still in pain, it’s the doctor’s fault for allowing me to drive the diagnosis. We, as software professionals, are on the hook for responsible analysis that ensures any solution we provide adequately addresses the root of the problem at hand.
The client can’t tell us what he wants. It is our job to find out. A required skill of any consultant at Headspring is analysis. Analysis is underrated, but it is what ensures we build the right thing. The risk of building the wrong thing is so great that we cannot afford it; building (prescribing) the wrong thing would be a disaster for our reputation.
On July 23, 24, and 25, I will be teaching another Agile Boot Camp in Austin, TX. The Agile Boot Camp is software engineering training for .Net developers. Unlike most training courses about programming, this class doesn’t focus on particular technologies or APIs. This course is all about practices. The students in the class make up a software team for the duration of the course, and together, we leverage all the practices in Extreme Programming as we extend a real-world enterprise application.
“Man! we actually covered a lot! Jeffrey did a great job of making it seem like not much, but when I got back and started telling my boss about it, there were just TONS of things to talk about. We talked about Team Estimation and Design, NHibernate Mappings, Test-Driven Development, Domain Driven Design, Automated Builds and on and on and on. I have absolutely no idea how we managed to cram all that stuff in three days, but we did.”
“While a lot of software development training is often presented very academically, Headspring’s was very hands-on and really pushed everyone to pick up the processes and tools very quickly, just as you would in the real world.”
“This was an excellent training event, and I plan to ~~steal~~ be inspired to use some of his methodologies at Interface Technical Training.”
Brad Mellen-Crandell Rapidparts Inc.
“This was the best technical training course I’ve been to, period. No fluff here. The course was packed with information and best practices that I could start implementing immediately when I got back to work on Monday.”
Ken Jackson Catapult Systems
“Jeff is an excellent teacher and practitioner of Agile principles and methods. His integration of open source tools to boost productivity will surely help me be more successful and confident in my daily working regimen.”
It’s no secret that we at Headspring Systems use NHibernate for data access in the custom software systems we deploy. I, personally, have been using NHibernate since 2005 when version 0.8 was current. Now, we’re approaching the 2.0 version, which I’m very excited about. With version 2.0, NHibernate will be mostly on par with Hibernate 3.2.
If you are just getting started with NHibernate, some of the first questions you’ll need to answer are:
How do I configure NHibernate?
How do I manage the session factory?
When do I create and throw away sessions?
The answers are different based on the context. If you have a smart client application, you’ll need to decide what size you want your units of work to be (I suggest they be small). You’ll need to create a new instance of ISession for each unit of work and Dispose() of it at the end, after committing the transaction.
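To make that concrete, here is a minimal sketch of a small unit of work in a smart client context. The Customer class and the rename operation are made up for illustration; only the NHibernate calls themselves are real API.
using System;
using NHibernate;
namespace SmartClientExample
{
    // Illustrative entity; any mapped class would do.
    public class Customer
    {
        public virtual Guid Id { get; set; }
        public virtual string Name { get; set; }
    }
    public class CustomerRenamer
    {
        private readonly ISessionFactory _sessionFactory;
        public CustomerRenamer(ISessionFactory sessionFactory)
        {
            _sessionFactory = sessionFactory;
        }
        // One small unit of work: open a session, do the work,
        // commit the transaction, and throw the session away.
        public void Rename(Guid customerId, string newName)
        {
            using (ISession session = _sessionFactory.OpenSession())
            using (ITransaction transaction = session.BeginTransaction())
            {
                Customer customer = session.Get<Customer>(customerId);
                customer.Name = newName;
                transaction.Commit();
            } // Dispose() is called here, ending the unit of work.
        }
    }
}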
It seems the majority of enterprise applications these days are web applications, and since I run a .Net shop, we use ASP.NET. The unit of work is easy here. We have one web request be one unit of work. We create an instance of ISession at the beginning of the web request and dispose of it at the end.
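One common way to wire that up (a generic sketch, not necessarily how CodeCampServer does it) is to open the session in Global.asax when the request begins and dispose of it when the request ends, stashing it in HttpContext.Items in between:
using System;
using System.Web;
using NHibernate;
using NHibernate.Cfg;
namespace WebExample
{
    public class Global : HttpApplication
    {
        private const string SessionKey = "current.nhibernate.session";
        // Build the session factory once; it is expensive to create.
        private static readonly ISessionFactory SessionFactory =
            new Configuration().Configure().BuildSessionFactory();
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // One web request = one unit of work = one ISession.
            HttpContext.Current.Items[SessionKey] = SessionFactory.OpenSession();
        }
        protected void Application_EndRequest(object sender, EventArgs e)
        {
            var session = HttpContext.Current.Items[SessionKey] as ISession;
            if (session != null)
            {
                session.Dispose();
            }
        }
    }
}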
I’ve created several wrappers for NHibernate over the years, and I think they have improved as my understanding of NHibernate has increased. Ironically, the wrapper started out much more complex than it is today. I’ve simplified it over time, and now, I think I’ve gotten it down to its simplest essence.
The following is a class called HybridSessionBuilder. It is appropriate for ASP.NET applications using NHibernate, and you can see it in action in the CodeCampServer codebase. There is a version that supports multiple databases within the Tarantino project (sometimes you can’t have just one).
The key to this is the interface:
using NHibernate;
using NHibernate.Cfg;
namespace CodeCampServer.DataAccess
{
    public interface ISessionBuilder
    {
        ISession GetSession();
        Configuration GetConfiguration();
    }
}
Repository classes should require an instance of ISessionBuilder passed into their constructors, and for each operation, they should call GetSession(). GetConfiguration is there to facilitate SchemaExport, which we use to generate the database schema from the NHibernate mappings.
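As a sketch of that shape (the real ConferenceRepository in CodeCampServer may differ in its details; the HQL, property names, and the CodeCampServer.Model namespace are assumptions here):
using NHibernate;
using CodeCampServer.Model; // assumed home of the Conference entity
namespace CodeCampServer.DataAccess
{
    public class ConferenceRepository
    {
        private readonly ISessionBuilder _sessionBuilder;
        // The session builder comes in through the constructor...
        public ConferenceRepository(ISessionBuilder sessionBuilder)
        {
            _sessionBuilder = sessionBuilder;
        }
        public Conference GetConferenceByKey(string key)
        {
            // ...and each operation asks it for the current session.
            ISession session = _sessionBuilder.GetSession();
            return session
                .CreateQuery("from Conference c where c.Key = :key")
                .SetString("key", key)
                .UniqueResult<Conference>();
        }
    }
}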
Below is the HybridSessionBuilder class. Feel free to use it in your applications.
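The authoritative version lives in the CodeCampServer codebase; the condensed sketch below shows the idea, and its member names, keys, and lazy-initialization details are illustrative rather than an exact copy. The “hybrid” part is that the session is held in HttpContext.Items during a web request and in a static field otherwise.
using System.Web;
using NHibernate;
using NHibernate.Cfg;
namespace CodeCampServer.DataAccess
{
    public class HybridSessionBuilder : ISessionBuilder
    {
        private static Configuration _configuration;
        private static ISessionFactory _sessionFactory;
        private static ISession _localSession;
        public ISession GetSession()
        {
            ISessionFactory factory = getSessionFactory();
            // In a web context, the session lives in HttpContext for the
            // duration of the request; otherwise it lives in a static field.
            if (HttpContext.Current != null)
            {
                var session = HttpContext.Current.Items["current.session"] as ISession;
                if (session == null)
                {
                    session = factory.OpenSession();
                    HttpContext.Current.Items["current.session"] = session;
                }
                return session;
            }
            if (_localSession == null)
            {
                _localSession = factory.OpenSession();
            }
            return _localSession;
        }
        public Configuration GetConfiguration()
        {
            if (_configuration == null)
            {
                _configuration = new Configuration();
                _configuration.Configure(); // reads hibernate.cfg.xml
            }
            return _configuration;
        }
        private ISessionFactory getSessionFactory()
        {
            if (_sessionFactory == null)
            {
                _sessionFactory = GetConfiguration().BuildSessionFactory();
            }
            return _sessionFactory;
        }
    }
}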
NHibernate will automatically look for a file in the current AppDomain’s base location named “hibernate.cfg.xml”. This file in CodeCampServer is the following:
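The exact file ships with the CodeCampServer source; a minimal configuration along these lines (the dialect, connection string, and mapping assembly shown here are placeholder values) gives the general shape:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
    <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
    <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
    <property name="connection.connection_string">server=(local);database=CodeCampServer;Integrated Security=SSPI;</property>
    <mapping assembly="CodeCampServer.DataAccess" />
  </session-factory>
</hibernate-configuration>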
As many of you know, last year, I moved from being an independent consultant to joining Headspring Systems as the Chief Technology Officer. At the time, Headspring was a consulting company transitioning from websites and Internet marketing to building custom web applications. I joined to help complete the transformation as well as to add a focus on Agile in all our projects and in how we do business in general.
We completed the transition, and we are now a custom software company. We focus on enterprise applications (software that helps companies do business). We specifically don’t do device drivers, embedded software or consumer applications (which are huge markets in themselves). We have derived a process by which we execute each project, and that process mixes the more well-known Agile processes together. Our process can be best described by taking Extreme Programming and mixing in a few project management artifacts from Scrum. The engineering process is key to providing the highest quality and planning predictability.
This year, we also became a Microsoft Certified Partner. We are a bit of an odd partner because we give Microsoft both praise and criticism when recommending solutions and products to our clients. Some of our clients are a bit surprised when our solutions contain a mixed bag of Microsoft and non-Microsoft products. Often, other Microsoft partners use the full “Microsoft stack” and won’t recommend anything else unless Microsoft has no offering in that area.
Competencies
In the partner acceptance process, we had to demonstrate some competencies. There are many competencies, and our two are:
Custom Development Solutions. We create and maintain custom software. These systems can be quite complex. We push QA to the front of the development activities to keep the complexity under control. We don’t employ testers only to discover defects. Instead, we employ testers to work before development to prevent defects. Well-known research has concluded that preventing defects is much more cost-effective than merely identifying them.
Business Process and Integration. We chose this competency because it fits quite well with our Agile focus. Because we do projects on site with our clients, it’s only natural that we help them improve their business process before (and while) creating software that depends on a solid business process. We never do a “throw it over the wall” project. We always insist on understanding the client’s business process so that we feel comfortable that we are providing all the value we can. Custom software is expensive, and it would be a disaster for us if we delivered software the client asked for only to realize later that it was based on a faulty business process.
In this post, I’ll talk about and demonstrate integration testing. If you are just starting out with integration testing, you want to test small before you test big. For instance, full-system tests are good, but if they fail, they don’t give much of a hint as to where the failure is. Smaller integration tests will help narrow the area where the failure lies. After designing a vertical slice of the application with my team, I like to test-drive the code (that is, micro-test the code) into existence. Then, each scenario gets a covering integration test to prove that all the pieces fit together.
A few rules to live by
· An integration test must be isolated in setup and teardown. If it requires some data to be in a database, it must put it there. Environmental variables should not cause the test to fail randomly.
· It must also run fast. If it is slow, build time will suffer, and you will run fewer builds – leading to other problems.
· Integration tests should be order-independent. It should not matter in what order you run them; they should all pass.
· Feel free to make up rules that objectively result in fewer defects.
Testing a repository class
Below, you’ll see an integration test for ConferenceRepository.cs. This code is in CodeCampServer, so you have full access to the whole system if that helps you understand what’s going on.
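Here is a trimmed-down sketch of the kind of test described in this section. The conference keys, the DatabaseTesterBase base class, and the exact assertions are illustrative; the real ConferenceRepository test in the CodeCampServer source has more to it.
using NUnit.Framework;
using NHibernate;
using CodeCampServer.DataAccess;
using CodeCampServer.Model; // assumed home of the Conference entity
namespace CodeCampServer.IntegrationTests.DataAccess
{
    [TestFixture]
    public class ConferenceRepositoryTester : DatabaseTesterBase
    {
        [Test]
        public void Should_load_a_conference_by_its_key()
        {
            // Arrange: use the application's own data access to put two
            // known conferences into the freshly cleared database.
            var builder = new HybridSessionBuilder();
            ISession session = builder.GetSession();
            var austin = new Conference { Key = "austincodecamp2008", Name = "Austin Code Camp" };
            var houston = new Conference { Key = "houstoncodecamp2008", Name = "Houston Code Camp" };
            session.Save(austin);
            session.Save(houston);
            session.Flush();
            // Act: exercise the class under test.
            var repository = new ConferenceRepository(builder);
            Conference found = repository.GetConferenceByKey("austincodecamp2008");
            // Assert: the right record came back through the whole stack.
            Assert.That(found.Name, Is.EqualTo("Austin Code Camp"));
        }
    }
}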
Every test needs to set up its own state, so in this test, we see that the beginning of the test is using the application’s data access layer to save two Conference objects to the database. CodeCampServer uses NHibernate, so the test uses it as well when setting up the database. If you examine the source, you will notice that the base class for all the NUnit test fixtures runs a command that clears out every table in the local database. This is important because each test needs a known starting point, and the easiest starting point is an empty database. Note that we’re talking about the local developer’s database, which should be created and updated by the local build.
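That base class can be as simple as a setup method that empties the tables before each test runs. The class name and table list below are illustrative; the real CodeCampServer base class drives the cleanup from its own knowledge of the schema.
using System.Data;
using NUnit.Framework;
using NHibernate;
using CodeCampServer.DataAccess;
namespace CodeCampServer.IntegrationTests
{
    public class DatabaseTesterBase
    {
        [SetUp]
        public virtual void CleanOutTheDatabase()
        {
            ISession session = new HybridSessionBuilder().GetSession();
            // Delete child tables before parent tables so that foreign
            // keys don't get in the way. Table names are illustrative.
            using (IDbCommand command = session.Connection.CreateCommand())
            {
                command.CommandText = "delete from Attendee; delete from Conference;";
                command.ExecuteNonQuery();
            }
        }
    }
}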
After the database has two records, our class under test runs the GetConferenceByKey() method and returns a Conference. Our assert statements can then verify the code did the right thing. This test goes all the way through the data access layer and to the database. If anything was awry along the way, the test would fail.
My hope is that this brief example will fill in some gaps that may exist in your understanding of integration testing. Even though I’m doing integration testing and not unit testing, I’m keeping the scope of the test small because the larger the scope of the test, the harder it is to pinpoint the cause of any failure.
We (Headspring Systems) are a client of PSD2HTML. We have a designer, but we have found it more cost-effective to have PSD2HTML take our initial UI designs and create XHtml and CSS out of them. From there, we will add these screens to the custom web application we are building.
PSD2HTML has several examples showing how a design is converted into XHtml and CSS. You can examine all the code on their examples page. It’s interesting to see the techniques used by a company whose core competency is XHtml and CSS.
I’m amazed that there is so much talk about object/relational mappers these days. Pleased, but amazed. I tend to be in the "early adopter" part of the Rogers technology adoption curve.
In the .Net world, I didn’t hear much talk about O/R Mappers in the early 2000s. I started working with NHibernate in 2005 while on a project with Jeremy Miller, Steve Donie, Jim Matthews, and Bret Pettichord. I researched, but never used, other O/R Mappers available at the time. Now, in 2008, I find that O/R Mappers in the .Net world are still in the early adopter part of the adoption curve. We have not yet hit early majority, but we have left the innovators section.
Microsoft has single-handedly pushed O/R Mapping to the center of conversation, and we struggle to objectively differentiate between the choices. Arguments like "Tool X rocks", or "Tool Y sucks" are hard to understand. I’d like to more objectively discuss the basis on which we should accept or reject an O/R Mapper. As always, it depends on context.
Context 1: Small, disposable application: In this case, we would put a premium on time to market while accepting technical debt, given that the application has a known lifespan. For this type of situation, I think it depends on the skill set of the team we start with. If the team already knows an O/R Mapper, the team should probably stick with it, since the learning curve of any other tool would slow down delivery.
Context 2: Complex line-of-business application: Here, the business is making an investment by building a system that is expected to yield return on the engineering investment. The life of the application is unbounded, so maintainability is king. We still want to be able to build quickly, but long-term cost of ownership has a heavy hand in decisions. Here, we have to objectively think about the tools used by the system.
I’ll use O/R Mappers in this example. On the right is a common Visual Studio solution structure. We would probably leverage the O/R Mapper in the DataAccess project. I consider the O/R Mapper to be infrastructure since it doesn’t add business value to the application; it is merely plumbing that helps the application function. By following the references, we find that our business logic is coupled both to the data access approach we chose and to the infrastructure we employ. Often we can build the system like this, and we can even keep the defect rate really low. This is very common, and I’d venture to guess that most readers have some experience with this type of structure.
The problem with this structure is long-term maintainability. In keeping with the O/R Mapper decision: five years ago, I was not using NHibernate. If I ask myself whether I’ll be using NHibernate five years from now, I have to assume that I probably won’t be, given the pace of technology. If this system is to have a chance of being maintainable five years from now, I need to be able to upgrade the parts of the system that are most affected by the pace of technology, like data access. My business logic shouldn’t be held hostage by the data access decision I made back in 2008. I don’t believe it’s a justified business position to say that when technology moves on, we will rewrite entire systems to keep up. Sadly, most of the industry operates this way.
On the left is the general solution structure I’m more in favor of. You see that the core of the application doesn’t reference any other project. The core project (give it whatever name you like) contains all my business logic, namely the domain model and the supporting logical services that give my application its unique behaviors. Add a presentation layer for some screens, and the system delivers business value. Here, you see I’ve lumped data access in with infrastructure. Data access is just not that interesting, and system users don’t give a hoot how we write to the database. As long as the system is usable and has good response times, they are happy. After all, they are most happy when they are _not_ using the system; they don’t spend their leisure time using our software.
I consider data access to be infrastructure because it changes every year or two. I also consider communication protocols like ASMX, remoting, and WCF to be infrastructure. WCF, too, will pass in a year or ten, making way for the next wave of communication protocols that "will solve all our business problems". Given this reality, it’s best not to couple the application to infrastructure. Any application today that is coupled to Enterprise Library data access will likely have to be completely rewritten in order to take advantage of any newer data access method. I’d venture to say that the management that approved the budget for the creation of said system didn’t know that a rewrite would be imminent in just four short years.
How do we ensure the long-term maintainability of our systems in the face of constantly changing infrastructure? The answer: don’t couple to infrastructure. Regardless of the O/R Mapping tool you choose, don’t couple to it. The core of your application should not know or care what data access library you are using. I am a big fan of NHibernate right now, but I still keep it at arm’s length, banished to forever live in the Infrastructure project in the solution. I know that when I want to dump NHibernate for the next thing, it won’t be a big deal.
How do I ensure I’m not coupled to my O/R Mapper?
The project my domain objects reside in doesn’t have a reference to NHibernate.dll (or to whatever O/R Mapper you’ve chosen)
The unit tests for my domain model don’t care about data access
My domain objects don’t contain infrastructure code specific to the O/R Mapper
The key is the flipped project reference: have the infrastructure project reference the core, not the other way around. My core project has no reference to NHibernate.dll. The UI project has no reference either; only the infrastructure project references it.
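A sketch of what the flipped reference looks like in code, with hypothetical names: the repository interface lives in the core project next to the domain model, and the NHibernate-backed implementation lives in the infrastructure project, which references the core (never the reverse).
// Core project: no reference to NHibernate.dll anywhere in here.
namespace Core.Domain
{
    public class Conference
    {
        public virtual string Key { get; set; }
        public virtual string Name { get; set; }
    }
    // The contract the rest of the application codes against.
    public interface IConferenceRepository
    {
        Conference GetByKey(string key);
    }
}
// Infrastructure project: references Core (and NHibernate), never the reverse.
namespace Infrastructure.DataAccess
{
    using NHibernate;
    using Core.Domain;
    public class ConferenceRepository : IConferenceRepository
    {
        private readonly ISessionFactory _sessionFactory;
        public ConferenceRepository(ISessionFactory sessionFactory)
        {
            _sessionFactory = sessionFactory;
        }
        public Conference GetByKey(string key)
        {
            using (ISession session = _sessionFactory.OpenSession())
            {
                return session
                    .CreateQuery("from Conference c where c.Key = :key")
                    .SetString("key", key)
                    .UniqueResult<Conference>();
            }
        }
    }
}
Swapping NHibernate out later means writing a new implementation of IConferenceRepository in the infrastructure project; the core project never notices.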
Keep it easy to dump NHibernate when its time has come
For now, NHibernate is the O/RM of choice in .Net-land. When its time comes, don’t go to management and recommend a rewrite of the system because it’s completely tightly coupled to NHibernate. Keep NHibernate off to the side so you can slide in the next data access library that comes along. If you tightly couple to your O/RM, you’ll sacrifice long-term maintainability.
When choosing an O/R Mapper: the objective criterion I find most compelling is whether the library allows isolation. If the tool forces you to design the application around it, move on to a better one. The good libraries stay out of the way. If your O/RM always wants to be the center of attention, dump it for one that’s more humble. Use an O/RM that plays well behind a wall of interfaces, and beware the O/RM that doesn’t allow loose coupling. If you tightly couple, it’s a guaranteed rewrite when you decide to change data access strategies.