What are the Alt.Net principles? – my answer

I’ll be attending the AltNetConf.  Convenient for me that it’s in Austin, TX.  It’s an open space conference, and I consider it the founding conference of a conversation that is “Alt.Net”.  I’ll be proposing the topic: What are the Alt.Net principles?

The definition of Alt.Net isn’t settled yet.  It’s not even at a point where I can explain what it is and actually have others agree with me.  

Since I know the guy who originally coined the term, I have some insight into what David meant, but I’m going to propose some principles that, together, should be the definition of Alt.Net.

First, Alt.Net inherits Agile, and IS Agile.  Therefore:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan 

Extended principles:

  • Excellence and quality
    In a world of wizard-generated, disposable software (disposed of 2 years later by necessity, not choice), Alt.Net focuses us on excellence in the software we create.  While the world may pay for and accept software that lives for 2 years before becoming utterly unmaintainable, we don’t accept shoddy work.  We know that we can deliver high-quality software faster than others can deliver low-quality software, so we accept nothing less than the highest quality.  We strive for excellence through solid engineering practices and a high level of software education.  Coincidentally, Extreme Programming helps in this area, but Alt.Net does not specifically _mean_ XP.
  • Alternative Vendors
    A common theme in many .Net shops is that they are “Microsoft” shops.  In other words, if it doesn’t come from Microsoft, they won’t use it.  This makes no sense.  Microsoft is not a large enough company to be the vendor to the whole world.  .Net is a great platform, and we as a community have chosen the platform and choose to participate in this segment of the industry.  We strongly believe that 3rd party vendors complement the .Net platform in a way that can contribute to excellent working software.  In fact, some 3rd party offerings are superior to Microsoft’s offerings.  For instance, in striving for excellence on an e-commerce website, a team may choose a mature O/R Mapper like NHibernate to accelerate team speed and produce a flexible data layer; however, Alt.Net does not _mean_ ORM.  Open source software is a source of excellent 3rd party alternatives built on the .Net platform, and it should be used over Microsoft alternatives when it contributes to excellence; however, Alt.Net does not _mean_ open source.
  • Joy in our work
    We know that we will produce better software if a team is motivated and morale is high; therefore, we use libraries, tools, and practices that add joy to the working environment.  We abandon or correct libraries, tools, and practices that make it a drag to work on the software.  For instance, many find that Visual Studio is a bit slow to work with and that adding Resharper to the IDE adds a high level of joy while working with .Net code; however, Alt.Net does not _mean_ Resharper.
  • Knowledge
    We know that we will never know everything there is to know.  We strive for a greater understanding of software through studying successes and failures of the past as well as the present.  We educate ourselves through many sources in order to bring the most value to our clients.  We keep up with other platforms, such as Java and Ruby, so that we can apply their good ideas to .Net development and increase the quality of our .Net software.  The technology is always changing, but the knowledge accumulates.  We know that the knowledge applies no matter how the technology changes.  With knowledge comes humility because without humility, knowledge would pass us by.

The above are principles, so they are intentionally abstract.  Below, I’ll list some items that are concrete.  These items apply the principles and are more directly actionable:

  • Read more than just MSDN Magazine and MS Press.  Authors like Feathers, Fowler, Martin, Evans, etc. have a lot to give (Knowledge).
  • Use Resharper.  It makes working with Visual Studio a joy (Joy).  But if another vendor comes along that does even better than JetBrains, consider switching.
  • Use NUnit over MSTest, Subversion over TFS SCC, Infragistics/Telerik over in-the-box controls, RedGate over in-the-box SQL tools.  Each of these is a better alternative to what Microsoft provides (Alternative Vendors).  Use NHibernate over hand-rolled stored procedures and especially over DataAdapter/DataSet, but if EntityFramework proves to actually be superior to NHibernate in a meaningful way, consider using it.
  • Use a responsible application architecture.  Don’t put everything in Page_Load like you see demonstrated at MSDN Events; see the sketch after this list.  Use knowledge to create an application that can stand the test of time and not be rewritten every 2 years.  Deliver high quality (Excellence and Quality).
  • Automate every repetitive task: builds, tests, deployments, etc. (Excellence and Joy).
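
To make the “responsible application architecture” bullet concrete, here’s a minimal sketch of what pulling logic out of Page_Load can look like.  All the names here (OrderService, IOrderRepository, SqlOrderRepository, the page and its controls) are hypothetical, invented for illustration; the point is only that the business rule lives in a plain class a unit test can reach without spinning up a web server:

    using System;
    using System.Collections.Generic;
    using System.Web.UI;

    // Hypothetical core types, kept free of any web or database references.
    public class LineItem { public decimal Price; public int Quantity; }
    public class Order { public List<LineItem> LineItems = new List<LineItem>(); }

    public interface IOrderRepository { Order GetById(int orderId); }

    // The business rule lives here, where a unit test can reach it.
    public class OrderService
    {
        private readonly IOrderRepository _repository;

        public OrderService(IOrderRepository repository)
        {
            _repository = repository;
        }

        public decimal CalculateTotal(int orderId)
        {
            decimal total = 0;
            foreach (LineItem item in _repository.GetById(orderId).LineItems)
                total += item.Price * item.Quantity;
            return total;
        }
    }

    // The code-behind only coordinates; Page_Load stays thin.
    // (SqlOrderRepository, lblTotal, and CurrentOrderId are assumed to exist elsewhere.)
    public partial class OrderSummary : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            OrderService service = new OrderService(new SqlOrderRepository());
            lblTotal.Text = service.CalculateTotal(CurrentOrderId).ToString("C");
        }
    }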

The concrete examples could go on and on, and I hope AltNetConf produces a long list.  I’ll be interested in having my proposed principles accepted by the community there or revised into something better.  Either way, I’d like to get to a point where there is an accepted definition of Alt.Net.

AgileAustin kick-off meeting a big success.

Last night a new user group, AgileAustin, kicked off its meeting schedule with a presentation by Jim Van Riper of Troux Technologies.  

Background on AgileAustin:  AgileAustin grew from an idea by Kert Peterson and was brought to life by many others.  The mission of AgileAustin is:

. . . to promote agile software development concepts such as those set forth in the Agile Manifesto (agilemanifesto.org), to create a public forum for the exchange of practice information, and to create opportunities for the professional development of members.

The group meets on the 2nd Tuesday of every month at 6pm at the Microsoft office. 

Mr. Van Riper related how, in 9 months, Troux restructured its product department to turn a failing product into a wildly successful one.  Troux adopted Agile as the means to transform the product group, overlaying another level on top of the Agile Manifesto:

  • Culture, encompassing (Individuals and interactions over processes and tools)
  • Vision, encompassing (Working software over comprehensive documentation)
  • Customer commitment, encompassing (Customer collaboration over contract negotiation)
  • Embracing failure, encompassing (Responding to change over following a plan)

Mr. Van Riper’s goal is to work himself out of a job.  He contends that when there is a healthy culture, people can come and go, and the culture remains.  The culture at Troux had to change from top to bottom.  Their Agile transformation affected the CEO all the way down to individual contributors. 

At Troux, Mr. Van Riper owns the backlog, the vision, and the budget.  With that structure, the process is very streamlined.  The development team owns the software.  They own every aspect of it, and they focus on fast delivery of it.  It’s interesting to note that the company has only 14 developers, yet they release every 3 months. 

Mr. Van Riper emphasized “knowing when to stop”.  Instead of adding every possible feature, Troux uses market judgment to know when enough is enough to put out a release. 

As part of the culture change that occurred at Troux, Jim aimed to “Squash passive-aggressive behavior and bitching.”  Troux has a disciplined chain of command, and they realize that not everyone needs to be involved in every decision.  If someone doesn’t like a decision, they can escalate, but the buck stops at the CEO, where the chain of command ends.  No design by committee, but a clear chain of command.

The next section of the presentation really impressed me:  “Hire the best, fire the rest.”  All management was replaced in the process.  Mr. Van Riper contends that some people aren’t good enough, and others choose not to adopt Agile and self-select to move on.  Some might interpret this to mean that to adopt Agile, all current people have to leave, but I contend that if the organization isn’t getting the job done, then a lot needs to change, including some of the people responsible for the failures.

Jim also touched on team workload.  He doesn’t want folks working more than 45 hours per week because if they are tired, he doesn’t want that code checked in.  The code likely won’t be good if it’s written while the programmer is fatigued.  I have to reiterate that Jim is relating things that have turned Troux around into a success.  These things worked for his organization.  Jim enjoys the churn in the organization because it constantly brings a fresh perspective. 

As far as product strategy, Troux does NOT copy the market.  They focus on their users’ boss, not the user directly.  By doing this, they make the users more successful in the eyes of the bosses, not just themselves.

Troux breaks up work into “Must”, “Hope”, and “Wish”. 

  • Must – will not ship without.
  • Hope – might ship without.
  • Wish – will ship without, but this needs to be kept in mind for future releases.

Jim recognizes that support and maintenance can sabotage projects.  Developers are routinely pulled off for support without the project schedule being adjusted.  That just doesn’t make sense.  Bugs coming back from the field are escalated to the product owner so one person can assess each one relative to other potential work.  If bugs go into a release, other features must come out.  The plan has to be based on reality. 

Kill the product manager

It sounds crazy, but this is what Mr. Van Riper means:  In waterfall, Gantt charts are sacred, and Agile causes all practices to be rethought.  To some, throwing away the Gantt chart is heresy.  Jim relates that Gantt charts are produced along with market requirements documents (MRDs) and that MRDs assume perfect knowledge, which is a fallacy.  MRDs are never entirely correct.  Rather, Troux makes product managers part of the team, not separate from it producing MRDs to be consumed by the team.  For those familiar with the pigs and chickens analogy, Troux makes product managers pigs instead of chickens.  This makes product managers completely committed, not merely involved. 

“We need Product Management separate from Product Development for checks & balances.”  Mr. Van Riper scoffs at that notion and considers it dysfunctional.  Jim is the VP of Product, and everyone involved in a product release reports to him: product managers, software developers, and testers (with the help of line managers).

Jim equates lots of documentation with fear of failure.  “Henry Ford called failure an opportunity to begin again more intelligently.”  Troux prefers to get something working, share it, then fix it.

Gary, the Director of Development, capped off the presentation with an explanation of how development is organized to get stuff done.  Scott Bellware was more than happy to ask questions that enhanced the presentation and challenged the speaker.  We had a bit of difficulty defining an “ad hoc” process.  Gary related taking the development organization from a waterfall process with sign-off gates to a successful Agile process that actually produced working software.  At first, everyone contended that if only waterfall were followed, the software would succeed.  In reality, the software was failing in many ways.  Individuals would try valiantly but just work themselves to a pulp.  The company depended on heroism by individuals, not the coordinated work of a gelled team.   

Gary called out the importance of a cadence to the developers’ work.  Iteration by iteration, the group found its stride, and through it all, the team built trust.  Developers instilled discipline in themselves.  Because of past experience, developers had been asking permission to do necessary things like writing unit tests.  By pushing decisions down, developers decided how the work should be done – not management.  Then, by planning capacity, Troux was able to plan based more on reality and have the developers empowered to execute with discipline. 

Tools:  Gary recommends not using any tools for the first six months in order to establish the desired values.  Then, tools can be adopted based on fit.  He’s referring to project management tools like Rally, Team System, etc.  Finally, Gary helped his developers change and, at the same time, insisted that they do so.  Then, he got out of the way.  Regarding practices, he contends that continuous integration is their most important practice and that their holy grail will be when the build doesn’t break any more.

Metrics:  Gary does track some metrics, but they are very high-level and posted publicly.  He only tracks the burn-down of an iteration as well as builds.  He doesn’t attempt to track everything because that wouldn’t be valuable.  Judging by the amount of discussion surrounding metrics, it sounds like there are a lot of opinions. 

The first meeting of AgileAustin was a huge success with standing room only.  Visit AgileAustin.org if you are interested in the group, and join us for our next meeting on October 2nd.

Focus on the core: the most important part of the application

Technologies are coming and going faster than ever before.  In this environment, how can we provide companies with a good return on their software investment?  Looking back, J2EE was all the rage.  Software executives were banking on J2EE and making significant investments.  The same thing happened with COM+, ASP 3.0, etc.  Managers were projecting significant savings by using these.  Now, where are the savings?  Many applications written with these technologies are being rewritten in newer ones. 

Why?  Because the applications had no core.  By core, I mean the center of the application that describes the business domain.  Typically, these are classes and interfaces.  Creating classes using COM+ or J2EE doesn’t an application core make.  The core doesn’t care about surrounding technology.  The core is your domain model.  By its design, it’s the most important part of the application, and, done well, it’s portable. 

Look around and see if you can relate to this:  A software team focuses much energy on making the database as good as possible, creating stored procedures to pull back the data as quickly as possible.  They also consider the use cases of the screens that are necessary.  Using technology X for the presentation, they make database table designs and stored procedures that return exactly what the screen needs to show to the user.  Perhaps J2EE or COM+ is used in the passage of information from the database to the UI.  Perhaps Enterprise Java Beans or COM+ components perform some transformation or calculations necessary for the screens. 

Take a step back and remove the screens.  Remove the database.  Is there any application left?  Can you point to any business rules or domain concepts left in the application after the presentation and storage components are removed?  In my experience, I’ve had to answer “no” more than once.  This is absolutely the wrong way to develop software.

Software systems should be resistant to climate change.  The technology climate is always changing.  The core is the most important part of the application, and it should be insulated against changes on the outside.  Over time, presentation technologies have changed many, many times.  Data access technologies haven’t sat still either.  We still have the relational database, but the manner of using it is constantly changing. 

Software health check:  Take away the code used to talk to the database.  Take away every screen.  You should still have an application.  You should be left with your application core or domain model and domain services.  Everything should be intact. 

Handle changes in technology gracefully.  If your application has a healthy core, you will be able to upgrade to the next-generation UI.  You’ll be able to change your data access to use LINQ or an ORM without much impact.  If you don’t have a healthy core, any change in technology requires almost a wholesale rewrite.
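
As a sketch of what that insulation can look like (the names here – ICustomerRepository, CustomerRenewalService, NHibernateCustomerRepository, and a Customer entity with an ExtendMembership method – are hypothetical, invented for illustration): the core defines the data access interface it needs, and the current technology implements that interface from the outside, so a later move to LINQ or another ORM replaces one edge class without touching the core:

    using System;

    // The core owns this interface; it says nothing about SQL, NHibernate, or LINQ.
    public interface ICustomerRepository
    {
        Customer GetById(int id);
        void Save(Customer customer);
    }

    // Domain logic depends only on the abstraction above.
    public class CustomerRenewalService
    {
        private readonly ICustomerRepository _customers;

        public CustomerRenewalService(ICustomerRepository customers)
        {
            _customers = customers;
        }

        public void Renew(int customerId)
        {
            Customer customer = _customers.GetById(customerId);
            customer.ExtendMembership(1);  // the business rule lives in the core
            _customers.Save(customer);
        }
    }

    // The data access technology lives at the edge.  Swap this class out to
    // change technologies; the core above never knows the difference.
    public class NHibernateCustomerRepository : ICustomerRepository
    {
        public Customer GetById(int id) { throw new NotImplementedException(); }
        public void Save(Customer customer) { throw new NotImplementedException(); }
    }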

Any software system is a large investment for a company.  That software is expected to last a LONG time.  By focusing on the core of the application (domain model), the software will be able to weather changes in the technology climate.  By creating a healthy core, your software will be able to drop and adopt technologies as necessary.  Let’s stop rewriting and rewriting and start creating healthy software.

The inspiration for this post came from Jim Shore’s thoughts.

Levels of automated testing within a single application

We need a common language for the different types of automated testing.  We’re partially there, but the term “unit test” is still very confusing.  Here, I’ll lay out the different types of automated tests I find helpful with a single application:

  • Unit testing – testing a single class or possibly a small group of collaborating classes (absolutely does not call out of process and is the fastest-running of all automated tests).  Running 1000 unit tests in 3 or 4 seconds is common.  A minimal example appears after this list.
  • Full system tests – driven through the UI against the full application, including the database.  May or may not use real system dependencies such as external web services.  (These are the slowest of all tests.)
  • Integration testing.  Here, there are some categories:
    • Data access tests.  Used to test repositories, data access classes, etc.  These tests validate the translation from entities to data.  These tests run all SQL and test the structure of the database schema as well.  A real database must be involved.
    • General scenario testing.  Any time it’s appropriate to pull a section of the application in and run a lot of classes together, this is an integration test.  It involves several parts of the system, not just one.  It can run fast if completely in process, or it can be slow if it requires an out-of-process call such as leveraging the file system.
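
Here is the minimal example promised above, using NUnit (the Invoice class and InvoiceRepository are hypothetical, invented for illustration).  The first fixture touches nothing outside the process, so it’s a unit test; the second hits a real database, so it’s an integration test no matter how small it looks:

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceTests
    {
        // Unit test: pure in-process object work; runs in milliseconds.
        [Test]
        public void Total_ShouldSumLineItemAmounts()
        {
            Invoice invoice = new Invoice();
            invoice.AddLineItem("Widget", 10m);
            invoice.AddLineItem("Gadget", 15m);

            Assert.AreEqual(25m, invoice.Total);
        }
    }

    [TestFixture]
    public class InvoiceRepositoryTests
    {
        // Integration test: exercises the mapping, the SQL, and the schema,
        // so a real database must be involved, and the test will be slower.
        [Test]
        public void ShouldRoundTripAnInvoice()
        {
            Invoice invoice = new Invoice();
            invoice.AddLineItem("Widget", 10m);

            InvoiceRepository repository = new InvoiceRepository();
            repository.Save(invoice);

            Invoice loaded = repository.GetById(invoice.Id);
            Assert.AreEqual(invoice.Total, loaded.Total);
        }
    }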

This is not an exhaustive list, but it includes most of the automated testing on a typical enterprise application.  Feel free to comment with any type I may have left out.

Scott Bellware reasoned that the database needs to be left out for unit testing.  I completely agree.  Unit testing, by common definition, excludes external dependencies.  It’s not a unit test if we reach out and touch things.  When you have the right number of unit tests (for example, I’ve worked on a smart client system with 80,000 lines of code and 1300 unit tests and another 700 integration tests), you can’t afford to take more than a few milliseconds to run each one.  You need your unit tests to run very quickly.  Otherwise, you won’t run them very often.

Conversely, this doesn’t mean that the database should be ignored when testing a system.  There are plenty of reasons why a database, SQL, stored procedures, triggers (shudder), views, etc. can cause a bug in the system.  I insist on writing an automated integration test for every database operation.  How else can we verify that the database operation works correctly?  We can’t.  It is important, however, for communication’s sake, to understand that these database-inclusive tests are integration tests, as are any tests that exercise an external dependency.

Automated testing with the database REQUIRES the following:

  1. Every developer has a dedicated instance of the database that can be dropped and created at will.
  2. Tests must be responsible for their own data setup.  An empty database should be all that is required to run the test.  The test must be responsible for adding data for the appropriate scenario before testing the scenario.
  3. You will want to generalize test data setup because it isn’t feasible to expect EVERY test to set up all the data.  A general data set that establishes a baseline of data is very useful and can be invoked with a data helper class (see the sketch after this list).  Then each test can just add the specific data necessary for its test case.
  4. Data setup, database creation, etc. should be automated.  If it’s manual, it costs more, and you won’t run the tests as often.
  5. Database schema must be in source control with the code.  Without that, you never know what the correct version of the schema is.
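
Here is the sketch promised in item 3 (TestDataHelper, OrderRepository, and the entities are hypothetical names, invented for illustration).  Each test starts from an empty database, lays down the baseline, and then adds only the rows its own scenario needs:

    using NUnit.Framework;

    // Hypothetical helper that centralizes baseline data setup.
    public static class TestDataHelper
    {
        public static void ResetDatabase()
        {
            // Drop and recreate the developer's dedicated database from the
            // schema kept in source control (items 1 and 5), then insert the
            // general baseline data set.
        }

        public static Customer SaveCustomer(string name)
        {
            Customer customer = new Customer(name);
            // Persist through the real data access code, then return the
            // entity for the test to build on.
            return customer;
        }
    }

    [TestFixture]
    public class OrderRepositoryTests
    {
        [SetUp]
        public void SetUp()
        {
            // An empty database plus the baseline is all any test requires (item 2).
            TestDataHelper.ResetDatabase();
        }

        [Test]
        public void ShouldSaveOrderForCustomer()
        {
            // Scenario-specific data is this test's own responsibility.
            Customer customer = TestDataHelper.SaveCustomer("Acme");

            Order order = new Order(customer);
            new OrderRepository().Save(order);

            Assert.IsNotNull(new OrderRepository().GetById(order.Id));
        }
    }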

Another of Scott’s points: “As a side effect of doing the necessary dependency injection, you often get a cleaner and more explicit separation of concerns – which makes software easier to change and maintain.”

He’s right.  If you can’t unit test your domain classes because everything you do with them requires a real database to be online, you have an indication that you aren’t separating concerns.  Data access should be independent of domain object behavior in most cases.  I should be able to verify that a Customer object can Sort() itself without invoking a database query, but if constructing a Customer initiates a database call, my domain model is then materially coupled to the database and needs to be separated.
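
A sketch of that point (the Customer and Order shown are hypothetical, pared down for illustration): because nothing in the class reaches for a database, the behavior is verifiable in memory in milliseconds:

    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    public class Order
    {
        public DateTime OrderDate;
    }

    // Construction takes no database call, so tests can create one freely.
    public class Customer
    {
        public List<Order> Orders = new List<Order>();

        public void SortOrdersByDate()
        {
            Orders.Sort(delegate(Order a, Order b) { return a.OrderDate.CompareTo(b.OrderDate); });
        }
    }

    [TestFixture]
    public class CustomerTests
    {
        [Test]
        public void SortOrdersByDate_NeedsNoDatabase()
        {
            Customer customer = new Customer();

            Order september = new Order();
            september.OrderDate = new DateTime(2007, 9, 1);
            Order january = new Order();
            january.OrderDate = new DateTime(2007, 1, 1);

            customer.Orders.Add(september);
            customer.Orders.Add(january);

            customer.SortOrdersByDate();

            Assert.AreSame(january, customer.Orders[0]);
        }
    }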

Jeremy Miller is of the same mind in his comment: “Referential integrity, non null checks, and sundry other data constraints.  All good things.  All a pain in the ass when your unit test only needs a single property set on the InvoiceItem class.”

To help clear up some confusion with the term “unit test”, I propose a simple constraint in our dialog:  If the test calls out-of-process, it is then disqualified from “unit test” status and falls into “integration test”.  Feel free to argue in the comments. 🙂
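
One practical way to enforce the constraint (a sketch; the “Integration” category name is just a convention I’m assuming, not something NUnit prescribes) is to tag any test that leaves the process, so the fast suite can exclude it:

    using NUnit.Framework;

    [TestFixture]
    [Category("Integration")]  // this fixture calls out of process
    public class CustomerRepositoryTests
    {
        [Test]
        public void ShouldLoadCustomerFromDatabase()
        {
            // ...touches a real database, so it is not a unit test...
        }
    }

    // The unit test run then excludes the tagged fixtures, e.g. with
    // nunit-console:  nunit-console MyApp.Tests.dll /exclude:Integration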

Xml is the code of the future – so long C# – say it isn’t so.

Ok, so the title is a bit sarcastic.  I remember the debut of Xml.  We used it for data.  It was a way to improve on comma-delimited strings.  We converted from flat files to xml files for data.

It wasn’t long before scripting could be expressed in Xml, and now I see teams with large libraries of executable Xml in the form of build scripts with NAnt.

I think it’s going too far, however.  I’m beginning to hear that all software can now be expressed in Xml if you have the right tools.  The implication of that is just a shift from C# programming to Xml programming.  Whatever the language of execution is – that’s the programming language.

I share Ayende’s concern for Xml programming.  Xml is not a 5th generation language.  Just because designers can generate Xml instead of C# doesn’t mean code is going away.  It just means that the chosen syntax is different for the generated code.

My stance on code generation has always been:

  • Machine generates code, human maintains (bad)
  • Machine generates code, machine runs code, human never has to see code (good). i.e. C# to MSIL
  • Machine generates code exactly how human would have written it anyway (good – just saves typing).
  • Machine generates code, human has to modify code and then maintain (worst)

Designers fall into the category that WANTS to be the 2nd bullet point, but that never happens.  WinForms tried to do this with the .designer file, but that code still needed to be understood and tweaked from time to time.  Designers don’t have good track records because they always constrain flexibility.  When you hit a wall, you always have to fall back and change the generated code.  That’s the problem.  Good intentions, but it won’t happen that way.  For it to work, the designer has to be the language. 

Designers come and designers go.  Languages stay and evolve.

Party with Palermo: Tech Ed 2007 edition (June 3rd @ 7PM) – official announcement updated

Party with Palermo: Tech Ed 2007 edition (300 attendees estimated)

website:  http://partywith.palermo.cc

June 3rd, 2007 @ 7PM – 11PM


ADD THIS BADGE TO YOUR WEBSITE/BLOG IF YOU WILL BE THERE – LINK IT BACK TO http://partywith.palermo.cc

Glo Lounge:  http://www.gloloungeorlando.com/

8967 International Dr,
Orlando, FL

(407) 351-0361
COME EARLY AT 6PM IF YOU’D LIKE TO HELP SET UP!!
 

Cover charge is 1 business card.  This will get you in the door and register you for the grand prize drawings.

  • Free to attend
  • Free food
  • Free drink
  • Free swag

Sponsors:

(sponsor logos)

KEEP TABS ON HTTP://PARTYWITH.PALERMO.CC BECAUSE THIS IS WHERE THE INFORMATION WILL BE POSTED.

Please post a comment if you are planning on attending.  Also, if you have a blog, write a post about the party and link back.

Who will be there?

Lorenzo Barbieri

Don Demsak

Carl Franklin

Scott Lock

Rob Zelt

Keith Elder

James Kovacs

Kay Sellenrode

Dustin Campbell

Bob Klass

Chris Knuckles

Dawn Maxey

Dave Mott

mark schoenbaum

Mickey Gousset

Kevin Remde

Rob Windsor

Andy Winters

Jay Johnson

Mark Wiseley

Philip Colmer 

Jason Follas

Eric Hexter

Arnold Sese (kensington)

Joe Healy

Shawn Weisfeld [C# MVP]

Sheryl Farmer

Scott Spradlin (INETA)

Drew Robbins

Cory Smith

Juan Jose de Leon

Bill Reiss – DirectX MVP

Mike Wells

Guino Henostroza – MVP C#

Terje Flaarønning

Kjetil Nordahl

Dave Noderer

Chris Coneybeer

J. Ambrose Little

Regina

K. Rader

Hakan Karlsson

Joel Pratt

Jim Ferguson

Dan Villanti

Martin Woodward

Tim Meehle

Jeremy Wood

Connie Rennie

Rob Foster [MOSS MVP]

Sasha Krsmanovic

Pete Calvert (MCT)

Murray Gordon

John Paul Cook

Neil Bedekar

Mike Azocar

Bob Krainak

Matt Shultz

Brandon Kelly

David Patrick

Tom Barnum

Javier Lozano

Cam Soper

Tobias Barlind

Petter Källqvist

Jon Grant

Rob Rohr

John Miller

Arno Nel

Bob Beauchemin

Doug White

Edgardo Vega

Bryan Hoylman

David Makuta

DigitalMan

Scott Dorman

Robbie Clutton

Josh Watkins

Nigel Pepper

DE

DavidJ

Richard T

Milan Magudia

Bill Wolff

Todd Pukanecz

Dan Brinkmann

ranji abraham

Aaron Sudduth

Janssen Jones

Paul Dumigan

John Osborn

Dave Davis

Jeff Shaver

Tim Stevens

Bjarni Ivarsson

Dan Duda

Keith Rull

Julio Campos

Robert Stuczynski (Noise)

Andrew Connell (MOSS MVP)

Travis Fuller

Thomas Jespersen (Denmark)

Brad! Jones

Chuck Lysakowski

Marsee Henon

John W

Scott Hahn

Stuart Celarier

James Newkirk

Rob Caron

Bill Vaughn

Roberto Henriquez

Chris Rogers

Chuck Daubenspeck

Doug Seven

Andy Gray

Miguel Castro

Brett Richard

Josh Holmes

Jerry Sheehan

Andrea Mancini

Kim Dowding

Jason Rowe

Richard Bertini

Jason Heeter