Previously, I posted about how to support developers running Visual Studio 2008 and Visual Studio 2005 on the same .Net 2.0 solution on the same team, build, CI server, etc. The solution is to have two solution files, and these files have to be kept in sync to keep a healthy build process.
It is very important to keep the VS2005 solution as the main solution and the one used in the continuous build: VS2008 supports everything VS2005 produces, but if you add a project using VS2008, VS2005 might not recognize some of what it generates. An example of this is the path to the Microsoft.CSharp.targets file.
If you create a new project using VS2008, you’ll see the following in the newly created project:
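This is the targets import as generated by the Visual Studio 2008 C# project template (the surrounding project XML is omitted here for brevity):

```xml
<!-- Generated near the bottom of a VS2008 .csproj file -->
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
```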
Note the $(MSBuildToolsPath) property. This was added in .Net 3.5, and Visual Studio 2005 and MSBuild for .Net 2.0 don't understand it. Changing the property to $(MSBuildBinPath) in the project file makes both versions of Visual Studio happy.
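In other words, the VS2005-compatible version of the import looks like this:

```xml
<!-- $(MSBuildBinPath) resolves under both MSBuild 2.0 and 3.5 -->
<Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
```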
I’ll be attending the AltNetConf. Convenient for me that it’s in Austin, TX. It’s an open space conference, and I consider it the founding conference of a conversation that is “Alt.Net”. I’ll be proposing the topic: What are the Alt.Net principles?
The definition of Alt.Net isn’t settled yet. It’s not even at a point where I can explain what it is and actually have others agree with me.
First, Alt.Net inherits Agile, and IS Agile. Therefore:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
- Excellence and quality
In a world of wizard-generated, disposable software (disposed 2 years later by necessity, not choice), Alt.Net focuses us on excellence in the software we create. While the world may pay for and accept software that lives for 2 years before becoming utterly unmaintainable, we don’t accept shoddy work. We know that we can deliver high quality software faster than others can deliver low quality software, so we accept no less than the highest in quality. We strive for excellence through solid engineering practices and a high level of software education. Coincidentally, Extreme Programming helps in this area, but Alt.Net does not specifically _mean_ XP.
- Alternative Vendors
A common theme in many .Net shops is that they are “Microsoft” shops. In other words, if it doesn’t come from Microsoft, they won’t use it. This makes no sense. Microsoft is not a large enough company to be the vendor to the whole world. .Net is a great platform, and we as a community have chosen the platform and choose to participate in this segment of the industry. We strongly believe that 3rd party vendors complement the .Net platform in a way that can contribute to excellent working software. In fact, some 3rd party offerings are superior to Microsoft’s offerings. For instance, in striving for excellence in an e-commerce website, a team may choose a mature O/R Mapper like NHibernate to accelerate team speed and produce a flexible data layer; however, Alt.Net does not _mean_ ORM. Open source software is a source of excellent 3rd party alternatives built on the .Net platform, and it should be used over Microsoft alternatives when it contributes to excellence; however, Alt.Net does not _mean_ open source.
- Joy in our work
We know that we will produce better software if a team is motivated and morale is high; therefore, we use libraries, tools, and practices that add joy to the working environment. We abandon or correct libraries, tools, and practices that make it a drag to work on the software. For instance, many find that Visual Studio is a bit slow to work with and that adding Resharper to the IDE adds a high level of joy while working with .Net code; however, Alt.Net does not _mean_ Resharper.
- Knowledge
We know that we will never know everything there is to know. We strive for a greater understanding of software through studying the successes and failures of the past as well as the present. We educate ourselves through many sources in order to bring the most value to our clients. We keep up with other platforms, such as Java and Ruby, so that we can apply their good ideas to .Net development and increase the quality of our .Net software. The technology is always changing, but the knowledge accumulates, and we know that the knowledge applies no matter how the technology changes. With knowledge comes humility, because without humility, knowledge would pass us by.
The above are principles, so they are intentionally abstract. Below, I’ll list some items that are concrete. These items apply the principles and are more directly actionable:
- Read more than just MSDN Magazine and MS Press. Authors like Feathers, Fowler, Martin, Evans, etc. have a lot to give (Knowledge)
- Use Resharper. It makes working with Visual Studio a (Joy). But if another vendor comes along that does even better than JetBrains, consider switching
- Use NUnit over MSTest, Subversion over TFS SCC, Infragistics/Telerik over in-the-box controls, RedGate over in-the-box SQL tools. Each of these is a better alternative to that which Microsoft provides (Alternative Vendors). Use NHibernate over hand-rolled stored procedures and especially over DataAdapter/DataSet, but if EntityFramework proves to actually be superior to NHibernate in a meaningful way, consider using it.
- Use a responsible application architecture. Don’t put everything in Page_Load like you see demonstrated at MSDN Events. Use knowledge to create an application that can stand the test of time and not be rewritten every 2 years. Deliver (high quality and excellence).
- Automate every repetitive task; builds, tests, deployments, etc – excellence and joy
The concrete examples could go on and on, and I hope AltNetConf produces a long list. I’ll be interested in having my proposed principles accepted by the community there or revised into something better. Either way, I’d like to get to a point where there is an accepted definition of Alt.Net.
Technologies are coming and going faster than ever before. In this environment, how can we provide companies with a good return on their software investment? Looking back, J2EE was all the rage. Software executives were banking on J2EE and making significant investments. The same thing happened with COM+, ASP 3.0, etc. Managers were projecting significant savings by using these. Now, where are the savings? Many applications written with these technologies are being rewritten on newer ones.
Why? Because the applications had no core. By core, I mean the center of the application that describes the business domain. Typically, this is classes and interfaces. Creating classes using COM+ or J2EE doesn’t an application core make. The core doesn’t care about the surrounding technology. The core is your domain model. By design, it’s the most important part of the application, and, done well, it’s portable.
Look around and see if you can relate to this: A software team focuses much energy on making the database as good as possible, and they create stored procedures to pull back the data as quickly as possible. They also consider the use cases of the screens that are necessary. Using technology X for the presentation, they make database table designs and stored procedures that return exactly what the screen needs to show to the user. Perhaps J2EE or COM+ is used in the passage of information from the database to the UI. Perhaps Enterprise Java Beans or COM+ components perform some transformations or calculations necessary for the screens.
Take a step back and remove the screens. Remove the database. Is there any application left? Can you point to any business rules or domain concepts left in the application after the presentation and storage components are removed? In my experience, I’ve had to answer “no” more than once. This is absolutely the wrong way to develop software.
Software systems should be resistant to climate change. The technology climate is always changing. The core is the most important part of the application, and it should be insulated against changes on the outside. Over time, presentation technologies have changed many, many times. Data access technologies haven’t sat still either. We still have the relational database, but the manner of using it is constantly changing.
Software health check: Take away the code used to talk to the database. Take away every screen. You should still have an application. You should be left with your application core or domain model and domain services. Everything should be intact.
Handle changes in technology gracefully. If your application has a healthy core, you will be able to upgrade to the next-generation UI. You’ll be able to change your data access to use LINQ or an ORM without much impact. If you don’t have a healthy core, any change in technology requires almost a wholesale rewrite.
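To make the “healthy core” idea concrete, here is a minimal sketch of the dependency direction it implies. The example is in Python rather than C# purely for brevity, and every name in it (Order, OrderRepository, and so on) is hypothetical: the point is only that the domain model and domain service depend on an abstraction, so storage and presentation technology can be swapped without touching the core.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# The core: a pure domain model with no references to UI or data access.
@dataclass
class Order:
    order_id: int
    amount: float

    def apply_discount(self, percent: float) -> None:
        # A business rule that survives any change of UI or database.
        self.amount -= self.amount * percent / 100

# The core defines the abstraction it needs; infrastructure implements it.
class OrderRepository(ABC):
    @abstractmethod
    def save(self, order: Order) -> None: ...

# A domain service that works against the abstraction only.
class DiscountService:
    def __init__(self, repository: OrderRepository):
        self._repository = repository

    def discount_order(self, order: Order, percent: float) -> None:
        order.apply_discount(percent)
        self._repository.save(order)

# Infrastructure detail: today an in-memory store, tomorrow NHibernate,
# LINQ, or whatever the technology climate brings. The core never changes.
class InMemoryOrderRepository(OrderRepository):
    def __init__(self):
        self.saved = {}

    def save(self, order: Order) -> None:
        self.saved[order.order_id] = order
```

Take away InMemoryOrderRepository and any UI, and Order and DiscountService still stand on their own, carrying the business rules: that is the health check described above.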
Any software system is a large investment for a company. That software is expected to last a LONG time. By focusing on the core of the application (domain model), the software will be able to weather changes in the technology climate. By creating a healthy core, your software will be able to drop and adopt technologies as necessary. Let’s stop rewriting and rewriting and start creating healthy software.
The inspiration for this post came from Jim Shore’s thoughts.
I have learned an important lesson from my combined experiences at all the places I’ve worked. That is: raw requirements cause waste. A term I’ve used (and have heard others use) is that requirements are either “baked” or “not baked”. For a development team to plan an iteration, or a scope of delivery, the requirements need to be baked. If we pull the development team into a planning session, we ensure the requirements are fully baked before the meeting. Developers will be asking specific questions about the details of the requirements, and answers need to be readily available.
A big cause of waste is when a project manager inaccurately declares the requirements as actionable and the entire team meets. This is the most expensive meeting you can have. As soon as the developers ask questions, a discussion ensues among business stakeholders on what the requirements should be. At this point, the developers sit and listen until the stakeholders finish defining what the system should do.
The above is a strong indicator that the requirements aren’t baked. There are holes in the analysis, and it comes out as soon as a developer asks a question about the expected behavior.
TIP: Project Managers: ensure the requirements are fully baked BEFORE you take up the ENTIRE team’s time. You may need help from the architect or tester, but ensure the center is not raw when the whole team is pulled in.
UPDATE: Scott Bellware was a bit confused about the context of this post (see comment below), so I thought others might be too. This post is about behavioral requirements for a single user story: very small scope. Before the team can estimate, this story must be “baked”. Otherwise, the coding is guesswork.
The above is a link to an ARCast episode where I had a conversation with Ron Jacobs about what an Agile architect is. Give it a listen and tell me what you think. My basic point (beyond introducing Agile for those not familiar) was that the architect on an agile team is the person who looks ahead beyond the current iteration. I mostly agree with Sam Gentile as he outlines his views here.
I’ve had numerous requests to publish my podcast list, so here it is. Here is what I listen to on my commute to and from client sites or, in the case of this month, to and from a buddy’s wedding several states away.
In case you’d rather import the OPML, here it is in its entirety:
```xml
<?xml version="1.0" encoding="utf-8" ?>
<opml version="1.0">
  <body>
    <outline title="Polymorphic Podcast" type="rss" xmlUrl="http://polymorphicpodcast.com/podcast/feed/" />
    <outline title="Channel 9: Podcasts" type="rss" xmlUrl="http://channel9.msdn.com/rss.aspx?ForumID=34&amp;Mode=0&amp;sortby=0&amp;sortorder=1&amp;format=mp3" />
    <outline title="Hanselminutes" type="rss" xmlUrl="http://www.hanselminutes.com/hanselminutes_MP3Direct.xml" />
    <outline title=".NET Rocks!" type="rss" xmlUrl="http://www.dotnetrocks.com/DotNetRocks_FullMP3.xml" />
    <outline title="Slashdot Review" type="rss" xmlUrl="http://www.slashdotreview.com/wp-rss2.php" />
    <outline title="The Java Posse" type="rss" xmlUrl="http://feeds.feedburner.com/javaposse" />
    <outline title="Dr. Neil's Notes" type="rss" xmlUrl="http://feeds.feedburner.com/DrNeilsNotes" />
    <outline title="IT Conversations" type="rss" xmlUrl="http://feeds.gigavox.com/gigavox/channel/itconversations" />
    <outline title="Tech, No Babel" type="rss" xmlUrl="http://www.trinitydigitalmedia.com/tnb.xml" />
    <outline title="One Minute Tip" type="rss" xmlUrl="http://www.oneminutetip.com/feed.xml" />
    <outline title="RunAs Radio FullMP3" type="rss" xmlUrl="http://www.runasradio.com/runasradio_FullMp3.xml" />
    <outline title="Straight Talk podcast" type="rss" />
  </body>
</opml>
```
There it is. A pile of ripped up note cards denoting all the engineering tasks completed by my pair. This was an unusual day because it started at 7:30am with some white board modeling and brain-crunching a very, very tough problem. We wrote out some tasks to get us started, and we played the TDD game. It was such a tough problem that neither my pair partner nor I could imagine the complete solution, but we could imagine a few things that would be included. We wrote down the items to do on note cards and ordered them. The navigator manned the note card deck. In the course of completing some of the engineering tasks (all part of the same story), we uncovered new tasks that needed to be done. We wrote those down and added them to the bottom of the pile. We also ran into a brick wall and had to stop to do another task before we could continue. We wrote that down and added it to the TOP of the stack. We used these note cards as a true STACK. FILO. It helped us stay on track. We finally got through the stack, and when we did, the story was implemented. We didn't have the complete plan at the beginning, but we adapted and changed the plan along the way.
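The card-handling rules above map to a simple double-ended structure: blocking tasks go on top of the pile, newly discovered tasks go on the bottom, and work always comes off the top. A small sketch in Python (the task names are invented for illustration):

```python
from collections import deque

# The pile of engineering-task note cards, worked from the left (the top).
tasks = deque(["model the aggregate", "write first failing test"])

# A newly discovered task goes on the BOTTOM of the pile.
tasks.append("refactor duplication in mapper")

# A blocking task goes on TOP: it must be done before we can continue.
tasks.appendleft("fix the broken build script")

completed = []
while tasks:
    completed.append(tasks.popleft())  # always work the top card
```

The blocker comes off first and the discovered task last, which is exactly how the pile behaved during the day.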
This pile of ripped up note cards is all that's left of that nasty story. It did take us all day, though, and at 6PM, we both were wiped out! It was the last day of the iteration, and we had committed to the story. We ate a quick lunch at our desks and worked straight through. It was great that we were able to meet our iteration commitment through micro-adaptive planning.
I hope I don't have to do that again for a long time because that was the most difficult pair-programming I have ever done. We were talking through the problems _all day_. My brain was hurting on the way home.
My pair partner commented that it would have taken him 3 days if he had had to work through this nasty story by himself. Pairing on the entire solution cut the delivery by 2/3.
I am a big fan of shortcut keys. I find that if I can keep my hands on the keyboard and not reach for the mouse, my productivity stays high. I want to say thanks to Steve Donie for this change in my mindset. Steve has a key for everything, and if there is not a key for it, he has a batch script with a shortcut key mapped. Below, I'll illustrate some shortcut keys I love. Note, I use many, many shortcut keys that hook into Resharper, but these shortcut keys work in Visual Studio 2005.
- ctrl+enter: insert a line above the current line and jump to the new line
- ctrl+shift+enter: insert a line below the current line and jump to the new line
I found out today that my wife is with child, and we're having a baby! Now, on with the post. . .
Integration testing isn't your basic 200-level topic at an MSDN event. It can be very involved. I believe that a good integration test has to depend on targeted unit tests being present. Consider the scenario without unit tests:
Bob has a use case that spans 15 classes. He sets up the environment to get this slice of the system under test. He then proceeds to write the test with asserts. He quickly becomes frustrated because for each of the 15 classes along the way, there are different possible scenarios. If each class has just 2 possible uses, the number of scenarios to test is 2^15. Each scenario requires many assert statements. Faced with 32,768 test combinations, Bob is disgruntled and concludes that automated integration testing is too much overhead.
What did Bob do wrong? First, Bob attempted to start his automated testing at the integration level. Second, he assumed unit test responsibilities inside the integration test. Third, he tried to test every possible combination of integration. Fourth, he hadn't surrounded himself with a quality team that could help guide the testing strategy.
Here's the success scenario:
Bob has written unit tests for each of his 15 classes. He marvels at how simple they look, since each unit test only has to cover 2 usage scenarios for its class. With confidence that each individual class will do its job correctly, Bob writes an integration test for the use case, choosing one of the many combinations that could occur. Bob sets up the test, executes it, and then asserts on the resulting state of the system. Bob finds an integration issue caused by how two of the classes interact with each other. He fixes that bug, and the test passes. Bob now has confidence that the 15 classes are interacting properly in his use case.
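The contrast between Bob's two attempts can be sketched in a few lines. This is a hypothetical example (the classes and names are invented, and it is in Python rather than C# for brevity): each class gets targeted unit tests for its own two scenarios, and a single integration test then checks one representative path through the collaboration instead of all 2^15 combinations.

```python
# Two hypothetical collaborating classes from Bob's use case.
class TaxCalculator:
    def tax(self, amount: float, exempt: bool) -> float:
        # Two usage scenarios: exempt or not. (0.25 rate is arbitrary.)
        return 0.0 if exempt else amount * 0.25

class InvoiceTotaler:
    def __init__(self, calculator: TaxCalculator):
        self._calculator = calculator

    def total(self, amount: float, exempt: bool) -> float:
        return amount + self._calculator.tax(amount, exempt)

# Unit tests: each class covers its own 2 scenarios in isolation.
def test_tax_applied():
    assert TaxCalculator().tax(100.0, exempt=False) == 25.0

def test_tax_exempt():
    assert TaxCalculator().tax(100.0, exempt=True) == 0.0

# Integration test: ONE representative combination, asserting on the
# end state instead of re-testing every branch of every class.
def test_totaler_and_calculator_integrate():
    totaler = InvoiceTotaler(TaxCalculator())
    assert totaler.total(100.0, exempt=False) == 125.0
```

With 15 classes instead of 2, the shape stays the same: the unit tests carry the combinatorial load, and the integration test only has to prove the pieces talk to each other.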
If you haven't already read the following from my friend, Jeremy Miller, take a minute to do so: