In one of our projects, I purchased a 50" big-screen television on which to display build statuses. With the popularity of HD TVs, I was able to find one without defect for $180.00. The TV sits in the team room, visible from the developer pod, so everyone can see the build status at a glance. We are using BigVisibleCruise in full-screen mode.
In software, everyone agrees that testing is good, but there is a lot of disagreement on the level of testing that is necessary. Fortunately, there is not an industry standard or government software code that mandates certain testing because this industry is still figuring it out. The hypothesis of this article is that if you can’t afford to test it, you can’t afford to sell it.
Producing, selling, and supporting software is expensive, and, like any business venture, it requires capital in order to make it happen. Testing is one of the investments required in order to sell and support a software product. I believe this is relevant for consultancies as well because, ultimately, consultants are helping someone else produce a product that can be sold and supported.
When sales staff are selling the software product, they are making certain claims about the software. Often, the underlying assumption is that the software works as expected. Consider the following dialog between a customer and a salesman:
Customer: What do I need to run your software?
Salesman: You just need Internet Explorer 7 or Firefox 2.
Customer: So it will work on my Linux and Macintosh computers as long as I have Firefox 2?
Salesman: All you need is Firefox 2, yes.
A responsible product company will enumerate its supported platforms and will have tested its product on all of them. With the web, the nirvana dream is that, with the browser, we can write one application and run it anywhere. As an industry, we haven’t actually accomplished that; even Firefox differs from Linux to Mac to Windows. Just because we test with Firefox on one platform doesn’t guarantee identical behavior on the others. We actually have to test on all the supported platforms.
In the above case, this salesman was not given operating system details. Sales staff were told that only the browser mattered. Furthermore, the product team emphasized browser testing but not full platform testing. The team tested every release with Internet Explorer 7 and Firefox 2. They had high confidence that the product worked in these browsers... on Windows.
This is a common scenario, and the proper way to deal with it is to publish supported platforms. If a customer calls the support line about a problem running Firefox 2 on Linux, support must be capable of reproducing the problem, which means the support team must have an environment similar to the customer’s.
We have to decide what we are really going to support. If we are going to support two browsers on three operating systems, then we need those six platforms readily available to the support team. If we would rather not do this, then we need to restrict the number of platforms we will support, and the sales team needs to be made aware. It is not acceptable for the sales team to sell on the basis that the product works on all operating systems while the support team cannot help anyone who is not running Windows. Deciding the supported platforms is the responsibility of product management. If we can’t afford to test on Linux, then we can’t afford to sell for Linux.
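The arithmetic above can be sketched directly. A minimal illustration, taking the article’s two-browsers-by-three-operating-systems example at face value (the browser and OS names are just the ones from the dialog):

```python
from itertools import product

# Hypothetical support matrix: every browser/OS pair we claim to
# support is a platform support must be able to reproduce bugs on.
browsers = ["Internet Explorer 7", "Firefox 2"]
operating_systems = ["Windows", "Mac", "Linux"]

supported_platforms = [
    f"{browser} on {os_name}"
    for browser, os_name in product(browsers, operating_systems)
]

# 2 browsers x 3 operating systems = 6 platforms to test and support
print(len(supported_platforms))
for platform in supported_platforms:
    print(platform)
```

The point of writing it as a cross product is that the matrix grows multiplicatively: adding one more browser or one more operating system doesn’t add one platform, it adds a whole row or column.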
New Releases, New Bugs
Unfortunately, users are accustomed to seeing new bugs appear along with new releases of software. Sometimes even service packs designed to fix bugs introduce new ones, like the reflection bug introduced in .NET 3.5 SP1.
New releases are good. In order to serve our customers, we need to put out new releases. New releases are intended to have new features, but not new bugs. New releases assume that features in old releases continue to work properly. Here is the kicker: how do we know that old features still work? How do we really know? How do we ensure that customers, when using the new release, won’t have to call customer support because an old feature that they depend on is malfunctioning? The answer: we have to test it. This seems like an obvious statement, but it is very difficult to completely regression test a new release of software. If each release has roughly 1000 function points, then the sixth release will have 6000 function points. Therefore, the regression test burden for each subsequent release is the sum of the efforts of every previous release.
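To make that growth concrete, here is a minimal sketch of the arithmetic, assuming the article’s round figure of 1000 function points per release (the numbers are illustrative, not measured):

```python
# Illustrative regression-burden arithmetic: each release adds roughly
# 1000 function points, and every one of them must still work in
# every subsequent release.
POINTS_PER_RELEASE = 1000

def regression_scope(release_number):
    """Total function points that must be regression tested for a release."""
    return POINTS_PER_RELEASE * release_number

for n in range(1, 7):
    print(f"Release {n}: {regression_scope(n)} function points to verify")
```

The scope is cumulative: release six carries the full 6000 function points, not just the 1000 new ones, which is why manual regression testing stops scaling.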
We can’t afford to test everything
Quality Assurance thought leadership in the industry has identified a point of diminishing returns in testing. For example, if a data entry form works with representative data values, do we know for sure that it works with every combination of characters possible in regular text fields? There is no return on the investment of testing with every possible combination of characters. Skilled testers categorize data and test with representative samples of data, not every possible combination. This is just an obvious example of where it doesn’t make sense to test everything.
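As an illustration of testing with representative samples rather than every combination, here is a sketch in the style of equivalence partitioning (the `normalize_name` function and its input categories are hypothetical, invented for this example):

```python
# Hypothetical data-entry validation under test.
def normalize_name(raw):
    """Trim whitespace; reject blank input."""
    cleaned = raw.strip()
    if not cleaned:
        raise ValueError("name is required")
    return cleaned

# One representative value per category of input, instead of
# every possible combination of characters.
representative_samples = {
    "typical": "Jane Smith",
    "surrounding whitespace": "  Jane Smith  ",
    "punctuation": "O'Brien-Smith",
    "non-ASCII": "José Ångström",
}

for category, sample in representative_samples.items():
    assert normalize_name(sample) == sample.strip(), category

# The blank partition should be rejected, once, with one representative.
try:
    normalize_name("   ")
    raise AssertionError("expected ValueError for blank input")
except ValueError:
    pass
```

Four representatives plus one rejection case cover the behaviors the form must exhibit; testing a million more arbitrary strings would add cost without adding information.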
What we must test is software behavior. Testing every possible data value has diminishing returns, but failing to test every behavior has huge potential support implications, especially if you sell a successful product. The number of software behaviors in software products is staggering. We often only demonstrate a subset of all the functionality in the software, but we must test the full behavior.
How do we know it works?
That is the kicker, isn’t it? As a product manager, don’t I want to know that answer? Every release, I want to ask that question and have the answer always be “yes”. It is unacceptable for the answer to ever be “no”. Often, the answer is: “I tested that last week.” That’s great, but we have a new build of the software today. Duplicating software is cheap; designing software is expensive. Duplicating an automobile is not as cheap, but it is still cheaper than designing the car. If I change my design, I must retest the new result. It is not adequate to say that I tested last week and I’m “sure” that this week’s changes haven’t broken anything. Famous last words.
We must test
Testing is expensive, we’re told. The effort we spend building up exhaustive automated regression test suites is indeed extensive. The untold story is the activities that shrink or disappear as a result:
- Weekly production hotfixes
- Delayed releases due to surprise defects
- Huge lists of defects
- Usage of a bug-tracking system
- Triage meetings for defects
- Negotiations with customers regarding what hotfix fixes the defect
- Scripting sales demos around current defects
- Hate-hate relationship between developers and testers
My experience has been that investment in testing is very quickly recouped; therefore, I don’t consider it a long-term cost. Viewed holistically across the product, it actually saves the program money. We end up trading time spent after development for time spent during it.
The hard part is structuring the test infrastructure so that it decreases the cost of the software instead of being a cost in and of itself. If not done properly, the test suite actually has maintenance costs of its own, and that defeats the purpose. Done properly, the test suite verifies every behavior in the software every day. It catches problems early and gives us confidence that the product works... that all of it works.
If we say we support the product regardless of operating system, we must run our test suite on many operating systems. Realistically, we wouldn’t support every operating system, only Windows, Mac, and Linux (and specific versions of each). We must enumerate what we support, and then we run our test suite on each platform. If we don’t run our test suite on a platform, then we cannot say that we support that platform. We don’t actually know if our software runs perfectly if we don’t test it. Running a few tests manually is insufficient as well. Our full regression test suite must completely pass on each platform. If there are some known issues with a specific platform, we should call those out so our customers are aware.
First we must decide what to support. Then we must test for what we decided. Then we can sell what we tested. If we can’t afford to test it, we can’t afford to sell it.
A hot topic in the agile world is “self-organization”. The reaction against tight command and control management structures has swayed the pendulum all the way over to chaos.
First, I understand that every team is different, and my views are tainted by my personal experiences, which include heavy work in Austin, TX and with various companies sprinkled across the U.S. I have seen (and made) arguments that self-organizing teams are much more productive and effective than tightly-controlled teams. I now believe that a balance is critical (surprise, surprise).
To understand my perspective, you should know that I have been in software management for just over two years, and before that I came up through the ranks as a software developer. My views on self-organization have changed with my role, but in either position, they have not been extreme.
When I was an individual contributor, I was heartily in the self-organization crowd, mainly because I preferred not to be directed. I found that I, and many of my very intelligent co-workers at several companies, felt that we as the team knew the best way to proceed, and I somewhat resented management handing down decisions that seemed misinformed for the situation. The fact was that many of management’s decisions were misinformed. There was no alternative, since management swooped in for a weekly status meeting and then was gone again. The lack of consistent, involved management on a day-to-day basis severely colored my opinion of managers (at least in I/T). With this lack of engaged management, across several organizations, a team has no choice but to self-organize or remain in constant chaos. The self-organizing did happen to a certain extent, but only after much chaos and posturing to determine which team member would surface as the “lead”, since management had failed to identify such a person. Since most of the time the team was made up of peers, making decisions was slower than necessary because no individual had the authority to make the call so that we could move on. Consensus and needless discussion ruled. Over time, each team member learned to carefully select the battles in which to engage so that these discussions could be limited, but the time wasted was significant. If any of my former co-workers or managers are reading this, you were the good group (the ones described above would not be reading this blog).
The above represents some of my experience as a software developer. Because of this experience, and my reaction against managers who were not properly leading their teams, I have been in favor of what the industry calls “self-organizing teams”. Some other writings on the topic (for and against) are below:
- Agile Processes and Self-Organization
- How to grow a self-organizing team
- Agile Work Uses Lean Thinking – Team Self-Organization
- No More Self-Organizing Teams
- The Myth of Self-organizing Teams
- Self Organizing Teams are Superior to Command n’ Control Teams
It is very important to emphasize that my experience managing and growing software teams has changed my thinking regarding self-organization. My current role as Chief Technology Officer of Headspring Systems finds me managing a company with 10 software developers, not including me or our Chief Architect, Kevin Hurwitz. In our line of work, consulting projects, we cannot afford the time it takes for a project team to go through Forming, Storming, and Norming just to get to Performing. While those phases have to happen, we had to find a way to streamline forming and storming. This is where management comes in.
My approach to management has some roots in what not to do. From some of my past experiences, I know that I will be more successful if I’m engaged closely with my teams, not only supporting them but also directing their activities. If we think about a continuum, dictatorship is at the opposite end from pure consensus. If management takes a tight command and control approach, the organization simply will not scale and cannot grow. A single manager can only control so much. A manager must know how to delegate if the organization is to grow.
As with anything in life, self-organization is a balance. My guys employ self-organization for certain things, but other things are directed. Point: self-organization does not require self-direction. A team that is fully self-directing will likely not accomplish anything significant. A software team within a company cannot make every decision; it can only make certain ones. For instance, what market segment to compete in is a decision that has likely already been made; however, what unit testing framework to use might be up to the team. This is where the balance between direction and delegation comes in.
We have certain things that are mandatory for Headspring projects. These things are considered competitive advantages, and no project team has the ability to deviate. This may seem like command and control, and to some level it is. This is the balance I am talking about. Most other decisions are delegated to the team, but some of the things that are “dictated” are:
- An automated, continuous build
- Very high (near 100%) unit test coverage
- O/R Mapper for data access
- IoC container required
- Onion Architecture as architectural approach
- Extreme Programming practices
- Some coding standards
- Sitting together
The above are just some of the things that are “handed down from above with an iron fist”, and one could argue that a team should have the authority to decide their own process framework or whether or not to do unit testing, but that is a business decision. If I were running a Research and Development group, I would certainly have different levels of delegation, but the above are considered core to our practice and key to project effectiveness. If I’m wrong, then it is clearly my fault, but that is one reason why management is hard: anything that is wrong with the organization is the manager’s fault, either by having the wrong people in place or sending them in the wrong direction.
The Self-Organizing Team is a myth because it communicates an absolute. What is more real is an appropriate level of direction coupled with suitable delegation and trust. In order to make our project teams effective, I must provide guidance and a behavioral framework for the team to operate in. Within these somewhat strict guidelines, all other decisions are delegated, and I trust them to make the right choices. This balance between directing and delegating has proved to be the appropriate balance between self-organization and dictated-organization. The team members are very happy as well because they are pushed in the direction of success.
General management support is always key. This approach to organizing would not work at all if I were disconnected and uninvolved with the teams. I must stay on top of progress and current blocking issues in order to keep the team as effective as possible. By staying engaged, I can also see problems coming from a distance and take action before they become blocking issues for the team. Software tools are an easy example that hangs up so many organizations. Any software tool for software development has an ROI story, and many sub-$1000 tools are available. Compared to developer time, these tools are dirt-cheap. Just yesterday, I purchased new licenses of Redgate SQL Compare for the team because they were incorporating it into the process for database migrations. I’ve seen several managers try to negotiate away a small tool cost just because the procurement process for the company is difficult or it is politically unpopular to ask for special allocations. This one small example is a symptom of the type of management in place: supportive management versus tourist management (the kind that comes to visit once in a while).
In order for a team to exist, it must have a mission, a purpose in life. This mission must be directed, and it is probably what caused the team to form in the first place. Management must find the appropriate balance between directing and delegating. Because of my background in software development, I am able to competently direct my teams in the right direction while delegating tactical decisions that are contextual. Furthermore, the Principal in charge of a single project team decides which further decisions to make personally and which to delegate to the individual developers. The experiences in my career have shaped my views on self-organization, and I have found that neither pure self-organization nor pure command and control is appropriate for an effective software team. A team might perform in spite of either, but a carefully balanced direction/delegation ratio will dramatically increase effectiveness.