In software, everyone agrees that testing is good, but there is a lot of disagreement on the level of testing that is necessary. Fortunately, there is no industry standard or government regulation that mandates a particular level of testing, because this industry is still figuring it out. The hypothesis of this article is that if you can’t afford to test it, you can’t afford to sell it.
Producing, selling, and supporting software is expensive, and, like any business venture, it requires capital in order to make it happen. Testing is one of the investments required in order to sell and support a software product. I believe this is relevant for consultancies as well because, ultimately, consultants are helping someone else produce a product that can be sold and supported.
When sales staff are selling the software product, they are making certain claims about the software. Often, the underlying assumption is that the software works as expected. Consider the following dialog between a customer and a salesman:
Customer: What do I need to run your software?
Salesman: You just need Internet Explorer 7 or Firefox 2.
Customer: So it will work on my Linux and Macintosh computers as long as I have Firefox 2?
Salesman: All you need is Firefox 2, yes.
A responsible product company will have its supported platforms enumerated and will have tested its product on all of them. With the web, the nirvana dream is that, thanks to the browser, we can write one application and run it anywhere. As an industry, we haven’t actually accomplished that; even Firefox behaves differently on Linux, Mac, and Windows. Just because we test with Firefox on one platform doesn’t guarantee identical behavior on the others. We actually have to test on every supported platform.
In the above case, the salesman was not given operating system details. Sales staff were told that only the browser mattered. Furthermore, the product team emphasized browser testing but not full platform testing. The team tested every release with Internet Explorer 7 and Firefox 2. They had high confidence that the product worked in these browsers... on Windows.
This is a common scenario, and the proper way to deal with it is to publish supported platforms. If a customer calls the support line for a problem running Firefox 2 on Linux, support must be capable of reproducing the problem. The support team would have to have an environment similar to the customer’s.
We have to decide what we are really going to support. If we are going to support two browsers on three operating systems, then we need those six platforms readily available to the support team. If we would rather not do this, then we need to restrict the number of platforms we will support, and the sales team needs to be made aware. It is not acceptable for the sales team to sell on the basis that the product works on all operating systems while the support team cannot help anyone who is not running Windows. Deciding the supported platforms is the responsibility of product management. If we can’t afford to test on Linux, then we can’t afford to sell for Linux.
New Releases, New Bugs
Unfortunately, users are accustomed to seeing new bugs appear along with new releases of software. Sometimes, even service packs designed to fix bugs introduce new ones, like the reflection bug introduced in .NET 3.5 SP1.
New releases are good. In order to serve our customers, we need to put out new releases. New releases are intended to have new features, but not new bugs. New releases assume that features in old releases continue to work properly. Here is the kicker: how do we know that old features still work? How do we really know? How do we ensure that customers, when using the new release, won’t have to call customer support because an old feature that they depend on is malfunctioning? The answer: we have to test it. This seems like an obvious statement, but it is very difficult to completely regression test a new release of software. If each release adds roughly 1000 function points, then by the sixth release the product will have 6000 function points. The regression test burden for each new release is therefore the sum of the testing effort for every release that came before it, plus its own.
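The arithmetic behind that claim can be sketched in a few lines. The 1000-function-points-per-release figure is the hypothetical number from the text above, not a measured one:

```python
# Hypothetical figure from the text: each release adds ~1000 function points.
POINTS_PER_RELEASE = 1000

def regression_burden(release_number: int) -> int:
    """Function points that must be regression-tested for a given release:
    everything in this release plus everything shipped in all prior releases."""
    return release_number * POINTS_PER_RELEASE

for n in range(1, 7):
    print(f"Release {n}: {regression_burden(n)} function points to verify")
# Release 6 carries 6000 function points of regression burden.
```

The point of the sketch is that the burden grows linearly with release count, so manual regression testing of everything becomes untenable fast.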
We can’t afford to test everything
Quality Assurance thought leadership in the industry has identified a point of diminishing returns in testing. For example, if a data entry form works with representative data values, do we know for sure that it works with every combination of characters possible in regular text fields? There is no return on the investment of testing with every possible combination of characters. Skilled testers categorize data and test with representative samples of data, not every possible combination. This is just an obvious example of where it doesn’t make sense to test everything.
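The representative-sampling idea above is usually called equivalence partitioning. A minimal sketch, using a hypothetical name-validation rule invented for illustration:

```python
# Equivalence partitioning sketch: one representative value per category of
# input, plus boundary values, instead of every possible character combination.
# validate_name and its 50-character rule are hypothetical, for illustration.

def validate_name(value: str) -> bool:
    """Accept non-empty names up to 50 characters."""
    return 0 < len(value.strip()) <= 50

representative_cases = {
    "typical":      ("Maria Garcia", True),
    "empty":        ("", False),
    "whitespace":   ("   ", False),   # blank input should be rejected
    "boundary_max": ("x" * 50, True), # exactly at the limit
    "over_max":     ("x" * 51, False),# just past the limit
}

for label, (value, expected) in representative_cases.items():
    assert validate_name(value) == expected, label
```

Five cases cover the interesting behaviors of this field; testing millions of random strings would add cost without adding information.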
What we must test is software behavior. Testing every possible data value has diminishing returns, but failing to test every behavior has huge potential support implications, especially if you sell a successful product. The number of software behaviors in software products is staggering. We often only demonstrate a subset of all the functionality in the software, but we must test the full behavior.
How do we know it works?
That is the kicker, isn’t it? As a product manager, don’t I want to know that answer? Every release, I want to ask that question and have the answer always be “yes”. It is unacceptable for the answer to ever be “no”. Often, the answer is: “I tested that last week.” That’s great, but we have a new build of the software today. Duplicating software is cheap. Designing software is expensive. Duplicating an automobile is not as cheap, but it is still cheaper than designing the car. If I change my design, I must retest the new result. It is not adequate to say that I tested last week but I’m “sure” that this week’s changes haven’t broken anything. Famous last words.
We must test
Testing is expensive, we’re told. The effort we spend building up exhaustive automated regression test suites is indeed extensive. The untold story is the activities that shrink or disappear once those suites exist:
- Weekly production hotfixes
- Delayed releases due to surprise defects
- Huge lists of defects
- Usage of a bug-tracking system
- Triage meetings for defects
- Negotiations with customers regarding what hotfix fixes the defect
- Scripting sales demos around current defects
- Hate-hate relationship between developers and testers
My experience has been that investment in testing is recouped very quickly; therefore, I don’t consider it a long-term cost. When viewed holistically, it actually saves the program money. We end up trading time spent after development for time spent during it.
The hard part is structuring the test infrastructure so that it decreases the cost of the software instead of being a cost in and of itself. If not done properly, the test suite actually has maintenance costs of its own, and that defeats the purpose. Done properly, the test suite verifies every behavior in the software every day. It catches problems early and gives us confidence that the product works... that all of it works.
If we say we support the product regardless of operating system, we must run our test suite on many operating systems. Realistically, we wouldn’t support every operating system, only Windows, Mac, and Linux (and specific versions of each). We must enumerate what we support, and then we run our test suite on each platform. If we don’t run our test suite on a platform, then we cannot say that we support that platform. We don’t actually know if our software runs perfectly if we don’t test it. Running a few tests manually is insufficient as well. Our full regression test suite must completely pass on each platform. If there are some known issues with a specific platform, we should call those out so our customers are aware.
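One way to make that enumeration concrete is to put the supported-platform list into the test suite itself, so a run on an unsupported platform fails loudly rather than producing a green result nobody agreed to stand behind. A minimal sketch; the platform set is hypothetical:

```python
# Sketch: the suite refuses to "pass" on a platform product management
# never agreed to support. SUPPORTED_PLATFORMS is a hypothetical list.
import platform

SUPPORTED_PLATFORMS = {"Windows", "Linux", "Darwin"}  # "Darwin" is macOS

def assert_supported_platform() -> str:
    """Fail fast if the suite is running somewhere we do not support."""
    current = platform.system()
    if current not in SUPPORTED_PLATFORMS:
        raise RuntimeError(
            f"{current} is not a supported platform; "
            "a passing run here proves nothing we can sell."
        )
    return current
```

A guard like this would typically run once at suite startup; the full regression suite then has to pass on each platform in the set before that platform can appear on the supported list.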
First we must decide what to support. Then we must test for what we decided. Then we can sell what we tested. If we can’t afford to test it, we can’t afford to sell it.