I’ve been posting a retrospective of my team’s current release, and I’ve run a few code-analysis numbers to establish a baseline trend.
I normally don’t do code analysis, since working code is our real goal, but here it is:
I analyzed our latest component, which is part of a larger software product. This component delivers tremendous business value, and it was developed from scratch by my team using XP methods. Here are the stats:
- Statements: 6,600 (2,500 production statements; the rest is test code)
- Classes: 141 (71 production classes; the rest are test classes)
- Methods per class: 7.5
- Lines of code per method: about 5 on average
- Cyclomatic complexity: maximum of 6 for any method; average of 1.5
We have a few methods that approach 20 lines of code, but they can be counted on one hand.
This release has seen very few bugs.
I don’t see any value in using code metrics as a direct measurement of code quality. The two may trend together, but correlation isn’t causality. It is, however, interesting to look at the trends from time to time.
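If you want to track trends like these yourself, a short script is enough. Here’s a minimal sketch using Python’s standard `ast` module; the specific stats it gathers (class count, method count, average method length) are my own choices, not a standard tool:

```python
import ast

def summarize(source: str) -> dict:
    """Gather simple size metrics from a Python source string."""
    tree = ast.parse(source)
    classes = [n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
    methods = [n for n in ast.walk(tree)
               if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    # Method length measured as spanned source lines (end_lineno needs Python 3.8+).
    lengths = [n.end_lineno - n.lineno + 1 for n in methods]
    return {
        "classes": len(classes),
        "methods": len(methods),
        "methods_per_class": len(methods) / len(classes) if classes else 0,
        "avg_method_lines": sum(lengths) / len(lengths) if lengths else 0,
    }
```

Run it over each file in the codebase at every release and you get exactly the kind of trend line I’m talking about, without reading anything into any single number.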
- We ended up with twice as much test code as production code.
- Our classes ended up very small. Our methods even smaller.
- Our method cyclomatic complexity averaged between 1 and 2.
- We ended up with about 5 actual bugs in the release. This might seem unreal, but I credit the extensive automated test coverage for this result.
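For anyone unfamiliar with the cyclomatic complexity numbers above: it’s the McCabe count of independent paths through a method, i.e. 1 plus one per decision point. A rough sketch of how a tool computes it (the set of counted node types is my simplification; real tools are more careful about boolean operators and match/case):

```python
import ast

# Decision points counted toward complexity -- an approximation of the McCabe rule.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(func_source: str) -> int:
    """Return 1 plus the number of decision points in the given source."""
    tree = ast.parse(func_source)
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(tree))
```

A straight-line method scores 1; one `if` makes it 2; and so on. An average of 1.5 means most of our methods are straight-line code with at most a single branch.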
I know some of you will cringe at the thought of writing twice as much test code as production code, but given the results we have achieved, I consider it worth it.