Another short reference post on the topic of unit testing, in the spirit of saving your keystrokes, based on a conversation I've had a couple of times concerning unit test code coverage. We're using SonarQube on a current project, which, amongst other things, analyses C# code and reports the percentage covered by unit tests. We're currently running at around 80%, which to me seems a fairly healthy metric. It's not the be-all and end-all of course - a high percentage of test coverage doesn't on its own guarantee a healthy code base - but, all things being equal, it's a good sign.

The question then comes - why not 100%? Not in the naive sense though - there are clearly areas of the code, property getters and setters being the classic example, that you could unit test but would get little if any value from doing so, and hence the effort is unlikely to be worthwhile. Rather in the sense that if there are areas that aren't going