Wednesday, December 22, 2010

code coverage - what's to cover

Attend a software testing conference anywhere, anywhere at all actually, and you will hear the magic words "code coverage".
"Our automation covers 90% of the code," someone says.
"Our code coverage rose from 20% to 50% this year," says another.

When you get back, you can't wait to start measuring your own code coverage: deploy a coverage tool, run your automation, collect the data. Then, bang! The number shoots you down. It will probably land somewhere between 10% and 40% if you have never done this before. Oh, damn! You get to work: read the code, talk to the developers, write new tests, do whatever you can to push the little line chart up.

But what do you get when code coverage rises? Better code? Ah... no. It is unit testing that produces better code, because unit testing forces you to refactor: eliminate global variables, static methods, and so on, just to make the code testable at the code level. That is how the code gets better.
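Here is a minimal sketch of that kind of refactoring (Python, with made-up names, just to illustrate the idea): a function that reads a global is hard to unit test, while one that takes the value as a parameter is trivial to test.

    # Before: depends on a global, so a unit test cannot control the rate.
    TAX_RATE = 0.25

    def price_with_tax(price):
        return price * (1 + TAX_RATE)

    # After: the dependency is injected, so a test can supply any rate it likes.
    def price_with_tax_v2(price, tax_rate):
        return price * (1 + tax_rate)

    def test_price_with_tax():
        assert price_with_tax_v2(100.0, 0.25) == 125.0

The same move scales up: globals and static calls become constructor or function parameters, and the code gets easier to test and easier to reuse.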

Unfortunately, most of us are not using unit tests to drive automation. We drive automation through the UI or the API: Tcl/Expect to call command-line interfaces, Selenium to drive web interfaces, and so on. When the code doesn't provide much testability (and code not covered by the specification usually provides even less), we fall back on fragile techniques like record-and-replay just to push the coverage number up. Eventually we end up with a pile of hard-to-maintain automation scripts, and the code still sucks.
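To make the fragility concrete, here is a hedged Selenium sketch in Python; the URL and element ids are invented for illustration. The first locator is the kind record-and-replay tools capture; the second only exists if developers add a stable hook for you:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("http://example.com/login")   # hypothetical page

    # Recorded locator: tied to the page layout, breaks on any redesign.
    submit = driver.find_element(
        By.XPATH, "/html/body/div[2]/div/form/table/tr[3]/td[2]/input")

    # Stable locator: survives redesigns, but the id ("login-submit") is an
    # assumption; it only exists if the code was written to be testable.
    submit = driver.find_element(By.ID, "login-submit")
    submit.click()

    driver.quit()

Both locators find the same button; the difference is whether testability was designed in or scripted around.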

Does rising code coverage make the customer happy? Ah... no. XP and TDD make customers happy because XPers write automated acceptance tests before they write code. They define what the code should do first. If the code fails to match the acceptance test, that is a bug that needs to be fixed.

Unfortunately, most of us work on traditional projects. We write tests from the functional specification to describe what the code should do. When these tests do not cover 100% of the code, we rush out to read the code and write tests that follow it. That is white-box testing. The problem with white-box testing is that it tests what the code is doing, not what the code should do. And that seldom makes the customer happy.
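A tiny illustration of the trap (hypothetical code; assume the spec says orders of $50 or more ship for free):

    # The implementation has an off-by-one bug at the boundary.
    def shipping_fee(order_total):
        return 0 if order_total > 50 else 5   # spec says ">= 50" is free

    # White-box test, written by reading the code: it passes, and it cements
    # the bug, because it asserts what the code is doing.
    def test_fee_at_fifty_whitebox():
        assert shipping_fee(50) == 5

    # Spec-driven test, written from the requirement: it fails and exposes the bug.
    def test_fee_at_fifty_spec():
        assert shipping_fee(50) == 0

Both tests raise coverage by exactly the same amount; only one of them protects the customer.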

Now let's sum it up and see how to do this right, like a tester:

* high code coverage at the unit-test level is good, for both code quality and testability

Just remember to refactor your tests while refactoring your code. (xUnit Test Patterns is a great book to start with.)
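One refactoring in the spirit of that book, sketched in Python with a made-up domain object: pull duplicated fixture setup into a Creation Method, so a constructor change touches one place instead of every test.

    class Order:
        # Hypothetical domain object, just for this sketch.
        def __init__(self, customer_id, region, items):
            self.customer_id = customer_id
            self.region = region
            self.items = items

        def total(self):
            return sum(price for _, price in self.items)

    # Before: every test builds the Order by hand; one constructor change
    # breaks them all.
    def test_total_inline_setup():
        order = Order(customer_id=42, region="EU", items=[("book", 10.0)])
        assert order.total() == 10.0

    # After: a Creation Method hides the details the test does not care about.
    def make_order(items):
        return Order(customer_id=42, region="EU", items=items)

    def test_total_with_creation_method():
        assert make_order([("book", 10.0)]).total() == 10.0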

* if you are trying to raise code coverage with component/integration-level automation
** evaluate testability first, both for driving the interface and for checking the results
** ask for the developers' help

For example, suppose component A accepts only authenticated, valid requests. For everything else, requests with over-long variables, invalid variables, or no authentication, component A simply closes the connection, returning nothing and logging nothing. This is a typical design that lacks testability.

When testing component A, we send several invalid requests and check that A does nothing. The problem is that when component A is down, that check also passes: a false positive. To prevent it, we have to verify that component A is alive before sending each invalid request, which means longer execution time and more scripts. All of this could be avoided if component A returned a specific response for each kind of request. And for that, we need the developers' help.
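Here is what that workaround looks like in practice, as a Python sketch with a made-up endpoint and a hypothetical known-good request:

    import socket

    HOST, PORT = "component-a.example", 9000   # hypothetical endpoint

    def send(payload):
        # Returns A's response, or b"" if A stayed silent (or is down).
        try:
            with socket.create_connection((HOST, PORT), timeout=5) as sock:
                sock.sendall(payload)
                return sock.recv(4096)
        except OSError:
            return b""   # refused/reset looks exactly like "A said nothing"

    def test_overlong_variable_naive():
        # False positive: this also passes when component A is dead.
        assert send(b"NAME=" + b"x" * 10000 + b"\n") == b""

    def test_overlong_variable_guarded():
        # The workaround: prove A is alive with a known-good request first.
        assert send(b"VALID-REQUEST\n") != b"", "component A is not responding"
        assert send(b"NAME=" + b"x" * 10000 + b"\n") == b""

If A instead returned, say, a distinct error line for each kind of bad request, the guard request, the extra runtime, and the ambiguity would all disappear.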

* when you have the driving and checking interfaces, write tests to describe what the code should do

The reason I like XP/TDD is that XPers write tests before they write code, then write code to pass the tests. Without that kind of check, developers tend to implement what is easiest to do, not what should be done. This bad habit has almost become accepted practice: "there is no design that can be completely implemented," a developer will tell you. Let me rephrase and put it this way: "there is no design that can be completely implemented without testing."

Before following the code to write tests, understand why the system needs this piece of code, then write your tests to describe how it should behave.
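For instance (a hypothetical sketch): suppose the system needs a retry helper because the network is flaky. A test written from that "why" pins down the required behavior without peeking at the implementation:

    def retry(func, attempts=3):
        # Call func until it succeeds, at most `attempts` times.
        for i in range(attempts):
            try:
                return func()
            except Exception:
                if i == attempts - 1:
                    raise

    def test_recovers_from_transient_failures():
        calls = {"n": 0}
        def flaky():
            calls["n"] += 1
            if calls["n"] < 3:
                raise IOError("transient failure")
            return "ok"
        # The behavior the system needs: two transient failures must not
        # surface to the caller.
        assert retry(flaky, attempts=3) == "ok"

The test stays valid even if retry is rewritten with a loop, recursion, or a library: it covers the requirement, and the code has to cover it back.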

* a test should always describe what the code should do, not what the code is doing; therefore it is the code that needs to cover the tests, not the tests that cover the code.

2 comments:

Anonymous said...

It's really inspiring; it gives developers a chance to think in a different way.

Anonymous said...

I totally agree with the statement "A test should always describe what the code should do, not what the code is doing...".