Saturday, September 18, 2010

Testing no-dojo session 6 - Agile testing part 2 - tons of bugs

After I posted Agile testing part 1, one developer commented on it on Twitter: "Nice article. Your next challenge is how to sell your idea to developers." The idea he meant was unit testing. Among all the components I tested, only one had unit tests. "Because it's easy to write unit tests for this one," the developers said. That sounded right. Do the easy part! And hey, the results didn't look too bad; on the contrary, they looked better. By skipping unit testing, developers released earlier. Testers found a couple more bugs, so what?

It seemed right for a while. Then one day, a developer needed to commit a piece of core code that might affect other code, and he wasn't sure about the consequences. So he came to me and asked,
"Is there a way to detect that my code doesn't break other code?"
"Yeah, unit testing." I said.
"What about automated regression?"
"Sure, but that doesn't detect everything, and you have to wait 4 hours."
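A unit test is exactly the safety net that developer was asking for: a fast, local check that core behavior still holds before commit. A minimal sketch of what that could look like (the `discount` function and its rules are hypothetical examples, not code from our product):

```python
import unittest

# Hypothetical core function a developer is about to change.
def discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestDiscount(unittest.TestCase):
    # Each test pins down one piece of behavior other code relies on.
    def test_basic_discount(self):
        self.assertEqual(discount(200, 25), 150)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(discount(80, 0), 80)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100, 150)

if __name__ == "__main__":
    unittest.main()
```

A suite like this runs in milliseconds instead of 4 hours, and it fails at the exact function that broke, not somewhere downstream in a function-level regression.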

Developers always assumed this automated function-level regression could detect everything, because it was built by testers. And if it failed to detect a change, blame the testers! What they didn't realize was that a function in one component always involved other components, machines, the network, and other environmental factors.
Brady, a developer with testing experience, explained it this way: "Assume each factor has a 90% chance of working; then the chance of successfully executing a regression run is 0.9*0.9*0.9*0.9*...*0.9." And the worst part was that when a regression failed, a tester had to check every one of those factors to find the cause.
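Brady's multiplication is worth seeing with numbers. A quick sketch (the factor counts are illustrative; the post doesn't say how many our environment actually had):

```python
# Each environmental factor (other components, machines, network, ...)
# is assumed to work 90% of the time, independently of the rest.
PER_FACTOR = 0.9

# The whole regression run succeeds only if every factor works,
# so the probabilities multiply: 0.9 ** number_of_factors.
for factors in (1, 3, 5, 10):
    success = PER_FACTOR ** factors
    print(f"{factors:2d} factors -> {success:.0%} chance the run succeeds")
```

With ten such factors, the run succeeds only about a third of the time, and every failure might be the environment rather than the code under test.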

Pretty soon, developers found that it wasn't perfectly safe to commit code, even with the automated function-level regression. So they started committing less, as little as possible.

Elisabeth described exactly this situation in the second half of the lecture, "Technical Debt": people choose short-term pleasure and end up in long-term pain.

She also introduced what an Agile testing iteration looks like:
* Automated acceptance testing => Automated unit testing => Manual exploratory testing
And even after the code passes unit testing and acceptance testing, testers should still cover 3 kinds of cases in exploratory testing:
* end-to-end: usability
* sequence: scenarios involving complex steps
* data: historical data

She also explained why it is hard for management to accept Agile:
* customers, testers, and developers work together on acceptance cases
* testers and developers work together on unit tests
* everyone is involved in exploratory testing
In Agile, the whole team takes part in every step, so everyone shares responsibility for the risks. Traditional managers are always looking for someone to blame when things go wrong. That doesn't work in an Agile team.

In the end, the Q&A started. I have to say I seldom enjoy Q&A sessions, but I loved this one. Elisabeth mentioned that Microsoft once released a package with 63,000 bugs fixed. "I wonder what kind of bug tracking system could handle that many bugs. Instead of adding more features to the bug tracking system, I think they should reduce the bug count in the first place," she said.

This one really touched me. Without the power of unit testing, our team had over 3,000 bugs, with 1,000 still waiting to be fixed. Each release, testers found over 200 new bugs and verified almost the same amount from the last release. To make testers' lives easier, I developed a series of tools: batch creating, resolving, verifying, and searching bugs. But no matter how many tools I built, something still didn't feel right. Now I know why: I wasn't focusing on the source of the problem. Reduce bugs in the first place, and I could throw all these tools away.

Isn't it weird? Now that I've finally arrived at the right answer, part of me feels happy to know it, and the other part feels sad that I didn't know it earlier.
