Sunday, September 5, 2010

Testing dojo session 3 - domain testing part 2

In this session, we continued to study domain testing with BBST domain testing video lecture 2.

At the beginning of the lecture, Cem Kaner gave a short review of lecture 1. He pointed out that the examples in lecture 1 were all about numbers, and therefore linear with clear boundaries, but in real life most variables are not linear. So how do we classify variables whose values come from non-ordered sets?

Cem Kaner gave us a generalized concept of "equivalence" with 4 views:
* intuitive similarity
* specified as equivalent - specification
* equivalent paths - code path
* risk based - same type of possible error

With each of these 4 views, testers should be able to partition a variable's values into several domains. But which value within a domain is the best representative? Cem Kaner suggested using a risk-driven method.
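To make the risk-based view concrete, here is a minimal sketch of partitioning a non-ordered variable (a file path string) into equivalence classes and picking one representative per class. The class names and predicates are illustrative assumptions, not from the lecture:

```python
def classify_path(path):
    """Assign a path string to a risk-based equivalence class.

    The classes below are hypothetical examples of 'same type of
    possible error': empty input, glob handling, length limits,
    and absolute vs. relative resolution.
    """
    if path == "":
        return "empty"
    if any(ch in path for ch in "*?["):
        return "glob"
    if len(path) > 255:
        return "too-long"
    if path.startswith(("/", "\\")) or (len(path) > 1 and path[1] == ":"):
        return "absolute"
    return "relative"

# One representative per class is enough for a first pass:
representatives = ["", "a/*.txt", "x" * 300, "/tmp", "docs/readme"]
```

The point of the partition is that any two paths in the same class are expected to trigger the same type of error, so testing one representative stands in for the whole class.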

We spent 30 minutes on the lecture and 10 minutes on discussion.
* Jing: Lecture 2 is not as good as lecture 1. Lecture 1 introduced a sheet so that we knew how to apply domain testing; there's no such sheet in this one.
* Mei: if there were a perfect tester, he/she would eventually arrive at the same set of test cases from any of the 4 views.

I agreed with Mei. If we had a perfect specification, it would include all possible variables and how the system should handle them. If we had a perfect developer, he/she would handle all exceptions in the code. As a tester, we could follow either the perfect specification or the perfect code path and arrive at the same, complete set of test cases. But since there is no perfection, we can only rely on principles and practices; that is all we have, and we can get closer to perfection by mastering those techniques.

Then we moved on to applying domain testing. This time our SUT was a dir scanning system.
The user starts by writing [several] [dir paths] in 1 file, then starts the scan. The system first parses the file to get the list of paths, then scans each dir 1 by 1. After scanning finishes, the system outputs the result: the total number of dirs and files inside these paths, and the total size occupied on disk.
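We didn't have the SUT's source, so here is only a minimal sketch, in Python, of what the described behavior could look like; the function name and interface are assumptions:

```python
import os

def scan_dirs(list_file):
    """Sketch of the SUT: parse a file of dir paths (one per line),
    walk each dir, and return (total_dirs, total_files, total_size).

    This is a stand-in for the real system, not its implementation.
    """
    with open(list_file) as f:
        paths = [line.strip() for line in f if line.strip()]
    total_dirs = total_files = total_size = 0
    for path in paths:
        for root, dirs, files in os.walk(path):
            total_dirs += len(dirs)
            total_files += len(files)
            for name in files:
                total_size += os.path.getsize(os.path.join(root, name))
    return total_dirs, total_files, total_size
```

Even this sketch surfaces variables to test: the number of lines in the file, blank lines (skipped here, maybe not in the SUT), and how each path string is interpreted.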

We updated the domain testing sheet mentioned in lecture 1 by adding 3 new sheets, one per view. A small part of our sheets looked like:
| variables | similarity |
|---|---|
| several | 0, 20 |
| path | short path, long path, relative path, absolute path, path including glob expression |

| variables | risks |
|---|---|
| several | 128 - might break unsigned int for line numbers |
| path | empty line, empty file |

| variables | code path |
|---|---|
| several | 128, 1024, 65536 |
| path | path name including space |

Since we didn't have a specification, we skipped the specified-as-equivalent view.
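A sheet row only pays off once it becomes a concrete input. As a hedged sketch (the file names and the helper are illustrative, not from the session), the risk row for [several] could be turned into input files with line counts around the suspected limit:

```python
def make_list_file(path, n_lines, dir_path="testdir"):
    """Write an input file containing n_lines identical dir paths,
    one per line, for feeding to the scanner under test."""
    with open(path, "w") as f:
        for _ in range(n_lines):
            f.write(dir_path + "\n")

# Classic boundary triple around the risky value from the sheet:
for n in (127, 128, 129):
    make_list_file("paths_%d.txt" % n, n)
```

The same helper covers the code-path row (128, 1024, 65536) by just passing different counts.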

Jing: the views overlap with each other, don't they?
I thought it was not the overlap that we cared about. What we cared about was that every view led us to something new, something we had missed in the other views. That is why we need different views and different methodologies: to review more, and to hope for a better result.
