Writing Effective Test Cases

Copyright 2007, Clinton Staley

 

1. Overview

 

Testing is an essential part of professional software development, and is a highly sophisticated discipline in its own right.  Programmers tend to view testing (or SQA – Software Quality Assurance) as a lower-level skill, but in fact it’s a different technical skill from programming, requiring different abilities to do well.  Good programmers are not necessarily good testers, and vice versa.

 

Even if you intend to work in development, not SQA, you will benefit from good SQA skills, for three reasons.   First, you will almost certainly at some point in your career be involved in a testing group, even if it’s just as a step into development.  Second, if you test well, you will write more reliable code and place less demand on the SQA group.  Third, if you understand what good testing takes, you’ll have more respect for it.

 

This document and the exercise that goes with it are designed to give you an appreciation for what good testing means and how challenging it is, and to improve your testing skills.

 

2. How To Write Good Test Cases

 

A. Care about testing

Good testers see testing as an art, and they’re right.  They’re very enthusiastic about finding bugs.  An SQA group with good morale will be found on lunch breaks bragging about the clever way they made an application fail this morning and how upset the developers were over the rough treatment given to their baby. 

 

Good SQA people like breaking code.   They’re creative, even devious, in finding ways to do so.  And that’s good, because the kind of bugs that make it into the testing phase of software development are devious and deep, and they take organization and focused creativity to uncover.  And if you think perhaps hard-to-find bugs can be safely released since no user will find them, rest assured that the collective unpredictability of tens of thousands of users will uncover even hard-to-find bugs.

 

Every quarter, I get students who maintain they’ve tested as thoroughly as possible, and my standard question is “If it were going to cost you $1000 for each unfound bug in the code, would you build more tests, or would you still give up?”  The answer is usually “Well, if there were $1000 at stake, I’d test some more.”  You haven’t really tested until you’ve done it so thoroughly that you wouldn’t bother to do more even if $1000 were at stake, because you’re so confident no more bugs could be found.

 

B. Write organized test plans – don’t just hack at it.

Testing is not just “beating at the code”.  You need to start with an organized plan.  If the spec says “spaces are allowed anywhere in the move string except within words” your test plan should include systematically trying a space at every allowed point in the move string.  If the spec gives a complex rule for comparing two objects then your plan should include systematic checking of every possible step in the comparison process.
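
For example, a generator for the space rule might look like the following Python sketch.  The “e2-e4” move format, and the rule that “words” are runs of alphanumeric characters, are hypothetical stand-ins for whatever your spec actually says:

    def space_cases(move):
        # Yield (variant, should_be_legal) for a space inserted at each
        # possible position.  A space is illegal only where it would
        # split a "word" (here, a run of alphanumeric characters).
        for i in range(len(move) + 1):
            splits_word = (0 < i < len(move)
                           and move[i-1].isalnum() and move[i].isalnum())
            yield move[:i] + " " + move[i:], not splits_word

    for variant, legal in space_cases("e2-e4"):
        print(repr(variant), "-> expect", "accept" if legal else "reject")

Feeding each (variant, expected result) pair to your parser covers the rule systematically instead of haphazardly.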

 

C. Write test cases first

Yeah, that's right.  Write 'em before you write a line of code.  You're going to have to write them sometime, and doing it first makes you think more carefully about special cases and forces you to really read the spec.  This in turn makes your code more accurate.

 

Starting with test cases is one of the elements of agile development.  Like any new approach to software development, agile has its proponents and detractors, and it has some good ideas and some dubious ones.  But test-driven development is almost universally accepted as one of the best ideas introduced by agile development, and for good reasons.
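
As a minimal illustration, here is a test written before any code exists, assuming a spec that calls for a compare() routine on two strings (the name and behavior here are hypothetical):

    # compare() does not exist yet; these assertions simply record what
    # the spec says it must do.  Run under pytest, this fails with a
    # NameError today, and each assertion should pass as you implement.
    def test_compare():
        assert compare("abc", "abc") == 0   # equal strings
        assert compare("abc", "abd") < 0    # differ in the last char
        assert compare("", "a") < 0         # empty vs. non-empty
        assert compare("a", "") > 0         # ...and the reverse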


D. Read the spec carefully

Missing a point in the spec is one of the most common errors, and one that cannot be caught any way other than by hand.  Go over the spec carefully when you design test cases.  As you cover each sentence or phrase in the spec, highlight it on a printout, or keep an electronic copy with strikeout marks in it.  This forces you to consider each part of the spec rather than skim over some essential portion of it.

 

Another good tip is to restate the spec in negative form and test as if you were verifying that negative form.  “Spaces are allowed at any point” becomes “Spaces will result in an error at every point,” and you then write a test that tries to confirm the negative statement (which it should not, of course, be able to do unless there’s a bug).
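
Continuing the space example from section B, a negative-form test might look like this sketch, where parse_move() stands for a hypothetical parser that raises ValueError on input it rejects:

    def confirm_negative(move, parse_move):
        # Try to confirm "a space at any point results in an error."
        # Every position where the confirmation succeeds (the parser
        # rejects a space the spec allows) marks a bug.
        bugs = []
        for i in range(len(move) + 1):
            variant = move[:i] + " " + move[i:]
            try:
                parse_move(variant)      # accepted: negative refuted, good
            except ValueError:
                bugs.append(variant)     # rejected: negative confirmed, bug
        return bugs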

 

E. Test the simple stuff

A surprising number of bugs show up in areas like I/O validation, error checking, etc.  Focusing only on the difficult logical portions of the program is a mistake; most bugs are simple things that are obvious once tested. 
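
A handful of deliberately mundane tests goes a long way.  Here is a sketch using pytest, with read_count() standing in for a simple input-validation routine:

    import pytest

    def read_count(text):
        # Stand-in for the routine under test: parse a nonnegative count.
        n = int(text)                    # raises ValueError if non-numeric
        if n < 0:
            raise ValueError("count must be nonnegative")
        return n

    def test_simple_input_validation():
        with pytest.raises(ValueError):
            read_count("")               # empty input
        with pytest.raises(ValueError):
            read_count("abc")            # non-numeric input
        with pytest.raises(ValueError):
            read_count("-1")             # out-of-range value
        assert read_count("0") == 0      # smallest legal value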

 

F. Test the error cases, the rarer cases, and the boundary conditions

A surprising percentage of code that is shipped professionally has never been run.  Much of this code is in branches designed to handle exceptions, unusual cases such as zero-length input, etc.  Think carefully to be sure you try every error condition, all the odd boundary conditions, etc.
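
For instance, here is a sketch of a boundary-case checklist, using the standard library’s textwrap.wrap as a stand-in for the routine under test:

    from textwrap import wrap

    cases = [
        ("", 10),          # zero-length input
        ("x", 1),          # minimum sizes
        ("word", 4),       # word exactly fills the line
        ("word", 3),       # word longer than the line
        ("a " * 1000, 5),  # large input
    ]

    for text, width in cases:
        for line in wrap(text, width):
            assert len(line) <= width, (text, width, line)
    print("all boundary cases passed")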

 

Situations that don’t arise easily are often buggy.  For instance, in a checkers-playing program from my 305 class, an important test case was a series of checkers jumps that ended with the jumping piece returning to its original location.  This is really hard to make happen in checkers, but it can be done, and a lot of programs broke on this.  Many sophisticated programs have a few odd cases like this that you must deliberately design for.

 

G. Use random testing, but recognize its limitations

Writing random test-case generating code is a good way to get lots of test cases quickly (a short sketch follows the list below), but it has two significant limitations:

 

i. In practice, you’d have no existing reference implementation to check each random run’s outcome against, so big random test runs are most useful for uncovering bugs with obvious effects like seg faults.

 

ii. It takes a lot of random test data to reach rare conditions that a carefully designed test could produce immediately.
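
With those caveats, a random tester can still be short and worthwhile.  This sketch assumes a hypothetical command-line program ./game that reads moves on standard input, and it watches only for crashes, since there is no oracle for correct output:

    import random
    import string
    import subprocess

    def random_input(max_len=40):
        alphabet = string.ascii_lowercase + string.digits + " -"
        n = random.randrange(max_len)
        return "".join(random.choice(alphabet) for _ in range(n))

    for trial in range(10000):
        data = random_input()
        proc = subprocess.run(["./game"], input=data, text=True,
                              capture_output=True)
        if proc.returncode < 0:                    # on Unix, killed by a signal
            print("crash on input:", repr(data))   # e.g. -11 is SIGSEGV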

 

H. Automate your tests so they’re easy to run

Build a system that lets you run tests easily and automatically.  Automating tests is increasingly common in industry, not least because it makes rerunning the full test suite easy.  Regression tests in particular benefit: once you’ve fixed the bugs a regression run turned up, you need to run the full regression again to be sure your fixes didn’t introduce new bugs (this happens surprisingly often), and that is only feasible if the regression can be run by machine.
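
A regression runner needn’t be elaborate.  This sketch assumes a hypothetical tests/ directory of paired foo.in / foo.expected files and the same ./game program as above:

    import pathlib
    import subprocess

    failures = 0
    for infile in sorted(pathlib.Path("tests").glob("*.in")):
        expected = infile.with_suffix(".expected").read_text()
        result = subprocess.run(["./game"], stdin=infile.open(),
                                capture_output=True, text=True)
        if result.stdout != expected:
            print("FAIL:", infile.name)
            failures += 1
    print(failures, "failure(s)")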