Software Test Plan
The test plan is based on the outline provided by ANSI/IEEE Std 829-1991 but has been streamlined for CPE 309.
2. Introduction
This document describes the testing plans for the student team project in CPE 309.
6. Approach
Programmers will perform their own unit testing: assertion testing, white-box testing (complete segment, decision, and loop coverage), black-box testing, and data structure testing. (Pressman Ch 16, Mynatt Ch 7).
The implementation manager will perform the daily build and daily
smoke test.
The test group will perform integration tests and system tests (both
functional and non-functional). (Mynatt Ch 7).
The test group will perform automated regression testing.
(Mynatt Ch 7, Pressman Ch 17).
6.1 Unit Testing
Unit testing is testing of the source code for an
individual unit (class or method) by the developer who wrote it. A unit
is to be tested after initial development, and again after any change
or
modification. The developer is responsible for testing that his new
code
does not break his or her private build before committing the new
source
code to change control.
The concept of unit testing is that a unit is tested in isolation, independently of the rest of the system, and as such requires the creation of a test driver. In CPE 309 these test drivers will be written as JUnit test cases. (There is a training on JUnit in the Training Plan.) The unit tests must demonstrate that every method in the class works correctly. All test driver code must be placed in the team source code repository when the unit is submitted to the project build. Here is a tutorial on setting up the source code structure for testing units in isolation. (This will be required for Stage 2.) JUnit is not for GUI testing.
JUnit test code does not have to follow the class coding standard to the letter. However, the instructor will read the test code, so it needs to be readable. In particular, the purpose of each test case must be documented, either as a method comment or as a JUnit test message. Be sure to cross-reference any tests that were created in response to a defect report with the defect number. Also, here are the recommended tips on JUnit style.
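For example, a documented test case might look like the following minimal JUnit 4 sketch; the Fraction class, its constructor and methods, and the defect number are hypothetical placeholders:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FractionTest {

    /** Verify that add() reduces the result to lowest terms. */
    @Test
    public void testAddReducesResult() {
        Fraction sum = new Fraction(1, 4).add(new Fraction(1, 4));
        // The JUnit message documents the purpose of the assertion.
        assertEquals("1/4 + 1/4 should reduce to 1/2", new Fraction(1, 2), sum);
    }

    /** Regression test cross-referenced to (hypothetical) defect #13. */
    @Test
    public void testAddNegativeDenominatorBug13() {
        Fraction sum = new Fraction(1, -2).add(new Fraction(1, 2));
        assertEquals("defect #13: 1/-2 + 1/2 should be 0", new Fraction(0, 1), sum);
    }
}
```

(The sketch assumes Fraction overrides equals() so the assertions can compare objects.)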
As a simple check that developers are writing thorough unit tests,
the QA person may require that unit tests demonstrate that a certain
level of statement
coverage is achieved before they are allowed to be checked in.
(See 6.7).
It would be logical to require coverage levels that are the same as those specified in the Release Criteria. One way to do this is to require that the developer post the Emma coverage report in a designated topic on the team Forum when they commit their code to the team repository. Even more rigorous would be to require that the test manager inspect the unit tests before a new module can be checked in.
Also, don't forget that McConnell recommends
Source Code Tracing - step through every line of code with an
interactive debugger and verify all values. The QA manager may
choose to require tracing documentation from each developer.
Unit Testing for Methods
Required in Stage 1 and Stage 2
- Verify operation at representative parameter values (black box - tests derived via Equivalence Partitioning; see the sketch after these lists)
- Verify operation at limit parameter values (black box - Boundary Value Analysis)
- Verify operation outside parameter values (black box - Boundary Value Analysis)
Required in Stage 2
- Verify preconditions and postconditions via assertion testing (if your team is using DBC).
- Ensure that all instructions execute (white box - statement coverage)
- Check both sides of all branches (including loops) (white box - branch testing)
- Check all paths in the basis set (basis path testing). Each developer is responsible for writing formal basis path test case documentation for at least one important unit. The documentation can be in the form of the Basis Path Testing Worksheet. Append the JUnit source code which implements the test and the test output.
Recommended in all stages
- Check the use of all called objects
- Verify the handling of all data structures
- Verify the handling of all files
- Verify control flow invariants via assertion testing
- Check normal termination of all loops
- Check abnormal termination of all loops
- Check normal termination of all recursions
- Check abnormal termination of all recursions
- Verify the handling of all error conditions
- Check timing and synchronization
- Verify all hardware dependencies
In addition, consult these excellent strategies for finding
defects.
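As an illustration of the black-box items above, here is a minimal sketch assuming a hypothetical Grade.letterGrade(int percent) method that accepts 0-100 (with 80-89 earning a 'B') and throws IllegalArgumentException otherwise:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GradeTest {

    /** Representative value from the middle of a partition (Equivalence Partitioning). */
    @Test
    public void testRepresentativeValue() {
        assertEquals("85 falls in the B partition", 'B', Grade.letterGrade(85));
    }

    /** Values at the limits of the valid range (Boundary Value Analysis). */
    @Test
    public void testLimitValues() {
        assertEquals("0 is the lowest valid percent", 'F', Grade.letterGrade(0));
        assertEquals("100 is the highest valid percent", 'A', Grade.letterGrade(100));
    }

    /** Values just outside the valid range (Boundary Value Analysis). */
    @Test(expected = IllegalArgumentException.class)
    public void testBelowRange() {
        Grade.letterGrade(-1);
    }

    @Test(expected = IllegalArgumentException.class)
    public void testAboveRange() {
        Grade.letterGrade(101);
    }
}
```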
Unit Testing for Classes
Item 1 is required in Stage 2; the remainder are recommended for all stages. (A sketch of an invariant check appears after this list.)
- Exercise methods in combination
  - 2-5, usually
  - choose most common sequences first
  - include sequences likely to cause defects
  - requires hand-computing the resulting attribute values
- Focus unit tests on each attribute
  - initialize, then execute method sequences that affect it
- Verify that each class invariant is unchanged
  - verify that the invariant is true with initial values
  - execute a sequence (e.g., the same as in item 1)
  - verify that the invariant is still true
- Verify that objects transition among expected states
  - plan the state / transition event sequence
  - set up the object in the initial state by setting variables
  - provide the first event & check that the transition occurred, etc.
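Here is a minimal sketch of checking a class invariant across a method sequence, assuming a hypothetical BankAccount class whose invariant is that the balance is never negative:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class BankAccountInvariantTest {

    /** The class invariant under test: the balance is never negative. */
    private boolean invariantHolds(BankAccount account) {
        return account.getBalance() >= 0;
    }

    @Test
    public void testInvariantAcrossMethodSequence() {
        // Verify the invariant with initial values.
        BankAccount account = new BankAccount(100);
        assertTrue("invariant must hold after construction", invariantHolds(account));

        // Execute a common method sequence (as in item 1 above).
        account.deposit(50);
        account.withdraw(120);
        account.deposit(10);

        // Verify the invariant is still true.
        assertTrue("invariant must hold after the sequence", invariantHolds(account));
    }
}
```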
Unit Test Results
When completed, you will post your unit tests and results on your web page. See the recommended directions for unit test results.
6.2 Daily Smoke Tests
With every build a smoke test will be run after the build successfully compiles. The smoke test is designed to ensure that the newly developed code has not damaged the system and that the system, with the newly integrated code, is stable enough to be subjected to further integration and system testing. The smoke test is not exhaustive testing, but a very quick, superficial, end-to-end test. It is run by the person who does the daily build and should take less than 5 minutes to run. The Test Manager designs the smoke test. Smoke tests may be manual or automated as determined by the test team. The contents of the daily smoke test must be changed throughout the project as new features are implemented. Include a link to an archive of old versions.
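If the team chooses to automate the smoke test, one option is a small JUnit 4 suite that bundles a handful of quick end-to-end checks; the test class names below are hypothetical placeholders:

```java
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

/**
 * Daily smoke test: a few quick, superficial end-to-end checks.
 * Add new test classes here as new features are integrated.
 */
@RunWith(Suite.class)
@Suite.SuiteClasses({
    StartupSmokeTest.class,   // application launches and loads its configuration
    NewGameSmokeTest.class,   // a new game can be created end to end
    SaveLoadSmokeTest.class   // a game can be saved and reloaded
})
public class DailySmokeTest {
    // No body needed; the annotations define the suite.
}
```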
It is recommended that there be some convention or mechanism to "freeze" the source code repository during the daily build and smoke test so that new code doesn't get committed in the middle of a build. For example, the team might set a deadline of 6pm for all new code to be committed to the repository. Then the daily build is run every day at 7pm, and after the build report and smoke test report are posted the repository is "unfrozen." (See Braude fig 9.21).
6.2.1 Failed Smoke Test Reporting
Whoever runs the daily build (usually the implementation manager) performs the smoke test to make sure the software is stable. If it isn't stable, that's a big problem. Not as big as a broken build, but a failed smoke test needs prompt action. Each team needs to create a procedure for reporting and handling a failed smoke test, similar to the procedure for a broken build.
Place your team's procedure here.
If the smoke test passes, the integration team may notify the
test team that the build is available for further testing.
6.3 Integration Testing
Integration testing is exercising newly developed code that has been committed to the build in combination with other code that has already been integrated. Integration tests are performed while builds are in progress. Braude discusses integration testing at length in Chapter 9.
Integration testing is guided by the Implementation and Integration Plan created by the Implementation Manager (consulting with the designer and testers). The plan gives the order in which modules will be integrated and the schedule of their completion. Based on this plan, testers must determine:
- What components will be available for testing in each build increment?
- What drivers, stubs, and fakes must be available at what time in order to test each build increment?
- What new test cases (in addition to unit tests) are needed to test the interactions between the components?
It is highly recommended to automate integration test cases via the JUnit test framework. If you followed the directions for testing units in isolation then it is very easy to use the same test cases during integration test; just leave out the stubs and fakes. (In some cases you may need different oracles; a sketch appears after the naming table below.)
The test manager must determine how many and exactly which supplemental test cases must be written in order to have comprehensive integration tests. Remember the goal is to test the modules in combination with each other, so you want to force all the different possible interactions between the modules to occur. In theory you would produce every possible permutation of each sequence of calling all methods (which could be very large). In reality not all of these sequences would make sense for any particular problem domain. So strategically provide different method calling sequences to exercise all the interface combinations that make sense for the problem.
Organizing the tests is a major undertaking. Here's a naming
convention that can help:
I_Customer.java        | Interface
Customer.java          | Class
FakeCustomer.java      | Fake class
CustomerTest.java      | Unit test
CustomerIntTest.java   | Integration test
CustomerBug13Test.java | Regression test
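As a sketch of reusing an isolation test during integration (following the naming convention above), assume a hypothetical Order class that depends on a Customer only through the I_Customer interface; the constructors and methods shown are assumptions:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

/**
 * Unit test: Order is exercised in isolation against FakeCustomer.
 * The integration version (OrderIntTest.java) would be the same test,
 * but constructed with the real Customer class instead of the fake,
 * possibly with a different oracle.
 */
public class OrderTest {

    @Test
    public void testTotalAppliesCustomerDiscount() {
        I_Customer customer = new FakeCustomer(0.10);  // fake always reports a 10% discount
        Order order = new Order(customer);
        order.addItem("widget", 100.00);
        assertEquals("10% discount should apply to the total", 90.00, order.total(), 0.001);
    }
}
```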
Each team decides who is responsible for writing and running the integration tests. Obvious alternatives are (a) the developer who developed the new code, (b) the Test Manager, or (c) the developer writes the tests and the Test Manager runs them. The instructor recommends option (b).
Describe your team's decision here.
6.4 System Testing
System testing is testing the functionality of
the system as a whole in order to verify the requirements in the
SRS. There needs to be at least one system test case for each
functional and non-functional requirement in the SRS.
Follow these guidelines for writing
system test cases. The tests
must be evenly assigned to all developers. Each developer must
submit their test cases as part of their individual project grade.
It is suggested but not required that the functional System Tests be automated, as they will be run at least once a week. An automated test case is called a test "script." Follow these directions for writing system test scripts. Automation of GUI elements of your functional system tests can be achieved with the instructor's GUI Robot tool or another tool such as Abbot. It is highly recommended that you follow a thin GUI testing strategy.
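A thin GUI strategy keeps the GUI layer as a thin shell over a testable engine so that most functional tests never touch the GUI at all. A minimal sketch of the idea follows; the interface and method names are hypothetical:

```java
/**
 * All game rules live behind this plain Java interface (no Swing types),
 * so functional system tests can drive the application through it directly.
 */
public interface GameEngine {
    void startNewGame();
    void playCard(int handIndex);
    String getStatusSummary();   // observable state that both tests and the GUI read
}

// The Swing window (not shown) only translates button clicks into GameEngine
// calls and repaints from getStatusSummary(); with no rules of its own,
// very little behavior remains that can only be verified through the GUI.
```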
Non-functional requirements may be verified
using manual test methods.
Randomness
If your system has a non-deterministic component, such as dealing
cards in random order, that part has to be isolated and tested
manually.
For example, have the Deck class contain a shuffle()
method and a nonShuffle() method. For automated
testing, have a command line argument --noshuffle
that runs the game calling the nonShuffle() method
to deal the cards in a known order. The expected results
can then be determined.
The random game is tested manually
and observed to see that cards are presented in an unpredictable
order.
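A minimal sketch of this idea follows, with the shuffle() and nonShuffle() names taken from the text above; the card representation, the Game class, and its argument handling are assumptions for illustration:

```java
// Deck.java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Deck {
    private final List<String> cards = new ArrayList<String>();

    public Deck() {
        // Build the deck in a fixed, documented order.
        for (String suit : new String[] {"C", "D", "H", "S"}) {
            for (int rank = 1; rank <= 13; rank++) {
                cards.add(rank + suit);
            }
        }
    }

    /** Randomize the deck for normal play; verified manually. */
    public void shuffle() {
        Collections.shuffle(cards);
    }

    /** Leave the deck in its known constructor order so expected results can be computed. */
    public void nonShuffle() {
        // Intentionally does nothing.
    }

    public String deal() {
        return cards.remove(0);
    }
}

// Game.java (hypothetical main class)
import java.util.Arrays;

public class Game {
    public static void main(String[] args) {
        Deck deck = new Deck();
        if (Arrays.asList(args).contains("--noshuffle")) {
            deck.nonShuffle();   // automated system tests: known card order
        } else {
            deck.shuffle();      // normal play: random order, checked manually
        }
        // ... start the game with the prepared deck ...
    }
}
```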
6.4.1 System Test Results
The functional System Tests should be run
weekly and a summary report posted on the team web page.
You may want to create a list of chronological links to a separate page
for each system test report.
A summary report contains:
- Date and time the System Tests were run.
- Who executed the tests.
- Which build number the tests were run against.
- Summary: number of tests passing out of total tests run.
- A list of test case numbers and the status of each test: Pass / Fail.
- Defect ID of any new defect reports generated during the test.
- Analysis: characterize the overall readiness of the software, describe known issues and workarounds, recommendations for action items, etc.
Here is a suggested system
test results template.
6.5 Regression Testing
At least once a week, a regression test will be run (see Braude Chapter 9) to verify that recent additions to the system have not caused it to regress to a less functional state than in a previous build. Regression testing is not a different type of testing; it doesn't require writing any new test cases. Instead, it refers more to a schedule for running a suite of tests that have already passed. The tests that comprise the regression test suite come from a "test library" of stored tests (including unit, integration, and system tests).
From a practical perspective, regression
testing must be automated as there are hundreds of tests to be run. Dr.
Dalbey has a handout "Automating Regression Tests" which has many
useful
tips. The Test Manager creates and runs the regression tests.
6.6 Validation/Acceptance Testing
Validation testing is ensuring that the system responds and behaves in a reasonable manner when placed under normal or expected operating conditions such as those that would be used by an average user. Validation testing for our project will be in the form of an acceptance test implemented by the customer. The customer may formalize this testing on his own, or it may be nothing more than an informal smoke test in which he steps through what he feels is a normal use case. The customer will report on the results of his acceptance test with each release. The acceptance of our release by the customer in this manner is his stamp of approval that all contracted functionality for a specific release is present and behaving properly.
This paragraph subject to change!
Each team must schedule a 3-hour time block for Acceptance testing with the "customer." You must schedule a time when Dr. Dalbey and your customer can attend. Refer to Dr. Dalbey's schedule for his available times. Schedule a lab where your target platform is available and there are minimal distractions; often 14-256 is best.
It is not required that all team members attend the Acceptance Test; however, one or more team members (developers) should be available nearby in the event problems are encountered that require a developer to fix.
6.7 Coverage Testing
For Release 2 you will demonstrate the effectiveness of your tests by using a coverage testing tool. Dr. Dalbey's favorite tool is Emma. Here is a complete Emma walkthrough. Your tests must achieve a designated percent of block coverage specified in the Release Criteria.
9. Test deliverables
Each of these deliverables has a link on the team home
page template.
- this Test Plan document (may be modified
with approval of instructor)
- Test Matrix (See 9.1)
- Unit Test Results
- Integration Test Results
- System Test Cases
- System Test Results
Each of these items is in the team repository.
- Unit Test Cases (Basis Path tests - Stage 2)
- Integration Test Cases
9.1 Test Matrix
A matrix showing which test cases are used to
verify each requirement. McConnell Ch 9 page 134 describes a test
matrix, but he uses the term Requirements Traceability. Please
note that Dr. Dalbey defines Requirements Traceability as in the
Quality
Assurance Plan.
11. Environmental needs
This section is to be completed by the test
team to specify the technical details of the test environment. This
specification should contain the physical characteristics of the
facilities including:
- the hardware
- the communications and system software
- the mode of usage (for example,
stand-alone)
- any other software or supplies needed
to support the test
11.1 Test tools
This section is to be completed by the test
team to describe all the tools needed for testing.
12. Responsibilities
No separate list of responsibilities is included here. The test responsibilities should be included in the Project Plan under Job Descriptions or Responsibilities.
14. Schedule
No separate test schedule is included here. All
test milestones should be included in the project detailed schedule.
TBD: References
Braude
Mynatt
Pressman
Pfleeger
McConnell
Document History
4/5/08 JD Removed Smoke Test Reporting on web.
1/31/05 JD Added logistics to 6.6 Acceptance Testing.
4/13/04 JD Added some explanation to Integration Testing
2/18/04 JD Reorganized "Unit Testing for Methods" section to make
some items "recommended" instead of required.
1/15/04 JD Added explanation of responsibility to section
6.3.