The implementation manager will perform the daily build and daily smoke test.
The test group will perform integration tests and system tests (both functional and non-functional). (Mynatt Ch 7).
The test group will perform automated regression testing. (Mynatt Ch 7, Pressman Ch 17).
The concept of unit testing is that a unit is tested in isolation, independently of the rest of the system, and as such requires the creation of a test driver. In CPE 309 these test drivers will be written as JUnit test cases. (There is a training on JUnit in the Training Plan.) The unit tests must demonstrate that every method in the class works correctly. All test driver code must be placed in the team source code repository when the unit is submitted to the project build. Here is a tutorial on setting up the source code structure for testing units in isolation. (This will be required for Stage 2.)
JUnit is not for GUI testing. JUnit test code does not have to follow the class coding standard to the letter; however, the instructor will read the test code, so it needs to be readable. In particular, the purpose of each test case must be documented, either as a method comment or as a JUnit test message. Be sure to cross-reference any tests that were created in response to a defect report to the defect number. Also, here are the recommended tips on JUnit style. A short example follows.
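As a hedged illustration, a documented test might look like the sketch below (JUnit 4 style; adapt if your team uses JUnit 3.8 and extends TestCase). The Customer class and its deposit() and getBalance() methods are invented for this example; the file name and defect #13 are borrowed from the naming table later in this section.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.fail;

    public class CustomerTest {

        /** Purpose: verify that getBalance() reflects a single deposit.
            (The purpose is documented as a method comment, per this plan.) */
        @Test
        public void testDepositUpdatesBalance() {
            Customer c = new Customer("Alice");   // hypothetical class under test
            c.deposit(100);
            assertEquals("balance after depositing 100", 100, c.getBalance());
        }

        /** Purpose: regression test cross-referenced to defect report #13,
            in which a negative deposit was accepted. */
        @Test
        public void testBug13RejectsNegativeDeposit() {
            Customer c = new Customer("Alice");
            try {
                c.deposit(-5);
                fail("defect #13: negative deposit was accepted");
            } catch (IllegalArgumentException expected) {
                // rejected as required
            }
        }
    }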
As a simple check that developers are writing thorough unit tests, the QA person may require that unit tests demonstrate a certain level of statement coverage before the unit is allowed to be checked in (see 6.7). It would be logical to require the same coverage levels as those specified in the Release Criteria. One way to do this is to require that the developer post the Emma coverage report in a designated topic on the team Forum when they commit their code to the team repository. Even more rigorous would be to require that the Test Manager inspect the unit tests before a new module can be checked in.
Also, don't forget that McConnell recommends source code tracing: step through every line of code with an interactive debugger and verify all values. The QA manager may choose to require tracing documentation from each developer.
It is recommended that there be some convention or mechanism to "freeze" the source code repository during the daily build and smoke test so that new code doesn't get committed in the middle of a build. For example, the team might set a deadline of 6pm for all new code to be committed to the repository. The daily build is then run every day at 7pm, and after the build report and smoke test report are posted, the repository is "unfrozen." (See Braude fig 9.21.)
Example naming convention for a Customer unit and its associated test files:

    File                   | Role
    -----------------------|------------------
    I_Customer.java        | Interface
    Customer.java          | Class
    FakeCustomer.java      | Fake class
    CustomerTest.java      | Unit test
    CustomerIntTest.java   | Integration test
    CustomerBug13Test.java | Regression test
Each team decides who is responsible for writing and running the integration tests. Obvious alternatives: a) the developer who wrote the new code writes and runs the tests, b) the Test Manager writes and runs them, or c) the developer writes the tests and the Test Manager runs them. The instructor recommends option B.
Describe your team's decision here.
Follow these guidelines for writing system test cases. The tests must be evenly assigned to all developers. Each developer must submit their test cases as part of their individual project grade.
It is suggested but not required that the functional System Tests be automated, as they will be run at least once a week. An automated test case is called a test "script." Follow these directions for writing system test scripts. Automation of the GUI elements of your functional system tests can be achieved with the instructor's GUI Robot tool or another tool such as Abbot. It is highly recommended that you follow a thin GUI testing strategy; a sketch of the idea appears below.
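The gist of a thin GUI strategy is to keep the game rules out of the GUI classes, so that most system tests can drive the logic directly and only trivial forwarding code is left for the GUI robot to exercise. A minimal sketch, in which GameEngine, playCard(), and getScore() are hypothetical names:

    // All rules live in a plain class with no Swing imports, so JUnit
    // tests and system test scripts can call it directly.
    public class GameEngine {
        private int score;
        public void playCard(int card) { score += card; }  // rule logic lives here
        public int getScore() { return score; }
    }

    // The GUI is a thin wrapper: event handlers only delegate to the
    // engine and repaint from its state, so little GUI testing remains.
    public class GameFrame extends javax.swing.JFrame {
        private final GameEngine engine = new GameEngine();
        // button handler: engine.playCard(selectedCard); then refresh display
    }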
Non-functional requirements may be verified using manual test methods.
The functional System Tests should be run weekly and a summary report posted on the team web page. From a practical perspective, regression testing must be automated, as there are hundreds of tests to be run. Dr. Dalbey has a handout, "Automating Regression Tests," which has many useful tips. The Test Manager creates and runs the regression tests.
This paragraph is subject to change!
Each team must schedule a 3-hour time block for Acceptance Testing with the "customer." You must schedule a time when Dr. Dalbey and your customer can attend. Refer to Dr. Dalbey's schedule for his available times. Schedule a lab where your target platform is available and there are minimal distractions; often 14-256 is best. It is not required that all team members attend the Acceptance Test; however, one or more team members (developers) should be available nearby in case problems are encountered that require a developer to fix them.
If your system has a non-deterministic component, such as dealing cards in random order, that part has to be isolated and tested manually. For example, have the Deck class contain a shuffle() method and a nonShuffle() method. For automated testing, have a command-line argument --noshuffle that runs the game calling the nonShuffle() method to deal the cards in a known order. The expected results can then be determined. The random game is tested manually and observed to see that cards are presented in an unpredictable order.
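A minimal sketch of this idea, assuming a simple 52-card Deck dealt from the top (the class internals and main() wiring are illustrative, but shuffle(), nonShuffle(), and --noshuffle follow the description above):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class Deck {
        private final List<Integer> cards = new ArrayList<Integer>();

        public Deck() {
            for (int i = 0; i < 52; i++) { cards.add(i); }  // known order
        }

        public void shuffle()    { Collections.shuffle(cards); }  // random order
        public void nonShuffle() { /* leave the known order intact */ }

        public int deal() { return cards.remove(0); }
    }

    // In the game's main():
    //     Deck deck = new Deck();
    //     if (args.length > 0 && args[0].equals("--noshuffle")) {
    //         deck.nonShuffle();   // deterministic: expected results are known
    //     } else {
    //         deck.shuffle();      // normal play: verified manually
    //     }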
6.4.1 System Test Results
You may want to create a list of chronological links to a separate page for each system test report. Here is a suggested system test results template showing what a summary report contains.
6.5 Regression Testing
At least once a week, a regression test will be run (see Braude Chapter 9) to verify that recent additions to the system have not caused it to regress to a less functional state than in a previous build. Regression testing is not a different type of testing and doesn't require writing any new test cases; rather, it refers to a schedule for running a suite of tests that have already passed. The tests that comprise the regression test suite come from a "test library" of stored tests (including unit, integration, and system tests). One way to package such a suite is sketched below.
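For example, a JUnit 4 suite class can aggregate the test library into a single rerunnable unit. The class names reuse the naming examples from the table in the unit testing section above; substitute your own tests:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Aggregates already-passing tests so the whole set can be rerun
    // weekly; no new test cases are written for regression testing.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        CustomerTest.class,       // unit test
        CustomerIntTest.class,    // integration test
        CustomerBug13Test.class   // regression test for defect #13
    })
    public class RegressionSuite { }

    // Run from the command line (JUnit 4 on the classpath):
    //     java org.junit.runner.JUnitCore RegressionSuite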
6.6 Validation/Acceptance Testing
Validation testing is ensuring that the system responds and behaves in a reasonable manner when placed under normal or expected operating conditions such as those that would be used by an average user. Validation testing for our project will be in the form of an acceptance test implemented by the customer. The customer may formalize this testing on his own, or it may be nothing more than an informal smoke test in which he steps through what he feels is a normal use case. The customer will report on the results of his acceptance test with each release. The acceptance of our release by the customer in this manner is his stamp of approval that all contracted functionality for a specific release is present and behaving properly.
6.7 Coverage Testing
For Release 2 you will demonstrate the effectiveness of your tests by using a coverage testing tool. Dr. Dalbey's favorite tool is Emma. (Here is a complete Emma walkthrough.) Your tests must achieve a designated percent of block coverage specified in the Release Criteria.
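As one hedged example, Emma's "on the fly" mode can run the regression suite and produce an HTML coverage report in a single command. The flag names below are from memory of the Emma 2.0 documentation; verify them against the walkthrough or emmarun's help output before relying on them:

    java -cp emma.jar emmarun -r html -cp bin:junit.jar \
         org.junit.runner.JUnitCore RegressionSuite

(Use ; instead of : as the classpath separator on Windows. RegressionSuite is the suite class sketched in section 6.5.)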
9. Test deliverables
Each of these deliverables has a link on the team home page template. Each of these items is in the team repository.
9.1 Test Matrix
A matrix showing which test cases are used to verify each requirement. McConnell Ch 9 page 134 describes a test matrix, but he uses the term Requirements Traceability. Please note that Dr. Dalbey defines Requirements Traceability as in the Quality Assurance Plan.
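A hypothetical fragment of such a matrix (the requirement and test IDs are invented for illustration):

    Requirement          | Verified by
    ---------------------|--------------------------
    REQ-1 Deal cards     | ST-03, DeckTest
    REQ-2 Score a hand   | ST-07, ST-08, CustomerTest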
11. Environmental needs
This section is to be completed by the test team to specify the technical details of the test environment. This specification should contain the physical characteristics of the facilities, including:
11.1 Test tools
This section is to be completed by the test team to describe all the tools needed for testing.
12. Responsibilities
No separate list of responsibilities is included here. The test responsibilities should be included in the Project Plan under Job Descriptions or Responsibilities.
14. Schedule
No separate test schedule is included here. All test milestones should be included in the project detailed schedule.
TBD: References
Braude
Mynatt
Pressman
Pfleeger
McConnell
Document History
4/5/08 JD Removed Smoke Test Reporting on web.
1/31/05 JD Added logistics to 6.6 Acceptance Testing.
4/13/04 JD Added some explanation to Integration Testing.
2/18/04 JD Reorganized "Unit Testing for Methods" section to make some items "recommended" instead of required.
1/15/04 JD Added explanation of responsibility to section 6.3.