Peer Team Testing
Spring 2014
- Peer Team Testing occurs during scheduled lab hour.
- Reviewing team divides into three subteams: Functional System testers (5), Non-functional System testers (2), and Regression testers (2). The functional testers may pair up.
- Development team assigns one developer to watch each reviewer.
- At least one pair of functional testers must run tests on the Lab workstations.
- Reviewers: record all errors, discrepancies, or anomalies in a Defect Log and submit it to the instructor.
Functional System testers
- Locate the Download Page on the team's Wiki. If you can't identify how to obtain the most current release, STOP. Require the developers to fix this before proceeding.
- Download the Jar file for the current release. Launch the application according to team's installation instructions. If you can't run the application, STOP. Require the developers to fix this before proceeding.
- For the remainder of functional testing, run the application from the console so that exceptions and messages will be visible in the terminal window.
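For example, a console launch might look like the following (the jar name here is a placeholder; use the actual file from the team's download page):

```shell
# Jar name is hypothetical -- substitute the team's actual release artifact.
# Launching with 'java -jar' from a terminal keeps uncaught exceptions and
# System.err output visible, unlike double-clicking the jar in a file browser.
java -jar team-release.jar
```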
- Locate the SRS on the team's wiki. Divide the functional requirements among the three functional testers. For each requirement you are reviewing:
- Find the requirement in the SRS on the team wiki. Inspect the requirement for correctness, completeness, and clarity.
- Locate the System Test matrix on the team wiki.
- Verify that all features in the Staged Delivery Plan are included in the test matrix.
- Verify each feature has a corresponding hyperlink to the associated test case(s). If not, STOP; require the developers to fix this before proceeding.
- Inspect each test case for consistency; does it match the SRS (and Prototype) exactly? If not, STOP; require the developers to fix this before proceeding.
- Inspect each test case for correctness; does it really test the specified functionality? If not, STOP; require the developers to fix this before proceeding.
- Ask the developer if there are any known issues with the product functionality. Record them in your notes.
- Verify that the team home page has a link to the deployed Jar file. If not, STOP; require the developers to fix this before proceeding.
- Download the Jar file from the team's home page and execute the system tests on the deployed system. Directions should be complete enough that no outside assistance is required. If you are unable to execute a test because of unclear directions or similar problems, inform the developer that you are skipping that test and continue to the next one. (Developers may revise their test directions; if the revisions are completed before the end of lab, the tester may attempt the test again.)
- Record whether the test passes or fails. The test case should make it crystal clear how to determine whether a test passes or fails.
- Consider whether any features of the software lack a system test written by the team. Devise your own test for each such feature and record your results.
- You may perform ad-hoc testing for anything else that occurred to you as you ran the system tests.
- Submit test results to instructor at end of lab.
Non-functional System testers
- Find the non-functional requirements in the SRS on the team wiki. Inspect each requirement for correctness, completeness, and clarity.
- Locate the System Test matrix on the team wiki.
- Verify that all non-functional requirements are included in the test matrix. If not, STOP; notify the instructor.
- Verify each requirement has a corresponding hyperlink to the associated test case(s). If not, STOP; require the developers to fix this before proceeding.
- Inspect each test case for consistency; does it match the SRS (and Prototype) exactly?
- Inspect each test case for correctness; does it really test the specified requirement? Does the test require objective, measurable criteria for success (e.g., "the report loads in under 5 seconds" rather than "the report loads quickly")?
- Execute the system test on the deployed system. Directions should be complete so that no outside assistance is required. If you are unable to execute a test because of unclear directions, etc., inform the developer you are skipping that test and continue to the next test. (Developers are allowed to revise their test directions, and if the revisions are completed before the end of lab the tester may attempt the test again.)
- Record the results of the test. The test case should make it crystal clear how to determine whether a test passes or fails.
- Submit test results to instructor at end of lab.
Regression Testing
- Development team does a Subversion export of their project onto a USB drive provided by tester.
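The export step can be done with a single Subversion command (the repository URL and target folder here are placeholders for the team's actual repository and the mounted USB drive):

```shell
# URL and target path are placeholders -- substitute the team's repository
# and the tester's USB mount point. 'svn export' copies the project tree
# without the .svn metadata folders, unlike 'svn checkout'.
svn export https://example.edu/svn/team-project/trunk /media/usb/team-project
```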
- Verify that the tester's environment is using the JRE version specified in the SRS.
- Tester opens the project in NetBeans. If it has errors, STOP. Require the developers to fix this before proceeding. Do not alter the tester's environment. The most common cause for errors is that the repository library lacks needed Jar files. DO NOT add the Jar files to the tester's working directory. The developers must correct the deficiency in their repository.
- Tester creates a running version of the application on a local workstation.
- Working secretly, the tester modifies the source code to insert one or two defects into the working copy (do not commit the changes). The tester secretly records the line numbers of the modified code.
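As a sketch of what a seeded defect can look like (the class and method here are invented for illustration, not the team's real code), a tester might flip a boundary comparison and note the line number:

```java
// Hypothetical example of defect seeding; names are invented for illustration.
public class SeededDefect {

    // Original line read:  return total <= capacity;
    // Seeded line reads:   return total <  capacity;   (off-by-one at the boundary)
    static boolean fits(int total, int capacity) {
        return total < capacity; // <-- seeded defect; tester records this line number
    }

    public static void main(String[] args) {
        // A regression test that checks the exact boundary value exposes the seed:
        System.out.println(fits(10, 10)); // correct code would print true; seeded code prints false
    }
}
```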
- Development team runs their automated regression tests to see if their test suite can identify the seeded defects.
- For each seeded defect found by the tests, the tester records "FOUND" on their notes. For each seeded defect NOT found by the tests, the tester records "NOT FOUND" on their notes. Tester submits results to instructor at end of lab.
- Developers may use the remaining lab time to try to find the seeded defects that escaped testing and write an appropriate unit test.
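A unit test aimed at an escaped boundary seed might look like this minimal sketch (plain assertions so it runs standalone; a real team suite would likely use JUnit, and the method under test is invented):

```java
// Hypothetical boundary regression test; self-contained rather than JUnit-based.
public class BoundaryTest {

    // Corrected implementation under test (invented for illustration).
    static boolean fits(int total, int capacity) {
        return total <= capacity;
    }

    public static void main(String[] args) {
        // Exercise the exact boundary value, where off-by-one seeds hide:
        if (!fits(10, 10)) throw new AssertionError("boundary case failed");
        // And one value past the boundary:
        if (fits(11, 10)) throw new AssertionError("over-capacity case failed");
        System.out.println("boundary tests passed");
    }
}
```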