CSC 307 Lecture Notes Week 9
Use of Formal Method Specification in Testing
Introduction to System Testing Techniques
Testing Implementation in TestNG and JUnit
Case No.   Inputs                  Expected Output         Remarks
========================================================================
   1       parm 1 = ...            ref parm 1 = ...
           ...                     ...
           parm m = ...            ref parm n = ...
                                   return = ...
           data field a = ...      data field a = ...
           ...                     ...
           data field z = ...      data field z = ...

  ...

   n       parm 1 = ...            ref parm 1 = ...
           ...                     ...
           parm m = ...            ref parm n = ...
                                   return = ...
           data field a = ...      data field a = ...
           ...                     ...
           data field z = ...      data field z = ...
Table 1: Unit test plan.
unix3:~gfisher/work/calendar/testing/implementation/source/java/caltool/integration-test-plan.html
class X {

    // Method under test
    public Y m(A a, B b, C c) { ... }

    // Data field inputs
    I i;
    J j;

    // Data field output
    Z z;
}
class XTest {

    public void testM() {

        // Set up
        X x = new X(...);
        ...

        // Invoke
        Y y = x.m(aValue, bValue, cValue);

        // Validate
        assertEquals(expectedY, y);
    }
}
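As a concrete instance of this pattern, here is a minimal sketch for a hypothetical Rational class (the class and values are illustrative only, not part of the CalendarTool example), showing the set-up / invoke / validate steps with real assertion calls:

    import junit.framework.TestCase;

    // Hypothetical class under test (illustrative only).
    class Rational {
        int num, den;
        Rational(int num, int den) { this.num = num; this.den = den; }

        // Method under test: add two rationals without reducing the result.
        Rational add(Rational other) {
            return new Rational(num * other.den + other.num * den, den * other.den);
        }
    }

    // Companion testing class; one test method per row group of the plan table.
    class RationalTest extends TestCase {
        public void testAdd() {
            // Set up
            Rational half = new Rational(1, 2);
            Rational third = new Rational(1, 3);

            // Invoke
            Rational sum = half.add(third);

            // Validate: 1/2 + 1/3 = 5/6
            assertEquals(5, sum.num);
            assertEquals(6, sum.den);
        }
    }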
package caltool.schedule.junit;

import junit.framework.*;
import caltool.caldb.*;
import mvp.*;
import java.util.*;

/****
 *
 * This is a JUnit version of the ScheduleTest class.  This class has the same
 * logical testing structure as the class in the parent directory, but this
 * class uses JUnit coding conventions.
 * ...
 *
 */
public class ScheduleTest extends TestCase {        // Note extension of TestCase

    ...

    /**
     * Unit test getCategories by calling getCategories on a schedule with a
     * null and non-null categories field.  The Categories model is tested
     * fully in its own class test.
     *                                                                  <pre>
     * Test
     * Case    Input                     Output               Remarks
     * ====================================================================
     *  1      schedule.categories       null                 Null case
     *         = null
     *
     *  2      schedule.categories       same non-null        Non-null case
     *         = non-null value          value
     *                                                                  </pre>
     */
    public void testGetCategories() {

        Categories result;                                  // Method return value

        /*
         * Do case 1 and validate the result.
         */
        schedule.categories = null;                             // Setup
        result = schedule.getCategories();                      // Invoke
        assertTrue(validateGetCategoriesPostcond(result));      // Validate

        /*
         * Do case 2 and validate the result.
         */
        schedule.categories = new Categories();                 // Setup
        result = schedule.getCategories();                      // Invoke
        assertTrue(validateGetCategoriesPostcond(result));      // Validate
    }

    ...
}
package caltool.schedule.junit3;

import junit.framework.*;
import junit.runner.BaseTestRunner;
import caltool.caldb.*;

/****
 *
 * Test driver for ScheduleTest.  This driver class contains only a simple main
 * method that constructs a JUnit test suite containing ScheduleTest.  A JUnit
 * runner, either command-line or in an IDE, takes it from there.
 *
 */
public class ScheduleTestDriver {

    /**
     * Construct a JUnit test suite containing ScheduleTest and call the runner.
     */
    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }

    /*
     * Construct the test suite containing ScheduleTest.
     */
    public static Test suite() {
        TestSuite suite = new TestSuite("ScheduleTest");
        suite.addTestSuite(ScheduleTest.class);
        return suite;
    }
}
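For comparison, TestNG identifies test methods with the @Test annotation rather than the testXxx naming convention, and the test class does not extend a framework base class. A minimal sketch with a hypothetical Counter class (not part of the CalendarTool example), following the same set-up / invoke / validate structure:

    import org.testng.Assert;
    import org.testng.annotations.Test;

    // Hypothetical class under test, for illustration only.
    class Counter {
        private int count;
        public void increment() { count++; }
        public int getCount() { return count; }
    }

    // TestNG test class: no base-class extension, test methods marked @Test.
    public class CounterTest {

        @Test
        public void incrementAddsOne() {
            // Set up
            Counter counter = new Counter();

            // Invoke
            counter.increment();

            // Validate (TestNG's assertEquals takes actual first, expected second)
            Assert.assertEquals(counter.getCount(), 1);
        }
    }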
Test No.   Inputs              Expected Output       Remarks     Path
========================================================================
   i       parm 1 =            ref parm 1 =                       p
           ...                 ...
           parm m =            ref parm n =

where p is the number of the method path covered by test case i.
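For example, a method whose body contains a single if/else statement has two paths, so the plan needs at least two test cases, one per path. A minimal sketch with a hypothetical clampTo method (illustrative only):

    import junit.framework.TestCase;

    // Hypothetical method under test with two paths.
    class Clamp {
        static int clampTo(int value, int limit) {
            if (value < limit) {
                return value;       // path 1: value below the limit
            } else {
                return limit;       // path 2: value at or above the limit
            }
        }
    }

    class ClampTest extends TestCase {
        public void testClampTo() {
            // Test 1, path 1: input below the limit is returned unchanged.
            assertEquals(3, Clamp.clampTo(3, 10));

            // Test 2, path 2: input above the limit is clamped.
            assertEquals(10, Clamp.clampTo(42, 10));
        }
    }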
unix3:~gfisher/work/calendar/testing/implementation/source/java/Makefile
class SomeModestModel {

    public void doSomeModelThing(String name) {
        ...
        hdb.doSomeProcessThing(...);
        ...
    }

    protected HumongousDatabase hdb;
}

class HumongousDatabase {
    public void doSomeProcessThing(...) { ... }
}
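One way to exercise doSomeModelThing in a unit test without touching a real HumongousDatabase is to substitute a lightweight stub for the hdb field. The following is a minimal sketch of that idea, not the CalendarTool code; it assumes a simplified single-String signature for doSomeProcessThing and same-package access to the protected hdb field:

    import junit.framework.TestCase;

    // Simplified stand-ins for the classes above (signatures assumed).
    class HumongousDatabase {
        public void doSomeProcessThing(String record) {
            // The real class would operate on the full database here.
        }
    }

    // Stub that records the call instead of doing real database work.
    class HumongousDatabaseStub extends HumongousDatabase {
        String lastRecord;

        @Override
        public void doSomeProcessThing(String record) {
            lastRecord = record;
        }
    }

    class SomeModestModel {
        protected HumongousDatabase hdb = new HumongousDatabase();

        public void doSomeModelThing(String name) {
            hdb.doSomeProcessThing(name);
        }
    }

    class SomeModestModelTest extends TestCase {
        public void testDoSomeModelThing() {
            // Set up: install the stub in place of the real database.
            SomeModestModel model = new SomeModestModel();
            HumongousDatabaseStub stub = new HumongousDatabaseStub();
            model.hdb = stub;               // possible via same-package access

            // Invoke
            model.doSomeModelThing("some-record");

            // Validate: the model delegated to the database as expected.
            assertEquals("some-record", stub.lastRecord);
        }
    }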
Figure 1: Testing directory structure.
Directory or File | Description |
*Test.java | Implementation of class testing plans. Per the project testing methodology, each testing class is a subclass of the design/implementation class that it tests. |
input | Test data input files used by test classes. These files contain large input data values, as necessary. This subdirectory is empty in cases where testing is performed entirely programmatically, i.e., the testing classes construct all test input data dynamically within the test methods, rather than inputting from test data files. |
output-good | Output results from the last good run of the tests. These are results that have been confirmed to be correct. Note that these good results are platform-independent, i.e., the correct results should be the same across all platforms. |
output-prev-good | Previous good results, in case current results were erroneously confirmed to be good. This directory is superfluous if version control of test results is properly employed. However, this directory remains as a backup to avoid nasty data loss in case version control has not been kept up to date. |
$PLATFORM/output | Current platform-specific output results. These are the results produced by issuing a make command in a platform-specific directory. Note that current results are maintained separately in each platform-specific subdirectory. This allows for the case that current testing results differ across platforms. |
$PLATFORM/diffs | Differences between current and good results. |
$PLATFORM/Makefile | Makefile to compile tests, execute tests, and difference current results with good results. |
$PLATFORM/.make* | Shell scripts called from the Makefile to perform specific testing tasks. |
$PLATFORM/.../*.class | Test implementation object files. |
Table 2: Test file and directory descriptions.