Authors:
Ge Fan (gefan@cis.ksu.edu)
Fang Fang (ffa1928@cis.ksu.edu)
Stacy Lacy (slacy@kscable.com)
Instructor: Dr. Bill Hankley
The cost of changing requirements, altering design elements, and fixing and retesting code errors escalates as a project progresses. Introducing formal inspections into the software development process can have a direct positive impact on project cost, schedule, and functionality.
This tutorial discusses the general concepts and definitions of software
formal inspections, the formal inspection process, and the roles of the
inspection team. Because object-oriented development is now widely
adopted, the emphasis is on formal inspections in an object-oriented
environment.
1.1 What are inspections
1.2 Who uses inspections
1.3 Why use inspections
1.4 What are the differences among inspections, walkthroughs and reviews
1.5 What are paybacks from inspections
2.0 Inspection Process (Lacy)
2.2 Planning an Inspection
2.2.1 Selecting Participants
2.2.2 Developing the Agenda
2.2.3 Distributing Materials
2.2.4 Entrance Criteria
2.3 Pre-Inspection Overview
2.4 Preparing for the Inspection
2.4.1 Inspecting the Product or Artifacts
2.4.2 Checklists
2.4.3 Preparation Logs
2.5 Conducting the Inspection
2.5.1 Focus on Inspecting the Product
2.5.2 Defect Log
2.5.3 Inspection Report
2.6 Post-Inspection Discussion
2.7 Rework
2.8 Follow-up
2.8.1 Follow-up Analysis
2.8.2 Amended Inspection Report
3.1 Procedural roles
3.1.1 Moderator
3.1.2 Author
3.1.3 Reader
3.1.4 Recorder
3.1.5 Inspectors
3.2 Guidelines for roles
3.3 Participation of inspectors
3.3.1 Planning
3.3.2 Overview
3.3.3 Preparation
3.3.4 Inspection Meeting
3.3.5 Discussion
3.3.6 Rework
3.3.7 Follow-up
4.0 Inspections During the Software Life Cycle
4.1 Requirements Inspection (Fang) (this section is currently unavailable)
4.1.1 Purpose
4.1.2 Materials to be distributed
4.1.3 Entrance criteria
4.1.4 Reference materials
4.1.5 Coverage rates
4.1.6 Participants and roles
4.1.7 Procedure
4.1.8 Exit criteria
4.1.9 Checklist
4.3 Code Inspection (Fan)
5.1 ASSIST
5.2 Scrutiny
5.3 ICICLE
5.4 CSI
5.5 WiP
5.6 ReviewPro
5.7 CheckMate
6.0 Summary (Lacy)
8.0 Self-assessment (Fan, Fang, Lacy)
9.0 Glossary (Fan, Fang, Lacy)
1.1 What are inspections
Inspections are a means of verifying intellectual products: small groups
of peers manually examine the developing product, a piece at a time, to
ensure that it is correct and conforms to product specifications and
requirements [STRAUSS]. The purpose of inspections is the detection of
defects. Two aspects of inspections deserve emphasis. First, inspections
occur in the early stages of the software life cycle (requirements,
design, and coding) and examine one piece of the developing product at a
time. Without inspections, a defect introduced in the requirements stage
is amplified into more defects in the design stage and still more in the
coding stage; detecting defects earlier therefore lowers development
cost and helps ensure the quality of the delivered product. Second, a
small group of peers concentrating on one part of the product can detect
more defects than the same number of people working alone. This improved
effectiveness comes from the thoroughness of the inspection procedures
and the synergy achieved by an inspection team.
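The cost argument above can be made concrete with a small sketch. The 1:10:100 escalation ratios and the `repair_cost` function below are illustrative assumptions, not figures taken from this tutorial:

```c
/* Illustrative only: 1:10:100 is a common rule of thumb for how the cost
 * of fixing the same defect grows from requirements to design to code. */
#define COST_REQUIREMENTS 1.0
#define COST_CODE 100.0

/* Total repair cost for n requirements defects when a fraction
 * caught_early is removed by inspection and the rest escape to code. */
double repair_cost(int n, double caught_early)
{
    double early = n * caught_early * COST_REQUIREMENTS;
    double late  = n * (1.0 - caught_early) * COST_CODE;
    return early + late;
}
```

Under these assumed ratios, catching all of 10 requirements defects by inspection costs 10 cost units, while letting all of them escape to code costs 1000.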
1.2 Who uses inspections
Inspections can play a significant role in a quality management system
when used consistently and correctly, but they are of little value if
applied haphazardly, without controls, or to the wrong tasks [STRAUSS].
For example, inspections cannot be applied, or are of only limited use,
in the following circumstances:
1.4 What are the differences among inspections,
walkthroughs and reviews
Among methods of quality control, inspection is a mechanism that has
proven extremely effective for the specific objective of product
verification in many development activities. It is a structured method
of quality control: it must follow a specified series of steps that
define what can be inspected, when it can be inspected, who can inspect
it, what preparation is needed, how the inspection is to be conducted,
what data is to be collected, and what the follow-up to the inspection
is. Inspections on a project therefore exhibit close procedural control
and repeatability. Reviews and walkthroughs, by contrast, have less
structured procedures and can serve many purposes and formats. Reviews
can be used to form decisions and resolve design and development issues,
or as a forum for information swapping or brainstorming. Walkthroughs
are used to resolve design or implementation issues. Both methods can
range from formalized, following a predefined set of procedures, to
completely informal; they thus lack the close procedural control and
repeatability of inspections.
1.5 What are paybacks from inspections
The key reason for inspections is to obtain a significant improvement
in software quality, as measured by the defects found in the product
when it is used. An example from AT&T's Integrated Corporate Information
System (ICIS) project shows the inspection results for a portion of ICIS.
Inspection Results:
Figure 2.1 Inspection Process Flow (currently unavailable).

| Inspection Step | Deliverable(s) | Responsible Role |
|---|---|---|
| Planning | Participant List | Moderator |
| | Materials for Inspection | Author |
| | Agenda | Moderator |
| | Entrance Criteria | Moderator |
| Overview | Common Understanding of Project | Moderator |
| Preparation | Preparation Logs | Each Inspector |
| Inspection | Defect Log | Recorder |
| | Inspection Report | Moderator |
| Discussion | Suggested Defect Fixes | Various |
| Rework | Corrected Defects | Author |
| Follow-up | Inspection Report (amended) | Moderator |
Another complexity in selecting participants is deciding how many people to include. The number is partly determined by assigning specific roles; section 5.0 discusses roles and how to assign them. A second consideration on team size has to do with communication: the larger the group, the more lines of communication must be maintained. If the group grows beyond 6-10 people, the time spent on communication, scheduling, and maintaining focus will detract from the quality and timeliness of the inspection. Rule of thumb #2: the optimum team size for inspections is less than 10.
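The quadratic growth behind this rule of thumb can be sketched directly (a minimal illustration, not part of the tutorial): a team of n people has n(n-1)/2 pairwise lines of communication.

```c
/* Pairwise communication paths the moderator must keep productive:
 * each pair of participants is one line of communication. */
int communication_paths(int team_size)
{
    return team_size * (team_size - 1) / 2;
}
```

Four inspectors give 6 paths; ten give 45, which is one reason groups beyond ten people spend their time coordinating rather than inspecting.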
A common question is whether managers should be included. Managers should be made aware of the outcome of inspections, but generally not included [FREE]. Referring to rule of thumb #1, managers should participate only if they will directly add value to the substance of the inspection. Inspections and reviews are meant to improve software quality, not to assess or manage people; if the latter becomes the goal of the process, inspections become threatening and participants will not be completely open [NASA1].
The method of distribution for materials is dependent on the culture of the development team and the type of inspection to be conducted. Many participants may prefer hard copy for design elements, while others prefer to navigate through electronic documents. Online access to code for inspection may be preferred for access to search functions.
Inspection participants should be expected to bring to the meeting any materials they need for the review. No new material should be introduced at the meeting; if the materials were incomplete, the review should be rescheduled.
For inspections conducted by people already familiar with the project, the overview may be abbreviated or eliminated. If an inspection seems bogged down in questions that should be general knowledge, the moderator may wish to postpone the inspection and conduct an overview.
The defect log alone may be sufficient for many projects. However, some defects are complex and require further information; a defect report form can be used in conjunction with the defect log to provide it. In such circumstances, the defect log becomes the index to the individual defect reports.
Click here for an example Defect Log.
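As a hedged sketch (the field names below are hypothetical, not taken from the tutorial's actual log form), a defect-log entry that indexes separate defect reports might carry:

```c
#include <string.h>

/* Hypothetical layout of one defect-log row; `id` doubles as the index
 * into any separate, more detailed defect reports. */
struct defect_entry {
    int id;
    const char *location;   /* file/section and line where found */
    const char *severity;   /* "Major" or "Minor" (see glossary) */
    const char *type;       /* checklist category, e.g. "Data Usage" */
    const char *summary;    /* short statement of the nonconformance */
};

/* A "Major" defect could later surface as a test or field problem. */
int is_major(const struct defect_entry *d)
{
    return strcmp(d->severity, "Major") == 0;
}
```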
If rework is required, the Inspection Report will later be amended after completion of rework. Click here for an example Inspection Report.
The amended Inspection Report should provide one of two recommendations for the product inspected: Accept or Reject.
3.0 Roles in Inspection Process
The inspection process is performed by an inspection team and is repeated many times, on many work products, during a given product development. How well the inspection teams do their job determines whether there is a net decrease in the overall development schedule and a net increase in productivity. To carry out an inspection, five specific procedural roles are assigned.
3.1 Roles Responsibilities
Moderator
Reader
Recorder
Author
Inspector
3.2 Guidelines for roles
To ensure the quality, efficiency, and effectiveness of inspection teams,
it is important to carefully manage and use well-formed teams. An
inspection team should combine several factors:
All team members are inspectors. Readers and recorders should be
experienced inspectors, and the number of inexperienced inspectors
should be limited where possible.
The minimum team size is three (a moderator/recorder, a reader, and an author).
Enough team members are needed to adequately verify the work product for
the intended purpose of the inspection, but additional people reduce the
effectiveness of the process. The team should therefore be small, with a
maximum of seven persons.
3.3 Participation of inspectors
3.3.1 Planning
Roles: - Moderator
- Author
3.3.2 Overview
Roles: - Moderator
- Author
- Inspectors
3.3.3 Preparation
Roles: - All inspectors
3.3.4 Inspection Meeting
Roles: - Moderator
- Author
- Reader
- Recorder
- Inspectors
3.3.5 Discussion
Roles: - All inspectors
3.3.6 Rework
Roles: - Author
3.3.7 Follow-Up
Roles: - Moderator
- Author
Formal inspections are in-process peer reviews conducted within the
phase of the life cycle in which the product is developed. The software
life cycle is the period of time that starts when a software product is
conceived and ends when the product is no longer available for use. It
traditionally includes the following eight phases:
Concept and Initiation Phase
Requirements Phase
Architectural Design Phase
Detailed Design Phase
Implementation Phase
Integration and Test Phase
Acceptance and Delivery Phase
Sustaining Engineering and Operations Phase.
This tutorial emphasizes inspections in the requirements, design, and
implementation phases of software development and suggests products
that may be inspected during each phase. The software life cycle used
is the NASA standard waterfall model. The following sections describe
the inspections to be conducted in the three phases.
In an object-oriented project, the Preliminary Design Review (PDR) would focus on the package and collaboration diagrams. The PDR might include reviewing the design to ensure compliance with a corporate component framework. A verification that all requirements for the system are mapped to a software component may also be done in this phase. Some organizations require that a test matrix be defined and reviewed in the PDR [NASA2].
PDRs can be facilitated by the use of checklists for reviewers. Since the PDR is at a high level, the checklist can be useful for a wide range of projects implemented with varying technologies. The NASA Jet Propulsion Lab uses an Architecture Design Checklist. Click here for the Checklist
Each inspection of the design completed as part of, or before, the Critical Design Review (CDR) would follow the basic inspection process outlined in section 4 of this chapter. Each inspection consists of planning, preparation, inspection, discussion, rework, and follow-up. The inspection should result in a defect log, which may be customized for the design inspection process. The inspection report provides management with an assessment of readiness to begin construction of the system.
CDRs can also benefit from a standard checklist of design considerations. Click here for the Detailed Design Checklist from JPL.
Code and all new documentation are candidates for inspection during this
phase. Code inspections should check the technical accuracy and
completeness of the code, verify that it implements the planned design,
and ensure that good coding practices and standards are used. Code
inspections should be done after the code
has been compiled and all syntax errors removed, but before it has been
unit tested. Other candidates are the integration and test plan and procedures,
and other documents that have been produced. Documents should be inspected
for accuracy, completeness, and traceability to higher level documents.
The inspection team may be selected from participants in the detailed design,
code, test, verification and validation, or from software quality assurance
[NASA1].
4.3.1 Purpose
According to [STRAUSS], the basic purposes of the code inspection include:
4.3.2 Materials to be distributed
The following materials need to be distributed prior to the inspection:
4.3.7 Inspection procedures
Examine the code for conformance to the detailed design, conformance
to the coding standards, and correctness. Focus on control logic, linkage
parameters, internal and external interfaces, and data definitions and
usage. Look for performance, structuring, and storage problems.
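As a sketch of what that focus looks like in practice (a hypothetical routine, not taken from the tutorial), the comments below tag each line with the inspection concern it answers:

```c
#include <stddef.h>

/* Hypothetical routine annotated with the concerns a code inspection
 * walks through: interfaces, data usage, control logic, computation. */
double mean_reading(const double *readings, size_t count)
{
    /* External interface: reject bad linkage parameters up front. */
    if (readings == NULL || count == 0)
        return 0.0;

    double sum = 0.0;                    /* data definition, initialized */
    for (size_t i = 0; i < count; i++)   /* control logic: clear loop bound */
        sum += readings[i];

    return sum / (double)count;          /* computation: no integer division */
}
```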
This section surveys existing inspection tools and techniques, available either commercially or for free use. Dunsmore [DUNS] evaluated several tools created for inspections, such as ASSIST, Scrutiny, ICICLE, CSI, WiP, ReviewPro and CheckMate. The following sections briefly describe some current inspection tools and highlight any features that could help in the comprehension of code being inspected.
5.1 ASSIST
ASSIST (Asynchronous/Synchronous Software Inspection Support Tool)
is an inspection tool designed by F. Macdonald to support any inspection
process and allow inspection of any type of document. This is a research
tool that uses a custom-designed process modeling language called IPDL
(Inspection Process Definition Language). ASSIST is based on a client/server
architecture, where the server is used as a central repository of documents
and other data. ASSIST supports both individual and group-based phases
of inspection. Group-based phases can be performed synchronously or asynchronously,
with the choice of same-place or different-place synchronous meetings.
Macdonald created a generic software inspection template, which would cater for all current inspection processes and be versatile enough to cope with any future processes. This generic template was converted into a process definition language, and was embedded into ASSIST.
Facilities within ASSIST include defect finding aids, enhanced document representations, metric collection and analysis, and support for distributed inspection. ASSIST provides an on-line checklist, which can be marked off as each item on the list is covered, as well as a text browser used to view the product under inspection. The text browser can be used to highlight areas of the document and add annotations describing the defects that have been found [DUNS].
ASSIST is freely available for research purposes. It currently runs on SunOS 4.1.3, Solaris 5.1 and OSF/1 platforms, and requires Python 1.4 and Tcl 7.4/Tk 4.0. Full details are available on the ASSIST homepage: http://www.cs.strath.ac.uk/CS/research/efocs/assist.html
5.2 Scrutiny
Scrutiny is an on-line inspection system developed by Bull HN
Information Systems in conjunction with the University of Illinois. It
is a general inspection support tool that can support distributed
inspections. The inspection method used by Scrutiny is based on four
phases. The first phase is called Initiation, where the inspection team
is formed and the moderator prepares the necessary documentation. Phase
two, Preparation, involves the inspectors creating their annotations of
the documents presented for inspection. Phase three, Resolution, is
equivalent to the inspection meeting. The final phase, Completion,
encompasses both the rework and follow-up stages of the inspection process.
Scrutiny supports only text documents, but has been designed as an open tool in the hope of integrating other tools at a later date. In the product window, which displays the document being inspected, areas of text can be highlighted and annotations assigned to them. There is no comprehension support available in Scrutiny, and no support for checklists [DUNS].
A related paper, "Scrutiny: A Collaborative Inspection and Review System" by John W. Gintell, John Arnold, Michael Houde, Jacek Kruszelnicki, Roland McKenney and Gerard Memmi, Proceedings of the Fourth European Software Engineering Conference, Garmisch-Partenkirchen, Germany, September 1993, can be found at URL: http://www.ics.hawaii.edu/~johnson/FTR/Bib/Gintell93.ps
5.3 ICICLE
Intelligent Code Inspection in a C Language Environment (ICICLE) is
an intelligent inspection assistant for the inspection of C code. A major
difference between this tool and others is that it tries to find common
defects itself, in an attempt to help the inspector by removing the more
obvious errors in the code. ICICLE achieves this through its own rule-based
static debugging tool and the UNIX lint tool.
ICICLE supports a two-phase inspection: the individual inspection and
the inspection meeting. Group meetings are held in the same room; no
distributed facilities are provided. In the individual inspection stage,
inspectors can produce comments for each line of code. A referencing
system is supplied for variables and functions, allowing quick movement
through the code; for example, selecting a variable name moves you to
its point of declaration. A hypertext browser is also available,
providing domain-specific knowledge. No further facilities are provided
for comprehension support [DUNS].
A related paper, "ICICLE: Groupware for Code Inspection" by L. Brothers, V. Sembugamoorthy and M. Muller, can be found in Proceedings of the 1990 ACM Conference on Computer-Supported Cooperative Work, pages 169-181, October 1990.
5.4 CSI
CSI (Collaborative Software Inspection) is an on-line inspection
environment that allows distributed inspections, developed at the
University of Minnesota. All material is available on-line, and
inspection products are created on-line. CSI was designed to cope with
four types of collaborative inspection meeting: 1) same time, same
place; 2) same time, different place; 3) different time, same place;
4) different time, different place. CSI supports both synchronous
activities (e.g. the group meeting) and asynchronous activities
(e.g. individual checking).
CSI contains a browser that displays the material under inspection, but currently only supports text, and contains hyperlinks from the inspected material to a fault list, note pad, inspection summary, and action list. CSI contains hyperlinks to allow easy navigation between different areas of the system, but currently contains no help for the preparation stage or for checklists or any other form of defect detection.
A related paper, "Distributed, Collaborative Software Inspection" by Vahid Mashayekhi, Janet Drake, Wei-Tek Tsai and John Riedl, can be found in IEEE Software, Volume 10, Number 5, September 1993.
5.5 WiP
The WiP tool is designed to support distributed inspections, attempting
to solve the problem of a scattered inspection team. WiP utilizes the
World Wide Web and is designed to distribute the documents to be
inspected, allow annotation of those documents, search related
documents, allow selection of a checklist, and gather inspection
statistics. For the preparation stage of inspection, users are given
access to source documents and checklists, as well as informal
information that can lead to a better understanding of the documents.
The WiP interface also contains hyperlinks to allow easy navigation
through the documents. Annotations made during the inspection are not
applied to the documents directly, to avoid duplication of data; they
are instead kept separately and sent back to the main server. By the
authors' own admission, WiP was designed primarily as an investigation
into the possibility of carrying out inspections over the World Wide
Web, not as a complete inspection tool. As with ASSIST, WiP is aimed at
enforcing the rigours of the overall inspection process [DUNS].
Evaluation of the tool indicated that, despite some minor shortcomings, it was well liked. On-line commenting was well received, and eliminating the tangle of paper documents and reports made the inspections more effective. Many test users liked the simple statistics, and the tool was easy to learn and use.
A related paper "A WWW-Based Tool for Software Inspection", by Lasse Harjumaa, Ilkka Tervonen, University of Oulu can be found at Proceedings of the 31st Hawaii International Conference on System Sciences (HICSS'98), published by the IEEE Computer Society.
5.6 ReviewPro
ReviewPro is a commercial tool developed by Software Development
Technologies Corporation. It is a web-based tool for software technical
reviews and inspections, which the vendor positions as automating a
process that has proven to be among the most effective forms of defect
detection and removal, preventing the migration of defects to later
phases of software development. ReviewPro is architecturally independent
of the web browser, web server and messaging server software, and runs
on both Windows NT and Unix server platforms.
5.7 CheckMate
CheckMate enables a software inspection group to automatically check
C and C++ source code against a pre-determined coding policy. CheckMate
allows users to configure the coding policy or standard according to the
developers' needs, and it also keeps a list of software metric
statistics. In essence, it allows the inspectors to concentrate on the
functionality and architecture of the classes and their methods.
CheckMate is available for all Windows platforms, with UNIX/VMS and
Visual Basic support under development. More information can be found at
http://www.sybernet.ie/source/projects_frame.html
[DUNS] Alastair Dunsmore, "Comprehension and Visualisation of Object-Oriented Code for Inspections", URL: http://www2.umassd.edu/SWPI/EFoCS/EFoCS-33-98.pdf
[MAC] Macdonald, F. and J. Miller, "A Comparison of Tool-Based and Paper-Based Software Inspection", URL: http://www2.umassd.edu/SWPI/ISERN/ISERN-98-17.pdf
[FREE] Freedman, Daniel P. and Gerald M. Weinberg, "Handbook of Walkthroughs, Inspections, and Technical Reviews: Evaluating Programs, Projects, and Products", Dorset House Publishing, New York, 1990.
[HOLL] Hollocker. "Software Reviews and Audits Handbook", 1990.
[IEEE1] Institute of Electrical and Electronics Engineers. "IEEE standard for software reviews / Sponsor Software Engineering Standards Committee of the IEEE Computer Society", The Institute. New York. 1998.
[NASA1] National Aeronautics and Space Administration. "Software Formal Inspections Guidebook", NASA-GB-A302. August, 1993.
[NASA2] National Aeronautics and Space Administration. "Software Assurance Guidebook", NASA-GB-A201.
[NASA3] National Aeronautics and Space Administration, "Software Formal Inspections Standard", NASA-STD-2202-93, April 1993.
[NASA4] National Aeronautics and Space Administration, "NASA Guidebook and Standard" (this useful link is temporarily listed here)
"Reviews and Inspections" (this useful link is temporarily listed here)
"Brad Appleton's Software Engineering Links" (this useful link is temporarily listed here)
"Software Quality Links" (this useful link is temporarily listed here)
[PRESSMAN] Pressman, R., "Software Engineering: A Practitioner's Approach (4th Edition)", McGraw-Hill, 1997.
[STRAUSS] Strauss, S. H. and R. G. Ebenau, "Software Inspection Process", McGraw-Hill, Inc, 1994.
[GILB] Gilb, T. and D. Graham, "Software Inspections", 1993.
[SCHULMEYER] Schulmeyer G. G. and J. I. McManus, "Handbook of Software
Quality Assurance", Van Nostrand Reinhold Company Inc., 1987.
These questions are to help you assess your understanding of the concepts
in this article.
1. What is the purpose of an inspection?
A. To make a go/no-go decision to proceed to the next phase.
B. To find defects through a detailed examination of the product.
C. To define the methods for developing software.
D. For managers to evaluate personnel.
2. What is the difference between a review and an inspection?
A. They are the same.
B. An inspection is a project management check and a review finds errors.
C. An inspection is a formal process to detect defects, and a review is a management process to determine whether the product is ready to move to the next phase.
D. A review is a sub-set of an inspection.
3. What is a defect?
A. A violation of a standard or a procedure.
B. A sub-optimal design approach.
C. Code that will not compile.
D. A difference in design approach.
4. Identify the five valid roles in an inspection.
A. Author
B. Leader
C. Coordinator
D. Recorder
E. Secretary
F. Moderator
G. Reader
H. Manager
I. Inspectors
Roles
Author(s)
Person or persons primarily responsible for creating a work product. The member of the inspection team who provides information about the work product during all stages of the inspection process and corrects defects during the rework stage. (Also known as (AKA) "Owner(s)".)
Inspection Team
A small group of peers who have a vested interest in the quality of the inspected work product and perform the inspection. This group usually ranges in size from 3 to 8 people and can be selected from various areas of the development life cycle (requirements, design, implementation, testing, quality assurance, user, etc.). Selected members of the inspection team fulfill the roles of moderator, author, reader, and recorder.
Inspector
A person whose responsibilities include reviewing work products created by others. All members of an inspection team are considered inspectors.
Moderator
Person primarily responsible for facilitating and coordinating an inspection. When there is no "Reader" in the inspection process, the moderator also controls the pace of review of the work product during the inspection meeting.
Reader
Person who guides the team during the inspection meeting by reading or paraphrasing the work product. The role of the reader is usually fulfilled by a member of the inspection team other than the author(s). Not all inspection methods use this role.
Recorder
Person who records, in writing, each defect found and its related information (severity, type, etc.) during the inspection meeting. (AKA "Scribe".)
Process
Formal Inspection
Planning Stage
Period of time in which details for an inspection are decided and necessary arrangements are made. These usually include checking that entry criteria have been met, selecting an inspection team, finding a time and place for the inspection meeting, and deciding whether an overview meeting is needed.
Overview Meeting Stage
Meeting where the author(s) present background information on the work product for the inspection team. An overview meeting is held only when the inspection team needs background information to efficiently and effectively examine the work product. (AKA Kickoff Stage.)
Kickoff Stage
Used to brief the inspection team on the contents of the inspection packet, the inspection objective(s), the inspectors' defect-finding role(s), logistics for the inspection meeting, recommended preparation time, and the preparation stage data to be collected. The moderator can elect to hold a short (5 to 30 minute) meeting or may use any other method that will accomplish briefing the team.
Preparation Stage
Period of time in which inspectors individually study and examine the work product. Usually a checklist is used to suggest potential defects in the work product.
Inspection Meeting Stage
Meeting where the work product is examined for defects by the entire inspection team. The results of this meeting are recorded in a defect list (defect log).
Third Hour Stage
Time allotted for members of the inspection team to resolve open issues and suggest solutions to known defects. (Or Causal Analysis Stage.)
Causal Analysis Stage
Time allotted for the inspection team to analyze defect causes and/or inspection process problems and, if possible, to determine solutions for those problems.
Rework Stage
Time allotted for the author(s) to correct defects found by the inspection team.
Follow-Up Stage
Short meeting between the author(s) and moderator to verify that defects found during the inspection meeting have been corrected and that exit criteria have been met. When exit criteria have not been met, the inspection team repeats the inspection stages described under "Process".
Materials
Checklist
A list of items posed as questions summarizing potential technical problems for an inspection. Checklists are used during the preparation stage to suggest potential defects and again during the inspection meeting stage to classify defects according to their type.
Defect Classification
The process by which defects identified during an inspection are classified by severity and type.
Entry Criteria
A set of measurable actions that must be completed before the start of a given task.
Exit Criteria
A checklist of activities or work items that must be complete or exist, respectively, prior to the end of a given process stage, activity, or subactivity.
Inspection Package
The collection of work products and corresponding documentation presented for inspection, as well as required and appropriate reference materials.
Severity
A degree or category of magnitude for the ultimate impact or effect of executing a given software fault, regardless of probability. The severity of a defect is generally classified as "Major" or "Minor".
Major
A defect which, if not identified and removed in a work product or a subsequent work product, could result in a test or customer/field reported problem.
Minor
All other defects.
Type of Inspection
A group of inspections which share a common type of work product, life cycle phase, checklist, entry criteria, and exit criteria. Common inspection types include (but are not limited to): systems requirements, system design, software requirements, software detailed design, source code, test plans, test procedures, user documentation, plans, standards, procedures, training materials, hardware diagrams, interface specifications, and the SIRO Newsletter.
Work Product
The output of a task. Formal work products are delivered to the acquirer. Informal work products are necessary to an engineering task but not deliverable. A work product may also be an input to a task.
FUNCTIONALITY
1. Does each module have a single function?
2. Is there code which should be in a separate function?
3. Is the code consistent with performance requirements?
4. Does the code match the Detailed Design? (The problem may be in
either the code or the design.)
DATA USAGE
A. Data and Variables
1. Are all variable names lower case?
2. Are names of all internals distinct in 8 characters?
3. Are names of all externals distinct in 6 characters?
4. Do all initializers use "="? (v.7 and later; in all
cases should be consistent).
5. Are declarations grouped into externals and internals?
6. Do all but the most obvious declarations have comments?
7. Is each name used for only a single function (except single character
variables "c", "i", "j", "k", "n", "p", "q", "s")?
B. Constants
1. Are all constant names upper case?
2. Are constants defined via "#define"?
3. Are constants that are used in multiple files defined in an INCLUDE
header file?
C. Pointers Typing
1. Are pointers declared and used as pointers (not integers)?
2. Are pointers not typecast (except assignment of NULL)?
CONTROL
1. Are "else__if" and "switch" used clearly? (generally "else__if"
is clearer, but "switch" may be used for not-mutually-exclusive cases,
and may also be faster).
2. Are "goto" and "labels" used only when absolutely necessary, and
always with well-commented code?
3. Is "while" rather than "do-while" used wherever possible?
LINKAGE
1. ARE "INCLUDE" files used according to project standards?
2. Are nested "INCLUDE" files avoided?
3. Is all data local in scope (internal static or external static)
unless global linkage is specifically necessary and commented?
4. Are the names of macros all upper case?
COMPUTATION
A. Lexical Rules for Operators
1. Are unary operators adjacent to their operands?
2. Are the primary operators "->", ".", and "()" written with no space
around them?
3. Do assignment and conditional operators always have space around
them?
4. Are commas and semicolons followed by a space?
5. Are keywords followed by a blank?
6. Is the use of "(" following function name adjacent to the identifier?
7. Are spaces used to show precedence? If precedence is at all complicated,
are parentheses used (especially with bitwise ops)?
B. Evaluation Order
1. Are parentheses used properly for precedence?
2. Does the code depend on evaluation order, except in the following
cases?
a. expr1, expr2
b. expr1 ? expr2 : expr3
c. expr1 && expr2
d. expr1 || expr2
3. Are shifts used properly?
4. Does the code depend on the order of side effects? (e.g., i = i++;)
MAINTENANCE
1. Are library routines used?
2. Are non-standard usages isolated in subroutines and well documented?
3. Does each module have one exit point?
4. Is the module easy to change?
5. Is the module independent of specific devices where possible?
6. Is the system standard defined types header used if possible (otherwise
use project standard header, by "include")?
7. Is use of "int" avoided (use standard defined type instead)?
CLARITY
A. Comments
1. Is the module header informative and complete?
2. Are there sufficient comments to understand the code?
3. Are the comments in the modules informative?
4. Are comment lines used to group logically-related
statements?
5. Are the functions of arrays and variables described?
6. Are changes made to a module after its release noted in the development
history section of the header?
B. Layout
1. Is the layout of the code such that the logic is apparent?
2. Are loops indented and visually separated from the surrounding code?
C. Lexical Control Structures
Is a standard project-wide (or at least consistent) lexical control
structure pattern used? For example, either

while (expr)
{
    stmts;
}

or

while (expr) {
    stmts;
}

and similarly for the other control structures.
FUNCTIONALITY
1. Do the modules meet the design requirements?
2. Does each module have a single purpose?
3. Is there some code in the module which should be a function or a
subroutine?
4. Are utility modules used correctly?
5. Does the code match the Detailed Design specifications? If not,
the design specifications may be in error.
6. Does the code impair the performance of the module (or program)
to any significant degree?
DATA USAGE
A. General
1. Is the data defined?
2. Are there undefined or unused variables?
3. Are there typos, particularly "O" for zero, and "l" for one?
4. Are there misspelled names which are compiled as function or subroutine
references?
5. Are declarations in the correct sequence? (DIMENSION, EQUIVALENCE,
DATA).
B. Common/Equivalence
1. Are there local variables which are in fact misspellings of a COMMON
element?
2. Are the elements in the COMMON in the right sequence?
3. Do EQUIVALENCE statements force any unintended shared data storage?
4. Is each EQUIVALENCE commented?
C. Arrays
1. Are all arrays DIMENSIONed?
2. Are array subscript references in column, row order? (Check all
indices in multi-dimensioned arrays.)
3. Are array subscript references within the bounds of the array?
4. Are array subscript references checked in critical cases?
5. Is each array used for only one purpose?
D. Variables
1. Are the variables initialized in DATA statements, BLOCK DATA, or
previously defined by assignments or COMMON usage?
2. Should variables initialized in DATA statements actually be initialized
by an assignment statement; that is, should the variable be initialized
each time the module is invoked?
3. Are variables used for only one purpose?
4. Are variables used for logical unit assignments?
5. Are the correct types (REAL, INTEGER, LOGICAL, COMPLEX) used?
E. Input and Output
1. Do FORMATs correspond with the READ and WRITE lists?
2. Is the intended conversion of data specified in the FORMAT?
3. Are there redundant or unused FORMAT statements?
4. Should this module be doing any I/O? Should it be using a message
facility?
5. Are messages understandable?
6. Are messages phrased with the correct grammar? Do messages read
like a robot or person talking? Robot: "Mount tape on drive. Turn on."
Person: "Mount the tape on the tape drive. Then turn the tape drive on."
7. Does each line of a message fit on all of the expected output devices?
F. Data
1. Are all logical unit numbers and flags assigned correctly?
2. Is the DATA statement used and not the PARAMETER statement?
3. Are constant values constant?
CONTROL
A. Loops
1. Are the loop parameters expressed as variables?
2. Is the initial parameter tested before the loop in those cases where
the initial parameter may be greater than the terminal parameter?
3. Is the loop index within the range of any array it is subscripting?
Is there a check in critical cases such as COMMONs?
4. Is the index variable only used within the DO loop?
5. If the value of the index variable is required outside the loop,
is it stored in another location?
6. Does the loop handle all the conditions required?
7. Does the loop handle error conditions?
8. Does the loop handle cases which may "fall through"?
9. Is loop nesting in the correct order?
10. Can loops be combined?
11. If possible, do nested loops process arrays as they are stored,
with the innermost loop processing the first index (column index) and outer
loops processing the row index?
B. Branches
1. Are branches handled correctly?
2. Are branches commented?
3. When using computed GO TOs, is the fall-through case tested, checked,
and handled correctly?
4. Are floating point comparisons done with tolerances and never made
to an exact value?
LINKAGE
1. Does the CALLing program have the same number of parameters as each
routine?
2. Are the passed parameters in the correct order?
3. Are the passed parameters the correct type? If not, are mismatches
proper?
4. Are constant values passed via a symbol (variable) rather than being
passed directly?
5. Is an unused parameter named DUMMY, or some name which reflects
its inactive status?
6. Is an array passed to a subroutine only when an array is defined
in the subroutine?
7. Are the input parameters listed before the output parameters?
8. Does the subroutine return an error status output parameter?
9. Do the return codes follow conventions?
10. Are arrays used as intended?
11. If array dimensions are passed (dynamic dimensioning) are they
greater than 0?
12. If a subroutine modifies an array, are the indices checked, or
are the dimensions passed as parameters?
13. Does a subroutine modify any input parameter? If so, is this fact
clearly stated?
14. Do subroutines end with a RETURN statement and not a STOP or a
CALL EXIT?
15. Does a FUNCTION routine have only one output value?
COMPUTATION
1. Are arithmetic expressions evaluated as specified?
2. Are parentheses used correctly?
3. Is the use of mixed-mode expressions avoided?
4. Are intermediate results stored instead of recomputed?
5. Is all integer arithmetic involving multiplication and division
performed correctly?
6. Do integer comparisons account for truncation?
7. Are complex numbers used correctly?
8. Is the precision length selected adequate?
9. Is arithmetic performed efficiently?
10. Can a multiplication be used instead of a division? If so, is it
commented so as not to obscure the process?
MAINTENANCE
1. Are library routines used?
2. Is non-standard FORTRAN isolated in subroutines and well documented?
3. Is the use of EQUIVALENCE limited so that it does not impede understanding
the module?
4. Is the use of GO TOs limited so that it does not impede understanding
the module?
5. Does each module have one exit point?
6. Is there no self-modifying code? (No ASSIGN statements, or PARAMETER
statements.)
7. Is the module easy to change?
8. Is the module independent of specific devices where possible?
9. Where possible, are the CALLing routine parameter names the same
as the subroutine parameter names?
10. Are type declarations explicit rather than implicit when possible?
CLARITY
1. Is the module header informative and complete?
2. Are there sufficient comments to understand the code?
3. Are the comments in the modules informative?
4. Are comment lines used to group logically-related statements?
5. Are the functions of arrays and variables described?
/********************************************************************************************
Training Program simple_sort.cc

Specification for program "simple_sort"

Name: simple_sort - sort a list of numbers entered by the user

Usage: simple_sort

Description:
simple_sort starts by prompting for the number of items to be sorted. The
program then reads in the list of numbers from the user, sorts
them into ascending numerical order, then prints the sorted list.

Options: None.

Example: Sorting a list of ten numbers:
% simple_sort
Enter the number of data values: 10
Data item 0: 5
Data item 1: 6
Data item 2: 7
Data item 3: 8
Data item 4: 2
Data item 5: 3
Data item 6: 9
Data item 7: 1
Data item 8: 4
Data item 9: 10
Sorted list:
Data item 0: 1
Data item 1: 2
Data item 2: 3
Data item 3: 4
Data item 4: 5
Data item 5: 6
Data item 6: 7
Data item 7: 8
Data item 8: 9
Data item 9: 10

Restrictions: The number of elements which can be sorted is currently
limited to 100.

Author: Fraser Macdonald
********************************************************************************************/
(The listing below deliberately contains the defects recorded in the inspection logs that follow; they are the subject of the training exercise.)

 1  #include <iostream.h>
 2
 3  const int TABLESIZE = 100;
 4
 5  void swap(int x, int y)
 6  {
 7      int temp = x;
 8      x = y;
 9      y = temp;
10  }
11
12  int max(int x, int y)
13  {if (x > y) return x; else return y;}
14
15  int main()
16  {
17      int size;
18      int table[TABLESIZE];
19
20      cout << "Enter the number of data values: ";
21      cin >> size;
22
23      if(size >= TABLESIZE)
24          cout << "Too many elements, maximum is " << TABLESIZE << endl;
25      else {
26          for(int i = 0; i < size; i++){
27              cout << "Data item " << i << ": ";
28              cin >> table[i];
29          }
30          for(i = size - 1; i > 0; i--)
31              for(int j = 0; j <= i - 1; j++)
32                  if(table[j] > table[j+1])
33                      swap(table[j], table[j+1]);
34
35          cout << endl << "Sorted lits:" << endl;
36          for(i = 0; i < size; i++)
37              cout << "Data item " << i << ": " << table[i] << endl;
38      }
39      return(0);
40  }
C/C++ Code Checklist

1. Specification
Is the functionality described in the specification fully implemented by the code?
Is there any excess functionality implemented by the code but not described in the specification?
Is the program interface implemented as described in the specification?

2. Initialisation and Declarations
Are all local and global variables initialised before use?
Are variables and class members i) required, ii) of the appropriate type, and iii) correctly scoped?

3. Function Calls
Are parameters presented in the correct order?
Are pointers and & used correctly?
Is the correct function being called, or should it be a different function with a similar name?

4. Arrays
Are there any off-by-one errors in array indexing?
Can array indexes ever go out of bounds?

5. Pointers and Strings
Check that pointers are initialised to NULL.
Check that pointers are never unexpectedly NULL.
Check that all strings are identified by pointers and are NULL-terminated at all points in the program.

6. Dynamic Storage Allocation
Is too much/too little space allocated?

7. Output Format
Are there any spelling or grammatical errors in displayed output?
Is the output formatted correctly in terms of line stepping and spacing?

8. Computation, Comparisons and Assignments
Check order of computation/evaluation, operator precedence, and parenthesising.
Can the denominator of a division ever be zero?
Is integer arithmetic, especially division, ever used inappropriately, causing unexpected truncation/rounding?
Are the comparison and boolean operators correct?
If the test is an error check, can the error condition actually be legitimate in some cases?
Does the code rely on any implicit type conversions?

9. Flow of Control
In a switch statement, is any case not terminated by break or return?
Do all switch statements have a default branch?
Are all loops correctly formed, with the appropriate initialisation, increment, and termination expressions?

10. Files
Are all files properly declared and opened?
Is a file left unclosed in the case of an error?
Are EOF conditions detected and handled correctly?
Inspection Preparation Log

Product: Simple Sort
Author/Team: Fraser Macdonald
Inspector: Ge Fan
Date Received: October 23, 1999
Date Completed: October 23, 1999
Time Spent: 20 min

Defect # | Description | Type | Location | Severity
1 | Parameters are passed by value, not by reference; swap() does not actually exchange the numbers, so the sort is not carried out correctly. | function calls | line 5, function swap() | failure
2 | Function max() is defined but never used. No failure apparent, but a checklist violation. | function calls | line 12, function max() | trivial
3 | ">=" should be ">". The program accepts one less than the true maximum number of elements. | comparisons | line 23, function main() | sometimes errors
4 | "list" is misspelled in the message, so the program displays incorrect output. | output format | line 35, function main() | bad quality of output
Inspection Defect Log

Product: Simple Sort
Date: October 23, 1999
Author/Team: Fraser Macdonald
Moderator: Noname1
Recorder: Noname2
Inspectors: Ge Fan

Defect # | Description | Type | Location | Severity
1 | Parameters are passed by value, not by reference; swap() does not actually exchange the numbers, so the sort is not carried out correctly. | function calls | line 5, function swap() | failure
2 | Function max() is defined but never used. No failure apparent, but a checklist violation. | function calls | line 12, function max() | trivial
3 | ">=" should be ">". The program accepts one less than the true maximum number of elements. | comparisons | line 23, function main() | sometimes errors
4 | "list" is misspelled in the message, so the program displays incorrect output. | output format | line 35, function main() | bad quality of output