Verification at Each SDLC Phase
The concept of
verification is an important part of the overall system validation.
Lewis defines verification as follows:
Verification is an
iterative process aimed at determining whether the product of each
step in the SDLC fulfills all the requirements levied on it by the
previous step and is internally complete, consistent, and correct
enough to support the next phase.
While validation can be thought of as testing the final system against its
original requirements, verification looks at the intermediate work product
from each SDLC phase. Verification is like in-process inspection in
manufacturing, where small errors introduced along the way are corrected
before the product moves further through the process.
Verification is best performed as a team effort. To verify a given phase of
the SDLC, a review team should be formed, with representatives from the
current phase (the phase being verified), the prior phase (which
provided the input specification), and the next phase (which will
rely on the output from the current phase). This places verification
responsibility on the individuals who have the greatest vested
interest in the correctness of the current phase. In software
development, this is sometimes called "n-plus-and-minus-one" verification.
All the verification methods to be used in the development of a
system should be documented as part of a comprehensive test plan. As
requirements and design features are defined, a corresponding test
plan is developed to ensure that these requirements and features are
satisfied. The test plan is started early in the project and is
fleshed out in more detail as the project proceeds through each
phase, as illustrated in Figure 1 and Figure 2.
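As a minimal sketch of this idea, the growing test plan can be modeled as a traceability map from requirements to planned tests. The requirement and protocol identifiers (REQ-1, TP-01, and so on) are invented for illustration; the source prescribes no particular scheme.

```python
# Hypothetical sketch: a test plan that gains detail phase by phase.
# Requirement and protocol IDs (REQ-1, TP-01, ...) are invented here.
traceability = {
    "REQ-1": [],  # requirements are logged as they are defined...
    "REQ-2": [],
}

def plan_test(req_id, protocol_id):
    """...and corresponding test protocols are added as phases proceed."""
    traceability[req_id].append(protocol_id)

plan_test("REQ-1", "TP-01")   # added during the design phase
plan_test("REQ-2", "TP-02")   # added during the coding phase

# A requirement with no planned test is a gap in the plan.
gaps = [req for req, tests in traceability.items() if not tests]
print(gaps)  # -> []
```

A check like `gaps` is one simple way such a living plan can confirm that every stated requirement has a corresponding test before the project advances.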
There are several
general verification methods that can be used at any point of the
SDLC. Although simple in concept, these methods are valuable,
especially in the requirements, design, and program coding phases.
Industry studies have shown that problems uncovered early in the
SDLC are much less costly to correct than if they were allowed to go
undetected until system testing, or worse, until live operation.
These general verification methods should be formally employed with
the results documented and archived along with the rest of the
system validation records.
1. By means of
inspection, a person other than the author reviews the
requirements documentation, system design, or program code in a
step-by-step effort to find missing requirements, internal
contradictions, evidence of weak analysis, errors, violations of
standards, or other problems. Pre-defined checklists are often used.
2. During walk-throughs
[2,3], the designer or programmer leads other members of the review
team through the design document or program code, allowing them to
question techniques, style, possible errors, violation of standards,
or other problems. Walk-throughs are effective in detecting
misunderstandings of system requirements or design specifications
before significant effort is invested in development.
3. By means of desk
checking and peer review, a member of the review team mentally
simulates program execution in an attempt to detect errors in
logic, syntax, or programming conventions. A significant number of
program logic errors can be detected through desk checking that are
never discovered through actual program test execution.
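The inspection method above, with its pre-defined checklist, might be recorded in a form like the following. This is a minimal sketch; the checklist items and pass/fail convention are assumptions for illustration, not prescribed by the source.

```python
# Hypothetical inspection record: a pre-defined checklist that a reviewer
# (someone other than the author) works through step by step.
CHECKLIST = (
    "All requirements from the prior phase are covered",
    "No internal contradictions",
    "Applicable standards are followed",
)

def failed_items(findings):
    """Return checklist items the reviewer did not mark as passed."""
    return [item for item in CHECKLIST if not findings.get(item, False)]

# The reviewer marks each item pass (True) or fail (False):
findings = {
    CHECKLIST[0]: True,
    CHECKLIST[1]: False,   # a contradiction was found
    CHECKLIST[2]: True,
}
print(failed_items(findings))  # -> ['No internal contradictions']
```

Recording findings against a fixed checklist also produces exactly the kind of documented, archivable result the text calls for in the system validation records.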
Protocols, Test Scripts, and Test Cases
The test plan for
the Integrated System Test and Acceptance Test phases consists of a
series of test protocols. Test protocols define what is being
tested, why it is being tested, and how the test is to be carried
out. The test protocol should also include a detailed description of
the test data, test conditions, and any special equipment required.
It should include a description of the expected results and how the
tester should analyze the results to determine whether the test was
passed. Unless the test protocol is quite simple, it must be
supplemented by detailed test scripts and test cases. Test scripts
provide step-by-step instructions for carrying out the test. Test cases
provide the exact test data required. The best-written test plans
usually employ a set of standard forms or standard formats for
defining test protocols, test scripts, and test cases. A numbering
system is often employed to facilitate tracing back to design
elements and original requirements.
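The protocol/script/case hierarchy and the traceability numbering described above might be sketched as follows. The field names and identifiers (TP-07, REQ-4.2, and so on) are assumptions for illustration, not a standard form prescribed by the source.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str          # e.g. "TP-07-C01"; embeds its protocol id
    input_data: dict      # the exact test data required
    expected_result: str  # what the tester compares results against

@dataclass
class TestScript:
    script_id: str
    steps: list                              # step-by-step instructions
    cases: list = field(default_factory=list)

@dataclass
class TestProtocol:
    protocol_id: str      # numbering permits tracing back to requirements
    requirement_ids: list # original requirements this protocol verifies
    what: str             # what is being tested
    why: str              # why it is being tested
    how: str              # how the test is to be carried out
    scripts: list = field(default_factory=list)

protocol = TestProtocol(
    protocol_id="TP-07",
    requirement_ids=["REQ-4.2"],
    what="Audit-trail entry on record update",
    why="Regulatory data-integrity requirement",
    how="Update a record and inspect the audit log",
    scripts=[TestScript(
        script_id="TP-07-S01",
        steps=["Log in as analyst", "Edit sample record 1001",
               "Open the audit log"],
        cases=[TestCase(
            "TP-07-C01",
            {"record": 1001, "field": "result", "new_value": "8.5"},
            "Audit entry shows old value, new value, user, timestamp")],
    )],
)

# Tracing: each case id embeds its protocol id, which maps to requirements.
print(protocol.scripts[0].cases[0].case_id.startswith(protocol.protocol_id))
```

Because every case identifier carries its protocol's number, and every protocol lists the requirement identifiers it verifies, a reviewer can trace any test result back to a design element or original requirement, as the numbering system in the text intends.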
This formal definition of test protocols contrasts sharply with the ad-hoc
testing method employed by many MIS departments and system users.
Rather than formally planning and defining test protocols, they simply
sign on and "test
drive" the system. This "on the fly" approach almost always leads to
inadequate testing and higher overall system maintenance costs. With
the informal approach, system errors are frequently discovered only
after the system is in use. In some cases, errors may not be
discovered for years. For systems entrusted with regulatory data,
this can be a time bomb.
To Be Continued