8. Software Qualities
8.1 Software quality and quality assurance
8.1.1 Software quality factors
8.1.2 Software quality assurance
8.1.3 SQA activities
8.1.4 Software quality standards: SEI, ISO
8.2 Software reviews
8.2.1 Cost impact of software defects
8.2.2 Defect amplification and removal
8.3 Formal technical reviews
8.3.1 The review meeting
8.3.2 Review reporting and record keeping
8.3.3 Review guidelines
8.3.4 A review checklist
8.4 Formal approaches to SQA
8.4.1 Proof of correctness
8.4.2 Statistical quality assurance
8.4.3 The cleanroom process
Software Quality

What is Quality?
Quality can be defined in different ways, and its definition may differ from person to person. Quality can be defined as:
· Degree of excellence, or
· Fitness for purpose, or
· Best for the customer's use and selling price, or
· The totality of characteristics of an entity that bear on its ability to satisfy stated or implied needs (ISO).
A product developer defines quality as a product that meets the customer's requirements. A customer defines quality as the required functionality provided in a user-friendly manner.
Software quality

Software quality can be defined as:
- Conformance to explicitly stated functional and performance requirements: software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.
- Conformance to explicitly documented development standards: specified standards define a set of development criteria that guide the manner in which software is engineered. If these criteria are not followed, lack of quality will almost surely result.
- Conformance to implicit characteristics that are expected of all professionally developed software: there is a set of implicit requirements that often goes unmentioned (e.g. maintainability). If software conforms to its explicit requirements but fails to meet implicit requirements, its quality is suspect.
Software Quality Factors

The modern view of quality associates a software product with several quality factors, such as the following:
• Portability: A software product is said to be portable if it can easily be made to work in different operating system environments, on different machines, with other software products, etc.
• Usability: A software product has good usability if different categories of users (i.e. both expert and novice users) can easily invoke the functions of the product.
• Reusability: A software product has good reusability if different modules of the product can easily be reused to develop new products.
• Correctness: A software product is correct if the requirements specified in the SRS document have been correctly implemented.
• Maintainability: A software product is maintainable if errors can be easily corrected as and when they show up, new functions can be easily added to the product, and existing functionality can be easily modified.
• Reliability: A software product is reliable if the user can depend on it. Software reliability is defined in terms of statistical behavior: the probability that the software will operate as expected over a specified time interval.
• Verifiability: A software system is verifiable if its properties can be verified easily.
• Interoperability: A software system is interoperable if it coexists and cooperates with other systems.
Software quality assurance

What is Quality Assurance?

Software Quality Assurance (SQA) is defined as a planned and systematic approach to the evaluation of the quality of, and adherence to, software product standards, processes, and procedures. SQA includes the process of assuring that standards and procedures are established and are followed throughout the software acquisition life cycle. Compliance with agreed-upon standards and procedures is evaluated through process monitoring, product evaluation, and audits. Software development and control processes should include quality assurance approval points, where an SQA evaluation of the product may be done in relation to the applicable standards.
SQA activities

Product evaluation and process monitoring are the SQA activities that assure that the software development and control processes described in the project's Management Plan are correctly carried out and that the project's procedures and standards are followed. Products are monitored for conformance to standards, and processes are monitored for conformance to procedures. Audits are a key technique used to perform product evaluation and process monitoring.

Product evaluation is an SQA activity that assures standards are being followed. Ideally, the first products monitored by SQA should be the project's standards and procedures. SQA assures that clear and achievable standards exist and then evaluates the conformance of the software product to the established standards. Product evaluation assures that the software product reflects the requirements of the applicable standard(s) as identified in the Management Plan.

Process monitoring is an SQA activity that ensures that the appropriate steps to carry out the process are being followed. SQA monitors processes by comparing the actual steps carried out with those in the documented procedures. The Assurance section of the Management Plan specifies the methods to be used by the SQA process monitoring activity.

A fundamental SQA technique is the audit, which looks at a process and/or a product in depth, comparing them to established procedures and standards. Audits are used to review management, technical, and assurance processes to provide an indication of the quality and status of the software product.
SQA activities during the Software Life Cycle

There are phase-specific SQA activities that should be conducted during the Software Life Cycle. Suggested activities for each phase are described below.

1. Software Concept and Initiation Phase

SQA should be involved in both writing and reviewing the Management Plan in order to assure that the processes, procedures, and standards identified in the plan are appropriate, clear, specific, and auditable. During this phase, SQA also provides the QA section of the Management Plan.

2. Software Requirements Phase

During the software requirements phase, SQA assures that software requirements are complete, testable, and properly expressed as functional, performance, and interface requirements.

3. Software Architectural (Preliminary) Design Phase

SQA activities during the architectural (preliminary) design phase include:
- Assuring adherence to approved design standards as designated in the Management Plan.
- Assuring all software requirements are allocated to software components.
- Assuring that a testing verification matrix exists and is kept up to date.
- Assuring the approved design is placed under configuration management.

4. Software Detailed Design Phase

SQA activities during the detailed design phase include:
- Assuring that approved design standards are followed.
- Assuring that allocated modules are included in the detailed design.
- Assuring that results of design inspections are included in the design.

5. Software Implementation Phase

SQA activities during the implementation phase include the audit of:
- Results of coding and design activities, including the schedule contained in the Software Development Plan.
- Status of all deliverable items.
- Configuration management activities and the software development library.

6. Software Integration and Test Phase

SQA activities during the integration and test phase include:
- Assuring that all tests are run according to test plans and procedures and that any nonconformances are reported and resolved.
- Assuring that test reports are complete and correct.
- Certifying that testing is complete and that the software and documentation are ready for delivery.

7. Software Acceptance and Delivery Phase

SQA activities during the software acceptance and delivery phase include assuring the performance of a final configuration audit to demonstrate that all deliverable items are ready for delivery.
Software quality standards and procedures

Establishing standards and procedures for software development is critical, since these provide the framework from which the software evolves. Standards are the established criteria to which the software products are compared. Procedures are the established criteria to which the development and control processes are compared. Standards and procedures establish the prescribed methods for developing software. Proper documentation of standards and procedures is necessary, since the SQA activities of process monitoring, product evaluation, and auditing rely upon clear definitions to measure project conformity.
SEI (Software Engineering Institute) Standard

The SEI standard was developed by the Software Engineering Institute to measure the quality of software. The Software Engineering Institute recommends a set of SQA activities that address quality assurance planning, oversight, record keeping, analysis, and reporting. These activities are performed by an independent SQA group that:

Prepares an SQA plan for a project
The plan is developed during project planning and is reviewed by all interested parties. Quality assurance activities performed by the software engineering team and the SQA group are governed by the plan. The plan identifies:
• Evaluations to be performed
• Audits and reviews to be performed
• Standards that are applicable to the project
• Procedures for error reporting and tracking
• Documents to be produced by the SQA group
• Amount of feedback provided to the software project team

Participates in the development of the project's software process description
The software team selects a process for the work to be performed. The SQA group reviews the process description for consistency with organizational policy, internal software standards, externally imposed standards (e.g., ISO-9001), and other parts of the software project plan.

Reviews software engineering activities to verify compliance with the defined software process
The SQA group identifies, documents, and tracks deviations from the process and verifies that corrections have been made.

Audits designated software work products to verify compliance with those defined as part of the software process
The SQA group reviews selected work products; identifies, documents, and tracks deviations; verifies that corrections have been made; and periodically reports the results of its work to the project manager.

Ensures that deviations in software work and work products are documented and handled according to a documented procedure
Deviations may be encountered in the project plan, process description, applicable standards, or technical work products.

Records any noncompliance and reports it to senior management.
ISO

The International Organization for Standardization's ISO 9001, 9002, and 9003 standards concern quality systems that are assessed by outside auditors, and they apply to many kinds of production and manufacturing organizations, not just software. The most comprehensive is ISO 9001, and this is the one most often used by software development organizations. It covers documentation, design, development, production, testing, installation, servicing, and other processes. To become ISO 9001 certified, an organization must be reviewed by a third-party auditor.
Software reviews

Software reviews are a "filter" for the software engineering process. That is, reviews are applied at various points during software development and serve to uncover errors and defects that can then be removed. Software reviews "purify" the software engineering activities (analysis, design, and coding). A review—any review—is a way of using the diversity of a group of people to:
1. Point out needed improvements in the product of a single person or team;
2. Confirm those parts of a product in which improvement is either not desired or not needed;
3. Achieve technical work of more uniform, or at least more predictable, quality than can be achieved without reviews, in order to make technical work more manageable.
Cost impact of software defects

The major benefit of software reviews is the early discovery of software defects, so that each defect may be corrected prior to the next step in the software engineering process. For example, a number of industry studies indicate that design activities introduce between 50 and 65 percent of all errors or defects during the development phase of the software engineering process. However, formal review techniques have been shown to be up to 75 percent effective in uncovering design flaws. By detecting and removing a large percentage of these errors, the review process substantially reduces the cost of subsequent steps in the development and maintenance phases.
To illustrate the cost impact of early error detection, consider a series of relative costs that are based on actual cost data collected for large software projects. Assume that an error uncovered during design will cost 1.0 monetary unit to correct. Relative to this cost, the same error uncovered just before testing commences will cost 6.5 units; during testing, 15 units; and after release, between 60 and 100 units.
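These relative costs lend themselves to a quick back-of-the-envelope calculation. The multipliers below are the ones quoted above; the defect counts and the after-release midpoint of 67.5 are illustrative assumptions, not data from the text.

```python
# Relative cost to fix one design error, depending on when it is found
# (units are relative: fixing during design = 1.0, per the figures above).
RELATIVE_COST = {
    "design": 1.0,
    "before_test": 6.5,
    "during_test": 15.0,
    "after_release": 67.5,  # midpoint of the 60-100 range (assumption)
}

def total_fix_cost(defects_found):
    """defects_found: mapping of phase -> number of defects caught there."""
    return sum(RELATIVE_COST[phase] * n for phase, n in defects_found.items())

# Hypothetical project with 100 design defects, two detection profiles.
with_reviews = {"design": 70, "during_test": 25, "after_release": 5}
without_reviews = {"during_test": 80, "after_release": 20}

print(total_fix_cost(with_reviews))     # 70*1.0 + 25*15 + 5*67.5 = 782.5
print(total_fix_cost(without_reviews))  # 80*15 + 20*67.5 = 2550.0
```

Even with invented counts, the ratio makes the argument concrete: catching most defects at design time costs roughly a third of catching them later.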
Defect amplification and removal

A defect amplification model can be used to illustrate the generation and detection of errors during the preliminary design, detail design, and coding steps of the software engineering process. During each development step, errors may be inadvertently generated. A review may fail to uncover newly generated errors as well as errors from previous steps, resulting in some number of errors being passed through to the next phase. In some cases, errors passed from previous steps are amplified by current work; others are caught by reviewers and removed before the next step.
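The mechanism can be sketched as a small calculation. The structure (errors passed through, amplified by current work, partially removed by review) follows the description above; all numeric values are illustrative assumptions.

```python
def step(errors_in, new_errors, amplification, detection_efficiency):
    """One development step in the defect amplification model.

    errors_in:  defects passed through from the previous step
    new_errors: defects newly generated in this step
    amplification: fraction by which passed-through defects spawn
                   additional defects in the current work
    detection_efficiency: fraction of all present defects that this
                          step's review catches and removes
    """
    present = errors_in * (1 + amplification) + new_errors
    passed_on = present * (1 - detection_efficiency)
    return passed_on

# Illustrative pipeline: preliminary design -> detail design -> code.
e = step(0, 10, 0.0, 0.5)   # preliminary design: 10 new, 50% caught -> 5.0
e = step(e, 20, 0.2, 0.5)   # detail design: 5*1.2 + 20 = 26, 50% caught -> 13.0
e = step(e, 25, 0.2, 0.6)   # coding: 13*1.2 + 25 = 40.6, 60% caught -> 16.24
print(round(e, 2))          # latent defects reaching the test phase
```

Raising the detection efficiency of the earliest reviews shrinks every downstream number, which is the quantitative case for formal reviews made in the previous section.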
Formal technical reviews

A formal technical review (FTR) is a software quality assurance activity performed by software engineers (and others). The FTR also serves as a training ground, enabling junior engineers to observe different approaches to software analysis, design, and implementation. Each FTR is conducted as a meeting and will be successful only if it is properly planned, controlled, and attended. The obvious benefit of formal technical reviews is the early discovery of errors so that they do not propagate to the next step in the software process. The objectives of the FTR are:
(1) To uncover errors in function, logic, or implementation for any representation of the software;
(2) To verify that the software under review meets its requirements;
(3) To ensure that the software has been represented according to predefined standards;
(4) To achieve software that is developed in a uniform manner; and
(5) To make projects more manageable.
The
review meeting
Every
review meeting should abide by the following constraints:
•
Between three and five people (typically) should be involved in the review.
•
Advance preparation should occur but should require no more than two hours of
work for each person.
•
The duration of the review meeting should be less than two hours.
The
main objective of the review meeting is to uncover the faults in the software
product. To do this, the reviewer should focus on a specific (and small) part
of the overall software. The focus of the FTR is on a work product (e.g., a
portion of a requirements specification, a detailed component design, a source
code listing for a component). At the end of the review, all attendees of the
FTR must decide whether to
(1)
Accept the product without further modification,
(2)
Reject the product due to severe errors (once corrected, another review must be
performed), or
(3)
Accept the product conditionally (minor errors have been encountered and must
be corrected, but no additional review will be required).
Review process:

The individual who has developed the work product—the producer—informs the project leader that the work product is complete and that a review is required. The project leader contacts a review leader, who evaluates the product for readiness, generates copies of the product materials, and distributes them to two or three reviewers for advance preparation. Each reviewer is expected to spend between one and two hours reviewing the product and making notes. The review meeting is attended by the review leader, all reviewers, and the producer. One of the reviewers takes on the role of the recorder, that is, the individual who records (in writing) all important issues raised during the review.
Review reporting and record keeping

Record keeping and reporting are further quality assurance activities, applied at different stages of software development. During the FTR, it is the responsibility of a reviewer (the recorder) to record all issues that have been raised. At the end of the review meeting, a review issue list that summarizes all issues is produced. A review summary report answers the following questions:
1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?
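As an illustrative sketch, the issue list and summary report might be captured in a simple record like the following. The field names and values are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewSummaryReport:
    work_product: str                # 1. what was reviewed
    reviewers: list                  # 2. who reviewed it
    issues: list = field(default_factory=list)  # 3. findings
    decision: str = "accept"         # accept / accept-conditionally / reject

    def add_issue(self, description):
        """The recorder logs each issue raised during the meeting."""
        self.issues.append(description)

report = ReviewSummaryReport(
    work_product="login component detailed design",
    reviewers=["reviewer A", "reviewer B", "review leader"],
)
report.add_issue("Interface inconsistent with architectural design")
report.decision = "accept-conditionally"
print(len(report.issues))  # 1
```

Keeping the report machine-readable makes it easy to roll review findings up into the defect data that statistical SQA (discussed later in this chapter) depends on.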
Review guidelines

The following represents a minimum set of guidelines for formal technical reviews:
- Review the product, not the producer
- Set an agenda and maintain it
- Limit debate and rebuttal
- Enunciate problem areas, but don't attempt to solve every problem noted
- Take written notes
- Limit the number of participants and insist upon advance preparation
- Develop a checklist for each work product that is likely to be reviewed
- Allocate resources and schedule time for FTRs
- Conduct meaningful training for all reviewers
- Review your earlier reviews
1. Review the product, not the producer. An FTR involves people and egos. Conducted properly, the FTR should leave all participants with a warm feeling of accomplishment. Conducted improperly, the FTR can take on the aura of an inquisition. Errors should be pointed out gently; the tone of the meeting should be loose and constructive; the intent should not be to embarrass or belittle. The review leader should conduct the review meeting to ensure that the proper tone and attitude are maintained and should immediately halt a review that has gotten out of control.

2. Set an agenda and maintain it. One of the key problems of meetings of all types is drift. An FTR must be kept on track and on schedule. The review leader is chartered with the responsibility for maintaining the meeting schedule and should not be afraid to nudge people when drift sets in.
3. Limit debate and rebuttal. When an issue is raised by
a reviewer, there may not be universal agreement on its impact. Rather than
spending time debating the question, the issue should be recorded for further
discussion off-line.
4. Enunciate problem areas,
but don't attempt to solve every problem noted. A review is not a
problem-solving session. The solution of a problem can often be accomplished by
the producer alone or with the help of only one other individual. Problem
solving should be postponed until after the review meeting.
5. Take written notes. It is sometimes a good idea
for the recorder to make notes on a wall board, so that wording and priorities
can be assessed by other reviewers as information is recorded.
6. Limit the number of
participants and insist upon advance preparation. Two heads are better than
one, but 14 are not necessarily better than 4. Keep the number of people
involved to the necessary minimum. However, all review team members must
prepare in advance. Written comments should be solicited by the review leader
(providing an indication that the reviewer has reviewed the material).
7. Develop a checklist for
each product that is likely to be reviewed. A checklist helps the
review leader to structure the FTR meeting and helps each reviewer to focus on
important issues. Checklists should be developed for analysis, design, code,
and even test documents.
8. Allocate resources and
schedule time for FTRs. For reviews to be effective, they should be
scheduled as a task during the software engineering process. In addition, time
should be scheduled for the inevitable modifications that will occur as the
result of an FTR.
9. Conduct meaningful training
for all reviewers. To be effective all review participants should receive
some formal training. The training should stress both process-related issues
and the human psychological side of reviews.
10. Review your early reviews. Debriefing can be
beneficial in uncovering problems with the review process itself. The very
first product to be reviewed should be the review guidelines themselves.
A review checklist

Every work product that is produced during software engineering should be reviewed by someone. The problem is that "someone" may not know what questions to ask, or may simply forget to ask them even when the proper questions are known. This section presents a variety of software engineering checklists that can help to ask the right questions at the right time. The result will be higher software quality.
System Engineering:

The following are important checklist questions which can be asked during the system engineering process:
1. Are major functions defined in a bounded and unambiguous fashion?
2. Are interfaces between system elements defined?
3. Have performance bounds been established for the system as a whole and for each element?
4. Are design constraints established for each element?
5. Has the best alternative been selected?
6. Is the solution technologically feasible?
7. Has a mechanism for system validation and verification been established?
8. Is there consistency among all system elements?
Software Project Planning:

Software project planning assesses risk and develops estimates for resources, cost, and schedule based on the software allocation established as part of the system engineering activity. The review of the software project plan establishes the degree of risk. The following checklist is applicable:
1. Is the software scope unambiguously defined and bounded?
2. Is terminology clear?
3. Are resources adequate for the scope?
4. Are resources readily available?
5. Have risks in all important categories been defined?
6. Is a risk management plan in place?
7. Are tasks properly defined and sequenced?
8. Is the basis for cost estimation reasonable? Has the cost estimate been developed using two independent methods?
9. Have differences in estimates been reconciled?
10. Have historical productivity and quality data been used?
11. Are pre-established budgets and deadlines realistic?
12. Is the schedule consistent?
Software Requirement Analysis:

The following topics are considered during FTRs for analysis:
1. Is the information domain analysis complete, consistent, and accurate?
2. Is problem partitioning complete?
3. Are external and internal interfaces properly defined?
4. Does the data model properly reflect data objects, their attributes, and relationships?
5. Are all requirements traceable to the system level?
6. Has prototyping been conducted for the user/customer?
7. Is performance achievable within the constraints imposed by other system elements?
8. Are requirements consistent with schedule, resources, and budget?
9. Are validation criteria complete?
Software Design:

Reviews for software design focus on data design, architectural design, and procedural design. The following checklist is useful for data and architectural design reviews:
1. Are software requirements reflected in the software architecture?
2. Is effective modularity achieved? Are modules functionally independent?
3. Is the program architecture factored?
4. Are interfaces defined for modules and internal system elements?
5. Is the data structure consistent with the information domain?
6. Is the data structure consistent with software requirements?
7. Has maintainability been considered?
8. Have quality factors been explicitly assessed?
Review checklist for procedural design reviews:
1. Does the algorithm accomplish the desired function?
2. Is the algorithm logically correct?
3. Is the interface consistent with the architectural design?
4. Is the logical complexity reasonable?
5. Are local data structures properly defined?
6. Are structured programming constructs used throughout?
7. Is the design detail amenable to the implementation language?
8. Are operating-system- or language-dependent features used?
9. Is compound or inverse logic used?
10. Has maintainability been considered?
Coding:

The review checklist for the coding stage of software development is:
1. Has the design been properly translated into code?
2. Are there misspellings and typos?
3. Has proper use of language conventions been made?
4. Is there compliance with coding standards for language style, comments, and module prologue?
5. Are there incorrect or ambiguous comments?
6. Are data types and data declarations proper?
7. Are physical constants correct?
Software Testing

The review checklist for software testing is:
1. Have major test phases been properly identified and sequenced?
2. Are major functions demonstrated early?
3. Is the test plan consistent with the overall project plan?
4. Has a test schedule been explicitly defined?
5. Are test resources and tools identified and available?
6. Has a test record-keeping mechanism been established?
Checklist for the test procedures:
1. Have both white-box and black-box tests been specified?
2. Have all independent logic paths been tested?
3. Have test cases been identified and listed with their expected results?
4. Is error handling to be tested?
5. Are boundary values to be tested?
6. Are timing and performance to be tested?
7. Has an acceptable variation from the expected results been specified?
Maintenance

The review checklist for maintenance is:
1. Have side effects associated with changes been considered?
2. Has the request for change been documented, evaluated, and approved?
3. Has the change, once made, been documented and reported to all interested parties?
4. Have appropriate FTRs been conducted?
5. Has a final acceptance review been conducted to ensure that all software has been properly updated, tested, and replaced?
Formal approaches to SQA

Software quality can be achieved through competent analysis, design, coding, and testing, as well as through the application of formal technical reviews, a multitier testing strategy, better control of software work products and the changes made to them, and the application of accepted software engineering standards. In addition, quality can be defined in terms of a broad array of quality factors and measured (indirectly) using a variety of indices and metrics.

Over the past two decades, a small but vocal segment of the software engineering community has argued that a more formal approach to software quality assurance is required. It can be argued that a computer program is a mathematical object. A rigorous syntax and semantics can be defined for every programming language, and work is underway to develop a similarly rigorous approach to the specification of software requirements. If the requirements model (specification) and the programming language can be represented in a rigorous manner, it should be possible to apply a mathematical proof of correctness to demonstrate that a program conforms exactly to its specification.
Proof of correctness

This is one of the formal techniques used to improve the quality of the product and process. When a software company develops new software, it needs to ensure the product is error-free and reliable. Proof of correctness uses the techniques of formal logic or mathematical proof to prove that the program conforms exactly to its specification.
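As a small illustrative example (not from the text), consider showing that a loop computes 1 + 2 + ... + n = n(n + 1)/2. The comments state the precondition, loop invariant, and postcondition; the assert statements merely check them at run time for one input, whereas a true proof of correctness discharges them by logical argument for all inputs.

```python
def sum_to(n):
    # Precondition: n >= 0
    assert n >= 0
    total, i = 0, 0
    # Loop invariant: total == i * (i + 1) // 2  and  0 <= i <= n.
    # It holds on entry (total == i == 0) and is preserved by each
    # iteration, so on exit (i == n) the postcondition follows.
    while i < n:
        assert total == i * (i + 1) // 2
        i += 1
        total += i
    # Postcondition: total == n * (n + 1) // 2
    assert total == n * (n + 1) // 2
    return total

print(sum_to(10))  # 55
```

The invariant-based argument is the essence of the Floyd-Hoare style of program proof: establish the invariant initially, show each iteration preserves it, and combine it with the exit condition to obtain the specification.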
Statistical Quality Assurance

Statistical quality assurance reflects a growing trend throughout industry to become more quantitative about quality. For software, statistical quality assurance implies the following steps:
1. Information about software defects is collected and categorized.
2. An attempt is made to trace each defect to its underlying cause.
3. Using the Pareto principle (80% of the defects can be traced to 20% of all possible causes), isolate the 20% (the "vital few").
4. Once the vital few causes have been identified, move to correct the problems that have caused the defects.
This relatively simple concept represents an important step toward the creation of an adaptive software engineering process in which changes are made to improve those elements of the process that introduce errors. To illustrate the process, assume that a software development organization collects information on defects for a period of one year. Some errors are uncovered as the software is being developed. Other defects are encountered after the software has been released to its end users. Although hundreds of errors are uncovered, all can be traced to one of the following causes:
- Incomplete or Erroneous Specification (IES)
- Misinterpretation of Customer Communication (MCC)
- Intentional Deviation from Specification (IDS)
- Violation of Programming Standards (VPS)
- Error in Data Representation (EDR)
- Inconsistent Module Interface (IMI)
- Error in Design Logic (EDL)
- Incomplete or Erroneous Testing (IET)
- Inaccurate or Incomplete Documentation (IID)
- Error in Programming Language Translation of design (PLT)
- Ambiguous or Inconsistent Human-Computer Interface (HCI)
- Miscellaneous (MIS)
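The four steps of statistical SQA can be sketched in code. The cause codes are those listed above; the defect counts and the cumulative cutoff are invented for illustration.

```python
from collections import Counter

# Step 1: collected defects, categorized by cause code (counts hypothetical).
defects = Counter({
    "IES": 205, "MCC": 156, "IDS": 48, "VPS": 25, "EDR": 130,
    "IMI": 58, "EDL": 45, "IET": 95, "IID": 36, "PLT": 60,
    "HCI": 28, "MIS": 56,
})

# Steps 2-3: rank causes (each defect already traced to one cause) and
# isolate the "vital few" that account for the bulk of the defects.
total = sum(defects.values())
ranked = defects.most_common()

cumulative, vital_few = 0, []
for cause, count in ranked:
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.5:   # cutoff is an illustrative choice
        break

# Step 4: these are the causes to attack first.
print(vital_few)
```

With these counts, three of the twelve causes (IES, MCC, EDR) account for over half of all defects, which is the Pareto-style concentration the technique exploits.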
The cleanroom process

This is another software quality assurance technique, one that combines correctness proof and statistical SQA. With the cleanroom process, software is engineered under statistical quality control. The first priority of the cleanroom process is defect prevention rather than defect removal; this is achieved by using mathematical verification (proof of correctness). The next priority is to provide valid statistical certification of the software's quality. The measure of quality is the mean time to failure. Software projects with between 1,000 and 50,000 lines of code developed using the cleanroom approach have removed 90 percent of defects before the first executable tests were conducted.
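Mean time to failure, the quality measure named above, is typically estimated from observed inter-failure times during statistical usage testing. A minimal sketch, with invented measurements:

```python
# Estimate mean time to failure (MTTF) from observed inter-failure times.
# The measurements are invented for illustration: hours of operation
# between successive failures recorded during statistical usage testing.
interfailure_hours = [120.0, 340.5, 95.0, 410.0, 275.5]

mttf = sum(interfailure_hours) / len(interfailure_hours)
print(mttf)  # 248.2 hours
```

In cleanroom certification, estimates like this feed a reliability model so that the software can be shipped with a statistically justified MTTF claim rather than a defect count.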