Software Quality Assurance Solved Past Paper 2015


Q#1 Explain data flow guided testing with suitable examples. 

Data Flow Guided Testing: 

Data-flow-guided testing is a method for obtaining structural information about programs that has found wide application in compiler design and optimization. Information about where data objects are defined and where they are used along the program's control flow is used to construct test sets for the paths to be tested.

For example: 

Pick enough paths to assure that every data object has been initialized prior to use or that all defined objects have been used for something.
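As a minimal sketch of this idea (the apply_discount function below is hypothetical), the tests are chosen so that every definition of the variable discount reaches its use in the return statement, i.e. both definition-use paths are exercised:

```python
# Hypothetical function used to illustrate data-flow-guided test selection.
def apply_discount(price, is_member):
    discount = 0.0                    # definition of `discount` (default path)
    if is_member:
        discount = 0.1                # re-definition of `discount` (member path)
    return price * (1 - discount)     # use of `discount`

# Two tests are enough to cover both def-use pairs of `discount`:
def test_non_member_def_use():
    assert apply_discount(100.0, is_member=False) == 100.0

def test_member_def_use():
    assert apply_discount(100.0, is_member=True) == 90.0
```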

 

Q#2 Describe the following briefly:  a) Regression Testing: 

Regression Testing is a type of testing that is done to verify that a code change in the software does not impact the existing functionality of the product. This is to make sure the product works fine with new functionality, bug fixes, or any change in the existing feature.

Examples include bug regression, old fix regression testing, port testing, configuration testing, localization testing, and smoke testing.
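As a minimal illustration (pytest-style; the total_with_tax function and its expected values are hypothetical), a regression suite simply re-runs previously passing checks after every change to confirm existing behaviour is intact:

```python
# Hypothetical pricing function whose implementation was recently changed.
def total_with_tax(amount, tax_rate=0.17):
    return round(amount * (1 + tax_rate), 2)

# Regression tests: re-run after each code change to verify that the
# previously verified results have not been broken.
def test_total_with_default_tax():
    assert total_with_tax(100.00) == 117.00

def test_total_with_custom_tax():
    assert total_with_tax(200.00, tax_rate=0.05) == 210.00
```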


b) Stress Testing: 

Stress testing refers to the testing of software or hardware to determine whether its performance remains satisfactory under extreme and unfavorable conditions, such as heavy network traffic or heavy process loading.
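A minimal stress-test sketch in Python (handle_request is a hypothetical stand-in for the system under test): many threads call the function concurrently and any incorrect responses are counted:

```python
import threading

def handle_request(payload):        # hypothetical system under test
    return payload.upper()

errors = []

def worker():
    # Each thread issues many requests to create heavy load.
    for _ in range(1_000):
        if handle_request("ping") != "PING":
            errors.append("wrong response under load")

threads = [threading.Thread(target=worker) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"failures under load: {len(errors)}")
```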


Q#3 Difference between Testing and Debugging

Testing vs. Debugging:

The purpose of testing is to find bugs and errors. The purpose of debugging is to correct the bugs found during testing.

Testing is done by the tester. Debugging is done by the programmer or developer.

Testing can be automated. Debugging cannot be automated.

Testing can be done by outsiders such as clients. Debugging must be done by an insider, i.e., the programmer.

Most testing can be done without design knowledge. Debugging cannot be done without proper design knowledge.


Q#4 Describe the ethical basis for software quality?

Ethical Basis for Quality:

The ethical basis for software quality rests on three kinds of issues:

  • Technical Issues
  • Professional Issues
  • Social Issues


Q#5 Define Software Quality Assurance. Give some bases.

Software Quality Assurance (SQA): 

Software Quality Assurance (SQA) is defined as “the process of making sure that the software is free from defects or mistakes and performs all the functionalities without complaints just before the delivery.” 

Bases of SQA:

  1. Software Quality Assurance is measured based on the internal and external quality features of the software. 
  2. The external quality is measured based on the real-time activities in operational mode and how the software is useful for the end-users.


Q#6 Briefly describe the Software Process with respect to Software Quality Assurance. 

Software Process w.r.t SQA: 

A software process (also known as a software methodology) is a set of related activities that leads to the production of software. These activities may involve developing the software from scratch or modifying an existing system.

Any software process must include the following four activities:

  • Software Specification (or Requirements Engineering) 
  • Software design and implementation
  • Software verification and validation 
  • Software evolution (software maintenance)


Q#7 Write short notes on errors, faults, and failures? 

Errors: 

An error is a mistake, misconception, or misunderstanding on the part of a software developer. In the category of the developer, we include software engineers, programmers, analysts, and testers. 

For example, a developer may misunderstand a design notation, or a programmer might type a variable name incorrectly.

Faults: 

A fault is an incorrect step, process, or data definition in a computer program that causes the program to perform in an unintended or unanticipated manner. A fault is introduced into the software as the result of an error.

For example, an anomaly in the software may cause it to behave incorrectly, and not according to its specification.

Failures: 

A failure is the inability of a software system or component to perform its required functions within specified performance requirements. During development, Failures are usually observed by testers. 

For example, when a defect reaches the end customer, it is called a failure.
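To tie the three terms together, here is a minimal Python sketch (the average function is hypothetical): the developer's mental slip is the error, the wrong statement it leaves in the code is the fault, and the wrong output observed when that statement executes is the failure:

```python
# Error: the programmer *intended* to average the list but mistakenly
# divided by a hard-coded 2 (a mistake in the developer's head).
def average(values):
    return sum(values) / 2        # Fault: the incorrect statement in the code

# Failure: executing the fault with three values produces behaviour that
# deviates from the specification (expected 2.0, observed 3.0).
print(average([1, 2, 3]))         # prints 3.0 instead of 2.0
```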


Q#8 Write a short note on the origins of defects. 

Origins of Defects: 

The origins of defects are as follows: 

  • Requirements 
  • Design 
  • Source Code 
  • User Manuals/Training Material 
  • “Bad Fixes” or mistakes made during repairs 
  • Flawed test cases used to verify the application 


Q#9 What are test case design strategies?

There are various types of test case design strategies, each of which is suitable for identifying a particular type of error.

Software test design techniques can be broadly classified into two major categories: 

  1. Static Strategies (e.g., reviews, walkthroughs, and inspections, which examine the software without executing it)
  2. Dynamic Strategies (e.g., black-box and white-box test case design, which require executing the software)


Q#10 Define Software reliability. How is it associated with testing?

Software Reliability: 

According to ANSI, Software Reliability is defined as “the probability of failure-free software operation for a specified period of time in a specified environment.” The reliability of software is commonly measured in terms of Mean Time Between Failures (MTBF). For example, if MTBF = 10,000 hours for a piece of software, it should not fail during 10,000 hours of continuous operation. Reliability is associated with testing because the failure data collected during testing is used to estimate reliability and to judge whether the software is dependable enough to release.
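As a worked sketch of that MTBF figure (the operating hours and failure count below are illustrative numbers, not from the paper):

```python
# Minimal MTBF estimate from observed failure data (illustrative values).
total_operating_hours = 50_000
number_of_failures = 5

mtbf = total_operating_hours / number_of_failures
print(f"MTBF = {mtbf:.0f} hours")   # 10000 hours, matching the example above
```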


Long Questions


Q1. What are software metrics? How are these useful in testing? Also, discuss source code metrics with their relative advantages and disadvantages.

Software Metrics:

A software metric is a measure of software characteristics that are quantifiable or countable. Software metrics are important for many reasons, including measuring software performance, planning work items, measuring productivity, and many other uses.

Usage of Metrics in Testing: 

Software testing metrics provide a quantitative way to measure the quality and effectiveness of the software development and testing process. They help the team keep track of software quality at every stage of the software development cycle and also provide information to control and reduce the number of errors.

For example:

A test manager must measure the effectiveness of a test process to identify the areas of improvement.
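As one illustration, the sketch below computes Defect Removal Efficiency (DRE), a commonly used test-effectiveness metric; the defect counts are made-up illustrative values:

```python
# Defect Removal Efficiency: share of all defects caught before release.
defects_found_in_testing = 90
defects_found_after_release = 10

dre = defects_found_in_testing / (defects_found_in_testing +
                                  defects_found_after_release) * 100
print(f"Defect Removal Efficiency = {dre:.1f}%")   # 90.0%
```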


Advantages of Source Code Metrics: 

1. Scope for Automation of Counting: 

Since a line of code is a physical entity, the manual counting effort can easily be eliminated by automating the counting process (a minimal counting sketch follows this list of advantages). 

2. An Intuitive Metric: 

Line of Code serves as an intuitive metric for measuring the size of software because it can be seen and the effect of it can be visualized.

3. Ubiquitous Measure:

LOC measures have been around since the earliest days of software. As such, it is arguable that more LOC data is available than any other size measure. 
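To illustrate the “Scope for Automation of Counting” advantage, here is a minimal Python sketch that counts non-blank, non-comment physical lines in a source file. The counting rule is a simplifying assumption; real LOC counters follow more detailed standards:

```python
# Count non-blank, non-comment physical lines in a Python source file.
def count_loc(path):
    loc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                loc += 1
    return loc

# Example usage (the file name is hypothetical):
# print(count_loc("calculator.py"))
```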


Disadvantages of Source Code Metrics: 

  1. Lack of Accountability 
  2. Lack of Cohesion with Functionality 
  3. Adverse Impact on Estimation 
  4. Developer’s Experience 
  5. Difference in Languages 
  6. The advent of GUI Tools 
  7. Problems with Multiple Languages 
  8. Lack of Counting Standards 
  9. Psychology of Programmer 


Q#2 What is Software Quality? How is it related to testing? Also, explain software quality models briefly. 

Software Quality: 

Software quality is the degree to which a system, component, or process meets specified requirements and customer or user needs or expectations.

Quality software is reasonably bug or defect-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.


Relation of Software Quality with Testing: 

Software testing is a quality measure conducted to provide stakeholders with information about the quality of the product or service. It is an important part of the overall software development process, ensuring that the system's functionality is thoroughly exercised and providing assurance of the quality, correctness, and completeness of the product.


Software Quality Models: 

There are five basic software quality models, which are as follows:


1. McCall’s Quality Model (1977):

McCall identified three main perspectives for characterizing the quality attributes of a software product. These perspectives are:

  • Product revision (ability to change).
  • Product transition (adaptability to new environments).
  • Product operations (basic operational characteristics).


2. Boehm Model: 

Boehm's model establishes large-scale characteristics and constitutes an improvement over McCall's model because it adds factors at different levels. 

The high-level factors are: 

  • Utility, indicating the ease, reliability, and efficiency of use of a software product. 
  • Maintainability, describing how easy the software is to modify, test, and understand. 
  • Portability, in the sense of being able to continue being used after a change of environment. 


3. Dromey’s Model: 

This model emphasizes evaluating the quality of one software product against another. It helps to find defects, if any, and also to point out the factors that caused them. The model is designed on the basis of the relationship that exists between software properties and quality attributes.

Dromey focused on the relationship between quality attributes and sub-attributes in order to connect software product properties with software quality attributes.

4. FURPS Model: 

FURPS is an acronym representing a model for classifying software quality attributes. The FURPS model was originally presented by Robert Grady.

FURPS stands for Functionality, Usability, Reliability, Performance, and Supportability.


5. ISO 9126 Model: 

ISO 9126 is one of the most widely used software quality standards. It is an international standard for the evaluation of software. ISO 9126 specifies and evaluates the quality of a software product in terms of internal and external quality characteristics and their associated attributes. The standard was based on the McCall and Boehm models. 

This standard is divided into four parts as follows: 

  • Quality Model 
  • Internal Metrics 
  • External Metrics 
  • Quality in use Metrics 


Q#3 What is testing? What are its advantages? Also, discuss the significance of Unit Testing and Integration Testing with examples. 

Testing:

Software Testing is the process of identifying bugs or faults in a product before it reaches the hands of the end users. It is the process of evaluating a software item to detect differences between given input and expected output. In other words, software testing is a verification and validation process. 

Verification: 

Verification is the process to make sure the product satisfies the conditions imposed at the start of the development phase. In other words, to make sure the product behaves the way we want it to. 

Validation: 

Validation is the process to make sure the product satisfies the specified requirements at the end of the development phase. In other words, to make sure the product is built as per customer requirements.


Advantages of Testing: 

  1. Software testing helps in identifying and fixing bugs before the software becomes operational, so the risk of failure can be reduced considerably. 
  2. Software does not necessarily work alone. Sometimes it has to integrate and function with other existing legacy systems. In such cases, software testing gives the much-needed assurance that it will work suitably and that its performance won't be affected by the integration. 
  3. Software testing is a part of the software development process. It enables root cause analysis of defects, which helps make the process more efficient.

Unit Testing: 

Unit Testing is defined as a type of software testing in which individual components of the software are tested. It is a type of testing that is usually performed by developers.

The objective of Unit Testing:

  • To isolate a section of code.
  • To verify the correctness of the code.
  • To test every function and procedure.
  • To help with code reuse.

Unit Testing Techniques:

  • Black Box Testing - The user interface, inputs, and outputs are tested.
  • White Box Testing - The internal behavior of each function is tested.
  • Gray Box Testing - A combination of both, used to execute tests and to carry out risk assessment.
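For illustration, a minimal unit test using Python's built-in unittest module; the divide function is a hypothetical unit under test:

```python
import unittest

# Hypothetical unit under test: a single, isolated function.
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_normal_division(self):
        self.assertEqual(divide(10, 2), 5)

    def test_division_by_zero_is_rejected(self):
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```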


Integration Testing:

Integration testing is a type of software testing in which individual software modules are combined and tested as a group. Integration testing is conducted to evaluate the compliance of a system or component with specified functional requirements.

Types:

There are mainly two types of integration testing:

Component integration testing: 

Testing performed to expose defects in the interfaces and interactions between integrated components.

System integration testing: 

Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).
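For illustration, a minimal component integration test in Python (both fetch_user and greet_user are hypothetical modules); it exercises the interface between the two pieces as a group rather than testing each function in isolation:

```python
import unittest

# Two hypothetical modules that are developed (and unit-tested) separately.
def fetch_user(user_id):                     # "data layer" stand-in
    return {"id": user_id, "name": "alice"}

def greet_user(user_id):                     # "service layer" that calls it
    user = fetch_user(user_id)
    return f"Hello, {user['name']}!"

# Component integration test: checks that the two modules work together
# across their interface.
class GreetingIntegrationTest(unittest.TestCase):
    def test_service_and_data_layer_work_together(self):
        self.assertEqual(greet_user(42), "Hello, alice!")

if __name__ == "__main__":
    unittest.main()
```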

