STLC ( Software Testing Life Cycle )

The Software Testing Life Cycle (STLC) is a sequence of specific activities performed during the testing process to ensure that the software quality objectives are met. The STLC includes both verification and validation. Contrary to popular belief, software testing is not just a single, isolated activity; it consists of a series of methodical activities that help certify your software product. STLC is part of SDLC.

SDLC = Requirement analysis > Designing > Coding > Testing > Deployment > Maintenance

The STLC has several interconnected phases and broadly mirrors the structure of the SDLC. These phases are sequential and are called:

  • Requirement Analysis
  • Test Planning
  • Test Design
  • Environment Setup
  • Test Execution
  • Defect / Bug Reporting & Tracking
  • Test Closure

A Test Plan is a document that describes the test strategy, objectives, schedule, estimation, deliverables, and resources required for testing. The test plan helps us determine the effort needed to validate the quality of the application under test. It serves as a blueprint for conducting software testing activities as a defined process that is closely monitored and controlled by the test manager.

Test plan template contents :

  • Overview
  • Scope
  • Inclusions
  • Test Environments
  • Exclusions
  • Test Strategy
  • Defect Reporting Procedure
  • Roles/Responsibilities
  • Test Schedule
  • Test Deliverables
  • Pricing
  • Entry and Exit Criteria
  • Suspension and Resumption Criteria
  • Tools
  • Risk and Mitigations
  • Approvals

Use Case, Test Scenario & Test Case
Use Case :

A use case describes a functional requirement and is prepared by a Business Analyst ( BA ).

A use case contains three items :

Actor : the user ( a single person or a group of people ) interacting with a process.

Action : the steps taken to reach the final outcome.

Goal / Outcome : the successful outcome for the user.

Sample use case :

  • Use case : User login ( illustrative example )
  • Actor : Registered user
  • Action : The user enters a valid user name and password and clicks the Login button.
  • Goal / Outcome : The user is logged in and taken to the home page.

Test Scenario :

A possible area to be tested ( What to test ).

Test Case :

Step-by-step actions to be performed to validate the functionality of the AUT ( How to test ).

Test case contains test steps, expected result & actual result.

Describes the test steps / procedure; prepared by a Test Engineer.

Test Scenario Vs Test Case

Test Scenario :

Test Scenario is “What to be tested”

Test Case :

Test Case is “How to be tested”

Example :

Test Scenario : Checking the functionality of the Login button

Test Cases :

–TC1 : Click the Login button without entering a user name or password.

–TC2 : Click the Login button after entering only the user name.

–TC3 : Click the Login button after entering a wrong user name and a wrong password.

Test Suite

A Test Suite is a group of test cases that belong to the same category.
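
As a minimal sketch, the login test cases above can be grouped into one suite. This assumes a hypothetical login() helper standing in for the application under test; it is illustrative, not any specific tool's API.

    import unittest

    # Illustrative only: a hypothetical login() helper standing in for the AUT.
    def login(username, password):
        if username == "admin" and password == "secret":
            return "Welcome"
        return "Invalid credentials"

    # A test suite: test cases that belong to the same category ( Login ).
    class TestLoginButton(unittest.TestCase):

        def test_tc1_no_username_and_no_password(self):
            # TC1 : Click the button without entering a user name or password.
            self.assertEqual(login("", ""), "Invalid credentials")

        def test_tc2_username_only(self):
            # TC2 : Click the button after entering only the user name.
            self.assertEqual(login("admin", ""), "Invalid credentials")

        def test_tc3_wrong_username_and_password(self):
            # TC3 : Click the button after entering a wrong user name and password.
            self.assertEqual(login("wrong", "wrong"), "Invalid credentials")

    if __name__ == "__main__":
        unittest.main()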

What is Test Case ?

A Test Case is a set of actions executed to validate a particular feature or functionality of your software application.

It describes the test steps / procedure and is prepared by a Test Engineer.

Test Case Contents
  • Requirement ID
  • Test Case ID
  • Test Case Title / Description
  • Pre-condition / Pre-requisite
  • Steps / Actions
  • Expected Result
  • Actual Result
  • Test Data
  • Priority ( P0, P1, P2, P3 )
  • Severity ( High, Medium, Low)
Test Case Template
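
The exact template varies from organisation to organisation; as a minimal illustrative sketch, a single test case record with the fields listed above could look like this ( all field names and values are made up ):

    # Illustrative test case record covering the fields listed above.
    test_case = {
        "requirement_id": "REQ_001",
        "test_case_id": "TC_001",
        "title": "Verify login with valid credentials",
        "precondition": "User account exists and the application is reachable",
        "steps": [
            "Open the login page",
            "Enter a valid user name and password",
            "Click the Login button",
        ],
        "expected_result": "User is taken to the home page",
        "actual_result": "",          # filled in during test execution
        "test_data": {"username": "admin", "password": "secret"},
        "priority": "P1",             # P0, P1, P2, P3
        "severity": "Medium",         # High, Medium, Low
    }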
Requirement Traceability Matrix ( RTM )

What is RTM ( Requirement Traceability Matrix ) ?

The RTM describes the mapping of requirements to test cases.

The main purpose of the RTM is to verify that every requirement is covered by test cases, so that no functionality is missed during software testing.

Requirement Traceability Matrix parameters include :

  • Requirement ID
  • Requirement Description
  • Test Case IDs

Sample RTM
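
As an illustrative sketch ( requirement IDs, descriptions and test case IDs below are made up ), an RTM can be represented as a mapping from each requirement to the test cases that cover it:

    # Illustrative RTM: each requirement maps to the test cases that cover it.
    rtm = {
        "REQ_001": {"description": "User login",        "test_cases": ["TC_001", "TC_002", "TC_003"]},
        "REQ_002": {"description": "Password recovery", "test_cases": ["TC_004"]},
        "REQ_003": {"description": "User logout",       "test_cases": ["TC_005", "TC_006"]},
    }

    # Any requirement with no mapped test case is a coverage gap.
    uncovered = [req for req, info in rtm.items() if not info["test_cases"]]
    print("Requirements without test coverage:", uncovered)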

Test Environment ( Environment Setup )

A Test Environment is a platform specially built for test case execution on the software product.

It is created by integrating the required software and hardware along with proper network configurations.

The test environment simulates the production / real-time environment.

Another name for the test environment is Test Bed.

Test Execution

During this phase the test team carries out the testing based on the test plans and the test cases prepared.

Entry Criteria :

Test cases, Test data and Test plan.

Activities :

Test cases are executed based on the test planning.

The status of each test case is marked, e.g. Passed, Failed, Blocked, Not Run and others.

Test results are documented and defects are logged for failed test cases.

All the blocked and failed test cases are assigned bug IDs.

Retesting once the defects are fixed.

Defects are tracked till closure.

Deliverables :

Defect report and test case execution report with completed results.

Guidelines for Test Execution

Deploying the build to the QA environment is the most important part of the test execution cycle.

Test execution is done in Quality Assurance (QA) environment.

Test execution happens in multiple cycles.

The test execution phase consists of executing the test cases and test scripts ( if automation is used ).

Defects / Bugs

Any mismatch between expected and actual behaviour found in an application is called a Defect / Bug / Issue.

During test execution, test engineers report these mismatches as defects to developers through templates or defect-tracking tools.

Defect Reporting Tools :
  • ClearQuest
  • DevTrack
  • Jira
  • Quality Center
  • Bugzilla, etc.
Defects Report Contents
  • Defect Id – Unique identification number for the defect.
  • Defect Description – Detailed description of the defect including information about the module in which defect was found.
  • Version – Version of the application in which defect was found.
  • Steps – Detailed steps along with screenshots with which the developer can reproduce the defect.
  • Date Raised – Date when the defect is raised.
  • Reference – Where you provide reference to the document like requirements, design, architecture or may be even screenshots of the error to help understand the defect.
  • Detected by – Name/ID of the tester who raised the defect.
  • Status – Status of the defect ( see the Defect Life Cycle below ).
  • Fixed by – Name/Id of the developer who fixed it.
  • Date closed – Date when defect is closed.
  • Severity – Severity which describes the impact of the defect on the application.
  • Priority – Related to the urgency of fixing the defect; it can be High / Medium / Low based on how urgently the defect should be fixed.
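
As an illustrative sketch ( not the format of any particular tool; all IDs and values are made up ), a defect report covering the fields above might look like this:

    # Illustrative defect report record.
    defect = {
        "defect_id": "DEF_101",
        "description": "Login page crashes when the password field is left empty",
        "version": "2.3.1",
        "steps": [
            "Open the login page",
            "Enter a valid user name and leave the password empty",
            "Click the Login button",
        ],
        "date_raised": "2024-05-10",
        "reference": "REQ_001, login_error_screenshot.png",
        "detected_by": "QA_Tester_01",
        "status": "Open",          # e.g. New, Open, Fixed, Closed
        "fixed_by": "",
        "date_closed": "",
        "severity": "Critical",    # Blocker, Critical, Major, Minor
        "priority": "P0",          # P0 ( High ), P1 ( Medium ), P2 ( Low )
    }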

Defects Classification
Defects Severity

Severity describes the seriousness of a defect and how much impact it has on the business workflow.

Defect severity can be categorized into four classes :

Blocker ( Show Stopper ) : This defect indicates nothing can proceed further.

Example – The application crashes, or login does not work.

Critical : The main / basic functionality is not working. The customer's business workflow is broken and they cannot proceed further.

Example 1 – Fund transfer is not working in net banking.

Example 2 – Ordering product in ecommerce application is not working.

Major : It causes some undesirable behavior, but the feature / application is still functional.

Example 1 – After sending email there is no confirm message.

Example 2 – After booking cab there is no confirmation.

Minor : It won’t cause any major break-down of the system.

Example – Look-and-feel issues, spelling mistakes and alignment problems.

Defects Priority

Priority describes the importance of a defect.

Defect priority states the order in which defects should be fixed.

Defect priority can be categorized into three classes :

  • P0 ( High ) : The defect must be resolved immediately, as it affects the system severely and the system cannot be used until it is fixed.
  • P1 ( Medium ) : It can wait until a new version / build is created.
  • P2 ( Low ) : The developer can fix it in a later release.
High and Low Severity / Priority Defect Combinations

For example :

  • High severity & high priority : Fund transfer fails for all users in net banking.
  • High severity & low priority : The application crashes in a rarely used legacy module.
  • Low severity & high priority : The company name or logo is misspelled on the home page.
  • Low severity & low priority : A minor alignment issue on a rarely visited page.
Defects Resolution

After receiving the defect report from the testing team, the development team conducts a review meeting to fix the defects. They then send a resolution type to the testing team for further communication.

Resolution Types :

–Accept

–Reject

–Duplicate

–Enhancement

–Need more information

–Not Reproducible

–Fixed

–As Designed

Defect Life Cycle

A typical defect life cycle flows : New > Assigned > Open > Fixed > Retest > Verified > Closed ( a defect may also be Reopened, Rejected, Deferred or marked Duplicate along the way ).
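
A minimal sketch of this flow as a state-transition table, assuming the typical state names above ( state names and allowed transitions vary between organisations and tools ):

    # Illustrative defect life cycle as allowed state transitions.
    DEFECT_TRANSITIONS = {
        "New":       ["Assigned", "Rejected", "Deferred", "Duplicate"],
        "Assigned":  ["Open"],
        "Open":      ["Fixed", "Rejected", "Deferred", "Duplicate"],
        "Fixed":     ["Retest"],
        "Retest":    ["Verified", "Reopened"],
        "Verified":  ["Closed"],
        "Reopened":  ["Assigned"],
        "Rejected":  ["Closed", "Reopened"],
        "Deferred":  ["Assigned"],
        "Duplicate": ["Closed"],
    }

    def can_move(current_state, next_state):
        # True if the transition is allowed in this sketch.
        return next_state in DEFECT_TRANSITIONS.get(current_state, [])

    print(can_move("Fixed", "Retest"))   # True
    print(can_move("New", "Closed"))     # False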

Test Metrics

Required Data

  • Number of Requirements
  • Average number of test cases written per requirement
  • Total number of test cases written for all requirement
  • Total number of test cases executed
  • Number of test cases passed
  • Number of test cases failed
  • Number of test cases blocked
  • Number of test cases unexecuted
  • Total number of defects identified
  • Critical defects count
  • High defects count
  • Medium defects count
  • Low defects count
  • Customer defects
  • Number of defects found in UAT

% of Test cases executed :

(Number of test cases executed / Total number of test cases written ) * 100

% of Test cases NOT executed :

( Number of test cases NOT executed / Total number of test cases written ) * 100

% Test cases passed :

( Number of test cases passed / Total test cases executed ) * 100

% Test cases failed :

( Number of test cases failed / Total test cases executed ) * 100

% Test cases blocked :

( Number of test cases blocked / Total test cases executed ) * 100

Defect Density : Number of defects identified per requirement

  Number of defects found / size ( Number of requirements )

Defect Removal Efficiency ( DRE ) :

( A / ( A + B ) ) * 100

( Fixed Defects / ( Fixed Defects + Missed Defects ) ) * 100

A – Defects identified during testing / Fixed defects

B – Defects identified by the customer / Missed defects

Defect Leakage :

( Number of defects found in UAT / Number of defects found in Testing ) * 100

Defect Rejection Ratio :

( Number of defects rejected / Total number of defects raised ) * 100

Defect Age :

 Fixed date – Reported date

Customer Satisfaction :

Number of complaints per period of time ( the goal is to reduce this over time ).
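
A minimal sketch that computes many of the metrics above from sample counts ( all numbers below are made up purely for illustration ):

    # Illustrative test metrics computed from made-up sample counts.
    total_written = 200       # total number of test cases written
    executed      = 180       # total number of test cases executed
    passed        = 150
    failed        = 20
    blocked       = 10
    not_executed  = total_written - executed

    defects_in_testing = 90   # defects identified during testing ( A / fixed defects )
    defects_in_uat     = 10   # defects identified by the customer ( B / missed defects )
    defects_raised     = 100  # total number of defects raised
    defects_rejected   = 5
    requirements       = 40   # number of requirements ( size )

    pct_executed     = executed / total_written * 100       # 90.0 %
    pct_not_executed = not_executed / total_written * 100    # 10.0 %
    pct_passed       = passed / executed * 100               # ~83.3 %
    pct_failed       = failed / executed * 100               # ~11.1 %
    pct_blocked      = blocked / executed * 100              # ~5.6 %

    defect_density   = defects_raised / requirements                                     # 2.5 defects per requirement
    dre              = defects_in_testing / (defects_in_testing + defects_in_uat) * 100  # 90.0 %
    defect_leakage   = defects_in_uat / defects_in_testing * 100                         # ~11.1 %
    defect_rejection = defects_rejected / defects_raised * 100                           # 5.0 %

    print(f"Executed: {pct_executed:.1f}%  Passed: {pct_passed:.1f}%  DRE: {dre:.1f}%")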

Test Closure

Activities
  • Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software, Critical Business Objectives and Quality.
  • Prepare test metrics based on the above parameters.
  • Document the lessons learned from the project
  • Prepare Test summary report
  • Qualitative and quantitative reporting of quality of the work product to the customer.
  • Test result analysis to find out the defect distribution by type and severity.
Deliverables
  • Test closure report
  • Test metrics

Roles and Responsibilities of a Test Engineer
  • Understand the requirements and functional specifications of the application.
  • Identify the required test scenarios.
  • Design test cases to validate the application.
  • Set up the test environment ( Test Bed ).
  • Execute test cases to validate the application.
  • Log test results ( how many test cases passed / failed ).
  • Report and track defects.
  • Retest fixed defects from the previous build.
  • Perform various types of testing on the application.
  • Report to the test lead about the status of assigned tasks.
  • Participate in regular team meetings.
  • Create automation scripts.
  • Provide a recommendation on whether or not the application / system is ready for production.