Testing Posts

How to Test Android Applications - part 2

In the previous article, we covered four categories of test cases for testing Android applications.

In this article, we will cover more categories of test cases for testing Android applications.

5. Compatibility testing test cases

Compatibility testing is performed to protect against mobile application failures, as devices have different operating systems, sizes, resolutions, and so on. It determines whether an app works consistently across various platforms and environments.

Six compatibility test case scenario questions:

  1. Have you tested on the best test devices and operating systems for mobile apps?
  2. How does the app work with different parameters such as bandwidth, operating speed, capacity, etc.?
  3. Will the app work properly with different mobile browsers such as Chrome, Safari, Firefox, Microsoft Edge, etc.?
  4. Does the app's user interface remain consistent, visible and accessible across different screen sizes?
  5. Is the text readable for all users?
  6. Does the app work seamlessly in different configurations? 

6. Security testing test cases

Security testing ensures that the application data and network security requirements are met per the guidelines.  It focuses on identifying possible risks and security vulnerabilities so that the application is not exploited and the data is protected. 

Twenty-four security testing scenarios for mobile applications:

  1. Can the mobile app resist any brute force attack to guess a person's username, password, or credit card number?
  2. Does the app allow an attacker to access sensitive content or functionality without proper authentication?
  3. Is there an effective password protection system within the mobile app? This includes making sure communications with the backend are properly secured.
  4. Verify dynamic dependencies.
  5. What measures have been taken to prevent attackers from exploiting these vulnerabilities?
  6. What steps have been taken to prevent SQL injection-related attacks?
  7. Identify and repair any unmanaged code scenarios
  8. Make sure certificates are validated and check whether the app implements certificate pinning (see the sketch after this list)
  9. Protect your application and network from denial of service attacks
  10. Analyze data storage and validation requirements
  11. Implement session management to prevent unauthorized users from accessing sensitive information
  12. Check if the encryption code is damaged and repair what was found.
  13. Are the business logic implementations secure and not vulnerable to any external attack?
  14. Analyze file system interactions, determine any vulnerabilities and correct these problems.
  15. What protocols are in place should hackers attempt to reconfigure the default landing page?
  16. Protect against harmful client-side injections.
  17. Protect against malicious runtime injections.
  18. Investigate and prevent any malicious possibilities from file caching.
  19. Protect from insecure data storage in app keyboard cache.
  20. Investigate and prevent malicious actions by cookies.
  21. Provide regular checks for data protection analysis
  22. Investigate and prevent malicious actions from custom-made files
  23. Preventing memory corruption cases
  24. Analyze and prevent vulnerabilities from different data streams 
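
Scenario 8 above calls for certificate validation and pinning. Below is a minimal sketch of certificate pinning using OkHttp; the library choice, the hostname, and the SHA-256 pin are illustrative assumptions, not part of the original checklist.

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Minimal certificate-pinning sketch (scenario 8). The hostname and the
// SHA-256 pin are placeholders; replace them with your backend's real host
// and the hash of its certificate's public key.
fun buildPinnedClient(): OkHttpClient {
    val pinner = CertificatePinner.Builder()
        .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
        .build()

    return OkHttpClient.Builder()
        .certificatePinner(pinner)
        .build()
}
```

A security test can then verify that a connection to a host presenting any other certificate fails with an SSLPeerUnverifiedException.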

7. Localization testing test cases

Localization testing ensures that the mobile app provides a flawless user experience in a specific locale based on the target language and country. It aims to ensure that the functionality and content of the application are fully tailored to meet the needs of users in a particular location. Since fully localized apps and websites outperform their competition, this is a test case that shouldn't be overlooked. Partnering with a respected third party with global reach alleviates some of the stresses and unknown variables of localization. For example, Testlio covers over 100 countries and over 140 languages.

Eleven localization testing scenarios for mobile applications:

  1. The translated content must be checked for accuracy. This should also include all verification or error messages that may appear.
  2. The language should be formatted correctly (e.g., Arabic reads right to left; Japanese writes names as Last Name, First Name; etc.).
  3. The terminology is consistent across the user interface.
  4. The time and date are correctly formatted (a formatting-check sketch follows this list).
  5. The currency is the local equivalent.
  6. The colors are appropriate and convey the right message.
  7. Licenses and rules comply with the laws and regulations of the target region.
  8. The layout of the text content is error-free.
  9. Hyperlinks and hotkey functions work as expected.
  10. Entry fields support special characters and are validated as necessary (e.g., postal codes).
  11. The localized UI has the same types of elements and numbers as the source product.
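
For items 4 and 5, a quick way to sanity-check formats is to render the same date and amount under each target locale with the platform's formatting APIs. A minimal Kotlin sketch, assuming the standard java.time and java.text classes (on Android, java.time needs API 26+ or desugaring):

```kotlin
import java.text.NumberFormat
import java.time.LocalDate
import java.time.format.DateTimeFormatter
import java.time.format.FormatStyle
import java.util.Locale

fun main() {
    val date = LocalDate.of(2020, 11, 15)
    val price = 1234.56

    for (locale in listOf(Locale.US, Locale.GERMANY, Locale.JAPAN)) {
        // Locale-aware date formatter, e.g. "Nov 15, 2020" (US) vs "15.11.2020" (Germany).
        val dateFormatter = DateTimeFormatter
            .ofLocalizedDate(FormatStyle.MEDIUM)
            .withLocale(locale)
        // Locale-aware currency formatter, e.g. "$1,234.56" vs "1.234,56 €".
        val currencyFormatter = NumberFormat.getCurrencyInstance(locale)

        println("$locale: ${date.format(dateFormatter)} / ${currencyFormatter.format(price)}")
    }
}
```

A localization test can compare these rendered strings against what the app actually displays for each locale.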

8. Recoverability testing test cases

The recovery test is a non-functional testing technique that determines how quickly a mobile application can recover after a system crash or hardware failure. 

Five recoverability testing scenario questions:

  1. Will the app resume the last operation in the event of a hard restart or system crash? (A state-saving sketch follows this list.)
  2. How does the app handle crash recovery and transaction interruptions?
  3. How effectively is the application restored after an unexpected interruption or crash?
  4. How does the application handle a transaction during a power outage?
  5. What is the expected process when the app needs to recover data directly affected by a failed connection?
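
For question 1, Android's saved-instance-state mechanism is one way to preserve the last operation across a system-initiated process death. A minimal sketch; the activity name, the key, and the lastOperation field are hypothetical, and a true crash would additionally need persistent storage such as SharedPreferences or a database:

```kotlin
import android.app.Activity
import android.os.Bundle

// Hypothetical activity that remembers its in-progress operation so the
// app can resume it after the system kills and recreates the process.
class CheckoutActivity : Activity() {

    private var lastOperation: String? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Restore the in-progress operation, if any, after a restart.
        lastOperation = savedInstanceState?.getString(KEY_LAST_OPERATION)
    }

    override fun onSaveInstanceState(outState: Bundle) {
        super.onSaveInstanceState(outState)
        // Persist the in-progress operation before the process may be killed.
        outState.putString(KEY_LAST_OPERATION, lastOperation)
    }

    companion object {
        private const val KEY_LAST_OPERATION = "last_operation"
    }
}
```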

9. Regression testing test cases

QA and mobile app testing doesn't end once an app is launched.  When an application is updated, even small changes can create unexpected problems.  This is why regression testing is key.  The purpose of regression testing is to ensure that new code changes in the software do not cause errors or interruptions. 

Four regression testing scenarios for mobile applications:

  1. Check the changes to existing features
  2. Check the new changes implemented
  3. Check the new features added 
  4. Check for potential side effects after the changes go live

That's it. If you want a quality application, take these tips and follow the test cases for Android application testing. They will help you standardize your applications and raise their quality.

November 15, 2020 · 3 minutes · Bhumi Khimani
How to Test Android Applications

A few main things to remember when testing an Android application are mentioned below:

1. Functional testing test cases

There are many hands involved in creating a mobile app, and these stakeholders may have different expectations. Functional testing determines whether a mobile app complies with these various requirements and uses. It examines and validates all functions, features, and capabilities of a product.

Twelve functional test case scenario questions:

  1. Does the application work as intended when starting and stopping?
  2. Does the app work accordingly on different mobile devices and operating system versions?
  3. Does the app behave accordingly in the event of external interruptions (e.g., receiving an SMS, being minimized during an incoming phone call, etc.)?
  4. Can the user download and install the app with no problem?
  5. Can the device multitask as expected when the app is in use or running in the background?
  6. Does the app work satisfactorily after installation?
  7. Do social networking options like sharing, publishing, etc. work as needed?
  8. Do mandatory fields work as required?
  9. Does the app support payment gateway transactions?
  10. Are page scrolling scenarios working as expected?
  11. Does navigation between different modules work as expected?
  12. Are appropriate error messages received if necessary?

There are two ways to run functional testing: scripted and exploratory.

Scripted

Running scripted tests is just that: a structured, scripted activity in which testers follow predetermined steps. This allows QA testers to compare actual results with expected ones. These types of tests are usually confirmatory in nature, meaning that you are confirming that the application can perform the desired function. Testers generally uncover more problems when they have more flexibility in test design. A minimal scripted-test sketch follows.
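
As an illustration, here is a hedged Espresso sketch of a scripted functional test: it follows predetermined steps and checks the actual result against the expected one. LoginActivity and the R.id.* view IDs are hypothetical placeholders; adapt them to your app.

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test

// Assumes the androidx.test Espresso dependencies, plus imports for your
// app's LoginActivity and R class (both hypothetical here).
class LoginFunctionalTest {

    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun validLogin_showsHomeScreen() {
        // Predetermined steps of the script.
        onView(withId(R.id.email)).perform(typeText("user@example.com"))
        onView(withId(R.id.password)).perform(typeText("correct-password"))
        onView(withId(R.id.login_button)).perform(click())

        // Expected result: the home screen container becomes visible.
        onView(withId(R.id.home_container)).check(matches(isDisplayed()))
    }
}
```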

Exploratory

Exploratory testing investigates and finds bugs and errors on the fly. It allows testers to manually discover software problems that are often unforeseen, with the QA team testing the app the way most users would actually use it. It treats learning, test design, test execution, and interpretation of test results as complementary activities that run in parallel throughout the project. Related: Scripted Testing Vs Exploratory Testing: Is One Better Than The Other?

2. Performance testing test cases

The primary goal of performance testing is to ensure the performance and stability of your mobile application.

Seven performance test case scenarios ensure:

  1. Can the app handle the expected load volumes?
  2. What are the various mobile app and infrastructure bottlenecks preventing the app from performing as expected?
  3. Is the response time as expected? Are battery drain, memory leaks, GPS, and camera performance within the required guidelines?
  4. Is the current network coverage able to support the app at peak, medium, and minimum user levels?
  5. Are there any performance issues if the network changes from/to Wi-Fi and 2G / 3G / 4G?
  6. How does the app behave during the intermittent phases of connectivity?
  7. Do the existing client-server configurations provide the optimum performance level?


3. Battery usage test cases

While battery usage is an important part of performance testing, mobile app developers must make it a top priority. Apps are becoming more and more demanding in terms of computing power. So, when developing your mobile app testing strategy, understand that battery-draining mobile apps degrade the user experience.

Device hardware - including battery life - varies by model and manufacturer. Therefore, QA testing teams must have a variety of new and older devices on hand in their mobile device laboratory. In addition, the test environment must replicate real-world conditions such as operating system, network conditions (3G, 4G, WLAN, roaming), and multitasking from the point of view of battery consumption testing.

Seven battery usage test case scenarios to pay special attention to:

  1. Mobile app power consumption
  2. User interface design that uses intense graphics or results in unnecessarily high database queries
  3. Whether battery life allows the app to operate at expected load volumes
  4. Low-battery and high-performance requirements
  5. App operation when the battery is removed
  6. Battery usage and data leaks
  7. New features and updates that do not introduce new battery drain or data leaks

Related: The secret art of battery testing on Android
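
One way to approximate scenario 1 in a test lab is to sample the battery level before and after a fixed workload via Android's BatteryManager. A minimal sketch, assuming API level 21+; the budget comparison is up to your test plan:

```kotlin
import android.content.Context
import android.os.BatteryManager

// Returns the current battery level as a percentage (0-100).
fun batteryLevelPercent(context: Context): Int {
    val bm = context.getSystemService(Context.BATTERY_SERVICE) as BatteryManager
    return bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CAPACITY)
}

// Usage in a test: record the level, exercise the app for a fixed period,
// then record the level again and compare the drop against a drain budget.
```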


4. Usability Testing Test Cases

Usability testing of mobile applications provides end-users with an intuitive and user-friendly interface. This type of testing is usually done manually, to ensure the app is easy to use and meets real users' expectations.

Nine usability test case scenarios ensure:

  1. The buttons are of a user-friendly size.
  2. The position, style, etc. of the buttons are consistent within the app.
  3. Icons are consistent within the application.
  4. The zoom-in and zoom-out functions work as expected.
  5. The keyboard can be minimized and maximized easily.
  6. An action or a touch on the wrong item can be easily undone.
  7. Context menus are not overloaded.
  8. Verbiage is simple, clear, and easily visible.
  9. The end-user can easily find the help menu or user manual in case of need.

Related: High impact usability testing that is actually doable

We will see more points in our next articles.

December 03, 2020 · 4 minutes · Bhumi Khimani
Bug Life Cycle in Software Testing

Introduction to Bug Life Cycle

The fault life cycle, or defect life cycle, is the specific set of states that a fault goes through before it is closed or resolved. When a fault is detected - by a tester or someone else on the team - the life cycle provides a tangible way to track the progress of a bug fix, and during the fault's life, multiple individuals touch it, directly or indirectly. Troubleshooting is not necessarily the responsibility of a single individual; at different stages of the life cycle, several members of the project team will be responsible for the error. This blog will help you understand the bug life cycle. The number of states an error goes through varies from project to project, but the life cycle diagram below covers all possible situations.

What’s The Difference Between Bug, Defect, Failure, Or Error?

Bug: If testers find any mismatch in the application/system in the testing phase then they call it a Bug.

As I mentioned earlier, there is a contradiction in the usage of Bug and Defect. People widely say the bug is an informal name for the defect.

Defect: The variation between the actual results and expected results is known as a defect.

If a developer finds an issue and corrects it by himself in the development phase then it’s called a defect.

Failure: Once the product is deployed and customers find any issues, they call the product a failure. After release, if an end-user finds an issue, that particular issue is called a failure.

Error: We can’t compile or run a program due to coding mistakes in a program. If a developer is unable to successfully compile or run a program then they call it an error.

Software Defects Are Basically Classified According To Two Parameters:

Severity

Bug Severity or Defect Severity in testing is a degree of impact a bug or a Defect has on the software application under test. A higher effect of bug/defect on system functionality will lead to a higher severity level. A Quality Assurance engineer usually determines the severity level of a bug/defect.

Types of Severity

In Software Testing, Types of Severity of bug/defect can be categorized into four parts:

Critical: This defect indicates complete shut-down of the process, nothing can proceed further

Major: It is a highly severe defect and collapses the system. However, certain parts of the system remain functional

Medium: It causes some undesirable behavior, but the system is still functional

Low: It won't cause any major break-down of the system

Priority

Priority is defined as the order in which a defect should be fixed. The higher the priority the sooner the defect should be resolved.

Priority Types

Types of Priority of bug/defect can be categorized into three parts:

Low: The Defect is an irritant but a repair can be done once the more serious Defect has been fixed

Medium: During the normal course of the development activities, defects should be resolved. It can wait until a new version is created

High: The defect must be resolved as soon as possible as it affects the system severely and cannot be used until it is fixed

(A) High Priority, High Severity

An error occurs on the basic functionality of the application and will not allow the user to use the system. (E.g. A site maintaining the student details, on saving record if it doesn't allow saving the record then this is a high priority and high severity bug.)

(B) High Priority, Low Severity

A high-priority, low-severity status indicates that the defect has to be fixed immediately but does not severely affect the application. (Conversely, a high-severity, low-priority status indicates that the defect has to be fixed, but not on an immediate basis.)

(C) Low Priority Low Severity

A minor low severity bug occurs when there is almost no impact on the functionality, but it is still a valid defect that should be corrected. Examples of this could include spelling mistakes in error messages printed to users or defects to enhance the look and feel of a feature.

(D) Low Priority High Severity

This is a high-severity error, but it can be given low priority, as it can be resolved with the next release as a change request and has limited immediate impact on the user experience. This type of defect is classified as high severity but low priority.

Bug Life Cycle

[Figure: Bug life cycle diagram]

New: When a new defect is logged and posted for the first time. It is assigned a status as NEW.

Assigned: Once the bug is posted by the tester, the lead of the tester approves the bug and assigns the bug to the developer team

Open: The developer starts analyzing and works on the defect fix

Fixed: When a developer makes a necessary code change and verifies the change, he or she can make bug status as "Fixed."

Pending retest: Once the defect is fixed, the developer gives the code to the tester for retesting. Since the testing remains pending on the tester's end, the status assigned is "Pending Retest."

Retest: Tester does the retesting of the code at this stage to check whether the defect is fixed by the developer or not and changes the status to "Re-test."

Verified: The tester re-tests the bug after it got fixed by the developer. If there is no bug detected in the software, then the bug is fixed and the status assigned is "verified."

Reopen: If the bug persists even after the developer has fixed the bug, the tester changes the status to "reopened". Once again the bug goes through the life cycle.

Closed: If the bug no longer exists then the tester assigns the status "Closed."

Duplicate: If the defect is repeated twice or the defect corresponds to the same concept of the bug, the status is changed to "duplicate."

Rejected: If the developer feels the defect is not a genuine defect then it changes the defect to "rejected."

Deferred: If the present bug is not of a prime priority and if it is expected to get fixed in the next release, then the status "Deferred" is assigned to such bugs

Not a bug: If it does not affect the functionality of the application then the status assigned to a bug is "Not a bug".

Defect Life Cycle Explained

[Figure: Defect life cycle flow diagram]

  1. The tester finds the defect, and it is assigned the status "New".

  2. The defect is forwarded to the project manager for analysis.

  3. The project manager decides whether the defect is valid.

  4. If the defect is invalid, the status is "Rejected".

  5. The project manager assigns the rejected status.

  6. If the defect is not rejected, the next step is to check whether it is in scope.

Suppose we have another function, such as email functionality, for the same application, and you find a problem with it. However, since it is not part of the current release, such errors are assigned the "Deferred" status.

  7. Next, the manager checks whether a similar error has occurred earlier. If so, the defect is assigned the "Duplicate" status.

  8. If not, the defect is assigned to the developer, who starts correcting the code.

  9. During this phase, the defect is assigned the "In Progress" status.

  10. Once the code is fixed, the defect is assigned the "Fixed" status.

  11. Next, the tester retests the code. If the test case passes, the defect is closed. If the test case fails again, the defect is reopened and assigned to the developer.

  12. Consider a situation where, during the first release of a flight reservation system, an error was detected in the fax order, which was fixed and assigned the "Closed" status. The same error occurred again during the second upgrade version.

In such cases, a closed defect is opened again.

"That's all to Bug Life Cycle"

October 13, 2020 · 6 minutes · Bhumi Khimani
Software Testing Types

Different Types Of Software Testing

Given below is the list of some common types of Software Testing:

Accessibility Testing:

The purpose of accessibility testing is to determine whether the software or application is accessible to people with disabilities or not. Here disability means deaf, color-blind, mentally disabled, blind, elderly, and other disabled groups. Various checks are performed, such as font size for the visually impaired, color and contrast for color blindness, etc.

Ad-Hoc Testing:

The name itself suggests that this test is conducted on an ad hoc basis, i.e. without reference to test cases and without a plan or documentation for this type of test. The aim of this test is to find flaws and breakages by executing any sequence of application flows or any random functionality. Ad hoc testing is an informal way of finding bugs and can be done by anyone on the project. It is difficult to identify bugs without a test case, but sometimes bugs found during ad hoc testing would not have been identified using the existing test cases.

Alpha Testing:

It is the most common type of test used in the software industry. The objective of this test is to identify all possible problems or defects before launching it to the market or to the user. Alpha Testing takes place at the end of the software development phase but before Beta Testing. Still, minor design changes can be made as a result of such testing. Alpha testing is done on the developer's site. An internal virtual user environment can be created for this type of test.

API Testing:

API testing is done for a system that has a collection of APIs to be tested. During the test, the following things are examined: boundary conditions, ensuring that the test harness varies the parameters of the API calls in order to verify functionality and expose faults; API behavior under external environment conditions such as files, peripheral devices, etc.; and the sequence of API calls, verifying that the APIs produce useful results from subsequent calls.

Beta Testing:

Beta tests, also known as user tests, are carried out by the end-users at the end user's location to verify the usability, functionality, compatibility, and reliability tests to provide input on the design, functionality, and usability of a product. These inputs are not only critical to the success of the product but also an investment in future products if the data collected is effectively managed.

Boundary Testing:

Boundary value analysis is a type of black box or specification-based testing technique in which tests are performed using the boundary values.

Example:

An exam has a pass boundary at 50 percent, merit at 75 percent, and distinction at 85 percent. The Valid Boundary values for this scenario will be as follows:

49, 50 - for pass
74, 75 - for merit
84, 85 - for distinction

Boundary values are validated against both valid boundaries and invalid boundaries.

The Invalid Boundary Cases for the above example can be given as follows:

0 - for lower limit boundary value
101 - for upper limit boundary value
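
To make the example concrete, here is a hedged JUnit sketch: the grade function is a hypothetical implementation of the grading rules above, and the test exercises the valid boundary pairs plus the invalid upper boundary (101):

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical function under test, implementing the grading rules above.
fun grade(score: Int): String = when {
    score < 0 || score > 100 -> "invalid"
    score >= 85 -> "distinction"
    score >= 75 -> "merit"
    score >= 50 -> "pass"
    else -> "fail"
}

class GradeBoundaryTest {
    @Test
    fun boundaryValues() {
        // Valid boundaries on either side of each limit.
        assertEquals("fail", grade(49))
        assertEquals("pass", grade(50))
        assertEquals("pass", grade(74))
        assertEquals("merit", grade(75))
        assertEquals("merit", grade(84))
        assertEquals("distinction", grade(85))

        // Invalid upper boundary from the example above.
        assertEquals("invalid", grade(101))
    }
}
```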

Bottom-Up Integration Testing:

Each component at the lower hierarchy is tested individually and then the components that rely upon these components are tested.

Bottom-Up Integration - Flow Diagram


The order of integration using the bottom-up approach will be:

4,2
5,2
6,3
7,3
2,1
3,1

Top-Down Integration Testing:

Top-down integration testing is an integration testing technique used in order to simulate the behavior of the lower-level modules that are not yet integrated. Stubs are the modules that act as temporary replacements for a called module and give the same output as that of the actual product.

The replacement for the 'called' modules is known as 'Stubs' and is also used when the software needs to interact with an external system.

[Figure: Top-down integration flow diagram]

The above diagram clearly shows that Modules 1, 2, and 3 are available for integration, whereas the modules below them are still under development and cannot be integrated at this point in time. Hence, stubs are used to test the modules. The order of integration will be:

1,2
1,3
2,Stub 1
2,Stub 2
3,Stub 3
3,Stub 4

Unit Testing:

Unit testing is a testing method by which individual units are tested, often by the developers themselves, to determine if there are any issues. It is concerned with the functional health of the independent units.

The main aim is to isolate each unit of the system to identify, analyze and fix the defects.
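
A minimal sketch of such isolation, using a hand-rolled fake for the unit's dependency; the PriceCatalog and OrderCalculator names are hypothetical:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical unit under test: computes an order total from prices
// supplied by a price catalog dependency.
interface PriceCatalog {
    fun priceOf(sku: String): Double
}

class OrderCalculator(private val catalog: PriceCatalog) {
    fun total(skus: List<String>): Double = skus.sumOf { catalog.priceOf(it) }
}

class OrderCalculatorTest {
    @Test
    fun totalSumsCatalogPrices() {
        // The unit is isolated from the real catalog by a fake with fixed prices.
        val fakeCatalog = object : PriceCatalog {
            override fun priceOf(sku: String) = if (sku == "A") 2.0 else 3.0
        }
        val calculator = OrderCalculator(fakeCatalog)

        // 2.0 + 3.0 + 2.0 = 7.0
        assertEquals(7.0, calculator.total(listOf("A", "B", "A")), 0.0)
    }
}
```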

Unit Testing - Advantages:

Reduces errors in the newly developed functions or reduces errors when changing the existing functionality.

Reduces test costs as errors are detected at a very early stage.

Improves the design and allows for better code refactoring. Unit tests also show the quality of the build when integrated into the build.

Unit Testing Life Cycle:

[Figure: Unit testing life cycle]

Unit Testing Techniques:

  • Black Box Testing - Using which the user interface, input, and output are tested.
  • White Box Testing - used to test each one of those functions' behavior.
  • Gray Box Testing - Used to execute tests, risks, and assessment methods.

System Testing:

System testing (ST) is a black-box testing technique performed to assess the complete system's compliance with specified requirements. In system testing, the functionalities of the system are tested from an end-to-end perspective. System testing is generally carried out by a team that is independent of the development team to measure the quality of the system in an unbiased manner. It includes both functional and non-functional tests.

Types of System Tests:

[Figure: Types of system tests]

Sanity Testing:

Sanity testing is a software testing method in which the testing team performs some basic tests whenever a new build is received. Terms such as smoke test, build verification test, basic acceptance test, or health test are used interchangeably; however, each of them is used under a slightly different scenario.

The sanity test is usually unscripted and helps to identify missing dependent functionality. It is used to determine whether a section of the app still works after a minor change.

A sanity test can be narrow and deep: it is a narrow regression test that focuses on one or a few areas of functionality.

Smoke Testing:

The smoke test is a testing technique that is inspired by the hardware test, which checks for smoke from hardware components once the hardware is turned on. Similarly, in the context of software testing, the smoke test refers to testing the basic functionality of the build. If the test fails, the build is declared unstable and is NOT tested again until the build smoke test is passed.

Smoke Testing - Features:

  • Identify the business-critical functions that a product must perform.
  • Design and run the basic functions of the application.
  • Make sure the smoke test passes each and every build to continue testing.
  • Smoke testing enables obvious errors to be revealed, saving time and effort.
  • Smoke testing can be manual or automated.

Interface Testing:

Interface testing is performed to assess whether systems or components are passing data and properly controlling each other. It is to check if all interactions between these modules are working properly and errors are handled properly.

Interface Testing - Checklist

  • Check that communication between systems is done correctly
  • Check if all supported hardware/software has been tested
  • Check if all related documents are supported/open on all platforms
  • Check security requirements or encryption when communicating between application server systems

Regression Testing:

Regression testing is a black-box testing technique that consists of re-executing those tests that are affected by code changes. These tests should be performed as often as possible throughout the software development life cycle.

Types of Regression Tests:

  • Final Regression Tests: - A "final regression testing" is performed to validate the build that hasn't changed for a period of time. This build is deployed or shipped to customers.
  • Regression Tests: - A normal regression testing is performed to verify if the build has NOT broken any other parts of the application by the recent code changes for defect fixing or for enhancement.

Selecting Regression Tests:

  • Requires knowledge of the system and its effects on the existing functions.
  • Tests are selected based on the areas of common failure.
  • Tests are chosen to include the area where code changes have been made multiple times.
  • Tests are selected based on the criticality of the features.

Regression Testing Steps:

  • Regression tests are ideal candidates for automation, which results in a better Return On Investment (ROI).
  • Select the regression tests.
  • Choose an apt tool and automate regression testing.
  • Verify applications with checkpoints.
  • Manage regression testing and update as needed.
  • Schedule tests.
  • Integrate with builds.

Load Testing:

Load testing is a performance testing technique using which the response of the system is measured under various load conditions. Load testing is performed for both normal and peak load conditions.

Load Testing Approach:

  • Evaluate performance acceptance criteria.
  • Identify critical scenarios.
  • Design the workload model.
  • Identify target load levels.
  • Design the tests.
  • Run the tests.
  • Analyze the results

Objectives of Load Testing:

  • Response time.
  • Resource usage rate.
  • Maximum user load.
  • Work-related metrics

Stress Testing:

Stress testing is a non-functional testing technique performed as part of performance testing. During stress testing, the system is monitored after overloading the system to ensure that the system can withstand the stress. System recovery from this phase (after stress) is very critical as it is very likely to occur in the production environment.

Reasons for performing stress tests:

  • This allows the test team to monitor the performance of the system during failures.
  • To check whether the system has saved data before crashing.
  • To check whether the system prints meaningful error messages during a failure or prints random exceptions.
  • To check that unexpected failures do not lead to security issues.

Stress Tests - Scenarios:

  • Monitor the behavior of the system when the maximum number of users are logged in at the same time
  • All users performing critical operations at the same time
  • All users accessing the same file
  • Hardware issues, such as a database server going down or some servers in a server farm failing.

Compatibility Testing:

Compatibility testing is non-functional testing conducted on the application to evaluate the application's compatibility within different environments. It can be of two types - forward compatibility testing and backward compatibility testing.

  • Operating system Compatibility Testing - Linux, macOS, Windows
  • Database Compatibility Testing - Oracle, SQL Server
  • Browser Compatibility Testing - IE, Chrome, Firefox
  • Other System Software - Web server, networking/ messaging tool, etc.

Localization Testing:

Localization testing is a software testing technique by which software behavior is tested for a specific region, locality, or culture. The purpose of conducting the localization test for a program is to test the appropriate linguistic and cultural aspects of a particular site.

Software Testing Methods

There are various methods for testing software. These methods are chosen by testers based on their requirements and methodologies. But three fundamental software testing methods are used in every project development.

Types of Software Testing Methods and Levels

  • White Box Testing
  • Black Box Testing
  • Grey Box Testing

White Box Testing & Levels

The White Box Test is also known as the Open/Clear Box Test or Glass Box Test. From a developer's perspective, it is known as Code-Oriented Testing or Structural Testing. In this type of testing, technical tests are performed against the internal structure, logical design, and implementation of the different modules. The tester chooses inputs to exercise paths through the code and determine the correct or expected output. Because it is code-oriented, it includes technical tests and script-based tests as part of its testing phase.

White Box Testing Levels

  • Unit Testing
  • Integration Testing
  • System Testing

Black Box Testing & Levels

This test is known as behavioral testing, in which the internal structure, design, and implementation of the software under test are not known to the tester; the software is exercised through its user interface and UX. Black box tests can be both functional and non-functional, but most of the time they are functional. This technique is called black-box testing because the internals of the software or product are not visible to the tester in advance.

This testing technique is used to find errors in the following categories:

  • Software malfunction.
  • Error in the interface.
  • Errors in concepts.
  • Errors related to the database.
  • Performance or behavior errors.
  • Errors in product startup or termination

Black Box Testing Levels

  • Integration Testing
  • System Testing
  • Acceptance Testing

Grey Box Testing & Levels

This software testing technique combines the concepts of black box and white box testing. In the gray box test, the internals of the product are partly known to the tester, who has partial access to internal data structures for designing test cases while still testing from a user's perspective, as a black-box tester would.

“Still, there are various other methods for testing software. These methods are chosen by testers based on their requirements and methodologies.”

September 18, 2020 · 11 minutes · Bhumi Khimani
Software Testing Life Cycle

Software Testing Life Cycle (STLC) identifies the test activities to perform and when to perform those test activities. While testing differs between organizations, there is a test lifecycle.

There are mainly eight phases of STLC

  1. Requirement Analysis
  2. Test Planning And Control
  3. Test Analysis
  4. Test Case Development
  5. Test Environment Setup
  6. Test Execution
  7. Exit Criteria Evaluation And Reporting
  8. Test Closure

[Figure: The eight phases of the STLC]

Requirement Analysis:-

The entry criteria for this phase is the BRS document (Business Requirements Specification). During this phase, the test team studies and analyzes the requirements from a test perspective.

This phase helps to identify whether the requirements are testable. If any requirement is not verifiable, the test team can communicate with the various stakeholders (customer, business analyst, technical leads, system architects, etc.) during this phase so that a mitigation strategy can be planned.

Entry Criteria: BRS (Business Requirement Specification)

Deliverables: list of all verifiable requirements, automation feasibility report (if applicable)

Test Planning And Control:-

Test planning is the first step in the testing process. At this stage, typically, the Test Manager or Test Lead is involved in determining the effort and cost estimates for the entire project. The Test Plan is prepared on the basis of the requirement analysis. Activities such as resource planning, determination of roles and responsibilities, selection of tools (if automating), training requirements, etc. are carried out at this stage.

The deliverables of this phase are Test Plan & Effort estimation documents.

Entry Criteria: Requirements Documents

Deliverables: Test Strategy, Test Plan, and Test Effort estimation document.

Test Analysis:-

Test Analysis is the process of analyzing the test basis (all documents from which the requirements of a component or system can be inferred) and defining test objectives. It covers WHAT is to be tested in the form of test conditions, and it can start as soon as the basis for testing is established for each test level.

The following documents are used in test analysis:

  • CRS (Customer Requirement Specification)
  • SRS (Software Requirement Specification)
  • BRS (Business Requirement Specification)
  • Functional Design Documents

Test Case Development:-

This phase begins after the test planning and analysis phases are completed. From test analysis, we understand what to test and what the test conditions are, which makes the test cases easy to develop. In this phase, the tester creates the manual/automation test scripts. Test data is prepared in this phase and is used to find defects. The Requirement Traceability Matrix (RTM) is also prepared, so that each test case can be traced to a particular requirement.

[Figure: Test case development]

Activities in the Test Case Development Phase

Following are the three activities that are carried out in the Test Case Development phase

Test Scenarios Identification

Scenarios ease the testing and evaluation of a complex system. The following strategies help in creating good scenarios:

  1. Enumerate potential users, their actions, and their goals.
  2. Evaluate users with a hacker mindset and list possible scenarios for abuse of the system.
  3. List system events and how the system handles these requests.
  4. List system benefits and create comprehensive tasks to verify them.
  5. Read about similar systems and their behavior.
  6. Study complaints about competitors' products and their predecessors.

Test Cases Writing

A test case is a document that includes test data, preconditions, expected results, and postconditions, developed for a particular test scenario in order to verify compliance with a specific requirement.

A test case serves as the starting point for test execution. After a set of input values is applied, the application reaches a final state and leaves the system at an end point, also known as the post-execution condition.

Test Data Preparation

Test data is used to run the tests on the testware. Test data must be precise and complete in order to detect defects. To achieve this, follow the step-by-step approach given below:

  1. Identify resources or test requirements
  2. Identify conditions/functionality to be tested
  3. Set priority test conditions
  4. Select conditions to test
  5. Determine the expected result of test case processing
  6. Create test cases
  7. Document test conditions
  8. Conduct the test
  9. Verify and correct test cases based on modifications

Activity Block Diagram

The following diagram shows the different activities that form part of Test Case Development.

[Figure: Activity block diagram of test case development]

Test Environment Setup:-

A test environment is a software and hardware configuration that allows test teams to run test cases. In other words, it supports running tests with configured hardware, software, and network. The test bed or environment is configured as needed for the Application Under Test. On some occasions, the test bed may be the combination of the test environment and the test data it operates on. Setting up a good test environment guarantees successful software testing; any loopholes in this process can result in additional cost and time for the customer.

Process of Software Test environment setup

The test environment defines the limits of testing: what can be tested and what cannot.

The following people are involved in test environment setup:

  • System Admins,
  • Developers
  • Testers
  • Sometimes users or techies with an affinity for testing.

The test environment requires the setup of various distinct areas, such as:

Setup of Test Server

Every test may not be performed on a local machine; it may be necessary to create a test server that can support the applications.

For example, a Fedora setup for PHP or Java-based applications, with or without a mail server, cron configuration, and so on.

Network

The network is set up as per the test requirements. It includes:

  • Internet setup
  • LAN Wi-Fi setup
  • Private network setup

It ensures that the congestion that occurs during testing doesn't affect other members. (Developers, designers, content writers, etc.)

Test PC setup

For web testing, you may need to set up different browsers for different testers. For desktop applications, you need various types of OS for different testers' PCs.

For example, Windows Phone app testing may require:

  • Visual Studio installation
  • Windows phone emulator
  • Alternatively, assigning a Windows phone to the tester.

Bug Reporting

Bug reporting tools should be provided to testers.

Creating Test Data for the Test Environment

Many companies use a separate test environment to test the software product. The common approach used is to copy production data to test. This helps the tester to detect the same issues as a live production server, without corrupting the production data.

The approach for copying production data to test data includes,

  • Set up production jobs to copy the data to a common test environment
  • All PII (Personally Identifiable Information) is modified along with other sensitive data. The PII is replaced with logically correct, but non-personal data.
  • Remove data that is irrelevant to your test.

Testers or developers can copy this to their individual test environments. They can modify it as per their requirement.

Privacy is the main issue in copy production data. To overcome privacy issues you should look into obfuscated and anonymized test data.

For Anonymization of data two approaches can be used,

  • Blacklist: In this approach, all data fields are left unchanged, except those fields specified by the users.
  • Whitelist: By default, this approach anonymizes all data fields, except for a list of fields that are allowed to be copied. A whitelisted field implies that it is okay to copy the data as-is and that anonymization is not required.

Also, if you are using production data, you need to be smart about how to source data. Querying the database using SQL script is an effective approach.
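
A minimal Kotlin sketch of the whitelist approach described above; the field names and the masking rule are hypothetical:

```kotlin
// Whitelist-based anonymization: every field is anonymized by default,
// except those explicitly allowed to be copied as-is.
val whitelist = setOf("order_id", "country", "created_at")

fun anonymizeRecord(record: Map<String, String>): Map<String, String> =
    record.mapValues { (field, value) ->
        if (field in whitelist) value    // safe to copy as-is
        else "x".repeat(value.length)    // mask PII and other sensitive data
    }

fun main() {
    val production = mapOf(
        "order_id" to "10042",
        "email" to "jane@example.com",
        "country" to "DE"
    )
    // Prints: {order_id=10042, email=xxxxxxxxxxxxxxxx, country=DE}
    println(anonymizeRecord(production))
}
```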

Test Execution:-

After the test plan, test case development, and test environment configuration are complete, the test execution phase begins.

In this phase, the manual tests / automation scripts are executed. If any defect is detected during test case execution, it is reported to the developer through the bug tracking system.

If any test case result is a failure, then that particular test case is marked as Fail.

If any test case result matches the expected result, then that particular test case is marked as Pass.

If a module's test cases depend on another module in which a fault is detected, the dependent test cases are marked as Blocked; the main module's fault is corrected first, and then the associated module's test cases are run. For example, module B depends on module A.

If any fault is found in module A, module B's test cases are not executed. First, the fault in module A is corrected and module A's test cases are rerun; if module A's test cases pass, then module B's test cases are executed.

Blocked test cases are executed after the fault is corrected by the developer.


Exit Criteria Evaluation And Reporting:-

In this phase, the test results are evaluated against the exit criteria: predefined conditions for ending the test cycle. At this stage, the test summary report is generated. A document containing a summary of testing activities and final test results is called the Test Summary Report.

Test Closure:-

This is the final stage, where we prepare the Test Closure Report and Test Metrics.

The testing team is called to a meeting to evaluate cycle completion criteria based on test coverage, quality, time, cost, software, and business objectives.

The test team analyses the test artifacts (such as Test cases, defect reports, etc.,) to identify strategies that have to be implemented in the future, which will help to remove process bottlenecks in the upcoming projects.

Test metrics and Test closure reports will be prepared based on the above criteria.

Entry Criteria: Test Case Execution report (make sure there are no high severity defects opened), Defect report

Deliverables: Test Closure report, Test metrics

August 31, 2020 · 8 minutes · Bhumi Khimani
Software Testing Technology

Software testing is a process of checking and validating the functionality of an application to determine whether it meets specified requirements. It is about finding application faults and verifying whether the application operates according to end-user needs.

Important Software Testing Techniques:-

  • Boundary Value Analysis (BVA)
  • Equivalence Class Partitioning
  • Decision Table based testing.
  • State Transition
  • Error Guessing
Boundary Value Analysis (BVA)

  1. Boundary value analysis is based entirely on testing the boundaries between partitions. It includes maximum, minimum, inside and outside boundaries, typical values, and error values. It is generally seen that numerous errors occur at the boundaries of the defined input values rather than in the center.

  2. Also known as BVA, this technique offers a selection of test cases that exercise boundary values. This black-box test method complements equivalence partitioning. It is based on the principle that if a system works well for these particular values, it will work error-free for all values that lie between the two limits.

Let's see one example:

Input condition is valid between 1 and 10. Boundary values: 0, 1, 2 and 9, 10, 11.

Equivalence Class Partitioning

Equivalence class partitioning lets you divide a set of test conditions into partitions that can be considered the same. This software testing technique divides the input domain of a program into classes of data from which test cases can be designed.

Let’s see one example:

Input conditions are valid between

1 to 10 and 20 to 30

Hence, there are five equivalence classes

--- to 0 (invalid)

1 to 10 (valid)

11 to 19 (invalid)

20 to 30 (valid)

31 to --- (invalid)

You select values from each class, i.e.,

-2, 3, 15, 25, 45
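
A hedged JUnit sketch of this example: isValidInput is a hypothetical validator for the two valid ranges, and the test uses one representative value per partition, as listed above:

```kotlin
import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Test

// Hypothetical validator implementing the rule above:
// inputs are valid in the ranges 1..10 and 20..30.
fun isValidInput(value: Int): Boolean = value in 1..10 || value in 20..30

class EquivalencePartitionTest {
    @Test
    fun oneRepresentativePerPartition() {
        assertFalse(isValidInput(-2))  // partition: below 1 (invalid)
        assertTrue(isValidInput(3))    // partition: 1 to 10 (valid)
        assertFalse(isValidInput(15))  // partition: 11 to 19 (invalid)
        assertTrue(isValidInput(25))   // partition: 20 to 30 (valid)
        assertFalse(isValidInput(45))  // partition: 31 and above (invalid)
    }
}
```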

Decision Table Based Testing

The decision table is also known as a cause-and-effect table. This testing technique is suitable for input features that have logical relationships between them. In this technique, combinations of inputs are processed. To identify the test cases with a decision table, we consider conditions, actions, and rules: conditions are taken as inputs and actions as outputs.

Testing Using The Decision Table In The Login Form


| CONDITIONS | CASE 1 | CASE 2 | CASE 3 | CASE 4 |
| EMAIL | F | T | F | T |
| PASSWORD | F | F | T | T |
| OUTPUT | ERROR | ERROR | ERROR | HOME SCREEN |

CASE 1: Email And Password Wrong: Error Message Displayed.

CASE 2: Email True And Password Wrong, Error Message Displayed.

CASE 3: Email Wrong And Password True, Error Message Displayed.

CASE 4: Email And Password True, Redirect to Home screen
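
A hedged JUnit sketch of the four rules; loginOutcome is a hypothetical stand-in for the real login check:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical login check implementing the decision table above:
// only a correct email AND a correct password lead to the home screen.
fun loginOutcome(emailOk: Boolean, passwordOk: Boolean): String =
    if (emailOk && passwordOk) "HOME_SCREEN" else "ERROR"

class LoginDecisionTableTest {
    @Test
    fun allFourRulesOfTheDecisionTable() {
        assertEquals("ERROR", loginOutcome(emailOk = false, passwordOk = false))     // case 1
        assertEquals("ERROR", loginOutcome(emailOk = true, passwordOk = false))      // case 2
        assertEquals("ERROR", loginOutcome(emailOk = false, passwordOk = true))      // case 3
        assertEquals("HOME_SCREEN", loginOutcome(emailOk = true, passwordOk = true)) // case 4
    }
}
```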

State Transition Testing Technique

In the state transition technique, changes in input conditions change the state of the Application Under Test (AUT). This testing technique allows the tester to test the behavior of the AUT by entering various input conditions in a sequence. In the state transition technique, the testing team provides positive as well as negative input test values to evaluate the system behavior.

Below Is A Diagram For The Forgot Password/OTP Process

[Figures: Forgot Password / OTP process diagrams]

  1. First, enter the correct number in the text box and click the RESET PASSWORD button. An OTP is sent to the mobile number.
  2. To reset the password, you must go through the OTP verification. The first time the user enters the correct OTP, they are allowed to go to the password change page.
  3. If the user enters an incorrect OTP the first and second times, the system asks for the OTP a third time.
  4. If the OTP is valid, the user is allowed to go to the password change page; if the OTP is incorrect the third time, an error message is displayed, such as "Your OTP has expired!!".

State Transition Table

| ATTEMPT | CORRECT OTP | INCORRECT OTP |
| [B1] Start | B5 | B2 |
| [B2] First attempt | B5 | B3 |
| [B3] Second attempt | B5 | B4 |
| [B4] Third attempt | B5 | B4 |
| [B5] Access granted | - | - |
| [B6] Account blocked | - | - |
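
A minimal Kotlin sketch of this state machine. The states follow the table; per the prose above, a third incorrect entry leads to the expired/blocked state (the table itself leaves B6 unreached):

```kotlin
// States of the OTP flow, mirroring the state transition table above.
sealed class OtpState {
    object Start : OtpState()                    // B1
    data class Attempt(val n: Int) : OtpState()  // B2..B4: n failed entries so far
    object AccessGranted : OtpState()            // B5
    object Blocked : OtpState()                  // B6: OTP expired / account blocked
}

fun next(state: OtpState, otpCorrect: Boolean): OtpState = when (state) {
    OtpState.Start ->
        if (otpCorrect) OtpState.AccessGranted else OtpState.Attempt(1)
    is OtpState.Attempt ->
        if (otpCorrect) OtpState.AccessGranted
        else if (state.n >= 2) OtpState.Blocked  // third incorrect entry: OTP expires
        else OtpState.Attempt(state.n + 1)
    else -> state // AccessGranted and Blocked are terminal
}
```

A state transition test walks sequences of inputs through next() and asserts that each reached state matches the table.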

Error Guessing Technique

Error guessing is a software testing technique based on guessing the errors that can appear in the code. It relies heavily on experience: test analysts use their experience to guess the problematic parts of the application under test. Therefore, test analysts need to be competent and experienced in order to guess errors well. The technique builds a list of possible errors or error-prone situations, and the tester then writes test cases to uncover those errors. To design test cases based on this software testing technique, the analyst can use experience to identify the conditions.

This technique can be used at any level of testing and for testing the common mistakes like:

  • Divide by zero
  • Inserting blanks in text fields
  • Pressing the enter button without entering values
  • Uploading files that exceed the maximum limits
  • Exception null pointer.
  • Invalid parameters

Let’s see one example:

Suppose a software application has a phone number field, with the requirement that the phone number be numeric and not fewer than 10 digits.

The following are the error-guessing scenarios:

  1. What will be the result if the cellphone number is left blank?
  2. What is the result if a character other than a digit is entered?
  3. What is the result if fewer than 10 digits are entered?
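
A hedged JUnit sketch of these guesses; isValidPhone is a hypothetical validator for the stated requirement:

```kotlin
import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Test

// Hypothetical validator for the requirement above:
// numeric and at least 10 digits.
fun isValidPhone(input: String): Boolean =
    input.length >= 10 && input.all { it.isDigit() }

class PhoneErrorGuessingTest {
    @Test
    fun guessedErrorCases() {
        assertFalse(isValidPhone(""))            // 1. left blank
        assertFalse(isValidPhone("98765abc21"))  // 2. non-digit characters
        assertFalse(isValidPhone("123456789"))   // 3. fewer than 10 digits
        assertTrue(isValidPhone("9876543210"))   // happy path for contrast
    }
}
```
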
August 22, 2020 · 4 minutes · Bhumi Khimani
Manual Testing like a Pro

However advanced automation testing may get, we can't live without manual testing, because not everything can be, and should be, tested with code. A certain level of human involvement is required.

Here's how to be great at manual testing:

Understand the product which you are going to test

  • Before you begin testing any app/website, you should know the concepts of the product, what problems the product solves and how are users going to use it.
  • Here are the steps to clarify the concept of the product:
    • Identifying customer needs.
    • Defining the problem and objectives.
    • Drafting and analysis.
    • Ask for detailed design and drawings.
    • Testing.
    • Final successful delivery.

Have a clear understanding of requirements

  • First, how do we understand the requirements? The steps are mentioned below.
  • There are mainly two types of requirements: 1. Functional 2. Non-functional
  • What are Functional Requirements?
  • Functional requirements define the basic system behavior. Essentially, they are what the system does or must not do, and can be thought of in terms of how the system responds to inputs. Functional requirements usually define if/then behaviors and include calculations, data input, and business processes.
  • What are the Non-Functional Requirements?
  • While functional requirements define what the system does or must not do, non-functional requirements specify how the system should do it. Non-functional requirements do not affect the basic functionality of the system; even if they are not met, the system will still perform its basic purpose.

Changelog and Impacting Area

Ask the developer for a changelog detailing the product changes and listing the impacted areas

  • This will help separate intended changes from actual bugs
  • This will make it easier to understand a bug
  • It will give a glance at the task and the flow in which it is working
  • This will help you prioritize where to look for potential bugs

Test Scenario and Cases

Write down test scenario/cases in Excel for easy reference

  • Understand the Users: To write concrete and effective scenarios, you must understand your users and know their needs and expectations.
  • Create Real-Life, Relevant Situations: Make your scenarios as real as possible.
  • Motivate the User: A well-written scenario should motivate the user to action.
  • How to write test cases:
    • The title must be strong
    • Include a strong description with assumptions and preconditions
    • Keep the test steps clear and concise
    • State the expected result
    • Also, make it reusable

Critical Flows

Check the product critical flows & code impacted flows twice

  • This will ensure that, in case something goes wrong in production, it will not be in a business-critical flow
  • Reason: uncertain code changes can introduce regressions; if we don't check before deploying to the server, problems reach the product. To avoid this, check the critical flows twice

Don't test along with the developer

  • If you test with the developer, you may miss out on edge cases due to the developer's bias or perspective. So make sure you test the app/website once while the developer is not with you.

If in doubt, ask the Developer or Product Lead

  • It always helps to communicate any doubt that you have
  • Perspectives vary, and so do methods, so in case of any doubt ask the developer and correct it
  • Also communicate with the product lead in case of doubt, so that a better output can be produced: a well-defined task with minimal bugs
July 16, 2020 · 3 minutes · Bhumi Khimani