Testing Posts

What is A/B testing?

A/B testing (also known as split testing or bucket testing) is a methodology for comparing two versions of a webpage or app against each other to determine which one performs better.

A/B testing is essentially an experiment where two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.

Running an A/B test that directly compares a variation against a current experience lets you ask focused questions about changes to your website or app and then collect data about the impact of that change.

Testing takes the guesswork out of website optimization and enables data-informed decisions that shift business conversations from "we think" to "we know." By measuring the impact that changes have on your metrics, you can ensure that every change produces positive results.

How A/B testing works

In an A/B test, you take a webpage or app screen and modify it to create a second version of the same page. This change can be as simple as a single headline or button, or as extensive as a complete redesign of the page. Then, half of your traffic is shown the original version of the page (known as the control, or A) and half is shown the modified version (the variation, or B).

As visitors are served either the control or variation, their engagement with each experience is measured and collected in a dashboard and analyzed through a statistical engine. You can then determine whether changing the experience (variation or B) had a positive, negative or neutral effect against the baseline (control or A).
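
As a minimal sketch of how such a random split can be implemented, the snippet below deterministically assigns a visitor to the control or the variation by hashing a visitor ID together with an experiment name; the class name, the experiment name, and the 50/50 split are illustrative assumptions rather than the API of any particular testing tool.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch: deterministic 50/50 bucketing for an A/B test.
// The class name, experiment name and split ratio are assumptions for this example.
public class VisitorBucketer {

    // Returns "A" (control) or "B" (variation) for a given visitor and experiment.
    public static String assignBucket(String visitorId, String experimentName) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] hash = md.digest((visitorId + ":" + experimentName)
                    .getBytes(StandardCharsets.UTF_8));
            // Use the first byte of the hash to split traffic roughly 50/50.
            int bucketValue = hash[0] & 0xFF; // 0..255
            return bucketValue < 128 ? "A" : "B";
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(assignBucket("visitor-42", "homepage-headline-test")); // A or B
    }
}
```

Because the assignment is derived from a hash rather than a fresh random draw on every visit, the same visitor keeps seeing the same version of the page for the duration of the experiment.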

Why you should A/B test

A/B testing allows individuals, teams and companies to make careful changes to their user experiences while collecting data on the impact those changes make. This allows them to construct hypotheses and to learn which elements and optimizations of their experiences impact user behaviour the most. Put another way, it lets them be proven wrong: their opinion about the best experience for a given goal can be shown to be incorrect by an A/B test.

More than just answering a one-off question or settling a disagreement, A/B testing can be used to continually improve a given experience or improve a single goal like conversion rate optimization (CRO) over time.

A B2B technology company may want to improve their sales lead quality and volume from campaign landing pages. In order to achieve that goal, the team would try A/B testing changes to the headline, subject line, form fields, call-to-action and overall layout of the page to optimize for reduced bounce rate, increased conversions and leads and improved click-through rate.

Testing one change at a time helps them pinpoint which changes had an effect on visitor behaviour, and which ones did not. Over time, they can combine the effect of multiple winning changes from experiments to demonstrate the measurable improvement of a new experience over the old one.

This method of introducing changes to a user experience also allows the experience to be optimized for a desired outcome and can make crucial steps in a marketing campaign more effective.

By testing ad copy, marketers can learn which versions attract more clicks. By testing the subsequent landing page, they can learn which layout converts visitors to customers best. The overall spend on a marketing campaign can actually be decreased if the elements of each step work as efficiently as possible to acquire new customers.

A/B testing can also be used by product developers and designers to demonstrate the impact of new features or changes to a user experience. Product onboarding, user engagement, modals and in-product experiences can all be optimized with A/B testing, as long as goals are clearly defined and you have a clear hypothesis.

A/B testing process

The following is an A/B testing framework you can use to start running tests:

Collect data:

Your analytics tool (for example Google Analytics) will often provide insight into where you can begin optimizing. It helps to begin with high traffic areas of your site or app to allow you to gather data faster. For conversion rate optimization, make sure to look for pages with high bounce or drop-off rates that can be improved. Also consult other sources like heatmaps, social media and surveys to find new areas for improvement.

Identify goals:

Your conversion goals are the metrics that you are using to determine whether the variation is more successful than the original version. Goals can be anything from clicking a button or link to product purchases.

Generate test hypothesis:

Once you've identified a goal, you can begin generating A/B testing ideas and test hypotheses for why you think they will be better than the current version. Once you have a list of ideas, prioritize them in terms of expected impact and difficulty of implementation.

Create different variations:

Using your A/B testing software (like Optimize Experiment), make the desired changes to an element of your website or mobile app. This might be changing the colour of a button, swapping the order of elements on the page template, hiding navigation elements, or something entirely custom. Many leading A/B testing tools have a visual editor that makes these changes easy. Make sure to do a test run of your experiment to confirm that the different versions work as expected.

Run experiment:

Kick off your experiment and wait for visitors to participate! At this point, visitors to your site or app will be randomly assigned to either the control or variation of your experience. Their interaction with each experience is measured, counted and compared against the baseline to determine how each performs.

Wait for the test results:

Depending on how big your sample size (the target audience) is, it can take a while to achieve a satisfactory result. A good experimentation tool will tell you when the results are statistically significant and trustworthy; otherwise, it is hard to tell whether your change truly made an impact.

Analyse results:

Once your experiment is complete, it's time to analyse the results. Your A/B testing software will present the data from the experiment and show you the difference between how the two versions of your page performed, and whether there is a statistically significant difference. It is important to achieve statistically significant results, so you’re confident in the outcome of the test.
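
As a hedged illustration of what "statistically significant" means in this context, the sketch below runs a simple two-proportion z-test on hypothetical conversion counts for the control and the variation; the visitor numbers and the 1.96 threshold (a two-sided 95% confidence level) are assumptions for the example, not output from any specific A/B testing product.

```java
// Minimal two-proportion z-test sketch for comparing conversion rates.
// All numbers below are hypothetical; a real tool would also check sample size assumptions.
public class SignificanceCheck {

    public static void main(String[] args) {
        long controlVisitors = 10_000, controlConversions = 500;      // 5.0% conversion
        long variationVisitors = 10_000, variationConversions = 580;  // 5.8% conversion

        double p1 = (double) controlConversions / controlVisitors;
        double p2 = (double) variationConversions / variationVisitors;

        // Pooled conversion rate under the null hypothesis that both versions are equal.
        double pooled = (double) (controlConversions + variationConversions)
                / (controlVisitors + variationVisitors);
        double standardError = Math.sqrt(pooled * (1 - pooled)
                * (1.0 / controlVisitors + 1.0 / variationVisitors));

        double z = (p2 - p1) / standardError;
        boolean significantAt95 = Math.abs(z) > 1.96; // two-sided 95% confidence

        System.out.printf("control=%.3f variation=%.3f z=%.2f significant=%b%n",
                p1, p2, z, significantAt95);
    }
}
```

With these made-up numbers the observed lift clears the threshold; with a smaller sample, the same difference in conversion rate might not be significant.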

If your variation is a winner, congratulations 🎉🎉🎉! See if you can apply learnings from the experiment to other pages of your site, and continue iterating on the experiment to improve your results. If your experiment generates a negative result or no result, don't worry. Use the experiment as a learning experience and generate new hypotheses that you can test.

Whatever your experiment's outcome, use your experience to inform future tests and continually iterate on optimizing your app or site's experience.

November 06, 2023 · 6 minutes · Virendra Harkhani
Regression Testing Techniques & Tools

Regression Testing Techniques:

Retest All:

In this technique, the entire test suite is executed again to verify that no existing functionality has been affected by recent code changes. It's a comprehensive but time-consuming approach.

Selective Regression Testing:

This technique involves selecting a subset of test cases from the existing test suite that is most likely to be affected by the code changes. It's a more efficient approach compared to retesting all test cases.

Test Case Prioritization:

Test cases are prioritized based on factors like their likelihood of failure, criticality, and importance to the application. High-priority test cases are executed first to quickly identify issues.

Test Automation:

Test Automation tools are used to create and execute test scripts that can be easily rerun whenever there are code changes. Popular automation tools include Selenium, Appium, and JUnit for Java applications.
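
As a hedged sketch of what a small automated regression check might look like in the JUnit ecosystem mentioned above, the test below pins down an existing behaviour so that it fails if a later code change breaks it; the PriceCalculator class and its expected value are hypothetical.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical regression test: pins down existing, known-good behaviour so that
// future code changes that alter it are caught automatically.
class PriceCalculatorRegressionTest {

    // Assumed class under test, shown inline to keep the sketch self-contained.
    static class PriceCalculator {
        double totalWithTax(double net, double taxRate) {
            return net + net * taxRate;
        }
    }

    @Test
    void totalWithTaxMatchesPreviouslyVerifiedResult() {
        PriceCalculator calculator = new PriceCalculator();
        // 100.00 net at 20% tax was verified as 120.00 in an earlier release.
        assertEquals(120.00, calculator.totalWithTax(100.00, 0.20), 0.001);
    }
}
```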

Regression Testing Tools:

Selenium:

Selenium is one of the most popular open-source automation testing tools for web applications. It supports multiple programming languages and browsers and allows testers to create robust regression test suites.
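
Below is a minimal Selenium WebDriver sketch in Java that opens a page and checks its title; the URL, the expected title, and the assumption that a ChromeDriver binary is available locally are illustrative, not part of any particular regression suite.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Illustrative Selenium check: the URL and expected title are placeholder assumptions.
// Requires the selenium-java dependency and a ChromeDriver binary on the machine.
class HomepageSmokeTest {

    @Test
    void homepageTitleIsUnchanged() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com");
            assertTrue(driver.getTitle().contains("Example"),
                    "Homepage title changed unexpectedly: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```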

Appium:

Appium is an open-source tool for automating mobile applications on Android and iOS platforms. It can be used for regression testing of mobile apps.

JUnit:

JUnit is a widely-used testing framework for Java applications. It's especially useful for unit and regression testing in the Java ecosystem.

TestNG:

TestNG is another Java-based testing framework that offers more advanced testing features compared to JUnit, making it suitable for regression testing.

Jenkins:

Jenkins is a popular open-source automation server that can be used to automate the execution of regression test suites. It integrates with various testing tools and can schedule test runs based on code changes.

Postman:

Postman is a popular tool for testing RESTful APIs. It can be used for API regression testing to ensure that changes in the API do not break existing functionality.

TestRail:

TestRail is a test management tool that helps teams organize and manage their regression test cases, track test results, and collaborate on testing efforts.

JIRA:

JIRA, developed by Atlassian, is an issue and project tracking tool that can be used for managing and tracking regression test cases and defects.

Travis CI:

Travis CI is a continuous integration tool that can be used to automate the execution of regression tests whenever code changes are pushed to a version control system like GitHub.

CircleCI:

CircleCI is another continuous integration and continuous delivery (CI/CD) platform that supports automated regression testing as part of the software development pipeline.

The choice of regression testing technique and tool depends on the specific requirements of your project, the nature of the application, and the resources available. It's essential to select the most appropriate combination to ensure effective regression testing and maintain the quality of your software.

September 11, 2023 · 2 minutes · Virendra Harkhani
Verification and Validation: What’s the difference?

Differences between Verification and Validation

| Verification | Validation |
| --- | --- |
| It includes checking documents, design, code and programs. | It includes testing and validating the actual product. |
| Verification is static testing. | Validation is dynamic testing. |
| It does not include the execution of the code. | It includes the execution of the code. |
| Methods used in verification are reviews, walkthroughs, inspections and desk-checking. | Methods used in validation are black box testing, white box testing and non-functional testing. |
| It checks whether the software conforms to the specifications or not. | It checks whether the software meets the requirements and expectations of the customer or not. |
| It can find bugs in the early stages of development. | It can only find the bugs that the verification process could not find. |
| The goal of verification is the application and software architecture and the specification. | The goal of validation is the actual product. |
| The quality assurance team does verification. | Validation is executed on the software code with the help of the testing team. |
| It comes before validation. | It comes after verification. |
| It consists of checking documents/files and is performed by humans. | It consists of executing the program and is performed by a computer. |
| Verification refers to the set of activities that ensure the software correctly implements the specified functions. | Validation refers to the set of activities that ensure the software that has been built is traceable to customer requirements. |
| Verification starts after a valid and complete specification is available. | Validation begins as soon as the project starts. |
| Verification is for the prevention of errors. | Validation is for the detection of errors. |
| Verification is also termed white box testing or static testing, as the work product goes through reviews. | Validation can be termed black box testing or dynamic testing, as the work product is executed. |
| Verification finds about 50 to 60% of the defects. | Validation finds about 20 to 30% of the defects. |
| Verification is based on the opinion of the reviewer and may change from person to person. | Validation is based on fact and is often stable. |
| Verification is about the process, standards and guidelines. | Validation is about the product. |

Verification:

Verification is the process of checking that the software achieves its goal without any bugs. It ensures that the product being developed is being built right. It verifies whether the developed product fulfils the requirements that we have. Verification is static testing.

Validation:

Validation is the process of checking whether the software product is up to the mark, in other words, whether it meets the high-level requirements. It checks that what we are developing is the right product, i.e. it validates the actual product against the expected product. Validation is dynamic testing.

December 02, 2023 · 2 minutes · Virendra Harkhani
Regression Testing Vs Retesting

In this article, We will see the difference between regression testing vs retesting.

The concepts of regression testing and retesting are generally confused within the field of test automation. They sound similar and they have correlations too.

The main difference is that regression testing is designed to test or verify for bugs you don’t expect to be there. Retesting is designed to test or verify for bugs you do expect to be there.

What is Regression Testing?

Regression testing is a type of software testing executed to check that a code change has not critically disturbed the current functions and features of an application. In other words, the point of regression testing is to make sure that new updates or features added to the software don't break any previously released updates, features or functions.

What is Retesting?

Retesting is done to ensure that a bug has been fixed and that the previously failed functionality is now working fine. It is a kind of verification process, followed in the testing field, for fixed bugs. Many testers confuse regression testing with retesting.

Generally, testers find bugs while testing the software application or website and assign them to the developers to fix them. Then the developers fix the bug and assign it back to the testers/QA for verification. This continuous process is called Retesting.

Difference between Regression Testing and Retesting.

We could say that regression testing is a type of retesting. Retesting really means testing something again. And when we are regression testing, we are testing something that we have tested numerous times before.

But focusing on what the two have in common might confuse more than it helps. So, for the sake of clarity, below is an overview of the key differences.

Regression Testing Vs Retesting

| Regression Testing | Retesting |
| --- | --- |
| Regression testing is carried out to confirm that a recent program or code change has not negatively affected existing features. | Retesting is carried out to confirm that the test cases that failed in the final execution pass after the defects have been fixed. |
| The purpose of regression testing is to check that new code fixes do not have any side effects on existing functionality. | Retesting is performed on the basis of the defect fixes. |
| Regression testing can be carried out in parallel with retesting, based on the project and the availability of resources. | Retesting is carried out before regression testing because the priority of retesting is higher than that of regression testing. |
| Defect verification is not part of regression testing. | Defect verification is part of retesting. |
| Regression testing is known as generic testing. | Retesting is planned testing. |
| Regression testing is executed for passed test cases. | Retesting is executed for failed test cases only. |
| Regression testing checks for unexpected side effects. | Retesting ensures that the original fault has been corrected. |
| Regression testing is only done when modifications or fixes become necessary in an existing project. | Retesting executes the failed test case for a defect with the same data and in the same environment, against a new build. |
| Regression testing can be automated; doing it manually can be time-consuming and expensive. | The test cases for retesting cannot be automated. |
| Test cases for regression testing can be derived from the functional specification, user manuals and tutorials, and defect reports for corrected problems. | Test cases for retesting cannot be captured before testing starts. |
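
As a hedged, minimal sketch of how a team might keep the two activities separate in an automated suite, the example below uses JUnit 5 tags so a retest of one fixed defect can be run on its own while the wider regression checks run separately; the class under test, the tag names, and the defect reference are illustrative assumptions.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Illustrative separation of retesting and regression testing with JUnit 5 tags.
// With Maven Surefire, tags can be selected e.g. via -Dgroups=retest or -Dgroups=regression.
class CheckoutTests {

    // Assumed class under test, inlined to keep the sketch self-contained.
    static class Checkout {
        double applyDiscount(double total, double discountPercent) {
            return total - total * discountPercent / 100.0;
        }
    }

    @Test
    @Tag("retest")
    void discountIsAppliedOnceAfterDefectFix() {
        // Retest: verifies that a specific, previously failed case now passes.
        assertEquals(90.0, new Checkout().applyDiscount(100.0, 10.0), 0.001);
    }

    @Test
    @Tag("regression")
    void zeroDiscountLeavesTotalUnchanged() {
        // Regression: guards existing behaviour that the fix must not break.
        assertEquals(100.0, new Checkout().applyDiscount(100.0, 0.0), 0.001);
    }
}
```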

Conclusion:

In case you are still confused, it may be easier to think of retesting as checking whether the bug was actually fixed, and regression testing as checking whether you created any new bugs with your fix. While you may identify regression issues during a retest, they are in fact separate and should be treated as distinct forms of testing.

June 01, 2023 · 3 minutes · Virendra Harkhani
Step by Step Mobile App Testing Process

What is Mobile App Testing?

Mobile app testing is the process of testing the functionality and usability of a mobile application to make sure that it meets the requirements and is ready for launch.

What are Mobile application testing requirements?

  • Resolutions of screen
  • OS Version (For android or iOS)
  • Orientation of Screen (landscape, portrait)
  • GPS On/Off
  • Type of application

Types of applications:

  • Mobile Web application:

In a mobile web application, the Website opens on the device with the help of the mobile browser. The Mobile web app does not require any installation.

  • Native application:

The native application is specifically developed for one platform (iOS, Windows 10 Mobile, Android)

  • Hybrid Application:

A hybrid application is a combination of a mobile web application and a native application. It can be defined as mobile website content shown in an application format.

Step by step Mobile App Testing Process

1. Planning:

Before starting testing, we need to plan what we have to test, and to plan the test we must analyse the requirements.

2. Testing Types Identification:

Before testing any mobile app, we identify what kind of testing is required for that particular app: functional, usability, compatibility, performance or security testing, etc. We also determine which functional requirements should be tested.

Identify what target devices to include:

  • Identify what devices the application will support;
  • Identify the earliest version of the relevant operating systems that will be supported;
  • Choosing different screen sizes.

3. Test Case and Script Design:

Make a test case document for each and every feature and functionality.

Make separate suites for manual test cases and automated test scripts as required. Define any reusable automation scripts and modify them as per the project requirements.

4. Environment Setup:

Download, install and configure the particular application on the different mobile devices to set up the testing environment. Before starting with the actual testing, make sure the test version of the application is established.

5. Manual and Automation Testing:

We are required to execute both manual and automation test cases.

You have already identified and created which tests and scripts to use. In this phase, you’ll actually run these on the basic functionalities to ensure that there are no bugs.
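
As a hedged sketch only, the snippet below shows what a basic automated check of an Android build might look like with the Appium Java client mentioned earlier in this article; the server URL, capabilities, app path, and element locator are placeholder assumptions and will vary by project and Appium version.

```java
import java.net.URL;

import io.appium.java_client.AppiumBy;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.options.UiAutomator2Options;

// Illustrative Appium sketch: all capability values and locators below are placeholders.
// Assumes an Appium server is running locally and the APK path is valid.
public class LoginScreenCheck {

    public static void main(String[] args) throws Exception {
        UiAutomator2Options options = new UiAutomator2Options()
                .setDeviceName("Android Emulator")
                .setApp("/path/to/app-under-test.apk");

        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/"), options);
        try {
            // Basic functionality check: the login button must be present and clickable.
            driver.findElement(AppiumBy.accessibilityId("login_button")).click();
        } finally {
            driver.quit();
        }
    }
}
```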

6. Usability Testing:

The purpose of usability testing is to uncover how easy the product is to use and understand, and whether it satisfies the user's needs well. Usability testing looks at how effectively users can use the product to reach specified goals.

7. UI Testing:

UI testing is one of the very important tests in mobile application testing.

Some characteristics that should be tested for every app:

1. Screen Resolutions:

Common screen resolutions are:

  • 640 × 480
  • 800 × 600
  • 1024 × 768
  • 1280 × 800
  • 1366 × 768
  • 1400 × 900
  • 1680 × 1050

Verification must be done starting from the smallest resolution up to the biggest one. If the application has a large list of cards with information, those also need to be tested on different resolutions to check how their content wraps.
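
A minimal, hedged sketch of such a resolution sweep using Selenium is shown below; the resolutions come from the list above, while the URL and the element being checked are placeholder assumptions.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Illustrative resolution sweep: resizes the browser window through the common
// resolutions listed above and checks that a key element stays visible.
// The URL and element locator are placeholder assumptions.
public class ResolutionSweep {

    public static void main(String[] args) {
        int[][] resolutions = {
                {640, 480}, {800, 600}, {1024, 768}, {1280, 800},
                {1366, 768}, {1400, 900}, {1680, 1050}
        };

        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com");
            for (int[] res : resolutions) {
                driver.manage().window().setSize(new Dimension(res[0], res[1]));
                boolean headerVisible = driver.findElement(By.tagName("h1")).isDisplayed();
                System.out.printf("%dx%d header visible: %b%n", res[0], res[1], headerVisible);
            }
        } finally {
            driver.quit();
        }
    }
}
```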

2. Screen Size:

There are too many variations in screen sizes in smart devices especially.

Make sure the control size looks good, and the control is properly visible on the screen while testing.

8. Compatibility Testing:

Test the application with different browsers, mobile devices, screen resolutions, and OS versions as per the requirements.

9. Beta Testing:

When the regression testing is completed by the QA team, the build moves to User Acceptance Testing and this is done by the client. They make sure the application is bug-free and working as expected on every defined browser.

10. Performance Testing:

Performance testing covers the application's behaviour while changing the connection from 2G or 3G to WiFi, as well as responsiveness, battery consumption, stability, etc.

Test the application to measure scalability and performance issues.

11. Localization Testing:

Localization testing checks whether the product, application, or document content can be adjusted to meet the cultural, linguistic, and other requirements of a specific region or locale.

12. Security Testing:

In Security Testing, ensure that the application is secure by validating SQL injection, data dumps, session hijacking, packet sniffing, and SSL.
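
As one small, hedged illustration of the SQL injection part of such a check, the test below posts a classic injection payload to a hypothetical login endpoint and asserts that the request is not accepted; the endpoint, the form field names, and the expected status code are assumptions for the example.

```java
import static org.junit.jupiter.api.Assertions.assertNotEquals;

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

import org.junit.jupiter.api.Test;

// Hypothetical security regression check: the endpoint, parameters and expected
// behaviour are illustrative assumptions, not a real API.
class SqlInjectionLoginTest {

    @Test
    void loginRejectsSqlInjectionPayload() throws Exception {
        String injection = URLEncoder.encode("' OR '1'='1", StandardCharsets.UTF_8);
        String payload = "username=admin&password=" + injection;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://app.example.com/api/login"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A successful (200) login with this payload would indicate an injection flaw.
        assertNotEquals(200, response.statusCode(),
                "Login endpoint appears to accept a SQL injection payload");
    }
}
```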

13. Device Testing:

Device testing ensures that the application works as expected on the actual device.

Execute test cases and scripts in all the devices, in the cloud, and/or in physical devices in the lab or via testing tools.

Tips to test mobile application

  1. Learn the Whole app before going to the test.
  2. Remember, you are testing a mobile app and not a desktop or web application.
  3. Take into account the operating system and hardware specifications of the device you are testing.
  4. Test on real devices for better testing results.
  5. Use mobile application testing tools that you are familiar with; do not choose a tool just because of its popularity.
  6. Use cloud mobile testing.
  7. Test the mobile app in both portrait and landscape screen modes.
  8. Use Emulators and simulators whenever required.
  9. Verify the performance of the application.
  10. Do not automate everything.
  11. Get more accurate results using beta testing
  12. Time management for various testing activities.
May 05, 2023 · 4 minutes · Virendra Harkhani
What is Test Plan? – A Complete Guide

A test plan often lists the requirements, risks, test cases, testing environments, business and quality objectives, test timelines, and other things.

What is Test Plan?

The strategy, goals, timetable, estimation, deliverables, and resources needed to carry out testing on a software product are all described in detail in a test plan. The test plan aids in estimating the amount of work required to verify the application’s quality. The test manager carefully monitors and controls every aspect of the test plan to ensure that software testing activities are carried out according to a defined methodology.

Types of Test Plans

  • Master Test Plan
  • Phase Test Plan
  • Testing Type-Specific Test Plans

Master Test Plan

A master test plan is a test plan that covers multiple levels of testing. It contains the complete, overall test plan.

Phase Test Plan

A phase test plan is a type of test plan that addresses one phase of the testing strategy, for instance with a list of test cases, a list of tools, etc.

Specific Test Plan

A specific test plan is created for major testing types such as security testing, load testing, and performance testing; in other words, it is a test plan dedicated to a particular (often non-functional) type of testing.

What is the Importance of a Test Plan?

There are numerous advantages to creating a test plan document, including assisting customers, business managers, and developers outside the test team in comprehending the specifics of testing.

Our thinking is guided by the Test Plan. It is like a set of rules that must be followed.

The Test Plan contains important details like test estimation, test scope, and test strategy so that it can be reviewed by the Management Team and used again for other projects.

How to write a Test Plan?

Creating a test plan is a critical task for the management team. The steps are as follows:

1. Analyse the product

Testing the product without any knowledge is next to impossible. One should learn about the product before testing it. You should look around the website and read the documentation for the product. You can learn how to use the website and all of its features by reading the product documentation. If you’re not sure about anything, you could talk to a customer, a developer, or a designer to learn more.

2. Develop a Test Strategy

In software testing, developing a Test Plan begins with developing a Test Strategy. A high-level document known as a Test Strategy is typically created by the Test Manager. This document explains:

The testing effort and costs are determined by the project’s testing objectives and methods. The below steps should be followed:

A. Define the Scope of Testing

Precise customer requirements, a project budget, product specifications, and the talents and skills of your test team are all necessary for determining the scope.

B. Identify the Testing Type

A typical test procedure with an anticipated outcome is referred to as a Testing Type.

Each type of testing is designed to find a particular kind of product bug. However, the objective of all testing types is the same: “Early detection of all defects before releasing the product to the customer.”

C. Document Risk & Issues

Risk is an uncertain future event that has a chance of happening and the potential to lose money. When the risk occurs, it becomes the “problem.”

D. Create Test Logistics

The Test Manager should respond to the following questions in Test Logistics:

  • Who will examine it?

Although the testers' precise names may not be known, the type of tester required can be identified.

  • When will the test take place?

Development activities must be matched to test activities.

3. Define the Test Objective

The overall objective and level of achievement of the test execution are the Test Objectives. The testing aims to uncover as many software flaws as possible; before releasing the software under test, make sure it doesn’t have any bugs.

The following two steps should be taken to define the test objectives:

  • List all software features that might need to be tested.
  • Using the aforementioned characteristics, define the test’s objective or target.

4. Define Test Criteria

A standard or rule on which a test procedure or test judgment can be based is called a test criterion. The two types of test criteria are:

Suspension Criteria: Define the conditions under which testing is suspended. If the suspension criteria are met during testing, the active test cycle is suspended until the underlying issues are resolved.

Exit Criteria: These define the criteria for a test phase's successful completion. The exit criteria, i.e. the intended test results, must be met before moving on to the subsequent development phase. Example: all critical test cases must pass 95% of the time.
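
To make the example exit criterion concrete, here is a tiny sketch that computes the pass rate of a set of critical test cases and compares it with the 95% threshold; the test-case counts are hypothetical.

```java
// Minimal sketch of evaluating the example exit criterion above.
// The test-case counts are hypothetical numbers for illustration.
public class ExitCriteriaCheck {

    public static void main(String[] args) {
        int criticalTestCases = 200;
        int criticalPassed = 192;

        double passRate = 100.0 * criticalPassed / criticalTestCases; // 96.0%
        boolean exitCriterionMet = passRate >= 95.0;

        System.out.printf("Critical pass rate: %.1f%% -> exit criterion met: %b%n",
                passRate, exitCriterionMet);
    }
}
```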

5. Resource Planning

A resource plan is a comprehensive list of all the resources necessary to complete a project task. The number of resources (employees, equipment, and materials) required to complete a project can be determined with the assistance of resource planning, which is an important part of the test planning process. As a result, the Test Manager can accurately estimate the project’s schedule and budget.

6. Plan Test Environment

A set of software and hardware on which the testing team will run test cases is called a testing environment. Real business and user environments, in addition to physical environments like servers and front-end running environments, make up the test environment.

7. Schedule & Estimation

A common term in project management is “making a schedule.” The Test Manager can use Test Planning as a tool for monitoring project progress and controlling cost overruns by creating a solid schedule.

Deadline for employee and project: The schedule is affected by working days, the project’s deadline, and the availability of resources.

Estimating the project: Based on the estimate, the Test Manager knows the project's completion time, so that he or she can create the right schedule for the project.

Project Threat: Because the Test Manager is aware of the risk, he or she can add sufficient additional time to the project schedule to address it.

8. Test Deliverables

A list of all the documents, tools, and other parts that need to be made and kept up to support the testing effort is called a Test Deliverable.

After the testing cycles have ended, test deliverables are provided.

Examples include the Defect Report, Installation/Test Procedures Guidelines, Release Notes, and Test Reports.

April 10, 2023 · 5 minutes · Virendra Harkhani
7 Principles of Software Testing

Software testing is the process of executing a program with the intention of finding errors. Our software needs to be error-free in order to perform well. If testing is carried out successfully, it will remove the known errors from the software.

7 Principles of Software Testing

There are seven principles of software testing as below:

  • Testing shows the presence of defects
  • Exhaustive testing is not possible
  • Early testing
  • Defect clustering
  • Pesticide paradox
  • Testing is context-dependent
  • Absence of errors fallacy

1) Testing shows the presence of defects

The test engineer tests the application to find bugs or defects, but testing can only show that defects are present, not that the software is free of them. Most testing should be traceable back to the customer's requirements, which means finding any flaws that might prevent the product from meeting the customer's needs. This is the primary goal of testing, and a variety of testing methods and techniques are used to uncover as many unknown bugs as possible.

We can reduce the number of bugs in any application by testing it. However, this does not guarantee that the application is free of defects; software may appear bug-free after multiple types of testing. If the end-user encounters bugs that were not discovered during the testing process, they will have to be fixed after deployment on the production server.

2) Exhaustive Testing is not possible

During the actual testing process, it is often very difficult to test all the modules and their features with every valid and invalid combination of the input data.

As a result, rather than carrying out exhaustive testing, which would require an endless number of combinations and waste most of the effort, and because the product timelines do not allow for such testing, we cover a selected set of variations based on the importance of the modules.

3) Early Testing

In this context, “early testing” refers to all testing activities that should begin in the “requirement analysis stage” of the software development life cycle in order to find defects. This is because if we find bugs early enough, they can be fixed right away, which may save us a lot of money over bugs that are found later in the testing process.

We will need the requirement specification documents in order to carry out testing; therefore, if the requirements are defined incorrectly, they can be corrected immediately rather than being addressed at a later stage, such as the development phase.

4) Defect clustering

During the testing process, defect clustering means that a large number of the bugs found are concentrated in a small number of modules. This is due to a number of factors, such as the complexity of those modules, difficult code, and so on.

The Pareto Principle applies to these kinds of software or applications: approximately eighty percent of the complications are found in twenty percent of the modules. This helps us identify the risky modules, but if the same tests are run regularly, the approach loses value, because the same tests won't be able to find new defects.
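
As a small, hedged illustration of spotting such clustering from defect data, the snippet below sorts hypothetical per-module defect counts and reports what share of the defects the top 20% of modules account for; the module names and counts are made up for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative defect-clustering check: module names and defect counts are hypothetical.
public class DefectClustering {

    public static void main(String[] args) {
        Map<String, Integer> defectsPerModule = Map.of(
                "checkout", 42, "search", 7, "login", 5, "profile", 3,
                "reports", 2, "settings", 1, "help", 1, "about", 0,
                "invoices", 36, "notifications", 3);

        // Sort modules by defect count, highest first.
        Map<String, Integer> sorted = defectsPerModule.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                        (a, b) -> a, LinkedHashMap::new));

        int totalDefects = sorted.values().stream().mapToInt(Integer::intValue).sum();
        int topModules = Math.max(1, sorted.size() / 5); // top 20% of modules

        int topDefects = sorted.values().stream().limit(topModules)
                .mapToInt(Integer::intValue).sum();

        System.out.printf("Top %d of %d modules hold %.0f%% of the %d defects%n",
                topModules, sorted.size(), 100.0 * topDefects / totalDefects, totalDefects);
    }
}
```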

5) Pesticide paradox

This principle states that if the same set of test cases is run repeatedly over a period of time, those test cases will no longer be able to detect new bugs in the software or application. To overcome this pesticide paradox, it is critical to review all test cases frequently. Additionally, new and different tests must be written to exercise multiple components of the software or application and help discover additional bugs.

6) Testing is context-dependent

According to the context-dependent principle, testing differs across domains such as commercial websites, e-commerce sites, and so forth. Because each application has its own requirements, features, and functionality, a commercial site and an e-commerce site are tested in different ways. Such applications are checked with the help of different types of testing, different procedures, approaches, and strategies. As a result, the application's context determines how it is tested.

7) Absence of errors fallacy

We might say that the application is 99 percent bug-free once it has been tested thoroughly and no bugs have been found before release. However, if the application has been tested against the wrong requirements, flaws will still be discovered and will need to be fixed quickly, because the testing was done against an incorrect specification that does not correspond to the client's requirements. According to the absence of errors fallacy, if the application is unusable and unable to fulfil the requirements and needs of the client, then finding and fixing bugs in it does not help.

March 03, 2023 · 4 minutes · Virendra Harkhani
Manual Testing Interview Questions – Every QA Should Read [Part - 2]

In today’s competitive world, testing is critical to the success of any software product. Manual tests are important in software development because they can be used in situations where automated testing isn’t possible. This Blog about Manual Testing Interview Questions will help you learn software testing.

With this thorough list of over 120 manual testing interview questions and answers, you’ll be ready for your software testing interviews. These manual testing interview questions are appropriate for both fresher and experienced candidates.

Let’s start by going through some of the most common Manual Testing Interview Questions.

16) What are the advantages of manual testing?

  • Manual testing is cheaper as compared to automation testing.
  • Analysing the product from the point of view of an end-user is only possible with manual testing.
  • GUI testing can also be done accurately with manual testing, because visual accessibility and preferences are difficult to test with automation.
  • Manual testing is used where test scripts are not repeated and reused many times, and mainly for short-term projects.
  • Manual testing is best at an early stage of development.

17) What are the drawbacks of manual testing?

  • Some types of testing are not possible to do manually like load testing, performance testing, etc.
  • Some types of testing, such as regression testing, are very time-consuming when done manually.
  • Manual testing has a limited scope as compared to automation testing.
  • For long-term projects, manual testing is very expensive.

18) What’s the role of documentation in Manual Testing?

Documentation plays an important role in achieving good software testing. In the documentation, we are including details like project requirements and specifications, designs, basic business rules, inspection reports, configurations, test planning, test cases, bug reporting, user manual, etc.

Using test case documentation, it is easy to estimate the testing effort that will need to be spent, along with test tracking and requirement tracing. Some of the documentation commonly associated with software testing is listed below:

  • Test Plan
  • Test Scenario
  • Test Case
  • Traceability Matrix

19) What makes a good test engineer?

A software test engineer is any professional who ensures that the product meets all the expectations and requirements. A software test engineer creates a process for testing a particular product.

  • A good tester should easily understand the priority of the task and should have the ability to take the requirements of the customer.
  • A good test engineer should have the ability to assert his or her ideas while maintaining a cooperative relationship with developers, and the communication skills to report a bug, which is negative news, in a positive way to developers as well as to customers and management.
  • Ability to take a risk whenever they need to make important decisions

20) What is the test harness?

A test harness is a collection of software and test data. Using the test harness, a program unit is tested by running it under different conditions, such as stress, load, or data-driven scenarios, while observing its behaviour and outcomes. A test harness is mainly divided into two parts:

  • A Test Execution Engine
  • Test script repository

21) What is test closure?

Test closure is a document that summarises all the test cases created during the software development life cycle. It also details the analysis of the bugs and errors found and removed. Test closure additionally contains a report of executed test cases, the total number of open bugs, and the total number of rejected bugs.

22) Do you know, the difference between Positive and Negative Testing?

| Positive Testing | Negative Testing |
| --- | --- |
| Positive testing ensures that the application works as expected; if it does not, the test fails. | Negative testing ensures that the application can handle invalid input or unwanted user behaviour. |
| In this testing, the tester tests the application with a valid set of data. | In this testing, the tester tests the application with an invalid set of data and checks its validation against invalid inputs. |
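
A small sketch of the same idea in test code is shown below: one test feeds a valid input (positive testing) and the other feeds an invalid input (negative testing); the AgeValidator class is a hypothetical unit under test, inlined to keep the example self-contained.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Illustrative positive vs negative tests against a hypothetical validator.
class AgeValidatorTest {

    // Assumed unit under test, inlined so the sketch is self-contained.
    static class AgeValidator {
        int parseAge(String input) {
            int age = Integer.parseInt(input);
            if (age < 0 || age > 150) {
                throw new IllegalArgumentException("Age out of range: " + age);
            }
            return age;
        }
    }

    @Test
    void positiveTestValidAgeIsAccepted() {
        assertEquals(30, new AgeValidator().parseAge("30"));
    }

    @Test
    void negativeTestOutOfRangeAgeIsRejected() {
        assertThrows(IllegalArgumentException.class,
                () -> new AgeValidator().parseAge("-5"));
    }
}
```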

23) Define what is a critical bug.

A critical bug is a bug that impacts major functionality of the given application. This means it affects a large area of functionality or breaks some functionality entirely, and there is no workaround for the problem. The application cannot be delivered to the end-user until the critical bug is fixed.

24) What is the pesticide paradox? How to overcome it?

Based on the pesticide paradox, if the same tests are carried out again and again, the outcome of those test cases stays the same, so the tester is not able to find new bugs with them. Developers will also be extra careful in the parts where the tester found more bugs and might not look into other areas.

Below describe Methods to prevent pesticide paradox are following:

  • Write a whole new, different set of test cases continually to exercise different parts of the software.
  • On a daily basis, review the existing test cases and add new test cases to them.

Using the above methods, it is possible to find more bugs in the areas where the number of bugs found has dropped.

25) What is Defect Cascading in Software Testing?

Defect cascading is when one defect triggers other defects in the application. When a defect goes unnoticed during testing, it invokes other defects, and as an outcome a greater number of defects crop up in the later stages of development. If defect cascading continues, it impacts other components of the application and determining the affected component becomes more difficult. You can create different test cases to resolve this issue, but it is difficult and time-consuming.

26) What is the term ‘quality’ mean when testing?

Quality software is defect-free, delivered on time and within budget, meets requirements and expectations, and is maintainable. Still, "quality" is a subjective term. Quality depends on who the "customer" is and their overall influence in the scheme of things. The accounting department might define quality in terms of earnings, while an end-user might describe quality as user-friendly and defect-free.

27) What is black box testing, and what are the various techniques?

Black box testing, also known as specification-based testing, analyses the functionality of the software without knowledge of the internal structure of the application. The goal of this testing is to check that the whole workflow of the system works correctly and meets user demands. Various black box testing techniques are listed below (a boundary-value sketch follows the list):

  • Equivalence Partitioning
  • Boundary Value Analysis
  • Decision Table Based Technique
  • Cause-effect Graphing
  • Use Case Testing
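
As a hedged illustration of one of these techniques, boundary value analysis, the parameterized test below probes values at and around the edges of a hypothetical valid range of 1 to 100; the range and the validator are assumptions for the example.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Illustrative boundary value analysis against a hypothetical quantity validator
// that accepts values from 1 to 100 inclusive.
class QuantityBoundaryTest {

    static boolean isValidQuantity(int quantity) {
        return quantity >= 1 && quantity <= 100;
    }

    @ParameterizedTest
    @CsvSource({
            "0, false",   // just below the lower boundary
            "1, true",    // lower boundary
            "2, true",    // just above the lower boundary
            "99, true",   // just below the upper boundary
            "100, true",  // upper boundary
            "101, false"  // just above the upper boundary
    })
    void quantityBoundaries(int quantity, boolean expectedValid) {
        assertEquals(expectedValid, isValidQuantity(quantity));
    }
}
```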

28) What is white box testing, and what are the various techniques?

White box testing, also known as structure-based testing, requires knowledge of the internal structure of the application. The purpose of this testing is to improve design and usability, check the flow of inputs and outputs, and enhance security. Below are the various kinds of white box testing techniques:

  • Statement Coverage
  • Decision Coverage
  • Condition Coverage
  • Multiple Condition Coverage

29) What are the Experience-based testing techniques?

Experience-based testing is all about discovery, research, and learning. The tester continuously studies and analyses the product and accordingly applies his or her skills, intuition, and experience to develop test strategies and test cases and perform the necessary testing. Various experience-based testing techniques are:

  • Exploratory testing
  • Error Guessing

30) What is a top-down and bottom-up approach to testing?

Top-Down – Testing occurs from top to bottom. That is, high-level modules are tested first and low-level modules after that. Lastly, the low-level modules are integrated with the high-level ones to guarantee that the system works as expected.

Bottom-Up – Testing occurs from the lowest levels up to the higher levels. The lowest-level modules are tested first and the high-level modules thereafter. Lastly, the high-level modules are integrated with the low-level ones to guarantee that the system works as intended.

February 03, 2023 · 6 minutes · Virendra Harkhani