Non-functional Testing Posts

What is A/B testing?

A/B testing (also known as split testing or bucket testing) is a methodology for comparing two versions of a webpage or app against each other to determine which one performs better.

A/B testing is essentially an experiment where two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.

Running an A/B test that directly compares a variation against a current experience lets you ask focused questions about changes to your website or app and then collect data about the impact of that change.

Testing takes the guesswork out of website optimization and enables data-informed decisions that shift business conversations from "we think" to "we know." By measuring the impact that changes have on your metrics, you can ensure that every change produces positive results.

How A/B testing works

In an A/B test, you take a webpage or app screen and modify it to create a second version of the same page. The change can be as simple as a new headline or button, or as extensive as a complete redesign of the page. Half of your traffic is then shown the original version of the page (known as the control, or A) and the other half is shown the modified version (the variation, or B).

As visitors are served either the control or the variation, their engagement with each experience is measured, collected in a dashboard and analysed through a statistical engine. You can then determine whether changing the experience (the variation, or B) had a positive, negative or neutral effect compared with the baseline (the control, or A).
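
To make the split concrete, here is a minimal Python sketch of how a testing tool might assign visitors to buckets, assuming a hash-based scheme. The function and experiment names are illustrative, not any particular vendor's API.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor into control (A) or variation (B)."""
    # Hash the experiment and user IDs together so the same visitor
    # always lands in the same bucket for a given experiment.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # map to a float in [0, 1)
    return "A" if bucket < split else "B"

# The same visitor gets the same variant on every visit, with no stored state.
print(assign_variant("visitor-123", "homepage-headline"))
```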

Why you should A/B test

A/B testing allows individuals, teams and companies to make careful changes to their user experiences while collecting data on the impact those changes make. This lets them construct hypotheses and learn which elements and optimizations of their experiences influence user behaviour the most. It can also prove them wrong: an A/B test can show that their opinion about the best experience for a given goal does not hold up against the data.

More than just answering a one-off question or settling a disagreement, A/B testing can be used to continually improve a given experience or improve a single goal like conversion rate optimization (CRO) over time.

A B2B technology company may want to improve its sales lead quality and volume from campaign landing pages. To achieve that goal, the team could A/B test changes to the headline, subject line, form fields, call-to-action and overall layout of the page, optimizing for reduced bounce rate, increased conversions and leads, and improved click-through rate.

Testing one change at a time helps them pinpoint which changes had an effect on visitor behaviour, and which ones did not. Over time, they can combine the effect of multiple winning changes from experiments to demonstrate the measurable improvement of a new experience over the old one.

This method of introducing changes to a user experience also allows the experience to be optimized for a desired outcome and can make crucial steps in a marketing campaign more effective.

By testing ad copy, marketers can learn which versions attract more clicks. By testing the subsequent landing page, they can learn which layout converts visitors to customers best. The overall spend on a marketing campaign can actually be decreased if the elements of each step work as efficiently as possible to acquire new customers.

A/B testing can also be used by product developers and designers to demonstrate the impact of new features or changes to a user experience. Product onboarding, user engagement, modals and in-product experiences can all be optimized with A/B testing, as long as goals are clearly defined and you have a clear hypothesis.

A/B testing process

The following is an A/B testing framework you can use to start running tests:

Collect data:

Your analytics tool (for example Google Analytics) will often provide insight into where you can begin optimizing. It helps to start with high-traffic areas of your site or app, so you can gather data faster. For conversion rate optimization, look for pages with high bounce or drop-off rates that can be improved. Also consult other sources, such as heatmaps, social media and surveys, to find new areas for improvement.

Identify goals:

Your conversion goals are the metrics that you are using to determine whether the variation is more successful than the original version. Goals can be anything from clicking a button or link to product purchases.

Generate test hypothesis:

Once you've identified a goal, you can begin generating A/B testing ideas and test hypotheses for why you think they will be better than the current version. Once you have a list of ideas, prioritize them in terms of expected impact and difficulty of implementation.
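
One common way to do that prioritization is a simple scoring model such as ICE (impact, confidence, ease). A minimal sketch follows; the ideas and scores below are made up for illustration.

```python
# Hypothetical backlog of A/B test ideas, each scored 1-10 per ICE axis.
ideas = [
    {"name": "Shorter signup form", "impact": 8, "confidence": 6, "ease": 7},
    {"name": "New hero headline",   "impact": 6, "confidence": 7, "ease": 9},
    {"name": "Full page redesign",  "impact": 9, "confidence": 4, "ease": 2},
]

def ice_score(idea: dict) -> int:
    # Multiplying the three axes rewards ideas strong on all of them.
    return idea["impact"] * idea["confidence"] * idea["ease"]

# Test the highest-scoring ideas first.
for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{ice_score(idea):>4}  {idea['name']}")
```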

Create different variations:

Using your A/B testing software (like Optimize Experiment), make the desired changes to an element of your website or mobile app. This might be changing the colour of a button, swapping the order of elements on the page template, hiding navigation elements, or something entirely custom. Many leading A/B testing tools have a visual editor that makes these changes easy. Make sure to trial-run your experiment and confirm that the different versions work as expected.

Run experiment:

Kick off your experiment and wait for visitors to participate! At this point, visitors to your site or app will be randomly assigned to either the control or variation of your experience. Their interaction with each experience is measured, counted and compared against the baseline to determine how each performs.

Wait for the test results:

Depending on how big your sample size (the target audience) is, it can take a while to reach a satisfactory result. A good testing tool will tell you when the results are statistically significant and trustworthy; without that, it is hard to tell whether your change truly made an impact.

Analyse results:

Once your experiment is complete, it's time to analyse the results. Your A/B testing software will present the data from the experiment, show you how the two versions of your page performed, and tell you whether the difference between them is statistically significant. Achieving statistical significance is important, so that you can be confident in the outcome of the test.
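
For intuition about what the statistical engine is doing, here is a minimal sketch of a two-proportion z-test, one common way to compare conversion rates, using only the Python standard library. The conversion counts are made up.

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test for conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

# Example: 200/5000 conversions on control vs 260/5000 on the variation.
z, p = ab_significance(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```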

If your variation is a winner, congratulations 🎉🎉🎉! See if you can apply learnings from the experiment to other pages of your site, and continue iterating on the experiment to improve your results. If your experiment produces a negative result or no result, don't worry: use it as a learning experience and generate new hypotheses that you can test.

Whatever your experiment's outcome, use your experience to inform future tests and continually iterate on optimizing your app or site's experience.

November 06, 2023 · 6 minutes · Virendra Harkhani
Regression Testing Techniques & Tools

Regression Testing Techniques:

Retest All:

In this technique, the entire test suite is executed again to verify that no existing functionality has been affected by recent code changes. It's a comprehensive but time-consuming approach.

Selective Regression Testing:

This technique involves selecting the subset of test cases from the existing test suite that is most likely to be affected by the code changes. It's a more efficient approach than retesting all test cases.
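
As a rough illustration, selection can be as simple as mapping changed source files to the test cases that cover them. The file names, test names and mapping below are hypothetical.

```python
# Hypothetical mapping from source modules to the tests that cover them.
COVERAGE_MAP = {
    "checkout.py": ["test_payment_flow", "test_discount_codes"],
    "search.py":   ["test_search_results", "test_search_filters"],
    "auth.py":     ["test_login", "test_password_reset"],
}

def select_tests(changed_files: list[str]) -> set[str]:
    """Pick only the test cases whose covered modules were changed."""
    selected: set[str] = set()
    for path in changed_files:
        selected.update(COVERAGE_MAP.get(path, []))
    return selected

# A commit touching checkout.py triggers only the two checkout tests.
print(select_tests(["checkout.py"]))
```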

Test Case Prioritization:

Test cases are prioritized based on factors such as their likelihood of failure, criticality, and importance to the application. High-priority test cases are executed first to identify issues quickly.
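
A minimal sketch of such a priority ordering, assuming each test case has already been scored for failure likelihood and criticality (the cases and weights here are made up):

```python
# Hypothetical test cases scored 1-5 for failure likelihood and criticality.
test_cases = [
    {"name": "test_payment_flow",   "failure_likelihood": 4, "criticality": 5},
    {"name": "test_profile_avatar", "failure_likelihood": 2, "criticality": 1},
    {"name": "test_login",          "failure_likelihood": 3, "criticality": 5},
]

def priority(case: dict) -> int:
    # Weight criticality a little more heavily than likelihood of failure.
    return 2 * case["criticality"] + case["failure_likelihood"]

# Run the riskiest, most critical tests first.
for case in sorted(test_cases, key=priority, reverse=True):
    print(f"{priority(case):>2}  {case['name']}")
```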

Test Automation:

Test Automation tools are used to create and execute test scripts that can be easily rerun whenever there are code changes. Popular automation tools include Selenium, Appium, and JUnit for Java applications.

Regression Testing Tools:

Selenium:

Selenium is one of the most popular open-source automation testing tools for web applications. It supports multiple programming languages and browsers and allows testers to create robust regression test suites.
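
A minimal sketch of a Selenium regression check using the Python bindings; the URL, element IDs and expected heading are placeholders for your own application.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Placeholder URL and element IDs; substitute your application's own.
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("demo-user")
    driver.find_element(By.ID, "password").send_keys("demo-pass")
    driver.find_element(By.ID, "submit").click()
    # Regression assertion: the dashboard heading should still appear.
    heading = driver.find_element(By.TAG_NAME, "h1").text
    assert "Dashboard" in heading, f"Unexpected heading: {heading}"
finally:
    driver.quit()
```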

Appium:

Appium is an open-source tool for automating mobile applications on Android and iOS platforms. It can be used for regression testing of mobile apps.

JUnit:

JUnit is a widely-used testing framework for Java applications. It's especially useful for unit and regression testing in the Java ecosystem.

TestNG:

TestNG is another Java-based testing framework that offers more advanced testing features compared to JUnit, making it suitable for regression testing.

Jenkins:

Jenkins is a popular open-source automation server that can be used to automate the execution of regression test suites. It integrates with various testing tools and can schedule test runs based on code changes.

Postman:

Postman is a popular tool for testing RESTful APIs. It can be used for API regression testing to ensure that changes in the API do not break existing functionality.
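
Postman collections are usually built in its GUI, so as a stand-in, here is the same idea expressed in plain Python with the requests library: a contract-style regression check against a hypothetical endpoint.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API

def test_get_user_contract() -> None:
    """Regression check: the user endpoint still returns the agreed shape."""
    resp = requests.get(f"{BASE_URL}/users/42", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    # Fields that existing clients rely on must not disappear or change type.
    assert isinstance(body.get("id"), int)
    assert isinstance(body.get("email"), str)

test_get_user_contract()
print("API contract regression check passed")
```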

TestRail:

TestRail is a test management tool that helps teams organize and manage their regression test cases, track test results, and collaborate on testing efforts.

JIRA:

JIRA, developed by Atlassian, is an issue and project tracking tool that can be used for managing and tracking regression test cases and defects.

Travis CI:

Travis CI is a continuous integration tool that can be used to automate the execution of regression tests whenever code changes are pushed to a version control system like GitHub.

CircleCI:

CircleCI is another continuous integration and continuous delivery (CI/CD) platform that supports automated regression testing as part of the software development pipeline.

The choice of regression testing technique and tool depends on the specific requirements of your project, the nature of the application, and the resources available. It's essential to select the most appropriate combination to ensure effective regression testing and maintain the quality of your software.

September 11, 2023 · 2 minutes · Virendra Harkhani
Verification and Validation: What’s the difference?

Differences between Verification and Validation

| Verification | Validation |
| --- | --- |
| It includes checking documents, design, code and programs. | It includes testing and validating the actual product. |
| Verification is static testing. | Validation is dynamic testing. |
| It does not include execution of the code. | It includes execution of the code. |
| Methods used in verification are reviews, walkthroughs, inspections and desk-checking. | Methods used in validation are black-box testing, white-box testing and non-functional testing. |
| It checks whether the software conforms to the specification. | It checks whether the software meets the requirements and expectations of the customer. |
| It can find bugs early in development. | It can only find the bugs that the verification process could not. |
| The target of verification is the application and software architecture and the specification. | The target of validation is the actual product. |
| The quality assurance team performs verification. | Validation is executed on the software code with the help of the testing team. |
| It comes before validation. | It comes after verification. |
| It consists of checking documents and files, and is performed by humans. | It consists of executing the program, and is performed by the computer. |
| Verification refers to the set of activities that ensure the software correctly implements a specific function. | Validation refers to the set of activities that ensure the software that has been built is traceable to customer requirements. |
| Verification starts once a valid and complete specification is available. | Validation begins as soon as the project starts. |
| Verification is for the prevention of errors. | Validation is for the detection of errors. |
| Verification is also termed white-box or static testing, as the work product goes through reviews. | Validation can be termed black-box or dynamic testing, as the work product is executed. |
| Verification finds about 50 to 60% of the defects. | Validation finds about 20 to 30% of the defects. |
| Verification is based on the opinion of the reviewer and may change from person to person. | Validation is based on fact and is often stable. |
| Verification is about process, standards and guidelines. | Validation is about the product. |

Verification:

Verification is the process of checking that the software achieves its goal without any bugs. It ensures that the product we are developing is being built right, verifying that the developed product fulfils the requirements we have. Verification is static testing.

Validation:

Validation is the process of checking whether the software product is up to the mark, in other words, whether it meets the high-level requirements. It checks that what we are developing is the right product, validating the actual product against the expected one. Validation is dynamic testing.

December 02, 2023 · 2 minutes · Virendra Harkhani