A/B testing (also known as bucket testing, split-run testing, or split testing) is a user experience research method. [1] A/B tests consist of a randomized experiment that usually involves two variants (A and B), [2] [3] [4] although the concept can also be extended to multiple variants of the same variable. It applies statistical hypothesis testing, or "two-sample hypothesis testing", as used in the field of statistics. A/B testing is a way to compare multiple versions of a single variable, for example by testing a subject's response to variant A against variant B, and determining which of the variants is more effective. [5]
Multivariate testing or multinomial testing is similar to A/B testing but may test more than two versions at the same time or use more controls. Simple A/B tests are not valid for observational, quasi-experimental, or other non-experimental situations, which are commonplace with survey data, offline data, and other, more complex phenomena.
"A/B testing" is a shorthand for a simple randomized controlled experiment, in which a number of samples (e.g. A and B) of a single vector-variable are compared. [1] A/B tests are widely considered the simplest form of controlled experiment, especially when they only involve two variants. However, by adding more variants to the test, its complexity grows. [6]
The following example illustrates an A/B test with a single variable:
Suppose a company has a customer database of 2,000 people and decides to create an email campaign with a discount code in order to generate sales through its website. The company creates two versions of the email, each with a different call to action (the part of the copy which encourages customers to do something; in the case of a sales campaign, to make a purchase) and an identifying promotional code.
The company then monitors which campaign has the higher success rate by analyzing the use of the promotional codes. The email using code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that, in this instance, the first call to action is more effective and will use it in future sales. A more nuanced approach would involve applying statistical testing to determine whether the difference in response rates between A1 and B1 is statistically significant (that is, highly likely that the difference is real, repeatable, and not due to random chance). [7]
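As an illustrative sketch (not part of the cited sources), the comparison above can be checked with Fisher's exact test; the counts are taken directly from the example, while the library calls are one reasonable choice among several:

```python
from scipy.stats import fisher_exact

# 2x2 contingency table from the example above:
# rows = variants, columns = (purchased, did not purchase)
table = [[50, 950],   # variant A: 50 of 1,000 recipients purchased
         [30, 970]]   # variant B: 30 of 1,000 recipients purchased

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.4f}")
```

A small p-value (conventionally below 0.05) would indicate that the difference between the 5% and 3% response rates is unlikely to be due to chance alone.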
In the example above, the purpose of the test is to determine which is the more effective way to encourage customers to make a purchase. If, however, the aim of the test had been to see which email would generate the higher click rate (that is, the number of people who actually click through to the website after receiving the email), then the results might have been different.
For example, even though more of the customers receiving code B1 accessed the website, because the call to action did not state the end date of the promotion, many of them may have felt no urgency to make an immediate purchase. Consequently, if the purpose of the test had been simply to see which email would bring more traffic to the website, then the email containing code B1 might well have been more successful. An A/B test should have a defined, measurable outcome, such as the number of sales made, the click-through conversion rate, or the number of people signing up/registering. [8]
Two-sample hypothesis tests are appropriate for comparing the two samples where the samples are divided by the two control cases in the experiment. Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation. Student's t-tests are appropriate for comparing means under relaxed conditions, when less is assumed. Welch's t-test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used.
For a comparison of two binomial distributions, such as a click-through rate, one would use Fisher's exact test, as summarized in the table below.
Assumed distribution | Example case | Standard test | Alternative test
---|---|---|---
Gaussian | Average revenue per user | Welch's t-test (unpaired t-test) | Student's t-test
Binomial | Click-through rate | Fisher's exact test | Barnard's test
Poisson | Transactions per paying user | E-test [9] | C-test
Multinomial | Number of each product purchased | Chi-squared test | G-test
Unknown | – | Mann–Whitney U test | Gibbs sampling
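As a minimal sketch of the Gaussian row of the table above, Welch's t-test can be applied to per-user revenue samples from the two variants; the data and variable names here are hypothetical, not drawn from the article's example:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Hypothetical average-revenue-per-user samples for variants A and B
revenue_a = rng.normal(loc=10.0, scale=3.0, size=1_000)
revenue_b = rng.normal(loc=10.4, scale=3.2, size=1_000)

# equal_var=False gives Welch's t-test, which does not assume equal variances
t_stat, p_value = ttest_ind(revenue_a, revenue_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```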
A/B tests most commonly assign each variant (e.g., of a user interface element) to users with equal probability. However, in some circumstances, responses to variants may be heterogeneous. That is, while variant A might have a higher response rate overall, variant B may have an even higher response rate within a specific segment of the customer base. [10]
For instance, in the above example, the breakdown of the response rates by gender could have been:
Gender | Overall | Men | Women |
---|---|---|---|
Total sends | 2,000 | 1,000 | 1,000 |
Total responses | 80 | 35 | 45 |
Variant A | 50/1,000 (5%) | 10/500 (2%) | 40/500 (8%)
Variant B | 30/1,000 (3%) | 25/500 (5%) | 5/500 (1%)
In this case, we can see that while variant A had a higher response rate overall, variant B actually had a higher response rate with men.
As a result of the A/B test, the company might select a segmented strategy, sending variant B to men and variant A to women in the future. In this example, a segmented strategy would yield an increase in the expected response rate from 5% (50/1,000) to 6.5% (65/1,000), constituting a 30% increase.
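The arithmetic behind that figure follows directly from the table above; as a small worked sketch:

```python
# Counts from the table: under the segmented strategy, women receive
# variant A (40/500) and men receive variant B (25/500).
segmented_rate = (40 + 25) / 1_000   # 0.065 -> 6.5%
baseline_rate = 50 / 1_000           # 0.05  -> 5.0% (variant A sent to everyone)
relative_lift = segmented_rate / baseline_rate - 1
print(f"{segmented_rate:.1%} vs {baseline_rate:.1%}: {relative_lift:.0%} increase")
```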
If segmented results are expected from the A/B test, the test should be properly designed at the outset to be evenly distributed across key customer attributes, such as gender. That is, the test should both (a) contain a representative sample of men vs. women, and (b) assign men and women randomly to each variant (variant A vs. variant B). Failure to do so could introduce experimental bias and lead to inaccurate conclusions being drawn from the test. [11]
This segmentation and targeting approach can be further generalized to include multiple customer attributes rather than a single customer attribute—for example, customers' age and gender—to identify more nuanced patterns that may exist in the test results.
The results of A/B tests are simple to interpret and give a clear picture of what users prefer, since each test directly compares one option against another. Because they are based on real user behavior, the results can be very helpful in determining which of two options works better.
A/B tests can also provide answers to highly specific design questions. One example is Google's A/B testing with hyperlink colors: in order to optimize revenue, Google tested dozens of different hyperlink hues to see which color users were most likely to click on. [12]
A/B tests are sensitive to variance; they require a large sample size in order to reduce standard error and produce a statistically significant result. In applications where active users are abundant, such as popular online social media platforms, obtaining a large sample size is trivial. In other cases, large sample sizes are obtained by increasing the experiment enrollment period. However, using a technique coined by Microsoft as Controlled-experiment Using Pre-Experiment Data (CUPED), variance from before the experiment start can be taken into account so that fewer samples are required to produce a statistically significant result. [13] [14]
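A minimal sketch of the CUPED adjustment, assuming a pre-experiment covariate x (for example, each user's value of the metric measured before the experiment) that correlates with the in-experiment metric y; the function and variable names are illustrative and not taken from the cited papers:

```python
import numpy as np

def cuped_adjust(y, x):
    """Return a variance-reduced version of metric y using covariate x.

    theta is the least-squares coefficient of y on x; the adjusted metric
    keeps the same mean as y but has lower variance when x and y are
    correlated, so fewer samples are needed for the same statistical power.
    """
    theta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - np.mean(x))
```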
Due to its nature as an experiment, running an A/B test introduces the risk of wasted time and resources if the test produces unwanted results, such as a negative impact or no impact on business metrics.
In December 2018, representatives with experience in large-scale A/B testing from thirteen different organizations (Airbnb, Amazon, Booking.com, Facebook, Google, LinkedIn, Lyft, Microsoft, Netflix, Twitter, Uber, Yandex, and Stanford University) summarized the top challenges in a SIGKDD Explorations paper. [15] The challenges can be grouped into four areas: Analysis, Engineering and Culture, Deviations from Traditional A/B tests, and Data quality.
It is difficult to definitively establish when A/B testing was first used. The first randomized double-blind trial, to assess the effectiveness of a homeopathic drug, occurred in 1835. [16] Experimentation with advertising campaigns, which has been compared to modern A/B testing, began in the early twentieth century. [17] The advertising pioneer Claude Hopkins used promotional coupons to test the effectiveness of his campaigns. However, this process, which Hopkins described in his Scientific Advertising, did not incorporate concepts such as statistical significance and the null hypothesis, which are used in statistical hypothesis testing. [18] Modern statistical methods for assessing the significance of sample data were developed separately in the same period. This work was done in 1908 by William Sealy Gosset when he altered the Z-test to create Student's t-test. [19] [20]
With the growth of the internet, new ways to sample populations have become available. Google engineers ran their first A/B test in the year 2000 in an attempt to determine what the optimum number of results to display on its search engine results page would be. [5] The first test was unsuccessful due to glitches that resulted from slow loading times. Later A/B testing research would be more advanced, but the foundation and underlying principles generally remain the same, and in 2011, 11 years after Google's first test, Google ran over 7,000 different A/B tests. [5]
In 2012, a Microsoft employee working on the search engine Microsoft Bing created an experiment to test different ways of displaying advertising headlines. Within hours, the alternative format produced a revenue increase of 12% with no impact on user-experience metrics. [4] Today, major software companies such as Microsoft and Google each conduct over 10,000 A/B tests annually. [4]
Some have claimed that A/B testing represents a change in philosophy and business strategy in certain niches, though the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions. [21] [22] [23] A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice.
Many companies now use the "designed experiment" approach to making marketing decisions, with the expectation that relevant sample results can improve positive conversion results. [citation needed] It is an increasingly common practice as the tools and expertise in this area grow. [24]
A/B tests have been used by large social media sites like LinkedIn, Facebook, and Instagram to understand user engagement and satisfaction of online features, such as a new feature or product. A/B tests have also been used to conduct complex experiments on subjects such as network effects when users are offline, how online services affect user actions, and how users influence one another. [25]
On an e-commerce website, the purchase funnel is typically a good candidate for A/B testing, since even marginal decreases in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements like copy text, layouts, images, and colors, [26] but not always. In these tests, users only see one of two versions, since the goal is to discover which of the two versions is preferable. [27]
A/B testing can be used to determine the right price for a product, which is perhaps one of the most difficult tasks when a new product or service is launched. A/B testing (especially relevant for digital goods) is an effective way to find out which price point and offering maximize total revenue.
A/B tests have also been used by political campaigns. In 2007, Barack Obama's presidential campaign used A/B testing to garner online attention and understand what voters wanted to see from the presidential candidate. [28] For example, Obama's team tested four distinct buttons on their website that led users to sign up for newsletters. Additionally, the team used six different accompanying images to draw in users. Through A/B testing, staffers were able to determine how to effectively draw in voters and garner additional interest. [28]
A/B testing is very common when deploying a newer version of an API. [29] For real-time user experience testing, an HTTP layer-7 reverse proxy is configured so that N% of the HTTP traffic goes to the newer version of the backend instance, while the remaining (100-N)% of the HTTP traffic hits the (stable) older version of the backend HTTP application service. [29] This is usually done to limit customers' exposure to the newer backend instance, so that if there is a bug in the newer version, only N% of user agents or clients are affected while the others are routed to the stable backend; this is a common ingress control mechanism. [29]
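A minimal sketch of the routing decision such a proxy makes, assuming deterministic hash-based bucketing of clients; the backend names and percentage are illustrative, and real deployments typically configure this in the proxy itself rather than in application code:

```python
import hashlib

def route_backend(client_id: str, new_version_pct: int = 10) -> str:
    """Route a client to the newer or the stable backend version.

    Hashing the client identifier keeps each client on the same backend
    across requests; new_version_pct is the N% of traffic exposed to the
    newer version.
    """
    bucket = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % 100
    return "backend-v2" if bucket < new_version_pct else "backend-v1"
```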
Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures used to analyze the differences between groups. It uses the F-test, comparing the variance between groups with the variance within groups (the noise, under an assumed normal distribution within each group). ANOVA was developed by the statistician Ronald Fisher. ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the t-test beyond two means. In other words, ANOVA is used to test the difference between two or more means.
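As a brief illustrative sketch, a one-way ANOVA comparing a metric across three variants could look like this; the data are invented for illustration only:

```python
from scipy.stats import f_oneway

# Hypothetical per-user metric values for three variants
group_a = [2.1, 2.5, 1.9, 2.3, 2.8]
group_b = [2.6, 2.9, 3.1, 2.4, 2.7]
group_c = [2.0, 1.8, 2.2, 2.1, 1.9]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```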
Biostatistics is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments and the interpretation of the results.
Statistics is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.
A statistical hypothesis test is a method of statistical inference used to decide whether the data sufficiently supports a particular hypothesis. A statistical hypothesis test typically involves a calculation of a test statistic. Then a decision is made, either by comparing the test statistic to a critical value or equivalently by evaluating a p-value computed from the test statistic. Roughly 100 specialized statistical tests have been defined.
An experiment is a procedure carried out to support or refute a hypothesis, or determine the efficacy or likelihood of something previously untried. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated. Experiments vary greatly in goal and scale but always rely on repeatable procedure and logical analysis of the results. There also exist natural experimental studies.
Statistical bias, in the mathematical field of statistics, is a systematic tendency in which the methods used to gather data and generate statistics present an inaccurate, skewed or biased depiction of reality. Statistical bias exists in numerous stages of the data collection and analysis process, including: the source of the data, the methods used to collect the data, the estimator chosen, and the methods used to analyze the data. Data analysts can take various measures at each stage of the process to reduce the impact of statistical bias in their work. Understanding the source of statistical bias can help to assess whether the observed results are close to actuality. Issues of statistical bias have been argued to be closely linked to issues of statistical validity.
In scientific research, the null hypothesis is the claim that the effect being studied does not exist. The null hypothesis can also be described as the hypothesis in which no relationship exists between two sets of data or variables being analyzed. If the null hypothesis is true, any experimentally observed effect is due to chance alone, hence the term "null". In contrast with the null hypothesis, an alternative hypothesis is developed, which claims that a relationship does exist between two variables.
In frequentist statistics, power is a measure of the ability of an experimental design and hypothesis testing setup to detect a particular effect if it is truly present. In typical use, it is a function of the test used, the assumed distribution of the test, and the effect size of interest. High statistical power is related to low variability, large sample sizes, large effects being looked for, and less stringent requirements for statistical significance.
Quantitative marketing research is the application of quantitative research techniques to the field of marketing research. It has roots in both the positivist view of the world, and the modern marketing viewpoint that marketing is an interactive process in which both the buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion.
Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set. Cross-validation includes resampling and sample splitting methods that use different portions of the data to test and train a model on different iterations. It is often used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. It can also be used to assess the quality of a fitted model and the stability of its parameters.
Sample size determination or estimation is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power. In complex studies, different sample sizes may be allocated, such as in stratified surveys or experimental designs with multiple treatment groups. In a census, data is sought for an entire population, hence the intended sample size is equal to the population. In experimental design, where a study may be divided into different treatment groups, there may be different sample sizes for each group.
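A minimal sketch of a sample size calculation for a two-proportion A/B test, assuming a 5% baseline conversion rate, a 6% target rate, a 5% significance level, and 80% power; statsmodels is one library that provides these helpers, and the numbers are illustrative:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect_size = proportion_effectsize(0.05, 0.06)  # baseline vs. target rate
n_per_group = NormalIndPower().solve_power(effect_size=effect_size,
                                           alpha=0.05, power=0.80,
                                           alternative="two-sided")
print(round(n_per_group))  # required observations per variant
```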
In online marketing, a landing page, sometimes known as a "lead capture page", "single property page", "static page", "squeeze page" or a "destination page", is a single web page that appears in response to clicking on a search engine optimized search result, marketing promotion, marketing email or an online advertisement. The landing page will usually display directed sales copy that is a logical extension of the advertisement, search result or link. Landing pages are used for lead generation. The actions that a visitor takes on a landing page are what determine an advertiser's conversion rate. A landing page may be part of a microsite or a single page within an organization's main web site.
Marketing experimentation is a research method which can be defined as "the act of conducting such an investigation or test". It involves testing a segmented market to discover new opportunities for organisations. By controlling conditions in an experiment, organisations record and make decisions based on consumer behaviour. Marketing experimentation is commonly used to find the best method for maximizing revenues through the acquisition of new customers. For example: two groups of customers are exposed to different advertising (the test). How did consumers react to the advertising compared to the other group (the measurement)? Did the advertising increase sales for each group (the result)?
In statistical hypothesis testing, a type I error, or a false positive, is the rejection of the null hypothesis when it is actually true. A type II error, or a false negative, is the failure to reject a null hypothesis that is actually false.
In science, randomized experiments are the experiments that allow the greatest reliability and validity of statistical estimates of treatment effects. Randomization-based inference is especially important in experimental design and in survey sampling.
Conversion rate optimization (CRO) is the process of increasing the percentage of users or website visitors who take a desired action.
Microarray analysis techniques are used in interpreting the data generated from experiments on DNA, RNA, and protein microarrays, which allow researchers to investigate the expression state of a large number of genes – in many cases, an organism's entire genome – in a single experiment. Such experiments can generate very large amounts of data, allowing researchers to assess the overall state of a cell or organism. Data in such large quantities is difficult – if not impossible – to analyze without the help of computer programs.
In marketing, multivariate testing or multi-variable testing techniques apply statistical hypothesis testing on multi-variable systems, typically consumers on websites. Techniques of multivariate statistics are used.
Twyman's law states that "Any figure that looks interesting or different is usually wrong", following the principle that "the more unusual or interesting the data, the more likely they are to have been the result of an error of one kind or another". It is named after the media and market researcher Tony Twyman and has been described as one of the most important laws of data analysis.
In the design of experiments, a sample ratio mismatch (SRM) is a statistically significant difference between the expected and actual ratios of the sizes of treatment and control groups in an experiment. Sample ratio mismatches, also known as unbalanced sampling, often occur in online controlled experiments due to failures in randomization and instrumentation.
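A common way to detect a sample ratio mismatch is a chi-squared goodness-of-fit test on the observed group sizes; a minimal sketch with invented counts from a nominal 50/50 split:

```python
from scipy.stats import chisquare

observed = [50_421, 49_123]           # hypothetical assignment counts
expected = [sum(observed) / 2] * 2    # what an exact 50/50 split would give

stat, p_value = chisquare(observed, f_exp=expected)
# A very small p-value (e.g. below 0.001) suggests an SRM, meaning the
# randomization or logging pipeline should be investigated.
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
```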