
How to Ace A/B Testing Interview Questions

A/B testing is one of the most widely used forms of user experience research, and for many product managers it's a crucial part of experimenting with product changes and deciding which ones to move forward on. As such, aspiring PMs should expect to be asked several questions on A/B testing and should be prepared to answer them fully.

Quite frankly, if you do your homework, there's no reason you can't ace the A/B testing questions in your next PM interview. We even have a lesson dedicated to A/B testing in our PM interview course; check it out here! This article will help you prepare as well. Here's a guide to what A/B testing is and how you can ace your A/B testing interview questions.

What is A/B Testing?


Unsurprisingly, the best way to ace the A/B portion of your PM interview is to understand what A/B testing is. Nobody can tell you exactly what A/B questions you'll be faced with come interview day, nor is that actually necessary. A comprehensive understanding of the methodology and what it's used for is the best preparation for any unanticipated questions you may have to answer on the fly.

So what is A/B testing anyway? Quite simply, A/B testing, also called split testing or bucket testing, is the process of testing two versions, A and B, of a web page, product design, layout, etc., to compare how each performs on a chosen metric. One of the two versions is randomly shown to each user, and performance is measured against that metric, whether it's page views, conversions, or bounce rate. Whether it's the language on a landing page, the style of a button, or the color of some element, A/B testing can help PMs make data-driven decisions that lead to objective improvements in their product.

For example, imagine you have a call-to-action at the bottom of a landing page asking visitors to sign up for an email list. If a PM wanted to choose between two phrases that would best push visitors to subscribe to that list, they could run an A/B test. They would formulate two variants, say "Sign up today!" and "Join our newsletter!", run both simultaneously, and then simply measure which produces more sign-ups. That's A/B testing in a nutshell: measuring and comparing two variants to see which performs better on a certain metric.
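To make this concrete, here's a minimal Python sketch of the mechanics: each visitor is randomly bucketed into variant A or B, sign-ups are tallied per bucket, and the conversion rates are compared at the end. The variant copy, field names, and numbers are illustrative, not taken from any real system.

```python
import random

# Hypothetical CTA copy for the email sign-up test
VARIANTS = {"A": "Sign up today!", "B": "Join our newsletter!"}

impressions = {"A": 0, "B": 0}  # how many visitors saw each variant
signups = {"A": 0, "B": 0}      # how many of them subscribed

def assign_variant() -> str:
    """Randomly bucket a visitor into A or B with a 50/50 split.
    (Real systems usually hash a user ID so the assignment is stable.)"""
    return random.choice(["A", "B"])

def record_visit(signed_up: bool) -> None:
    """Log one visit: which variant was shown and whether the visitor converted."""
    variant = assign_variant()
    impressions[variant] += 1
    if signed_up:
        signups[variant] += 1

def conversion_rate(variant: str) -> float:
    """Sign-ups divided by impressions for one variant."""
    return signups[variant] / impressions[variant] if impressions[variant] else 0.0

# Simulate some traffic, then compare the two conversion rates
for _ in range(1000):
    record_visit(signed_up=random.random() < 0.12)
print(conversion_rate("A"), conversion_rate("B"))
```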

Why is A/B testing important?


While that's all well and good, why are A/B tests important, really? Ultimately, this form of experimental design is one of the many ways to objectively compare and contrast aspects of product choices that may otherwise be difficult to quantify. Many elements of a product may traditionally be thought to be subjective. Whether a button is red or blue could be chalked up to a preferential choice. Thus, differentiating which would be better from a conversion perspective may be difficult without a data-driven method like an A/B test.

In this way, A/B testing is important because it provides objective and quantifiable data regarding elements of a product that may otherwise be qualitative.

What is the Goal of A/B Testing?


As such, the goal of A/B testing is just that: to objectively measure which version of a product performs better. The purpose of A/B testing is to provide actionable data so the most prudent decisions can be made. Should this button have rounded or square corners? Should it be flat or have a drop shadow? Should it be this color or that? A/B testing can be used to help objectively answer such questions.

While everything we've covered so far is relatively straightforward, there's some more complexity involved behind the scenes that aspiring PMs should be aware of.

What Every PM Should Know About A/B Testing


Here are some important things every PM should know about A/B testing to best prepare for their next PM interview.

Different Types of Test Design

First and foremost, PMs should understand the differences between related testing methodologies to determine whether an A/B test is the right way to go. There are at least two other UX test designs that PMs should know. These are:

A/B/N

Another similar kind of test is the A/B/N test. In this kind of test (usually run on web pages), more than two versions are tested, whereas an A/B test only tests two versions against each other. The N in A/B/N stands for "number," meaning the number of versions being tested. A/B/N tests are similar to multivariate tests, except that multivariate tests involve testing all possible combinations of the different variables at once, whereas A/B/N does not. Rather, A/B/N is used to test several distinct versions against each other.

These forms of tests are best used for major layout or design decisions, rather than testing the individual differences between specific elements.

Multivariate

Each of these experimental methods is different from the others. More often than not, your interviewer will probably ask a question about when to use each and why. A/B tests, as the name suggests, only compare two versions. If too many variables are included in an A/B test, it becomes difficult to discern why one version outperformed the other. When multiple variables need to be tested, multivariate testing is the way to go.

Multivariate testing is when all possible combinations of versions and all their variables are tested at once. This form of test design is best used when several product changes are to be decided. Rather than running dozens of A/B tests on every single design change, a multivariate test can be performed in which every possible combination is tested against each other.
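Here's a short sketch of the distinction, assuming a hypothetical page with two headline options and three button colors: an A/B/N test compares a handful of distinct, pre-built versions, while a multivariate test enumerates every combination of the individual variables.

```python
from itertools import product

# A/B/N: several distinct, pre-built versions tested against one another
abn_versions = ["layout_v1", "layout_v2", "layout_v3", "layout_v4"]  # N = 4

# Multivariate: every combination of the individual variables is tested at once
headlines = ["Start your free trial", "Try it free for 30 days"]
button_colors = ["red", "blue", "green"]
multivariate_versions = list(product(headlines, button_colors))

print(len(abn_versions))           # 4 versions
print(len(multivariate_versions))  # 2 x 3 = 6 combinations
```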

How to Properly Run A/B Tests


Aspiring PMs must also know how to successfully conduct an A/B test. Not every A/B test is created equal, and there are a few reasons why. Make sure you understand:

How to Evaluate Worthwhile Metrics

Before you start your A/B test, you need to evaluate worthwhile and relevant metrics to measure. Well, what are the usual suspects in the metric department? Generally speaking, A/B tests experiment with one of the following:

  • Impression count
  • Click-through rate
  • Button hover time
  • Time spent on page
  • Bounce rate on the button’s click-through link (assuming the button leads to a new webpage)

Ask yourself beforehand: which of these conveys the most useful and relevant information for the product and engineering teams? It'll depend on what's being tested. If you're testing a landing page with valuable information on your product, time spent on page and bounce rate may be the wisest choices. If you're testing a CTA, click-through rate is probably the way to go. If you're comparing two versions of a social media ad, impression count will be the most valuable. The worst-case scenario is that you spend all this time and effort conducting an A/B test, but the metrics you've chosen aren't that relevant or helpful, and the results are ultimately useless.
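As a rough sketch of what measuring one of these metrics looks like in practice, the snippet below computes click-through rate and bounce rate per variant from a small, made-up list of visit events; the field names are assumptions, not a real analytics schema.

```python
# Hypothetical visit events; field names are illustrative only
visits = [
    {"variant": "A", "clicked_cta": True,  "pages_viewed": 3},
    {"variant": "A", "clicked_cta": False, "pages_viewed": 1},
    {"variant": "B", "clicked_cta": True,  "pages_viewed": 1},
    {"variant": "B", "clicked_cta": False, "pages_viewed": 2},
]

def click_through_rate(variant: str) -> float:
    """Share of visitors on this variant who clicked the CTA."""
    seen = [v for v in visits if v["variant"] == variant]
    return sum(v["clicked_cta"] for v in seen) / len(seen)

def bounce_rate(variant: str) -> float:
    """Share of visitors on this variant who left after a single page view."""
    seen = [v for v in visits if v["variant"] == variant]
    return sum(v["pages_viewed"] == 1 for v in seen) / len(seen)

print(click_through_rate("A"), bounce_rate("B"))
```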

How to Interpret Your Results

By and large, the software you'll use for A/B testing will come with many tools and features to help you understand and measure the results of your tests. After the conclusion of your tests, the software will show you the difference in conversion rates, or whatever other metrics you've chosen to measure, between the two versions along with a margin of error.

More likely than not, the A/B testing software you use will make the results relatively easy to interpret. You'll see the total number of users in your tests, the measure of your chosen metric, possibly the device type of the users tested, and the uplift of each version. Deducing the winning version is as easy as comparing the conversion rates: whichever is highest is the better-performing version.

Understand Your P-Value

Because A/B testing falls into the category of statistical analysis, PMs must understand the importance of their p-value. This is a number between 0 and 1 that indicates whether their test results are actually statistically significant rather than just a product of randomness. This is very important, as a successful A/B test must actually demonstrate which version performs better, not which happened to perform better by chance during the tests.

The actual number indicates the probability of seeing results at least as extreme as yours if there were no real difference between the versions (the null hypothesis). So, for instance, if version A converts more website visitors than version B, you may think that it's because version A had a simpler layout. The p-value tells you how plausible it is that version A only performed better by chance: a low p-value means the observed difference would be unlikely under the null hypothesis, while a high p-value means it could easily be random noise. By convention, any p-value higher than .05 means you don't have enough evidence to reject the null hypothesis, so you can't confidently call a winner.

Read more about p-values and how to calculate them in A/B testing here.
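If you want to see what that calculation can look like, here's a minimal sketch using a two-proportion z-test from statsmodels (one common choice; a chi-square test works too). The conversion counts are made up for illustration.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and total users for versions A and B
conversions = [180, 100]  # A converted 180 users, B converted 100
users = [1000, 1000]      # each version was shown to 1,000 users

z_stat, p_value = proportions_ztest(count=conversions, nobs=users)

# A p-value at or below 0.05 is the conventional threshold for calling the
# difference statistically significant (i.e., unlikely to be pure chance).
if p_value <= 0.05:
    print(f"Significant difference (p = {p_value:.4f})")
else:
    print(f"Not significant (p = {p_value:.4f}); the gap could be random noise")
```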

Each of these aspects can determine the success and effectiveness of an A/B test. Therefore, some of the A/B testing interview questions you encounter will most likely be probing your knowledge of them. Not only that, but UX research and experimental design can be time-consuming and expensive. Time and money are the most valuable assets a business has, and neither can be wasted. Your interviewer's questions, then, will most likely be focused on whether you know how to successfully and efficiently conduct A/B tests.

A/B Test Example - From Beginning to End


Now that we've covered how to properly run an A/B test, let's look at an example from start to finish. Let's imagine we have a banner for a software product featuring a call-to-action, and we have two versions we'd like to test: one with a very minimal amount of words (A), and one with a little more information (B).

First of all, we must design our A/B test for these banners. Given that the test in question features a CTA, we can figure that the most worthwhile metric, in this case, is click-through rate. The version that pushes more people to click is the clear winner. Next, we must decide on an adequate sample size for our experiment. The sample size of your tests has a lot to do with the p-value in your results. Ultimately, you need a large enough number of users to demonstrate statistical significance after your experiment; otherwise, the results may not be that trustworthy or accurate. Remember that the p-value is the measure of this significance, and if the sample size is too small, chances are that your p-value will be high, which suggests that your results are not significant and may simply be a byproduct of randomness.

However, if there is a large difference between the variants, you can be confident in your results with a smaller sample size. In other words, if our CTA version A converts 60% more users than version B after being tested on a sample of 1,000 users, we can be confident that version A performs better, and the sample size of 1,000 users is adequate. However, if version A only showed a 10% difference, we'd probably need to increase the sample size substantially to be confident in the significance of our results. Again, the necessary sample size has everything to do with the p-value of your results: it needs to be large enough to demonstrate that your results are statistically significant, not simply random. As we mentioned earlier, this means a p-value of .05 or less, which indicates a 5% or smaller chance of seeing a difference this large if there were no real difference between the versions. Ultimately, you should increase the sample size and keep the test running until your results can demonstrate this 95% level of confidence.

Determining sample size is the most statistically heavy part of the A/B testing process, and we understand it can be a little confusing, especially if you don't have much prior experience in statistics. This article can be very helpful in determining exactly what sample size you'd need for your A/B experiments.
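For reference, here's one common way to estimate the required sample size per variant before launching the test: a power analysis using statsmodels. The baseline conversion rate and the smallest lift worth detecting are assumptions chosen for illustration.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.10   # assumed current conversion rate
expected_rate = 0.12   # smallest improvement we care to detect

# Convert the two proportions into a standardized effect size (Cohen's h)
effect_size = proportion_effectsize(expected_rate, baseline_rate)

# Users needed per variant at a 5% significance level with 80% power
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,  # accept a 5% chance of a false positive
    power=0.8,   # 80% chance of detecting a real effect of this size
    ratio=1.0,   # equal traffic split between A and B
)
print(round(n_per_variant))
```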

After we've run our test on these CTA banners, we must interpret our results. Generally speaking, the A/B software of your choice will show you the difference in measured metrics, along with a margin of error. Let's imagine that version A converted 18% of its visitors and version B 10%, with a margin of error of 2.3%. Therefore, implementing version A as the chosen banner yields an 8-percentage-point difference between the two versions (an 80% relative lift). It should come as no surprise, then, that version A is the winner in this A/B test.
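As a quick sanity check of that arithmetic, here's the lift calculation in a few lines, using the conversion rates from the example above.

```python
rate_a = 0.18  # version A converted 18% of its visitors
rate_b = 0.10  # version B converted 10% of its visitors

absolute_lift = rate_a - rate_b             # 0.08 -> 8 percentage points
relative_lift = (rate_a - rate_b) / rate_b  # 0.80 -> an 80% relative lift

print(f"Absolute: {absolute_lift:.0%}, relative: {relative_lift:.0%}")
```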

Examples of A/B Testing Interview Questions


Nobody can tell you exactly what A/B testing interview questions will be thrown your way during your next PM interview. However, there are some common ones that we've compiled below. Not only that, but if you get a good handle on all that we've mentioned so far, you should have the understanding necessary to answer any unexpected or off-the-cuff A/B questions that require you to improvise on the spot.

How many variables and metrics should you test in a single A/B test?

A/B tests, as the name suggests, only compare two versions. If too many variables are included in an A/B test, it becomes difficult to discern why one version outperformed the other. When multiple variables need to be tested at once, multivariate testing is the way to go.

The primary issue is not that these tests couldn't tell you which version would outperform the other, but rather that it becomes impossible to disentangle the relationship between the versions and the metrics if too many factors are involved in the A/B test. For an A/B test to be successful and insightful, it needs to be as focused as possible: one winning metric with one variable. Obviously, you could still measure multiple metrics at once, but how would you then choose which version wins? Which metric should you look at to determine the winner? This is the primary issue with measuring several different metrics during an A/B test.

How do you properly form an A/B testing hypothesis?

A successful A/B testing hypothesis follows a simple yet important formula with three crucial components. First, you have the variable. Then, there's the result. And finally, the rationale for why the variable produced the given result.

Typically speaking, you can structure your A/B testing hypotheses in this way:

If <variable>, then <result> because of <rationale>.

So, how would this look in reality? Let's take our previous example with the CTA banners. One version, A, has a CTA with fewer words, whereas the other, B, has more detailed information. Before your experiments, you can postulate the following hypothesis:

If the CTA contains fewer words (variable), then more visitors will be converted (result) because the CTA will be easier to digest and requires less reading on the part of the customer (rationale). Using this formula, you can reliably create strong, successful hypotheses for your A/B tests.

Construct three A/B tests to address user frustration with the Google Maps blue dot GPS icon

A detailed video answer can be found in our PM interview course, here.

Why do some A/B tests fail to provide insights, outcomes, or value? When do A/B tests provide the most value to a business?

An A/B test can fail for many reasons, but some common ones are choosing the wrong metrics, misreading a statistically insignificant p-value, not having a proper sample size of users, or forming a misguided hypothesis. A/B tests provide the most value when the sample size is large enough to produce statistical significance, relevant metrics are chosen, a single variable is measured, and the test results in a clear winner.

Why is A/B Testing Important for Businesses?

Ultimately, A/B testing is very important for businesses looking to improve their operations, products, websites, and, most importantly, their bottom lines. A/B testing provides actionable insights so that businesses can reduce risk, improve customer engagement, convert more customers, and increase sales. A/B tests demonstrate exactly how businesses can use their resources most efficiently, which, in turn, improves their ROI. Given that the most successful companies are the ones that get the most out of their resources, A/B testing is a crucial piece of any business's overall strategy to do just that.

Consult with an Exponent Coach


Here at Exponent, we know better than anyone that it may be both extremely exciting and nerve-wracking when you have a PM or data science interview coming up. So, to help you boost your chances, we've designed several Interview Prep Courses for Product Management, Software Engineering, Data Science, Product Marketing Management, Technical Program Management, and Product Design.

Not only that, but we also offer industry-leading interviewing coaching to help you seal the deal. Book a session with an Exponent coach to:

  • Get an insider’s look from someone who’s been interviewed, got the offer, and worked at the companies you’re applying at.
  • Receive an objective evaluation of where you stand as a job candidate.
  • Obtain personalized feedback and coaching to help improve and get more job offers.

We've partnered with dozens of industry insiders and career experts in product management, program management, product design, software engineering, and data science fields who can help you ace your interviews and nail your dream job. Check out our list here and book a session today!

Anthony Pellegrino

I’m a rather bohemian freelance journalist and tech content writer. Philosophy/CS student - A.I., Consciousness, Social Sciences.
