Modern growth teams have one primary goal - scale the usage of their product. To do so, growth teams have to analyze the impact of every part of their product, so that the product can be optimized to increase the relevant KPIs.

The traditional way to measure the impact of a specific part of a product would be to run an A/B test - analyzing how two different versions of the product affect a particular KPI. However, not everything can be A/B tested - whether because of feasibility, ethical concerns, or practical constraints. So instead one usually turns to analytics tools like Google Analytics or Mixpanel, which approximate impact without running an A/B test by looking at the correlation between an action and a KPI. 

This can take many forms - calculating the Pearson correlation coefficient, measuring the conversion rate before and after users do an event, or examining the feature importances of a machine-learned model. While sometimes these correlated variables are the right ones to focus on, just as often they are noisy, irrelevant, and ignore the fundamental truth that correlation is not causation.

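For example, the correlational version of this analysis can be as simple as the sketch below, where the users table and its columns are hypothetical:

```python
# A minimal sketch of the correlational approach these tools take.
# The users DataFrame and its columns are hypothetical.
import pandas as pd

users = pd.DataFrame({
    "did_event": [1, 0, 1, 1, 0, 0, 1, 0],
    "converted": [1, 0, 1, 0, 0, 1, 1, 0],
})

# Pearson correlation between doing the event and converting.
print(users["did_event"].corr(users["converted"]))

# Conversion rate among users who did vs. did not do the event.
print(users.groupby("did_event")["converted"].mean())
```
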
Correlation vs Causation

The danger of relying on correlations can be shown through the following example:

Wendy leads growth at a fictional meal kit subscription company called Yellow Spatula. She wants to get customers who have bought at least one meal kit to now buy their subscription plan. Wendy runs some SQL queries, and notices that almost everyone who has viewed their “Advanced Cooking Tips” page is a subscriber. Excited, Wendy decides to raise awareness of the “Advanced Cooking Tips” page, so she places it more prominently on the website and launches an email campaign to notify non-subscribers of the existence of this page. Wendy checks back a few weeks later, and to her surprise, raising awareness of the “Advanced Cooking Tips” page actually resulted in fewer subscriptions.

What happened?

While viewing the “Advanced Cooking Tips” page was correlated with subscribing, it did not have a causal relationship with subscribing. A correlation is defined as “A statistical measure ... that describes the size and direction of a relationship between two or more variables”, while causation “Indicates that one event is the result of the occurrence of the other event”. 

Thus, while viewing “Advanced Cooking Tips” is related to subscribing, it does not make users more likely to subscribe. In this scenario, it is most likely the case that the people who were viewing the “Advanced Cooking Tips” page were long-time Yellow Spatula subscribers and experienced chefs. By encouraging users who had not yet subscribed to view the “Advanced Cooking Tips” page, Wendy was intimidating novice cooks and dissuading them from subscribing.

The users’ pre-existing culinary knowledge is what’s known as a confounding variable. A confounding variable is one that influences both the independent and dependent variable, causing a spurious relationship. In this example, the user’s culinary knowledge influences both their likelihood to subscribe and their likelihood to view the “Advanced Cooking Tips” page.

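A tiny simulation makes this concrete. In the hypothetical snippet below, a culinary-skill variable drives both viewing the page and subscribing; viewing has no effect of its own by construction, yet the two still look strongly correlated:

```python
# A minimal sketch of a confounder inducing a spurious correlation.
# culinary_skill drives both behaviors; viewing the page has no causal
# effect on subscribing by construction.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

culinary_skill = rng.normal(size=n)                                  # the confounder
viewed_tips = (culinary_skill + rng.normal(size=n) > 1).astype(int)
subscribed = (culinary_skill + rng.normal(size=n) > 1).astype(int)

# Clearly positive correlation, despite zero causal effect.
print(np.corrcoef(viewed_tips, subscribed)[0, 1])
```
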
Observational Studies to the Rescue!

While in this example Wendy unfortunately was not able to impact subscriptions, the good news is that it is possible to analyze the true causal effect of an event without running an A/B test, through what is known as an observational study. 

An observational study is a statistical technique that analyzes the causal effect of an action by taking into account a user’s pre-existing values for these confounding variables. There are many different techniques for doing so, such as propensity score matching, coarsened exact matching, and inverse propensity weighting. At ClearBrain, the world’s first generalized causal analytics platform, we perform a causal analysis via regression - fitting a linear model that properly accounts for the confounding variables.

Suppose c₁, …, cₙ are the user’s values for the confounding variables, T is a boolean (true/false) indicating whether the user did the action for which you want to measure causal effect (known as the treatment variable), and y is the user’s propensity to convert (known as the outcome variable). A naive approach would be to just fit a line between your treatment variable and your outcome variable.

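Written out, with β₁ as the coefficient on the treatment and ε as an error term, that naive fit is:

y = β₀ + β₁T + ε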

As we have shown, the coefficient β₁ does not model causal effect and is purely a correlation, as it does not take into account the values of the confounding variables. A better, but still insufficient, approach would be to include the values of these confounding variables in your model.

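Adding the confounders as additional regressors, with coefficients γ₁, …, γₙ, gives:

y = β₀ + β₁T + γ₁c₁ + … + γₙcₙ + ε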

Unfortunately, this is still not an accurate estimate of causal effect, as it calculates the same causal effect β₁ for all users. In reality, different users will react to the treatment in different ways. To measure these heterogeneous effects, we need to include interaction terms that allow the effect of the treatment to differ based on the values of a user’s confounding variables.

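With interaction terms (coefficients δ₁, …, δₙ), the model becomes:

y = β₀ + β₁T + γ₁c₁ + … + γₙcₙ + δ₁(T · c₁) + … + δₙ(T · cₙ) + ε
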
To measure the causal effect (known as the average treatment effect) across your users, we use the trained formula above to effectively simulate an A/B test on your customers’ historical data. We first simulate a potential outcome in which every user received the treatment by setting the treatment variable to true for all users and calculating the average y. We then simulate a potential outcome in which every user was in the control group by setting the treatment variable to false for all users and calculating the average y. The average treatment effect is the difference in the average y between the two potential outcomes.

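As an illustration, here is a minimal sketch of that simulated A/B test in Python, assuming a hypothetical users DataFrame and using scikit-learn’s linear regression; it is not ClearBrain’s production implementation:

```python
# A minimal sketch of the regression-based simulation described above.
# Column names are hypothetical, and this is an illustration rather than
# ClearBrain's production code.
import pandas as pd
from sklearn.linear_model import LinearRegression


def estimate_ate(df, treatment, outcome, confounders):
    """Average treatment effect of `treatment` on `outcome`, adjusting for confounders."""
    X = df[[treatment] + confounders].copy()
    # Interaction terms let the treatment effect differ across users.
    for c in confounders:
        X[f"{treatment}_x_{c}"] = X[treatment] * X[c]
    model = LinearRegression().fit(X, df[outcome])

    def simulated_mean(value):
        # Potential outcome: force every user into treatment (1) or control (0).
        sim = X.copy()
        sim[treatment] = value
        for c in confounders:
            sim[f"{treatment}_x_{c}"] = value * sim[c]
        return model.predict(sim).mean()

    return simulated_mean(1) - simulated_mean(0)


# Example (hypothetical column names):
# ate = estimate_ate(users, "viewed_cooking_tips", "subscribed", ["culinary_skill"])
```
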
ClearBrain's Approach

Observational studies are usually conducted manually by a statistician, who uses her domain knowledge to select the appropriate confounding variables for the study she is creating. This process then has to be repeated by hand for each action whose causal effect you want to measure.

At ClearBrain, our goal was to automate this process and build the first large-scale causal inference engine, allowing growth teams to measure the causal effect of every action. There are two main difficulties in accomplishing this goal - automating the selection of confounding variables, and engineering a system that can calculate the causal effect of hundreds of different actions on any selected KPI within minutes.

ClearBrain automatically collects all of your customers’ data via a tag manager or data warehouse - meaning there are thousands of possible confounding variables for our platform to choose from. Special care is taken to ensure only pre-treatment data is used when selecting confounding variables. To reduce the number of terms in our regression, we use Principal Component Analysis (PCA) to generate the confounding variables. PCA is a dimensionality reduction algorithm that reduces a large number of potential features down to a small number of synthetic features, while capturing as much of the variability in the data as possible. Each of these synthetic features (called “components”) is a linear combination of the original features and is uncorrelated with the other components. The result of running PCA is that we have reduced the large number of potential confounding variables down to a small number of confounders, making the regression tractable.

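As a rough sketch of this step (the component count and column names below are illustrative, not ClearBrain’s actual configuration):

```python
# A minimal sketch of compressing thousands of pre-treatment attributes into a
# handful of uncorrelated components with PCA. The component count is illustrative.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler


def reduce_confounders(pre_treatment, n_components=10):
    """Return a small set of PCA components to use as confounders in the regression."""
    scaled = StandardScaler().fit_transform(pre_treatment)  # PCA is sensitive to feature scale
    components = PCA(n_components=n_components).fit_transform(scaled)
    columns = [f"confounder_pc_{i}" for i in range(n_components)]
    return pd.DataFrame(components, index=pre_treatment.index, columns=columns)
```
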
Even after reducing the number of confounding variables, computing the treatment effect of each action is still prohibitively expensive, especially for some of our customers with tens of millions of users. ClearBrain’s patent-pending system uses Apache Spark, a distributed computing framework, to parallelize much of the required computation, which allows us to compute treatment effects efficiently. Even so, calculating the treatment effects required us to modify some of Spark’s internal code, which we hope to get added to a future Spark release.

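As a rough illustration of that fan-out (not the patent-pending system itself), the per-action estimates can be distributed with PySpark, reusing the estimate_ate sketch from above:

```python
# A minimal sketch of parallelizing per-action effect estimation with PySpark.
# The data and action names are synthetic, and estimate_ate is the illustrative
# helper defined earlier - this is not ClearBrain's patent-pending system.
import numpy as np
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("causal-effects").getOrCreate()

# Hypothetical user table: outcome, candidate actions, and a PCA-reduced confounder.
rng = np.random.default_rng(0)
n = 1_000
users = pd.DataFrame({
    "subscribed": rng.integers(0, 2, n),
    "viewed_cooking_tips": rng.integers(0, 2, n),
    "opened_welcome_email": rng.integers(0, 2, n),
    "confounder_pc_0": rng.normal(size=n),
})
actions = ["viewed_cooking_tips", "opened_welcome_email"]

# Broadcast the user table once, then compute one treatment effect per action in parallel.
users_bc = spark.sparkContext.broadcast(users)
effects = (
    spark.sparkContext
    .parallelize(actions)
    .map(lambda a: (a, estimate_ate(users_bc.value, a, "subscribed", ["confounder_pc_0"])))
    .collect()
)
print(effects)
```
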
Using Causal Analytics

Now that you know how to estimate the causal effect of all of your events, what can you do with this information? At ClearBrain, we have seen our customers use causal analytics in three primary ways:

  1. Prioritization of A/B Tests: While an A/B test is still the gold standard for measuring causal effects, a website or mobile app with thousands of different pages and actions affords a near-infinite number of items that can be A/B tested. The actions that show high causal lift in ClearBrain can give an indication of which items should be tested first. For example, we have multiple customers with subscription businesses where viewing the “cost savings” page was one of their top causal events. This is an indicator that the “cost savings” page is a key lever in convincing users to subscribe, so additional effort should be made to A/B test the content of the “cost savings” page and make it even more effective.
  2. Incentivize Users to Perform the “Next Best Action”: Understanding that the “cost savings” page is highly causal towards subscription, you can in turn display that page more prominently on your website and launch targeted campaigns that incentivize more users to perform high-causal actions, driving incremental conversions.
  3. Estimate Causal Effect for Items that Can’t be A/B Tested: Not everything can be A/B tested. For items that can’t, ClearBrain’s causal lift can retroactively estimate how an A/B test would have performed. One of the most common things a company is unable to test is platform type - e.g. does using iOS cause more conversions than the web? Testing this would require preventing certain customers from downloading the iOS app and comparing their conversion rate to that of those who do. Using ClearBrain instead, which simulates such an A/B test on historical data, our customers can assess the causal impact of cross-channel engagement.

We are excited about the future of causal analytics at ClearBrain. If you’re interested in learning more about how causal analytics can change the way you approach growth, sign up now!