Applying Difference-in-Differences Analysis in Gaming

In every game, we carry out campaigns, adjust level difficulty, and modify the user interface (UI) constantly to provide the optimal gaming experience for players. In turn, these modifications increase player retention and game revenue. All of these actions are powered by creativity and data, but at Happy Elements, the emphasis is on data. We therefore observe and evaluate the impact of each action by running randomized A/B tests.

Even though we randomize users in A/B tests, pre-existing differences between groups can remain. For example, individual revenue in a casual game can be sparse and highly dispersed, so randomly assigned Treatment and Control groups may show a revenue gap even before the A/B test begins. To remove this pre-existing gap from our estimate of the average treatment effect (ATE), we resort to Difference-in-Differences (DiD). Here's what this looks like in math:
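The formula itself did not survive in this copy; the standard two-group, two-period DiD estimator, which subtracts the pre-existing gap from the post-treatment gap, is:

```latex
\widehat{ATE}_{DiD}
  = \left(\bar{Y}^{\,T}_{\text{post}} - \bar{Y}^{\,T}_{\text{pre}}\right)
  - \left(\bar{Y}^{\,C}_{\text{post}} - \bar{Y}^{\,C}_{\text{pre}}\right)
```

where $\bar{Y}^{T}$ and $\bar{Y}^{C}$ are the mean outcomes (e.g. per-user revenue) of the Treatment and Control groups, before and after the change ships.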

When we do DiD analysis, we assume that the pre-experiment trends of the two groups are parallel; otherwise, the resulting ATE estimate would be biased. We therefore need to examine this assumption before drawing conclusions.
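One minimal way to examine the parallel-trends assumption is to fit a linear trend to each group's pre-experiment daily means and compare the slopes. A sketch with NumPy (the data, function name, and tolerance are illustrative, not our production check):

```python
import numpy as np

def pretrend_slopes(days, treat_means, control_means):
    """Fit a linear trend to each group's pre-experiment daily means.

    Returns the two slopes; roughly equal slopes support the
    parallel-trends assumption behind DiD.
    """
    slope_t = np.polyfit(days, treat_means, 1)[0]
    slope_c = np.polyfit(days, control_means, 1)[0]
    return slope_t, slope_c

# Illustrative pre-experiment data: both groups trend up by ~0.5/day,
# but Treatment sits at a higher level -- the pre-existing gap DiD removes.
days = np.arange(7)
treat = 10.0 + 0.5 * days
control = 8.0 + 0.5 * days

slope_t, slope_c = pretrend_slopes(days, treat, control)
print(abs(slope_t - slope_c) < 0.05)  # prints True: trends look parallel
```

In practice one would also plot the two series and, with enough pre-period data, run a regression with group-by-time interactions rather than rely on a single slope comparison.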

Furthermore, the robustness of our conclusions also matters. A simple method is to first run the A/B test on a smaller scale of users; once we have a preliminary conclusion, we enlarge the test population and repeat the A/B test to check the consistency of the result.
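The small-then-large check above can be sketched as computing the same DiD estimate on a subsample first and then on the full sample, and comparing the two. The synthetic data below (baseline gap, shared drift, and a true effect of +2.0) is made up for illustration:

```python
import numpy as np

def did_estimate(t_pre, t_post, c_pre, c_post):
    """Two-group, two-period DiD: post-minus-pre change in Treatment
    minus post-minus-pre change in Control."""
    return (t_post.mean() - t_pre.mean()) - (c_post.mean() - c_pre.mean())

rng = np.random.default_rng(0)
n = 10_000
# Treatment starts from a higher baseline (the pre-existing gap) and
# receives a true effect of +2.0 in the post period; both groups share
# a common drift of +0.5, which DiD cancels out.
t_pre  = rng.normal(10.0, 3.0, n)
t_post = rng.normal(12.5, 3.0, n)   # 10.0 + drift 0.5 + effect 2.0
c_pre  = rng.normal(8.0, 3.0, n)
c_post = rng.normal(8.5, 3.0, n)    # 8.0 + drift 0.5

small = did_estimate(t_pre[:500], t_post[:500], c_pre[:500], c_post[:500])
full  = did_estimate(t_pre, t_post, c_pre, c_post)
print(small, full)  # both should land near the true effect of 2.0
```

If the small-sample and full-sample estimates disagree badly, that is a signal to revisit the randomization or the parallel-trends assumption before trusting the result.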

