At Halfbrick Studios the “Rebel Alliance” team was working on Fruit Ninja Fight. They
had validated their Problem/Market fit and were now in the Product Validation
phase. Following a company-wide playtest, they had refined the core gameplay
and were ready to start an alpha trial with external players.
These were the experiments they planned to release into
the alpha over six weeks:
- Baseline version: just the basic game, no progression
- Improved tutorial
- UI/UX tweaks
- First trial of a progression system
- Second trial of a different progression system
- Third trial of a different progression system
Here is how those experiments look through the lens of a Total
Retention report (above):
- End of Week 2: Improved tutorial. A slight improvement over the baseline version.
- End of Week 3: UI/UX tweaks. A solid increase in retained users.
- End of Week 4: First trial of a progression system. A solid increase again; the progression system is working.
- End of Week 5: Second trial of a different progression system. A great improvement; it seems the second progression system is the best.
- End of Week 6: Third trial of a different progression system. Some improvement, which confirms the second progression system was the best.
Now let us look at those same experiments when we add Cohort
Size to the Retention report. By cohort size I mean how many players they added to
the alpha test each week.
As you can see, they added more and more players
each week as the trial went on.
What does this mean for the Total Retention report? It's
flawed, and near useless for judging the outcomes of experiments. This is what The
Lean Startup describes as a vanity metric.
It will always keep increasing, and boosting the cohort
size changes the apparent trend, so we can't see what outcome each experiment
actually achieved.
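To make the distortion concrete, here is a minimal sketch with entirely made-up cohort sizes and a flat 40% retention rate. The total retained count climbs every single week purely because the cohorts get bigger, not because the game improved.

```python
# Hypothetical alpha cohorts: more players added each week, but the
# underlying retention rate never improves (flat 40%). Numbers are invented.
cohort_sizes = [100, 150, 250, 400, 700, 1200]  # players added per week
retention_rate = 0.40                           # fraction still playing a week later

total_retained = 0
for week, size in enumerate(cohort_sizes, start=1):
    total_retained += int(size * retention_rate)
    print(f"Week {week}: cohort={size:4d}  total retained so far={total_retained}")

# The total climbs every week (40, 100, 200, 360, 640, 1120) even though
# nothing about the game changed -- the classic vanity-metric trap.
```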
In the world of games, relying on this report alone is a death
sentence. Unless you work out what is keeping players in the game, you need to
keep adding more and more players; the cost of finding those players keeps
increasing, and very soon the game becomes unprofitable.
Now let us look at those same experiments through the lens
of Cohort Analysis.
On the X axis you can see the percentage of people retained
from each cohort. Because retention is expressed as a percentage of each cohort's
own size, varying cohort sizes no longer distort the comparison.
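As a rough sketch of that calculation (all cohort sizes and retained counts below are invented purely to mirror the shape of the story, not real Halfbrick data), dividing each week's retained players by the size of that week's own cohort gives percentages that can be compared directly, whatever the cohort size:

```python
# Hypothetical per-cohort data: (players added that week, players still
# playing one week later). Numbers are illustrative only.
cohorts = {
    "Week 1 - baseline":              (100, 20),
    "Week 2 - improved tutorial":     (150, 30),
    "Week 3 - UI/UX tweaks":          (250, 50),
    "Week 4 - progression system v1": (400, 120),
    "Week 5 - progression system v2": (700, 210),
    "Week 6 - progression system v3": (1200, 480),
}

for name, (added, retained) in cohorts.items():
    pct = 100 * retained / added  # normalise by the cohort's own size
    print(f"{name}: {pct:.0f}% retained")
```

In this sketch the first three cohorts all sit around 20%, the first two progression systems around 30%, and the third progression system pulls ahead at 40% — the same pattern the report above describes.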
You can see that the baseline version, the version with the improved
tutorial, and the version with UI/UX tweaks all perform about the same, meaning the
tutorial offered NO improvement and the UI/UX tweaks were a waste of time.
The first two progression systems show a meaningful jump
over the first three cohorts, but they performed similarly to each other.
Cohort 6, the third progression system to be trialled, so far
appears to be the clear winner out of the three progression systems.
Cohort Analysis shows us the true story of how each of our
versions is working out. We learnt to avoid vanity metrics and to base our
validated learning on Cohort Analysis.
Halfbrick Studios retains all rights over Fruit Ninja Fight
and all associated IP.