Friday, October 18, 2019

Mindset for lean start-up success



The Lean start-up achieves success through experimentation, and experimentation involves following a set of processes. With an output-focused mindset, those processes seem cumbersome, over the top and a waste of time. When you think you already know what will lead to your desired outcome, experimentation seems wasteful. Just going through the motions of running an experiment won’t aid your learning and will only slow you down; so the “experimentation is wasteful” thinking becomes a self-fulfilling prophecy. This is something we need to flip on its head for the Lean Start-up to have a chance of success. It is with a “Learning will lead to Outcomes” mindset that the value of those processes becomes clear.





We can start by promoting belief in the idea that experimentation leads to learning, learning leads to outcomes, and outcomes are more valuable than outputs. As people’s belief in experimentation increases, they tend to practise experimentation more rigorously and more frequently. That moves them towards perfect practice, which leads to their experiments generating more knowledge. As they gather more knowledge from their experiments, they gain clarity around which outputs are most likely to lead to outcomes. With that clarity they can focus on a smaller set of outputs with a good chance of producing a positive outcome. The effort saved can be put towards other outputs with a good chance of success, or towards entirely different endeavours. Either way, those who apply experimentation appropriately learn more and, importantly, deliver improved outcomes.

Tuesday, October 1, 2019

Feedback Dojo

Doing the basics brilliantly is a foundation from which organisations can achieve greatness. Doing the basics brilliantly comes from lots of little, almost insignificant things, done really well, each and every day. We are talking about behaviour, the ingrained behaviour of all of our staff. Some of this behaviour can be established through sharing a vision, holding shared values, establishing a sense of purpose, and clear frameworks and processes, along with understanding how they contribute to the organisation. Yet there is still a large amount of behaviour that can only be refined in a nuanced, ongoing, day-by-day, bit-by-bit approach, by those close to the people in question. Feedback enables us to bridge that gap and steer our people towards doing the basics brilliantly.
To achieve positive changes in behaviour, feedback needs to come from a foundation of trust, be delivered at the right time, and in a private space. It is also crucial that it is delivered in a neutral way, with a focus on behaviour instead of opinion. With so many aspects of this skill required for it to be applied successfully, many people struggle to provide effective feedback.
The Feedback Dojo is proven to quickly develop the ability of participants to deliver effective feedback. That feedback leads to positive changes in behaviour in their peers, colleagues and direct reports.


Tuesday, September 24, 2019

How to dramatically improve your product


Let us imagine… you have found your spark, you have explored the market space and found a problem worth solving, and you now even have part of the product that may solve that problem. Your objective is to make the product the best thing for solving that problem. You have been working on this for months, maybe even a year or more. The product passes all of your automated tests, but how do you know customers will actually be able to use it to solve their problem? When you think about how your product works, you view it as a clear path to success, similar to the image below.



You enter some information, tweak this, change that, press a button and taa-dah, the problem is solved! Unfortunately, we are often blinded by our closeness to the product. What our users often see is similar to the image below. A bewildering array of choices, with no clear path forward.



How can we show them the path? This is where Observational Testing comes in. Observational Testing lets us understand the pains of our users, so that we can remove those pains and improve our product.

On Metacritic.com, Half-Life 2 is the highest rated PC game of all time; the original Half-Life comes in at #4. Both games were made by Valve Corporation. One of the key practices that Valve used to take their games from mediocre to great is Observational Testing; they call it Play Testing. Valve would bring in volunteers to sit and play their partially finished game, while members of the team observed them and took notes. The team was not allowed to say anything to the player.

Quoting from Ken Birdwell a senior designer there: “Nothing is quite so humbling as being forced to watch in silence as some poor play-tester stumbles around your level for 20 minutes, unable to figure out the "obvious" answer that you now realize is completely arbitrary and impossible to figure out.” 
A two-hour play test would result in 100 or so "action items" — things that needed to be fixed, changed, added, or deleted from the game. That is a phenomenal amount of feedback.



I personally ran many observational tests when developing the prototype games “Planty”, “Bargain Variety Store” and “Siege Breakers” at Halfbrick Studios. I can tell you that observational tests are easy to run, horribly painful, and immensely beneficial all at once. That hair-pulling frustration of the user seeing a forest of trees while you see a clear path really pushes you to improve your product.

Running an Observational Test is straightforward:
  1. Bring in a customer or potential customer. This bit is hard.
  2. Provide them with an objective to achieve in the test, either verbally or written out. This could be a hypothesis you want to test.
  3. While they attempt to achieve the objective, video record over their shoulder (a smartphone will do just fine).
  4. Observe what they do and don’t do, while not saying anything or offering any guidance. This is the hard part.
  5. Afterwards, ask what they were thinking at key steps (i.e. when they got stuck, when they achieved success).
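The steps above need no tooling beyond a phone, but it helps to structure the notes you take. As a minimal sketch (the objective, field names and notes here are all hypothetical, not from an actual Halfbrick session):

```python
# Minimal in-memory log for one observational test session.
# All content below is illustrative, not real session data.
session = {
    "objective": "Add three items to the cart and check out",
    "observations": [],   # what the user did or failed to do, in order
    "action_items": [],   # fixes/changes identified after the session
}

def observe(note):
    """Record one silent observation during the session."""
    session["observations"].append(note)

observe("Hovered over the menu for 30s, never found the 'Checkout' button")
observe("Tried to edit quantity by clicking the product image")

# Afterwards, turn each observation into a concrete action item,
# mirroring Valve's list of things to fix, change, add or delete.
session["action_items"] = [f"Investigate: {o}" for o in session["observations"]]
print(len(session["action_items"]))  # 2
```

Keeping observations and action items separate matters: during the test you only record what happened; deciding what to change is a team activity afterwards.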
Observational Testing is how you can dramatically improve your product. It brings three key benefits:
  1. Challenging your design approach. Are we tackling this problem in the right way?
  2. Validating hypotheses. As mentioned, the objective you provide at the start could determine whether they will use the product in the way you anticipated. Can they understand the information provided? And so on.
  3. Dramatically increasing usability. This is moving them from the forest to the path, and it is the most evident benefit when people start to use Observational Testing.


Halfbrick Studios maintains full Copyright over Siege Breakers, Planty and Bargain Variety Store.

Photo Reference: https://www.flickr.com/photos/eggrole/7524458398

Thursday, June 20, 2019

High Performance Agile Team Training Available

Get training in the skills that lead to high-performance teams; skills that attendees will use every week. Basic agile training gives teams a good head-start, and a significant boost in performance is often seen. However, that performance often stagnates well before high performance is achieved. How can you get your team to the next level? This training course addresses that gap. Attendees will build upon their foundation-level agile training and be taught the skills that regularly lead to high-performance teams; skills that are easy to replicate in their own teams. Attendees will finish the course ready to add value to their team.

Sustained high performance for their team will then be achieved through collaboration that harnesses the full strength of the team, clear customer-centric goals, and amplified delivery capability. The content and aims of this course closely align with the Heart of Agile (heartofagile.com) from Alistair Cockburn. The course is crammed full of interactive exercises; working in pairs or small groups lets you experience each skill first-hand. Only the briefest presentation material is used to introduce the exercises; this course is heavily skills-focused.

Andrew Rusling will deliver the course, bringing with him his experience of training over 400 people in agile, Lean, Scrum and Kanban, as well as transforming five companies. Andrew has the passion, experience and capability to provide an engaging and thought-provoking experience.

Attendees will learn and experience:

  1. Creating a Team Charter with Vision Statement, Values, Working Agreement, Decision Making Strategy and Proactive Conflict Management Strategy. When they do this with their teams it will provide a foundation for their collaboration, reflection and customer centricity.
  2. Collaborative approaches to: ideation, design, problem solving, decision making, & planning.
  3. Easy to repeat skills for coaching and developing their team members. 
  4. Customer interviews - how to understand the world of their customers.
  5. Experiment design, and execution.
  6. Verifying that User Stories will deliver value for the customer.
  7. Measuring Outcomes (customer behaviour) over Outputs (delivered product).
  8. Observational testing - how to dramatically improve the customer’s experience.
  9. Creating continuous improvement actions that actually get completed.
  10. Probabilistic forecasting for predictable planning.
  11. Going faster by delivering less of the scope than we think we need.
  12. Visualising the flow of work, removing waste and limiting work in progress to expedite delivery.

If you are located in South East Queensland, Australia and interested in this course, please contact me: andrewrusling@hotmail.com

Wednesday, January 30, 2019

Avoiding vanity metrics with Cohort Analysis



At Halfbrick Studios the “Rebel Alliance” team was working on Fruit Ninja Fight. They had validated their Problem/Market fit and were now in the Product Validation phase. Following a company-wide play test, they had refined the core game play and were ready to start an alpha trial with external players.

These were the experiments they planned to release into the alpha over six weeks:
  1. Baseline version, just basic game, no progression
  2. Improved tutorial
  3. UI/UX tweaks
  4. First trial of progression system
  5. Second trial of a different progression system
  6. Third trial of a different progression system




Looking at their experiments through the lens of a Total Retention report (above):
  • End of Week 2: Improved tutorial; we saw a slight improvement over the base version.
  • End of Week 3: UI/UX tweaks produced a solid increase in retained users.
  • End of Week 4: First trial of the progression system; a solid increase again, so the progression system is working.
  • End of Week 5: Second trial of a different progression system; a great improvement, so the second progression system seems to be the best.
  • End of Week 6: Third trial of a different progression system; some improvement, confirming the second progression system was the best.



Now let us look at those same experiments when we add Cohort Size to the Retention report. By cohort size I mean how many players they added to the alpha test each week.

As you can see, they added more and more players each week as the trial went along.
What does this mean for the Total Retention report? It’s flawed: near useless for judging the outcomes of experiments. This is what the Lean Start-up describes as a vanity metric.

Total retention will always keep increasing, and boosting the cohort size changes the apparent trend, so we cannot see what outcome each experiment actually achieved.

In the world of games, relying on this report alone is a death sentence. Unless you work out what is keeping players in the game, you need to keep adding more and more players; the cost of finding those players keeps increasing, and very soon the game becomes unprofitable.
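A small sketch makes the vanity-metric effect concrete. The cohort sizes and the flat 40% retention rate below are made-up numbers, not Fruit Ninja Fight data; the point is that total retention climbs every week even though nothing about the game has improved:

```python
# Hypothetical weekly cohort sizes that grow as more players are recruited.
cohort_sizes = [100, 150, 250, 400, 700, 1200]
retention_rate = 0.40  # every cohort retains the same 40% -- no experiment helped

total_retained = []
running = 0
for size in cohort_sizes:
    running += int(size * retention_rate)  # retained players accumulate
    total_retained.append(running)

print(total_retained)  # strictly increasing, despite identical retention
```

Read off a chart, that monotone climb looks like six weeks of successful experiments; in reality it only measures recruitment spend.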



Now let us look at those same experiments through the lens of Cohort Analysis.

The report shows, for each cohort, the percentage of people retained. This automatically rules out any influence from varying cohort size.

You can see that the baseline version, the version with the improved tutorial, and the version with UI/UX tweaks all perform about the same, meaning the tutorial offered NO improvement and the UI/UX tweaks were a waste of time.

The first two progression systems show a meaningful jump over the first three cohorts, but performed similarly to each other.

Cohort 6, the third progression system to be trialled, so far appears to be the clear winner of the three progression systems.
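The normalisation behind Cohort Analysis is simple: divide each cohort's retained players by that cohort's own size. The raw numbers below are hypothetical (chosen only to mirror the story above, not Fruit Ninja Fight's actual data), but they show how cohorts of very different sizes become directly comparable:

```python
# Hypothetical (cohort_size, retained_players) pairs for the six weekly cohorts.
cohorts = [(100, 40), (150, 61), (250, 103), (400, 208), (700, 364), (1200, 756)]

# Per-cohort retention as a percentage of that cohort's own size.
retention_pct = [round(100 * retained / size, 1) for size, retained in cohorts]
print(retention_pct)  # [40.0, 40.7, 41.2, 52.0, 52.0, 63.0]
```

With the made-up figures above, the first three cohorts sit around 40-41% (tutorial and UI/UX tweaks changed nothing), cohorts 4 and 5 jump to 52% (the first two progression systems, similar to each other), and cohort 6 leads at 63%: exactly the story the Total Retention report hid.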

Cohort Analysis shows us the true story of how each of our versions is working out. We learnt to avoid vanity metrics and to base our validated learning on Cohort Analysis.

Halfbrick Studios retains all rights over Fruit Ninja Fight and all associated IP