Tuesday, October 1, 2019

Feedback Dojo

Doing the basics brilliantly is a foundation from which organisations can achieve greatness. It comes from lots of little, almost insignificant things, done really well, each and every day. We are talking about behaviour, the ingrained behaviour of all of our staff. Some of this behaviour can be established through sharing a vision, holding shared values, establishing a sense of purpose, and providing clear frameworks and processes, along with understanding how they contribute to the organisation. Yet there is still a large amount of behaviour that can only be refined in a nuanced, ongoing, day-by-day, bit-by-bit approach, by those close to the people in question. Feedback enables us to bridge that gap and steer our people towards doing the basics brilliantly.
To achieve positive changes in behaviour, feedback needs to come from a foundation of trust and be delivered at the right time, in a private space. It is also crucial that it is delivered in a neutral way, with a focus on behaviour instead of opinion. With so many aspects required for feedback to be applied successfully, many people struggle to provide it effectively.
The Feedback Dojo is proven to quickly develop the ability of participants to deliver effective feedback. That feedback leads to positive changes in behaviour in their peers, colleagues and direct reports.



Tuesday, September 24, 2019

How to dramatically improve your product


Let us imagine… you have found your spark, you have explored the market space and found a problem worth solving, and you now even have part of the product that may solve that problem. Your objective is to make the product the best thing for solving that problem. You have been working on this for months, maybe even a year or more. The product passes all of your automated tests, but how do you know customers will actually be able to use it to solve their problem? When you think about how your product works, you view it as a clear path to success, similar to the image below.



You enter some information, tweak this, change that, press a button and taa-dah, the problem is solved! Unfortunately, we are often blinded by our closeness to the product. What our users often see is similar to the image below. A bewildering array of choices, with no clear path forward.



How can we show them the path? This is where Observational Testing comes in. Observational Testing lets us understand the pains of our users, so that we can remove those pains and improve our product.

On Metacritic.com, Half-Life 2 is the highest rated PC game of all time; Half-Life 1 comes in at #4. Both games were made by Valve Corporation. One of the key practices that Valve used to take their games from mediocre to great is Observational Testing, which they call Play Testing. Valve would bring in volunteers to sit and play their partially finished game, while members of the team observed them and took notes. The team was not allowed to say anything to the player.

Quoting from Ken Birdwell a senior designer there: “Nothing is quite so humbling as being forced to watch in silence as some poor play-tester stumbles around your level for 20 minutes, unable to figure out the "obvious" answer that you now realize is completely arbitrary and impossible to figure out.” 
A two-hour play test would result in 100 or so "action items" — things that needed to be fixed, changed, added, or deleted from the game. That is a phenomenal amount of feedback.



I personally ran many observational tests when developing the prototype games “Planty”, “Bargain Variety Store” & “Siege Breakers” at Halfbrick Studios. I can tell you that observational tests are easy to run, horribly painful and immensely beneficial all at once. That hair-pulling frustration of the user seeing a forest of trees while you see a clear path really pushes you to improve your product.

Running an Observational Test is straightforward:
  1. Bring in a customer or potential customer. This bit is hard.
  2. Provide them an objective to achieve in the test, either verbally or written out. This could be a hypothesis you want to test.
  3. While they attempt to achieve the objective, video record over their shoulder (a smartphone will do just fine).
  4. Observe what they do and don’t do, while not saying anything or offering any guidance. This is the hard part.
  5. Afterwards, ask what they were thinking at key steps (e.g. when they got stuck, when they achieved success).
Observational Testing is how you can dramatically improve your product. It brings three key benefits:
  1. Challenge your design approach. Are we tackling this problem in the right way?
  2. Validate hypotheses. As mentioned, the objective you provide at the start can be framed to test whether they will use the product in the way you anticipated, whether they can understand the information provided, etc.
  3. Dramatically increase usability. This is moving them from the forest to the path, and is the most evident benefit when people start to use Observational Testing.
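The observation notes from steps 3–5 can be as lightweight as timestamped entries that you tally into action items afterwards. Here is a minimal sketch in Python; the categories and notes are invented for illustration, not from a real session:

```python
from collections import Counter

# Hypothetical observation log from one session: (time, category, note).
# The categories and notes here are invented for illustration.
observations = [
    ("02:10", "confused", "hovered over menu, could not find the export button"),
    ("05:45", "stuck", "re-entered the same data three times"),
    ("09:30", "success", "completed the first task without help"),
    ("14:05", "stuck", "could not find the undo action"),
]

# Tally by category to see where the user struggled most.
tally = Counter(category for _, category, _ in observations)

# Everything that was not a success becomes a candidate action item.
action_items = [note for _, category, note in observations if category != "success"]
```

Tallying a two-hour session this way quickly surfaces the kind of 100-item action list that Valve describes.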


Halfbrick Studios maintains full Copyright over Siege Breakers, Planty and Bargain Variety Store.

Photo Reference: https://www.flickr.com/photos/eggrole/7524458398

Thursday, June 20, 2019

High Performance Agile Team Training Available

Get training in the skills that lead to high-performance teams; skills that attendees will use every week. Basic agile training gives teams a good head-start, and a significant boost in performance is often seen. However, that performance often stagnates well before high performance is achieved. How can you get your team to the next level? This training course addresses that gap. Attendees will build upon their foundation-level agile training and be taught the skills that regularly lead to high-performance teams; skills that are easy to replicate in their own team. Attendees will finish the course ready to add value to their team.

Sustained high performance for their team will then be achieved through collaboration that harnesses the full strength of the team, clear customer-centric goals and amplified delivery capability. The content and aims of this course closely align with the Heart of Agile (heartofagile.com) from Alistair Cockburn. The course is crammed full of interactive exercises; working in pairs or small groups lets you experience each skill first-hand. The briefest of presentation material is used to introduce the exercises; this course is heavily skills focused.

Andrew Rusling will deliver the course, bringing with him his experience of training over 400 people in agile, Lean, Scrum and Kanban, as well as transforming five companies. Andrew has the passion, experience and capability to provide an engaging and thought-provoking experience.

Attendees will learn and experience:

  1. Creating a Team Charter with Vision Statement, Values, Working Agreement, Decision Making Strategy and Proactive Conflict Management Strategy. When they do this with their teams it provides a foundation for their collaboration, reflection and customer centricity.
  2. Collaborative approaches to: ideation, design, problem solving, decision making, & planning.
  3. Easy to repeat skills for coaching and developing their team members. 
  4. Customer interviews - how to understand the world of their customers.
  5. Experiment design, and execution.
  6. Verifying User Stories will deliver value for the customer.
  7. Measuring Outcomes (customer behaviour) over Outputs (delivered product).
  8. Observational testing - how to dramatically improve the customer’s experience.
  9. Creating continuous improvement actions that actually get completed.
  10. Probabilistic forecasting for predictable planning.
  11. Going faster by delivering less of the scope than we think we need.
  12. Visualise flow of work, removing waste & limiting work in progress to expedite delivery.

If you are located in South East Queensland, Australia and interested in this course, please contact me: andrewrusling@hotmail.com

Wednesday, January 30, 2019

Avoiding vanity metrics with Cohort Analysis



At Halfbrick Studios the “Rebel Alliance” team was working on Fruit Ninja Fight. They had validated their Problem/Market fit and were now in the Product Validation phase. Following a company-wide play test, they had refined the core game play and were ready to start an alpha trial with external players.

These were the experiments they planned to release into the alpha over six weeks:
  1. Baseline version, just basic game, no progression
  2. Improved tutorial
  3. UI/UX tweaks
  4. First trial of progression system
  5. Second trial of a different progression system
  6. Third trial of a different progression system




Looking at their experiments through the lens of a Total Retention report (above).
  • End of Week 2: Improved tutorial; we saw a slight improvement over the base version.
  • End of Week 3: UI/UX tweaks produced a solid increase in retained users.
  • End of Week 4: First trial of progression system; a solid increase again, the progression system is working.
  • End of Week 5: Second trial of a different progression system; a great improvement, it seems like the second progression system is the best.
  • End of Week 6: Third trial of a different progression system; some improvement, confirming the second progression system was the best.



Now let us look at those same experiments when we add Cohort Size to the Retention report. By cohort size I mean how many players they added to the alpha test each week.

As you can see they started to add more and more players each week as they went along.
What does this mean for the Total Retention report? It’s flawed; near useless for judging the outcomes of experiments. This is what the Lean Start-up describes as a vanity metric.

It will always keep increasing, and boosting the cohort size changes the apparent trend, so we cannot see what outcome each experiment achieved.

In the world of games, relying on this report alone is a death sentence. Unless you work out what is keeping players in the game, you need to keep adding more and more players; the cost of finding these players keeps increasing, and very soon the game becomes unprofitable.



Now let us look at those same experiments through the lens of Cohort Analysis.

On the Y axis you can see the percentage of people retained from each cohort. This automatically removes any influence from varying cohort sizes.

You can see that the baseline version, the version with the improved tutorial and the version with UI/UX tweaks all perform about the same; meaning the tutorial offered NO improvement and the UI/UX tweaks were a waste of time.

The first two progression systems show a meaningful jump over the first three cohorts, but performed similarly to each other.

Cohort 6, the third progression system to be trialled, so far appears to be the clear winner out of the three progression systems.

Cohort Analysis shows us the true story of how each of our versions is working out. We learnt to avoid vanity metrics and to focus our validated learning through Cohort Analysis.
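The difference between the two reports can be sketched in a few lines of Python. The weekly numbers below are invented to mirror the shape of the story above; they are not actual Fruit Ninja Fight data:

```python
# Hypothetical alpha data: players added each week (cohort size) and how
# many from each cohort are still playing. All numbers are invented.
cohorts = {
    1: {"added": 100, "still_playing": 20},   # baseline
    2: {"added": 120, "still_playing": 26},   # improved tutorial
    3: {"added": 150, "still_playing": 33},   # UI/UX tweaks
    4: {"added": 200, "still_playing": 56},   # progression system 1
    5: {"added": 300, "still_playing": 84},   # progression system 2
    6: {"added": 500, "still_playing": 175},  # progression system 3
}

# Vanity metric: total retained players keeps climbing as long as we pour
# in bigger cohorts, regardless of whether the product actually improved.
total_retained = sum(c["still_playing"] for c in cohorts.values())

# Cohort analysis: retention as a percentage of each cohort's own size,
# which removes the influence of varying cohort sizes.
retention_pct = {
    week: round(100 * c["still_playing"] / c["added"], 1)
    for week, c in cohorts.items()
}
```

With these numbers the percentages show weeks 1 to 3 roughly flat (20–22%), weeks 4 and 5 level with each other at 28%, and week 6 clearly ahead at 35%; the same pattern the Cohort Analysis reveals, while the raw total of retained players just keeps growing.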

Halfbrick Studios retains all rights over Fruit Ninja Fight and all associated IP

Monday, December 3, 2018

High performance teams


Does your team have a reasonably stable throughput or velocity? Have they improved and optimised their way to what you would consider their peak velocity or throughput? Would you say that teams at their peak throughput are high-performing teams? They sure appear to be, relative to other teams that are less mature, have an unstable velocity or have not reached their peak velocity. Unfortunately, the assumption in all of that is that “velocity equates to performance”.





Looking at this race car, if we measured it on the horsepower of its engine would that equate to the outcome of a race? Of course not. It would be a contributing factor for sure, but so much else goes into deciding what place this car will finish in a race: fuel, suspension, transmission, tyre choice, on and on, and of course the driver.

It is the same for our teams: while velocity is a good measure of horsepower, it is a poor predictor of where the team will finish in the corporate race.



Velocity measures our outputs, such as deploying live features, updates or fixes.
For our outputs to be valuable they must produce a positive outcome. That is, they must change customer behaviour; for example, customers use our product for longer, customers write positive reviews, or we acquire new profitable customers. A feature that doesn’t change customer behaviour generally has no value.

For our outcomes to be valuable they must produce a positive impact. That is, they must increase revenue, increase profit, increase reputation or, for charities, deliver greater social benefit. A change in customer behaviour that doesn’t produce a positive impact for the company generally has no value.
While it is valuable to produce outputs, it is much more valuable to produce outcomes, as these have a much closer correlation to achieving impact; which is what we are really here for.

Hence, I propose that a team that is regularly delivering positive outcomes is a high-performing team. When I think back on all of the great teams that I have been a part of, we were regularly achieving positive behavioural changes in our customers.

Tuesday, October 30, 2018

Illusion of Choice


Let’s imagine that you have accepted an invite to hang out at my place. Creepy I know. Anyway, we are chatting and realise that it would be good to have some music playing. I say “pick an album from my collection, anything you like…”



Is there an album in there that you would choose to listen to? Is it what you really wanted to listen to? This is the illusion of choice.

When I do this as a presentation, roughly half the attendees answer Yes to the first question, then roughly half of them drop their hand for the second question.

The illusion of choice is one sure way to ruin a Lean Start-up experiment. If you fall into the illusion of choice you are just reinforcing your pre-existing notion of what is true. Should you continue to do this, you will not learn the truth from your experiments. Read on to see what I mean.



When Telstra Wholesale started its journey to Open APIs, they came armed with a survey of their 200+ customers about which APIs were most important to them. Unfortunately, the list of APIs to choose from was provided by Telstra; a bit like my CD collection. The customers dutifully prioritised that list and there were some clear winners. Telstra built those APIs and deployed them. Guess how many customers installed them? That’s right: ZERO.

Thankfully, Telstra Wholesale realised their mistake and went back to their customers. This time they asked them how they used APIs and how APIs helped their business. Through this they found some common themes. They built and deployed the most needed API and got immediate uptake. The uptake increased as they expanded the first API and added more.

To apply this concept: surveys need to be open, not closed; otherwise we just confirm our own guesses.


The survey on the left is easier for our respondents to fill in and easier for you to analyse; however, it is a closed survey. The survey on the right requires more effort from our respondents and a lot more analysis effort on your behalf; however, it is open and will generate more knowledge.

There are more approaches to keeping a survey open, but this is a key one.

Thursday, August 9, 2018

Who does the work requiring an expert in another team?


A classic situation that tests our agile thinking… Team Neptune and Team Saturn are two mature agile teams. Team Neptune has a sizable chunk of upcoming work that centres around “System/Framework/Technology X”, for which one particular member of Team Saturn is the expert. The involvement of this expert will be crucial to the success of Team Neptune’s work. The challenge is how to complete the chunk of work without damaging or disrupting one or both teams.

“System/Framework/Technology X” could be an ancient system that the expert helped to design and build, with everyone else who worked on it having since departed from the company. It could be a framework that the expert has deep experience in, and so on.

Generally what I see is that the expert is not needed for all of the work; however, there is a central and crucial piece of work that they need to be involved in. You can see that in the diagrams below as the gray square “crucial piece” within the blue chunk of work.
I have seen three approaches used to handle this situation:



Approach A. For the duration of the chunk of work, the expert becomes a temporary member of Team Neptune and takes a leading hand in the work. They leave Team Saturn for the duration, attending none of their ceremonies.




Approach B. For the duration of the chunk of work, the expert takes a leading hand in the work, attending both teams’ ceremonies for the duration. The expert remains a permanent member of Team Saturn. With a foot in both teams, the expert is able to progress the work of both teams, with a focus on the Team Neptune work.



Approach C. Part of the work is allocated to Team Saturn, who complete it and hand it back to Team Neptune. The expert remains a permanent member of Team Saturn. Team Saturn also takes on a piece of work to provide knowledge transfer / training to Team Neptune. The expert attends design / planning ceremonies for Team Neptune and all of their Team Saturn ceremonies.

All three approaches involve sharing, helping each other, cross skilling and a big effort from the expert. Approach C has regularly proven to be the best approach when this situation has arisen. The reasons I believe it delivers a good result are:
  • Both teams remain unchanged in regard to people, keeping their sense of team.
  • Clear focus for both teams, and especially for the expert.
  • No duplication of ceremonies eating into the expert’s time.
  • Keeps the management mindset on splitting up the work to match the teams; i.e. promoting stable teams.
  • Improved opportunities for members of Team Saturn to contribute to the work, hence improving the cross skilling.


How have you handled similar situations? What worked well for you?