Thursday, June 20, 2019

High Performance Agile Team Training Available

Get training in the skills that lead to high-performance teams; skills that attendees will use every week. Basic agile training gives teams a good head-start, and a significant boost in performance often follows. However, that performance often stagnates well before high performance is achieved. How can you get your team to the next level? This training course addresses that gap. Attendees will build upon their foundation-level agile training and be taught the skills that regularly lead to high-performance teams; skills that are easy to replicate in their own teams. Attendees will finish the course ready to add value to their team.

Sustained high performance for their team will then be achieved through collaboration that harnesses the full strength of the team, clear customer-centric goals and amplified delivery capability. The content and aims of this course closely align with the Heart of Agile (heartofagile.com) from Alistair Cockburn. The course is crammed full of interactive exercises; working in pairs or small groups, you experience each skill first-hand. The briefest of presentation material is used to introduce the exercises; this course is heavily skills-focused.

Andrew Rusling will deliver the course, bringing with him his experience of training over 400 people in agile, Lean, Scrum and Kanban, as well as transforming five companies. Andrew has the passion, experience and capability to provide an engaging and thought-provoking experience.

Attendees will learn and experience:

  1. Creating a Team Charter with a Vision Statement, Values, Working Agreement, Decision-Making Strategy and proactive conflict-management strategy. When they do this with their teams it provides a foundation for collaboration, reflection and customer centricity.
  2. Collaborative approaches to ideation, design, problem solving, decision making and planning.
  3. Easy-to-repeat skills for coaching and developing their team members.
  4. Customer interviews - how to understand the world of their customers.
  5. Experiment design and execution.
  6. Verifying that User Stories will deliver value for the customer.
  7. Measuring Outcomes (customer behaviour) over Outputs (delivered product).
  8. Observational testing - how to dramatically improve the customer experience.
  9. Creating continuous-improvement actions that actually get completed.
  10. Probabilistic forecasting for predictable planning (see the sketch after this list).
  11. Going faster by delivering less of the scope than we think we need.
  12. Visualising the flow of work, removing waste and limiting work in progress to expedite delivery.
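
To give a flavour of item 10, here is a minimal sketch of probabilistic forecasting using Monte Carlo simulation. The throughput history and backlog size below are invented for illustration; you would substitute your own team's weekly completed-item counts.

    # A minimal sketch of probabilistic forecasting via Monte Carlo simulation.
    # The numbers below are invented; use your own team's throughput history.
    import random

    weekly_throughput = [3, 5, 4, 6, 2, 5, 4, 7, 3, 5]  # items finished per week
    backlog_size = 40                                    # items left to deliver
    simulations = 10_000

    weeks_needed = []
    for _ in range(simulations):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            # Assume next week's throughput looks like a random past week.
            remaining -= random.choice(weekly_throughput)
            weeks += 1
        weeks_needed.append(weeks)

    weeks_needed.sort()
    # Report percentiles rather than a single date; "85% of simulations
    # finished within N weeks" is a more honest plan than one estimate.
    for pct in (50, 85, 95):
        print(f"{pct}th percentile: {weeks_needed[simulations * pct // 100 - 1]} weeks")

Forecasting from the team's real throughput distribution, rather than a single average, is what makes the planning predictable.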

If you are located in South East Queensland, Australia and interested in this course, please contact me: andrewrusling@hotmail.com

Wednesday, January 30, 2019

Avoiding vanity metrics with Cohort Analysis



At Halfbrick Studios the “Rebel Alliance” team was working on Fruit Ninja Fight. They had validated their Problem/Market fit and were now in the Product Validation phase. Following a company-wide play test, they had refined the core game play and were ready to start an alpha trial with external players.

These were the experiments they planned to release into the alpha over six weeks:
  1. Baseline version, just basic game, no progression
  2. Improved tutorial
  3. UI/UX tweaks
  4. First trial of progression system
  5. Second trial of a different progression system
  6. Third trial of a different progression system




Let us look at their experiments through the lens of a Total Retention report (above):
  • End of Week 2: Improved tutorial; a slight improvement over the baseline version.
  • End of Week 3: UI/UX tweaks; a solid increase in retained users.
  • End of Week 4: First trial of the progression system; a solid increase again. The progression system is working.
  • End of Week 5: Second trial of a different progression system; a great improvement. It seems the second progression system is the best.
  • End of Week 6: Third trial of a different progression system; some improvement, confirming the second progression system was the best.



Now let us look at those same experiments when we add Cohort Size to the Retention report. By cohort size I mean how many players they added to the alpha test each week.

As you can see, they added more and more players each week as they went along.
What does this mean for the Total Retention report? It's flawed; near useless for judging the outcomes of experiments. This is what the Lean Start-up describes as a vanity metric.

Total retention will always keep increasing, and boosting the cohort size changes the apparent trend, so we cannot see what outcome each experiment achieved.

In the world of games, relying on this report alone is a death sentence. Unless you work out what is keeping players in the game, you need to keep adding more and more players; the cost of finding those players keeps increasing, and very soon the game becomes unprofitable.



Now let us look at those same experiments through the lens of Cohort Analysis.

The report shows the percentage of people retained from each cohort, which automatically rules out the influence of varying cohort sizes.
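
To make the difference concrete, here is a minimal sketch with invented numbers (not Halfbrick's actual data) contrasting cumulative total retention with per-cohort retention percentages:

    # Invented numbers (not Halfbrick's real data) showing why total retention
    # is a vanity metric while per-cohort percentages are not.

    cohort_sizes = [100, 120, 180, 300, 500, 800]           # players added each week
    week1_retention = [0.20, 0.22, 0.30, 0.38, 0.45, 0.50]  # fraction of each cohort still playing

    total_retained = 0
    for week, (size, rate) in enumerate(zip(cohort_sizes, week1_retention), start=1):
        retained = size * rate
        total_retained += retained
        print(f"Week {week}: cohort={size:4d}, retained {rate:.0%} of cohort, "
              f"cumulative total={total_retained:6.0f}")

The cumulative total climbs every week no matter what, so every experiment looks like a win; the per-cohort percentage is what actually reveals whether a version changed player behaviour.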

You can see that the baseline version, the version with the improved tutorial and the version with UI/UX tweaks all perform about the same; meaning the tutorial offered NO improvement and the UI/UX tweaks were a waste of time.

The first two progression systems show a meaningful jump over the first three cohorts, but they performed similarly to each other.

Cohort 6, the third progression system to be trialled, so far appears to be the clear winner out of the three progression systems.

Cohort Analysis shows us the true story of how each of our versions is working out. We learnt to avoid vanity metrics and to focus our validated learning on Cohort Analysis.

Halfbrick Studios retains all rights over Fruit Ninja Fight and all associated IP

Monday, December 3, 2018

High performance teams


Does your team have a reasonably stable throughput or velocity? Have they improved and optimised their way to what you would consider their peak velocity or throughput? Would you say that teams at their peak throughput are high-performing teams? They sure appear to be, relative to other teams that are less mature, have an unstable velocity or have not reached their peak velocity. Unfortunately, the assumption in all of that is that “velocity equates to performance”.




Looking at this race car: if we measured it by the horsepower of its engine, would that equate to the outcome of a race? Of course not. It would be a contributing factor for sure, but so much else goes into deciding what place this car will finish in a race: fuel, suspension, transmission, tyre choice, on and on, and of course the driver.

It is the same for our teams: while velocity is a good measure of horsepower, it is a poor predictor of where the team will finish in the corporate race.



Velocity measures our outputs, such as deploying live features, updates or fixes.
For our outputs to be valuable they must produce a positive outcome. That is, they must change customer behaviour: for example, customers use our product for longer, write positive reviews, or we acquire new profitable customers. A feature that doesn't change customer behaviour generally has no value.

For our outcomes to be valuable they must produce a positive impact. That is, they must increase revenue, increase profit, increase reputation or, for charities, deliver greater social benefit. A change in customer behaviour that doesn't produce a positive impact for the company generally has no value.
While it is valuable to produce outputs, it is much more valuable to produce outcomes, as these have a much closer correlation to achieving impact, which is what we are really here for.

Hence, I propose that a team that is regularly delivering positive outcomes is a high-performing team. When I think back on all of the great teams that I have been a part of, we were regularly achieving positive behavioural changes in our customers.

Tuesday, October 30, 2018

Illusion of Choice


Let's imagine that you have accepted an invite to hang out at my place. Creepy, I know. Anyway, we are chatting and realise that it would be good to have some music playing. I say, “pick an album from my collection, anything you like…”



Is there an album in there that you would choose to listen to? Was it what you really wanted to listen to? This is the illusion of choice.

When I do this as a presentation, roughly half the attendees raise their hand to answer Yes to the first question; then roughly half of them drop their hand for the second question.

The illusion of choice is one sure way to ruin a Lean Start-up experiment. If you fall into the illusion of choice you are just reinforcing your pre-existing notion of what is true. Should you continue to do this you will not learn the truth from your experiments. Read on to see what I mean.



When Telstra Wholesale started its journey to Open APIs, they came armed with a survey of their 200+ customers about which APIs were most important to them. Unfortunately, the list of APIs to choose from was provided by Telstra; a bit like my CD collection. The customers dutifully prioritised that list and there were some clear winners. Telstra built those APIs and deployed them. Guess how many customers installed them? That's right: ZERO.

Thankfully, Telstra Wholesale realised their mistake and went back to their customers. This time they asked them how they used APIs and how APIs helped their business. Through this they found some common themes. They built and deployed the most needed API and got immediate uptake. The uptake increased as they expanded the first API and added more.

To apply this concept: surveys need to be open, not closed; otherwise we just confirm our own guesses.


The survey on the left is easier for our respondents to fill in and easier for you to analyse; however, it is a closed survey. The survey on the right requires more effort from our respondents and a lot more analysis effort on your behalf; however, it is open and will generate more knowledge.

There are more approaches to keeping a survey open, but this is a key one.

Thursday, August 9, 2018

Who does the work requiring an expert in another team?


A classic situation that tests our agile thinking… Team Neptune and Team Saturn are two mature agile teams. Team Neptune has a sizable chunk of upcoming work that centres around “System/Framework/Technology X”, for which one particular member of Team Saturn is the expert. The involvement of this expert will be crucial to the success of Team Neptune's work. The challenge is how to complete the chunk of work without damaging or disrupting one or both teams.

“System/Framework/Technology X” could be an ancient system that the expert helped to design and build, with everyone else who worked on it having since departed the company. It could be a framework that the expert has deep experience in, and so on.

Generally what I see is that the expert is not needed for all of the work; however, there is a central and crucial piece of work that they need to be involved in. You can see that in the diagrams below as the gray square “crucial piece” within the blue chunk of work.
I have seen three approaches used to handle this situation:



Approach A. For the duration of the chunk of work, the expert becomes a temporary member of Team Neptune and takes a leading hand in the work. They leave Team Saturn for the duration, attending none of their ceremonies.




Approach B. For the duration of the chunk of work, the expert takes a leading hand in the work, attending both teams' ceremonies for the duration. The expert remains a permanent member of Team Saturn. With a foot in both teams, the expert is able to progress the work of both teams, with a focus on the Team Neptune work.



Approach C. Part of the work is allocated to Team Saturn, who complete it and hand it back to Team Neptune. The expert remains a permanent member of Team Saturn. Team Saturn also takes on a piece of work to provide knowledge transfer / training to Team Neptune. The expert attends design / planning ceremonies for Team Neptune and all of their Team Saturn ceremonies.

All three approaches involve sharing, helping each other, cross-skilling and a big effort from the expert. Approach C has regularly proven to be the best approach when this situation has arisen. The reasons I believe it delivers a good result are:
  • Both teams remain unchanged in terms of people, keeping their sense of team.
  • Clear focus for both teams, and especially for the expert.
  • No duplication of ceremonies eating into the expert's time.
  • Keeps the management mindset on splitting up the work to match the teams; i.e. promoting stable teams.
  • Improved opportunities for members of Team Saturn to contribute to the work, hence improving the cross-skilling.


How have you handled similar situations? What worked well for you?

Wednesday, May 30, 2018

Review of “Certified LeSS Practitioner” three day course by Venkatesh (Venki) Krishnamurthy


The first two days of Certified LeSS Practitioner were engaging and challenging. I went into the course thinking I could tweet interesting snippets as we went through; however, there was no time for tweeting or distractions; it was constant thinking, doing and speaking. Hearing and following the questions of others in the course was often very interesting.

The last day was not nearly as engaging due to several factors: I was tired, the content shifted to a light touch of the remaining rules of LeSS, and Venki mentioned we were on track to finish early; consequently, the participants put the brakes on.



Preparation
We were provided with a long reading list of books, articles and videos, and set a knowledge test. The test was not referenced or used in the training, and some of the answers conflicted with Venki's teaching. Several people in the class had fragile knowledge of Scrum, did none of the pre-reading and managed just fine. I had already consumed many items on the reading list several years ago. I re-read some of the articles, watched a few videos and found that they were not a cohesive set of learning materials. It seems the list was every public article on LeSS, with lots of duplication included. This reading list should be shortened significantly; perhaps to just the rules of LeSS and a video or two for background.

We were told to bring laptops to work on, but only needed one per table to read the Scrum Guide. We were also encouraged not to take electronic notes; instead we were handed 48-page exercise books, which actually worked really well. My suggestion would be that laptops be discouraged during the lead-up and printed copies of the Scrum Guide be provided to each group.

Delivery
While the first two days were engaging, it felt like we were on a roller coaster blindfolded; only Venki knew where we were going. Jumping from topic to topic without structure or order, ignoring the printed slides, it was hard to know if we were making progress. While it sounds terrible, I can't make out if it is a strength or a weakness of his delivery. As questions were raised we would deep-dive on the topic, the reason why, and potential tangents to that topic.

Venki did make sure that everyone's questions were answered. Sometimes those answers were in the form of a question, or a reference to a principle, forcing us to think through the question and find a suitable answer. I feel that this approach was key to the engaging nature of the training.

Content
The content covered in depth was:
  • History of LeSS – If I have to hear “600 experiments” one more time…
  • Ten LeSS principles
  • LeSS is Scrum, what is Scrum, how are they the same, how are they different.
  • Systems Thinking
  • Causal Loop Diagrams
  • The why behind the LeSS Framework
  • The LeSS Framework (three pages of rules) and the LeSS Huge Framework. Please note that this could be explained in two hours if done in one hit. Rightfully so, it did not receive much more time than that throughout the three days; after all, we can always read the rules at https://less.works


The content only lightly touched on was:
  • The LeSS guides


I found it interesting that Venki played funny videos after each break. It surely lightened the mood; however, only a few of them were directly related to the training course. Most of them had a tenuous connection at best.

Most of the interactive exercises were fun, interesting and embedded knowledge in us; such as designing a multi-team sprint planning approach, and causal loop modelling of feature teams vs component teams. There were several interactive exercises that fell down in delivery and/or opportunities for learning. For example, searching for hidden post-its around the room with types of waste written on them was fun, yet it delivered almost nothing in the way of knowledge.



What I took away
  • Use LeSS to descale / simplify the target area of the organisation through empirical process control.
  • Don't use LeSS to scale up your existing agility.
  • LeSS is designed for a big-bang change (limited to the target area of the organisation). i.e. The vast majority of teams MUST be fully cross-functional feature teams, otherwise you are not doing LeSS.
  • The initial perfection vision of LeSS is a potentially shippable product increment every two weeks. Once that is achieved aim for one week.
  • Concept of understanding the System Optimising Goal of all systems you are interacting in / part of. I.e. What is the company/CEO’s System Optimising Goal? Is that the same goal as your area? Is it the same goal as the tool you are using?
  • Thinking in Systems: A Primer by Donella Meadows is worth reading prior to The 5th Discipline by Peter Senge. “Thinking in Systems” will provide the thought patterns that make it easier to digest “The 5th Discipline”.
  • Using a WIP limit indicates that there is a problem that you are not solving. Perhaps another team is flooding your team with work? Perhaps your own team has an uneven flow?
  • Make all queues visible, then reduce/remove those queues.
  • Use LeSS Huge only when your one PO can't handle the number of teams that you have. The 2-8 teams for LeSS is just a guide based on the worst and best POs they have seen. 9 teams is not the trigger point for LeSS Huge; it is purely down to whether the PO can handle the teams you have or not.
  • If you have LeSS Huge, try to only break out a new Product Area when you have 4 teams. This is so that the PO can keep those 4 teams effectively occupied. They have seen that having fewer than 4 teams for 1 PO often leads to starvation of those teams' backlogs. So why do they suggest using LeSS when you have 2 or 3 teams? The answer is that there is no better solution; it is better to use LeSS and potentially suffer some starvation than to not use LeSS when you have multiple teams.
  • Product Areas may be made up of multiple “themes”; this is especially true when those themes only need a team or two to service them. E.g. your 4-team Product Area may be made up of 2 teams for theme A, 1 team for theme B and 1 team for theme C.
  • How can 1 PO handle 4 teams, let alone 8? The answer is to limit the PO to just Strategy, Vision and Prioritisation. Leave the clarification of User Stories to the team, who work directly with the customers. Cutting out this clarification effort frees up the PO to work with more teams.
  • When you need to seed knowledge across multiple teams, try creating a temporary “undone” team with someone who is strong in the desired skillset, and stack the rest of the team with people who are keen to learn that skill. Temporarily they do all of the work related to the skill; once they have learnt the skill, the team is disbanded and they take their newfound knowledge back to their feature teams.
  • If you must have a distributed PO, place them in the same location as the customer(s) in preference to the same location as the team(s). The reason being that they deliver the most value from better understanding the customers' needs.
  • While LeSS demands that most teams are Feature Teams, it accepts that there will be some service teams, such as finance, admin, etc. It also accepts that the feature teams' DOD may not be complete, especially in the early days. That undone work will be covered by team(s) called “undone”. The name was chosen deliberately to be unappealing, because the aim is to get rid of those team(s) ASAP and have that work done by the feature teams within the sprint.
  • Multi team Product Backlog refinement has recently been made the default over separate team Product Backlog refinement.
  • Having a clear product definition is crucial to a successful LeSS implementation. This definition should be as broad and as end-user centric as possible.
  • The action plan from each team's retrospective is shared at the overall retrospective. This is intended to prevent duplicated effort and, worse still, actions that interfere with each other.
  • The Overall Retrospective is held early in the next sprint; it is not held on the same day as the end of the sprint.
  • Every new role created in an organisation, disempowers another role somewhere in the organisation.
  • Financial matters are handled by the Product Owner; in LeSS Huge it is still the single PO that handles the $$$.
  • Organisational agility is constrained by technical agility.
  • To constrain your causal loop diagram, choose 3 to 5 parameters of interest before starting the diagram and focus on them. This worked during the training; however, I am concerned that choosing/guessing those parameters up front indicates that you already understand the situation, which you often don't when you are creating a causal loop diagram in the first place. I will need to test this outside of a training environment.


Overall Rating: 8/10

Saturday, September 23, 2017

Breaking down the Coder vs. QA divide

The Coders vs. QA divide is prevalent in almost all companies that are new to an agile way of working. The Coders camp out on one side of the wall, throwing work over to the testers. Creating cross-functional teams does not automatically resolve the ingrained ‘over the wall’ mental model of development. Often two mini-teams form within the agile team, with the wall still very much intact. This mental wall perpetuates ‘Us vs. Them’ adversarial behaviour, which generally leads to late delivery, reduced quality, stressed testers, limited collaboration and frustration on both sides. Thankfully this issue can be addressed in a reasonable time-frame when the appropriate actions are applied as a cohesive approach.



The long-term goal regarding Coders vs. QA is usually to blur the line between Coders and QA to the point that they are all ‘Developers’. Some of the Developers have more of a QA focus; however, all of the Developers are actively involved in testing and quality throughout the life-cycle of the product. These Developers create and maintain a test suite that adheres to the agile QA pyramid. This is a long and rewarding journey to take, with breaking down the Coder vs. QA wall as the first major step.

How to identify that the Coder vs. QA wall exists

When you notice two or more of the following situations, it is likely that there is a divide between the coders and the QA.
  • QA/Testers are the only people who test the software. No one else helps even when it appears likely the team will not complete a user story within the iteration.
  • Reviews and showcases where teams discuss user stories that have been built, yet the user story has not been tested.
  • Reviews and showcases where teams show user stories that have not been tested.
  • Inconsistent velocity from teams.
  • The testers are stressed at the end of iterations while the coders are idle looking for work or, worse still, working on user stories from future sprints.
  • All of the testing occurs in the last 10% of the sprint.
  • Requests to extend the sprint duration because it takes too long to test the delivered features.
  • Use of phrases such as “It is done, I have coded it, it just needs to be tested.”


How to remove the Coder vs. QA wall

My favoured approach to removing the wall involves some carefully executed company-level actions, supported by team-level coaching. While it can be addressed just via team coaching, that does not scale well, produces inconsistent results and takes a lot longer. I recommend considering the following actions, remembering that these actions need to work together to change the hearts and minds of many different people.

Company-wide minimum DOD that includes “User Stories must be Tested”. All teams must have a DOD that includes the ‘minimum DOD’; they are free to build upon it if they wish.

Company-wide training which emphasises:
  • Teams succeed or fail as a whole
  • The whole team is responsible for quality, not just the testers.
  • QA provide Test Thinking; however, everyone in the team contributes to testing.
  • Value of completed stories over partially complete stories
  • WIP is waste
  • WIP reduces our ability to change direction
  • ATDD/BDD


Company-wide support for ATDD/BDD with
  • Tooling and environments
  • Expertise and coaching for the implementation
  • Specific training for QA to develop their automation skills

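To make the ATDD/BDD style concrete, here is a minimal hypothetical sketch in Python; the ShoppingCart class and its methods are invented for illustration, not taken from any real product. The team agrees the concrete examples with the Product Owner first, writes them as an executable test, and then codes until the test passes.

    # Hypothetical ATDD/BDD-style acceptance test, written *before* the
    # feature is coded. ShoppingCart and its methods are invented for
    # this sketch; your team would agree these examples with the Product
    # Owner first, then code until the test passes.

    class ShoppingCart:
        """Minimal implementation so the sketch is runnable."""
        def __init__(self):
            self._items = []

        def add(self, name, price):
            self._items.append((name, price))

        def total(self):
            return sum(price for _, price in self._items)

    def test_adding_two_items_sums_their_prices():
        # Given an empty cart
        cart = ShoppingCart()
        # When the customer adds two items
        cart.add("book", 10.00)
        cart.add("pen", 2.50)
        # Then the total is the sum of the item prices
        assert cart.total() == 12.50

Because coders and QA agree on these examples together, both the behaviour and its verification become shared team property rather than something thrown over the wall.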

Coach Product Owners to
  • Value only completed stories.
  • Demand to see only completed stories in reviews/showcases
  • Demand to only see working software in reviews/showcases


Support team coaches/scrum masters to:
  • Reinforce the messages from the company-wide training
  • Establish Coder/QA pairing
  • Establish ATDD / BDD
  • Work with QA to create a prioritised automation-testing backlog. This backlog can be worked on by Coders/QA during slack time. Over time it will reduce the demand for manual testing, freeing up the QA to focus on automation, exploratory testing and building quality in.
  • Run team exercises where team members learn more of the details of what each other does and how they can help each other.
  • Provide training to the coders on the basics of effective manual testing, so that they are better able to step in when needed.


Questions for you

  • What has your experience been with Coder vs. QA divides?
  • Have I missed any signs of the divide?
  • Have you taken different actions that worked well or taught you what not to do?

Image by Helen Wilkinson [CC BY-SA 2.0], via Wikimedia Commons