Showing posts with label agile. Show all posts

Sunday, January 24, 2021

Forecasting - Answering the question of “when will it be done?”

“When will it be done?” is a question that a surprising number of IT professionals struggle to answer with certainty. This surprises me, because in many situations it is reasonably easy to come up with a suitably certain answer, yet few do so. Instead they push ahead with little to no understanding of whether they are on track to achieve their goals. This blind faith is admirable, even if misguided. This article aims to show you how to answer the question of “When will it be done?” with a suitable level of certainty. Doing so allows you to harness and direct that faith to achieve your goals sooner or better.


Set-up

To answer the question of “When will it be done?” there are several prerequisites, none of which are difficult when working in an agile manner.

Your work items need to be potentially shippable increments of the product. They need to be vertical slices that ideally deliver some business value. Splitting up your work in this way means that when something is “Done” it is really “Done”. This approach reveals the hidden work required to achieve your goals. This article will not delve into how to accomplish this, as there is plenty of existing literature on the subject.

We also need to know:

  • How much needs to be done? (work items to do)
  • How much has been done? (completed work items)
  • How fast are you completing work? (rate of completing work items)

The following sections will provide more detail on how you can answer these questions.


How much needs to be done?

The biggest hurdle I see to people answering “when will it be done?” is that they don’t know how much needs to be done, and they don’t know that because they don’t know what “it” is. The excuse of “we will know it when it’s done” often comes out, which does not help anyone. Instead you need to become comfortable with ambiguity, and focus on reducing the unknowns until you can develop an initial high-level view of what “it” is. Often you can develop a forecast with what the project team currently knows about “it”. Early on this forecast will be less certain than you want; however, it will be refined and improved as the project team learns more by doing.

How much you know about “it” should guide your approach to determining how much needs to be done. To figure out which approach to use, determine which of the following three situations you are in.

Situation 3: What needs to be done to achieve “it” is unknown and currently unknowable. Currently we cannot write out a bullet point list of what needs to be done; even at a high level. There are multiple significant unknowns, multiple risks, and likely more risks or issues we are not even aware of yet. You are deeply in “Research”. Suggestion: Don’t use forecasting yet. Instead focus on reducing your unknowns, reducing risks and doing spikes/investigations. Your aim should be to get to Situation 2.

Situation 2: What needs to be done to achieve “it” is only understood at a high level. We can write a bullet point list of roughly ten or more work items that comprise “it”. Those work items may be very large; this is not a problem. Suggestion: Determine how much needs to be done by elaborating a sample of work items. This means:

  1. Randomly select three of those ten-plus work items.
  2. For each selected work item, hold a workshop with a cross-functional group from your project team. In the workshop, split the large work item into smaller potentially shippable product increments. These need to be close to the size of work items that your team regularly completes.
  3. Estimate all of these smaller work items.

The result is that you know the size of three of the large work items. Importantly, this size matches up to the way that you measure completed work. To determine how much needs to be done, average the size of the three elaborated work items and use that average for the remaining unelaborated work items. This provides a rough estimate of the total work to be done.
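The Situation 2 sampling approach boils down to a few lines of arithmetic. Here is a minimal sketch; all sizes and counts are illustrative assumptions, not figures from any real project.

```python
# Hypothetical sketch of the Situation 2 sampling approach.
# All sizes and counts below are illustrative.

elaborated_sizes = [34, 21, 42]   # sizes of the 3 randomly elaborated items
total_large_items = 12            # roughly ten-plus items comprising "it"

average_size = sum(elaborated_sizes) / len(elaborated_sizes)
unelaborated = total_large_items - len(elaborated_sizes)

# Use the sample average for every item not yet elaborated.
estimated_total = sum(elaborated_sizes) + unelaborated * average_size
print(round(estimated_total))  # rough estimate of the total work to be done
```

As the team elaborates more of the large items, real sizes replace the sample average and the estimate tightens.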

Situation 1: What needs to be done to achieve “it” can be known with some certainty. The Product Owner has a vision for the Product; the technical people have a rough understanding of how they could build it; the ‘leads’ have a reasonable yet incomplete understanding of what “it” is.

Suggestion: Hold a half day User Story Mapping workshop with the whole team. During this workshop:

  • The whole team will form a shared understanding of what “it” is and is not.  
  • The work will be split up into work items of a size that your team regularly completes.
  • All of the work items will be estimated.

Having done the hard work of understanding what “it” is, determining how much needs to be done can easily be achieved by summing up the size of all of the work items.


How much has been done? 

This is simply a matter of adding up the size of the work items that we have completed that directly relate to “it”.


How fast are you completing work?

On average, how much work are we completing within a cycle? If you are doing fortnightly Sprints and using Story Points, this question becomes “What is our Velocity?” or “What is the average total of Story Points completed within a fortnight?” If you are just starting out you may have to guess this figure initially. As you complete more sprints/iterations this figure will become more realistic, hence improving your forecast.
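As a sketch, velocity is just the mean of recent sprint totals; the figures below are illustrative.

```python
# Minimal sketch: velocity as the average story points completed per sprint.
completed_points = [18, 23, 20, 25]   # points completed in the last 4 sprints (illustrative)

velocity = sum(completed_points) / len(completed_points)
print(velocity)  # 21.5 points per fortnightly sprint
```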


Create the forecast

Now that you know how much needs to be done, how much has been done and how fast you are completing work; you are ready to create a forecast. It is the forecast which will answer the question of “When will it be done?”



Example burn up chart showing forecast completion 1 day after the target release date.

For your first attempts at forecasting I recommend a burn-up chart. A burn-up chart has time on the X axis and work on the Y axis. The burn-up chart is updated each unit of time (usually each Sprint), providing a more accurate forecast each time it is updated.

 

The burn-up chart above shows five key pieces of information:

  1. The total scope (how much needs to be done) – Purple solid
  2. A projection of the scope into the future (the average increase in scope) – Purple dashed
  3. The total done work – Green solid
  4. A projection/forecast of the done work (based on your expected velocity) – Green dashed
  5. Where the two projections cross, which tells us the expected completion date – Orange
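The crossover of the two projections is simple arithmetic if you assume both trends are linear. A hedged sketch, with all figures illustrative:

```python
import math

# Hedged sketch: the burn-up crossover as linear projections.
# All figures below are illustrative assumptions.
done = 120          # total work completed so far (points)
scope = 300         # current total scope (points)
velocity = 20       # average points completed per sprint (done projection)
scope_growth = 5    # average points added to scope per sprint (scope projection)

# The lines cross when done + velocity * n >= scope + scope_growth * n.
# If scope grows as fast as you deliver, the lines never cross.
assert velocity > scope_growth, "scope is growing faster than delivery"
sprints_remaining = math.ceil((scope - done) / (velocity - scope_growth))
print(sprints_remaining)  # sprints until the projections cross
```

Note that projecting scope growth, not just done work, is what makes the burn-up honest about creeping scope.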

Please contact me should you want assistance in populating or interpreting your burn-up chart.

NOTE: Forecasting using a Monte Carlo simulation provides a richer and more realistic forecast; however, it is more complicated to set up and will not be addressed in this article. Please contact me if you would like to start using Monte Carlo simulations.
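For the curious, the core idea behind a Monte Carlo forecast can be sketched in a few lines: instead of one average velocity, resample per-sprint throughput from history many times to get a distribution of completion times. This is a toy illustration with made-up figures, not a full simulation setup.

```python
import random

# Toy Monte Carlo sketch; all figures are illustrative.
history = [12, 25, 18, 9, 22, 17]   # points completed in past sprints
remaining = 180                     # points left to complete

def sprints_to_finish():
    done, sprints = 0, 0
    while done < remaining:
        done += random.choice(history)   # resample a historical sprint
        sprints += 1
    return sprints

runs = sorted(sprints_to_finish() for _ in range(10_000))
p50, p85 = runs[len(runs) // 2], runs[int(len(runs) * 0.85)]
print(f"50% chance within {p50} sprints, 85% chance within {p85}")
```

The output is a probability statement (“85% chance within N sprints”) rather than a single date, which is what makes it richer than a straight-line projection.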



Example of a completed project. The Sprint review on Sept 1 indicated we were way off track. Note the large de-scoping that occurred shortly after, and the increase in delivery from hiring one more experienced developer.


Continuous forecasting

Forecasting should not be a one-time activity; while it is useful to do once, its true value comes from continuously updating the forecast and holding regular conversations about what it is telling us. For a Scrum team, you should update your forecast at least once a Sprint.

Continuous forecasting will catch situations such as:

  • Our recent reduction in velocity has pushed out the expected completion date by 5 weeks.
  • Adding those new features has pushed out the expected completion date by 7 weeks. 
  • Each sprint we are finding more new work than we anticipated; at this rate we will never achieve our goal.

If you started your forecast while in “Situation 2” there will come a time when the team has learned enough to move to “Situation 1”; this is a key time to update the forecast, which could shift significantly. The earlier you do this, the earlier you can hold the potentially difficult conversation with stakeholders about when they are going to achieve their objective.


Wednesday, October 7, 2020

Project Steering for Games

Games development is a highly competitive global industry, with hundreds of games launched or updated every week. To achieve anything more than mediocrity requires the entire value stream of game development to be working together effectively, with each individual along that stream delivering a stellar performance. Maintaining a healthy company balance sheet means each game project needs to show solid indicators of future success before significant time and effort is sunk into it. The Project Steering approach described in this article is something I helped to design and implement to keep multiple game projects on track, while allowing those game projects the flexibility needed to find the next hit game.


Context on company structure

The year was 2017. Game teams were the basic building block of the company. Generally, each game project was executed by one game team. Each game team was deeply cross-functional, including people who could cover the following as a minimum: design, development (including engine development), QA, art, marketing and analytics. Each team was led by a Product Manager (effectively Product Owner and Team Lead). There was a strong emphasis on the PMs being servant leaders, who grow and develop a self-organising team. What I observed in games was that everyone was very passionate about the game they were developing and wanted to have a say in the direction it took. While this passion is intoxicating, it can also lead to chaos; this is why the Product Managers (PM) retained authority to make decisions. Retaining creative control ensures that the product follows a clear vision.


Objectives of the Project/Product Steering approach

  • Ensure each game team is focused on regularly assessing if their current project is the best use of their skills, time and effort.
  • Balance the autonomy of new Product Managers (PM) with the control mechanisms that prevent significant mistakes from occurring. 
  • Support the Product Managers and teams to maximise ROI for the company over the long term, through feedback and guidance from experienced leaders within the company.
  • Make Project Steering transparent, so that the whole company can choose to be aware of what is going on and learn from the successes/failures of each game team.


Monthly Product Steering meeting

The key element of the Product Steering approach was a monthly public meeting held for each game project. During the meeting the Product Manager or a representative from the team presents from a standard template. The primary attendees are the Product Steering Committee*; however the meeting is public so anyone from the company may attend. The Product Steering Committee asks clarifying questions and provides non-binding feedback and guidance. Initially everyone apart from the Product Steering Committee was asked to observe only. Over time this gradually changed first with specific audience members being asked questions, then later open question time at the end of the meeting. 

These meetings were scheduled for one hour, and some of the early ones took that long. However, within a matter of a few months they were regularly completed in 30 minutes, including question time.


Product Steering Committee

The Product Steering Committee was composed of senior leaders who held a vast depth of experience in the games industry from around the world. I was also included; initially to help refine the template I had created for the meeting, and after that I stayed on, as my asking the questions a five-year-old would ask seemed to provide some value.

  • CEO
  • CFO
  • CTO
  • Head of Product Development
  • Head of Business Intelligence
  • Agile Coach


Preparing for the Product Steering meeting

While the Product Steering meeting was often insightful and helpful for all involved, the act of preparing for the meeting also provided a lot of value; especially for those Product Managers who were considering not holding a meeting. Often their desire to skip the meeting was a subconscious move to avoid facing some uncomfortable facts about their project.

Another positive aspect of preparing for the meeting was that the Product Managers often reached out to members of the Product Steering Committee for assistance with their presentation. These interactions were a chance to learn from each other and improve the direction of the project.


Regular feedback

Completing the Product Steering approach were regular 1-on-1s and feedback from the Head of Product Development to the Product Managers. This was massively important for less experienced Product Managers, and still very useful for the rest.


Common Presentation Template

The template was a two-page slide deck that provided answers to the key questions the Product Steering Committee wanted to see from every game project. Without answers to these key questions the committee members would struggle to provide effective feedback and guidance. Creating a consistent template helped both the Product Managers and the committee. For the PMs it cut down their preparation time; for the committee it meant they did not have to translate information provided in different formats by different PMs. Additionally, it meant the committee could compare game projects.

The PMs were free to add additional slides showing whatever detail they felt appropriate. Roughly three quarters of Product Steering meetings included extra information, such as partnering deals, high-level feature maps, results of experiments, etc.

Page One:

  • Product goal in one sentence
  • Declared intention (Persevere, Pivot game, Cancel game)
    • Sub section seeking input from Product Committee on specific topics
  • Product Health
    • Is the team learning what their customers will respond to?
    • Is the team delivering with suitable throughput?
  • Team Health
    • Overall morale
    • Specific issues affecting them
  • Financial summary, last three months
    • Costs, revenue, ROI
    • Projected financials, next month with certainty, next 6 months with low certainty
  • Results of objectives
    • Were last month’s objectives completed?
    • Did those objectives change?
  • Objectives for the coming month.
    • High level deliverables, learning outcomes, etc.

Page Two:

  • High level road map of deliverables, experiments, deals, etc. 
    • Current quarter was more detailed and showed when items were completed.
    • Next two quarters were high level and subject to significant change




Sunday, July 12, 2020

Achieve more by delivering less: descoping is the secret sauce of agile teams that quickly achieve business objectives

Switching from traditional development to agile development usually results in a significant increase in the speed of delivery. Part of this increase comes from agile practices speeding up people’s ability to build and test functionality. Interestingly, the vast majority of the improvement comes from dramatically reducing the amount of functionality we are working on at once. By reducing our scope, people are able to focus, gain fast feedback and quickly deliver outcomes. What is most interesting to me is that further increasing the speed of building and testing functionality has a high cost, while further reducing the scope has a medium to low cost. This means that to achieve the greatest impact we should turn our attention to cutting scope wherever we can. This is the key to FAST delivery with agile.

To put all of that in numbers: adopting Scrum compared to traditional methods usually delivers a 20% increase in speed of delivery. This can be gradually improved year on year, say by 5% each year. When we cut out 30% of our scope from the first year’s worth of work, we are already two years ahead for a lower investment in effort. I am not saying stop trying to improve your development speed; please continue, and add that to the gains to be made by reducing scope.

“But our stakeholders have been promised X, Y and Z, you can’t just remove any one of them,” I hear you say. That is fine; we are not going to remove those items, we are going to reduce the scope of each item, honing it in on achieving the business objective(s) it relates to. The business will still get X, Y and Z, and it will meet all of its objectives; they will just be achieved with slimmer versions of X, Y and Z.

The approach to delivering business outcomes faster for less effort is not magic. It is a composition of basic agile practices, wrapped up in a feedback loop and supported with lots of collaboration. To start achieving more by delivering less, follow these steps.


  1. Scope what you know: google “User story mapping” to start.
  2. Split work into smaller pieces that are vertical slices: google “splitting user stories” to start.
  3. Forecast your delivery outcomes: please refer to my blog post on Forecasting.
  4. De-scope to fit your objectives: explained below.
  5. As you learn more, repeat those steps just for the changes/additions/removals. 

A forecast indicating late delivery



Descope to fit your objectives

While this is titled “descoping”, it will not descope everything: some items will be deferred to later releases, some will be cut never to return, and some will be split, with bits done now, later and much later (aka never). Any item that is removed from the current release reduces the scope of that release, and hence increases the speed at which we can achieve the business outcomes attached to that release.

To stand a good chance of succeeding with this approach you will need to understand the business environment and the objectives that the business (aka stakeholders) are setting out to achieve now and in the near-term. Understanding this will help to guide the splitting and descoping discussions; without this knowledge you are effectively operating blind. 

The work of “descoping” should be done collaboratively with a small group that represents a cross section of those involved, such as Product Owner, experienced Tester, experienced front-end developer, experienced DevOps developer. This small group should be able to move quickly, creating a view of the work that can then be reviewed more broadly.

Together the small group will repeatedly prioritise the backlog, split up larger items and move items between different releases until the delivery forecast indicates we have a chance of success (aka we have a chance of completing the release ahead of the targeted release date).

Forecast after descoping; indicating that delivery will occur ahead of the milestone

Prioritise the backlog

To figure out what to descope, and what to spend effort splitting up, we need to prioritise and order the backlog. When I say prioritise the backlog, I mean ruthlessly prioritise by value! Forget MoSCoW prioritisation; it is too emotional: “oh, I must have that”, “we should have that”. We need to be RUTHLESS! To get you started, use “above the line” prioritisation. To do this, visualise the backlog, draw a very clear line, then collaborate with the group until only 50% or less of the items sit above the line. Now draw another line above that line, and repeat. You will end up with three sets of items: top priority (<25% of the backlog), next priority (<25%), and lower priority (>=50%). These sets don’t have to correlate to your releases. What they give you is a clear view of what is important and not so important. You can even remove the lines if you like; they have now served their purpose. The backlog should also be ordered for dependencies.
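The “above the line” mechanics above can be sketched in code. This is a minimal illustration; the item names are hypothetical and the backlog is assumed already ordered by value.

```python
# Minimal sketch of "above the line" prioritisation: draw a line keeping at
# most half the items above it, then draw a second line and halve again.
def above_the_line(backlog):
    half = len(backlog) // 2      # first line: at most 50% above it
    quarter = half // 2           # second line: halve the top band again
    top, nxt, lower = backlog[:quarter], backlog[quarter:half], backlog[half:]
    return top, nxt, lower

backlog = [f"item-{i}" for i in range(1, 13)]   # illustrative, ordered by value
top, nxt, lower = above_the_line(backlog)
print(len(top), len(nxt), len(lower))  # 3 3 6
```

The real work, of course, is the ruthless group conversation that produces the ordering, not the slicing itself.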

Split up larger items

Larger items often hide within them some work that is crucial, some that is nice to have, and some we don’t need to do at all. Splitting up larger items helps us to uncover the work that we can move to later releases or not do at all. 

I recommend that you find the largest items in your backlog, split them up, move and/or descope those smaller pieces, then go onto the next largest items. The key to remember here is to split the work into vertical slices, it is all about finding what is valuable and what is not so valuable. If you split horizontally the value is spread across all of the split-out items and you will have wasted your time splitting it up in the first place. 

Move items between different releases

Whether your work starts out as separate releases/backlogs, or as one larger backlog that you cut up into separate releases, having your work in separate releases provides you with flexibility in planning and in how you communicate to your stakeholders about what is going to be done when.

I recommend that you always have one more release in your backlog than you include in your published delivery forecasts. This is the lowest priority release; call it the YAGNI release, “later”, “never”, whatever you like. This dumping ground allows us to keep all of the items around, reducing tough conversations with stakeholders who are deeply attached to something that should never be built.

When you move an item from an earlier release to a later release, you increase the chance of the earlier release being completed by its target date. Of course, the work has not disappeared, just been deferred. With the rate of change in most businesses, deferring an item may mean it is never done; depending on your point of view that may be a win or a loss.

As your small group updates these releases and their corresponding delivery forecasts, it is important to get the whole team to review them and to share them with your stakeholders.

Tuesday, September 24, 2019

How to dramatically improve your product


Let us imagine… you have found your spark, you have explored the market space and found a problem worth solving, and you now even have part of the product that may solve that problem. Your objective is to make the product the best thing for solving that problem. You have been working on this for months, maybe even a year or more. The product passes all of your automated tests, but how do you know customers will actually be able to use it to solve their problem? When you think about how your product works, you view it as a clear path to success, similar to the image below.



You enter some information, tweak this, change that, press a button and taa-dah, the problem is solved! Unfortunately, we are often blinded by our closeness to the product. What our users often see is similar to the image below. A bewildering array of choices, with no clear path forward.



How can we show them the path? This is where Observational Testing comes in. Observational Testing allows us to understand the pains of our users, allowing us to remove those pains and improve our product.

On Metacritic.com, Half-Life 2 is the highest rated PC game of all time; Half-Life comes in at #4. Both games were made by Valve Corporation. One of the key practices that Valve used to take their games from mediocre to great is Observational Testing; they call it Play Testing. Valve would bring in volunteers to sit and play their partially finished game, while members of the team observed them and took notes. The team was not allowed to say anything to the player.

Quoting from Ken Birdwell a senior designer there: “Nothing is quite so humbling as being forced to watch in silence as some poor play-tester stumbles around your level for 20 minutes, unable to figure out the "obvious" answer that you now realize is completely arbitrary and impossible to figure out.” 
A two-hour play test would result in 100 or so "action items" — things that needed to be fixed, changed, added, or deleted from the game. That is a phenomenal amount of feedback.



I personally ran many observational tests when developing prototype games “Planty”, “Bargain Variety Store” & “Siege Breakers”, at Halfbrick Studios. I can tell you that observational tests are easy to run, horribly painful and immensely beneficial all at once. That hair pulling frustration of the user seeing a forest of trees while you see a clear path really pushes you to improve your product.

Running an Observational Test is straightforward:
  1. Bring in a customer or potential customer. This bit is hard.
  2. Provide them an objective to achieve in the test, either verbally or written out. This could be a hypothesis you want to test.
  3. While they attempt to achieve the objective, video record over their shoulder (a smart phone will do just fine).
  4. Observe what they do/don’t do; while not saying anything or offering any guidance. This is the hard part.
  5. Afterwards ask what they were thinking at key steps (i.e. when they got stuck, when they achieved success).
Observational Testing is how you can dramatically improve your product. It brings three key benefits:
  1. Challenge your design approach. Are we tackling this problem in the right way?
  2. Validate hypotheses. As mentioned, the objective you provide at the start could determine whether they use the product in the way you anticipated. Can they understand the information provided? Etc.
  3. Dramatically increase usability. This is moving them from the forest to the path, and is the most evident benefit when people start to use Observational Testing.


Halfbrick Studios maintains full Copyright over Siege Breakers, Planty and Bargain Variety Store.

Photo Reference: https://www.flickr.com/photos/eggrole/7524458398

Thursday, June 20, 2019

High Performance Agile Team Training Available

Get training in the skills that lead to high-performance teams; skills that attendees will use every week. Basic agile training gives teams a good head-start, and a significant boost in performance is often seen. However, that performance often stagnates well before high performance is achieved. How can you get your team to the next level? This training course addresses that gap. Attendees will build upon their foundation-level agile training and be taught the skills that regularly lead to high-performance teams; skills that are easy to replicate in their own team. Attendees will finish the course ready to add value to their team.

Sustained high performance for their team will then be achieved through collaboration that harnesses the full strength of their team, clear customer-centric goals and amplified delivery capability. The content and aims of this course closely align to the Heart of Agile (heartofagile.com) from Alistair Cockburn. Crammed full of interactive exercises where you work in pairs or small groups, this course gets you to experience each skill. The briefest of presentation material is used to introduce the exercises; this course is heavily skills-focused.

Andrew Rusling will deliver the course, bringing with him, his experience of training over 400 people in agile, Lean, Scrum and Kanban; as well as transforming five companies. Andrew has the passion, experience and capability to provide an engaging and thought-provoking experience.

Attendees will learn and experience:

  1. Creating a Team Charter with Vision Statement, Values, Working Agreement, Decision Making Strategy and Proactive Conflict Management Strategy. When they do this with their teams it will provide a foundation for their collaboration, reflection and customer centricity.
  2. Collaborative approaches to: ideation, design, problem solving, decision making, & planning.
  3. Easy to repeat skills for coaching and developing their team members. 
  4. Customer interviews - how to understand the world of their customers.
  5. Experiment design, and execution.
  6. Verifying User Stories will deliver value for the customer.
  7. Measuring Outcomes (customer behaviour) over Outputs (delivered product).
  8. Observational testing - how to dramatically improve the Customers Experience.
  9. Creating Continuous improvement actions that actually get completed
  10. Probabilistic forecasting for predictable planning
  11. Going faster by delivering less of the scope than we think we need.
  12. Visualise flow of work, removing waste & limiting work in progress to expedite delivery.

If you are located in South East Queensland, Australia and interested in this course, please contact me: andrewrusling@hotmail.com

Wednesday, January 30, 2019

Avoiding vanity metrics with Cohort Analysis



At Halfbrick Studios the “Rebel Alliance” team was working on Fruit Ninja Fight. They had validated their Problem/Market fit and were now in the Product Validation phase. Following a company-wide play test, they had refined the core game play and were ready to start an alpha trial with external players.

These were the experiments they planned to release into the alpha over six weeks:
  1. Baseline version, just basic game, no progression
  2. Improved tutorial
  3. UI/UX tweaks
  4. First trial of progression system
  5. Second trial of a different progression system
  6. Third trial of a different progression system




Looking at their experiments through the lens of a Total Retention report (above).
  • End of Week 2: Improved tutorial; we saw a slight improvement over the base version.
  • End of Week 3: UI/UX tweaks produced a solid increase in retained users.
  • End of Week 4: First trial of progression system; a solid increase again. The progression system is working.
  • End of Week 5: Second trial of a different progression system; a great improvement. It seems the second progression system is the best.
  • End of Week 6: Third trial of a different progression system; some improvement, confirming the second progression system was the best.



Now let us look at those same experiments when we add cohort size to the retention report. By cohort I mean how many players they added to the alpha test each week.

As you can see, they started to add more and more players each week as they went along.
What does this mean for the Total Retention report? It’s flawed; near useless for judging the outcomes of experiments. This is what The Lean Startup describes as a vanity metric.

It will always keep increasing, and boosting the cohort size changes the apparent trend, so we can’t see what outcome we have achieved from each experiment.

In the world of games, relying on this report alone is a death sentence. Unless you work out what is keeping players in the game, you need to keep adding more and more players; the cost of finding these players keeps increasing, and very soon the game becomes unprofitable.



Now let us look at those same experiments through the lens of Cohort Analysis.

The chart shows the percentage of people retained from each cohort. This automatically rules out the influence of varying cohort size.
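The calculation behind a cohort retention view is simple percentage arithmetic per cohort. A hedged sketch; the cohort names and numbers are illustrative, not Fruit Ninja Fight data.

```python
# Hedged sketch: retention as a percentage of each cohort, which removes
# the influence of cohort size. All numbers are illustrative.
cohorts = {
    "week 1 baseline": {"added": 200, "retained": 40},
    "week 2 tutorial": {"added": 300, "retained": 63},
    "week 3 ui/ux":    {"added": 500, "retained": 105},
    "week 4 prog 1":   {"added": 800, "retained": 240},
}

for name, c in cohorts.items():
    pct = 100 * c["retained"] / c["added"]
    print(f"{name}: {pct:.0f}% retained")
```

Notice how the raw retained counts all grow (a vanity metric would look great), while the percentages reveal which change actually moved retention.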

You can see that the baseline version, version with improved tutorial and version with UI/UX tweaks perform about the same. Meaning the tutorial offered NO improvement and the UI/UX tweaks were a waste of time.

The first two progression systems show a meaningful jump over the first three cohorts, but performed similarly to each other.

Cohort 6, the third progression system to be trialled, so far appears to be the clear winner of the three progression systems.

Cohort Analysis shows us the true story of how each of our versions is working out. We learnt to avoid vanity metrics and to focus our validated learning on Cohort Analysis.

Halfbrick Studios retains all rights over Fruit Ninja Fight and all associated IP

Thursday, August 9, 2018

Who does the work requiring an expert in another team


Here is a classic situation that tests our agile thinking… Team Neptune and Team Saturn are two mature agile teams. Team Neptune has a sizable chunk of upcoming work that centres around “System/Framework/Technology X”, for which one particular member of Team Saturn is the expert. The involvement of this expert will be crucial to the success of Team Neptune’s work. The challenge is how to complete the chunk of work without damaging or disrupting one or both teams.

“System/Framework/Technology X” could be an ancient system that the expert helped to design and build, with everyone else who worked on it having since departed the company. It could be a framework that the expert has deep experience in, and so on.

Generally what I see is that the expert is not needed for all of the work; however, there is a central and crucial piece of work that they need to be involved in. You can see this in the diagrams below as the gray square “crucial piece” within the blue chunk of work.
I have seen three approaches used to handle this situation:



Approach A. For the duration of the chunk of work, the expert becomes a temporary member of Team Neptune and takes a leading hand in the work. They leave Team Saturn for the duration, attending none of their ceremonies.




Approach B. For the duration of the chunk of work, the expert takes a leading hand in the work, attending both teams’ ceremonies. The expert remains a permanent member of Team Saturn. With a foot in both teams, the expert is able to progress the work of both teams, with a focus on the Team Neptune work.



Approach C. Part of the work is allocated to Team Saturn, who completes the work and hands it back to Team Neptune. The expert remains a permanent member of Team Saturn. Team Saturn also takes on a piece of work to provide knowledge transfer / training to Team Neptune. The expert attends design and planning ceremonies for Team Neptune and all of their Team Saturn ceremonies.

All three approaches involve sharing, helping each other, cross-skilling and a big effort from the expert. Approach C has regularly proven to be the best approach when this situation has arisen. The reasons I believe it delivers a good result are:
  • Both teams remain unchanged in terms of people, keeping their sense of team.
  • Clear focus for both teams, and especially for the expert.
  • No duplication of ceremonies eating into the expert’s time.
  • Keeps the management mindset on splitting up the work to match the teams, i.e. promoting stable teams.
  • Improved opportunities for members of Team Saturn to contribute to the work, improving the cross-skilling.


How have you handled similar situations? What worked well for you?

Saturday, September 23, 2017

Breaking down the Coder vs. QA divide

The Coders vs. QA divide is prevalent in almost all companies that are new to an agile way of working. The Coders camp out on one side of the wall, throwing work over to the testers. Creating cross functional teams does not automatically resolve the ingrained ‘over the wall’ mental model of development. Often two mini teams form within the agile team, with the wall still very much intact. This mental wall perpetuates ‘Us vs. Them’ adversarial behaviour; which generally leads to late delivery, reduced quality, stressed testers, limited collaboration and frustration on both sides. Thankfully this issue can be addressed in a reasonable time-frame when the appropriate actions are applied as a cohesive approach.



The long term goal regarding Coders vs. QA is usually to blur the line between Coders and QA to the point that they are all ‘Developers’. Some of the Developers have more of a QA focus; however all of the Developers are actively involved in testing and quality throughout the life-cycle of the product. These Developers create and maintain a test suite that adheres to the agile QA pyramid. This is a long and rewarding journey to take; with breaking down the Coder vs. QA wall as the first major step.

How to identify that the Coder vs. QA wall exists

When you notice two or more of the following situations, it is likely that there is a divide between the coders and the QA.
  • QA/Testers are the only people who test the software. No one else helps even when it appears likely the team will not complete a user story within the iteration.
  • Reviews and showcases where teams discuss user stories that have been built, yet the user story has not been tested.
  • Reviews and showcases where teams show user stories that have not been tested.
  • Inconsistent velocity from teams.
  • The testers are stressed at the end of iterations while the coders are idle looking for work, or worse still working on user stories from future sprints.
  • All of the testing occurs in the last 10% of the sprint.
  • Requests to extend the sprint duration because it takes too long to test the delivered features.
  • Use of phrases such as “It is done, I have coded it, it just needs to be tested.”


How to remove the Coder vs. QA wall

My favored approach to removing the wall involves some carefully executed company-level actions, supported by team-level coaching. While it can be addressed via team coaching alone, that does not scale well, produces inconsistent results and takes a lot longer. I recommend considering the following actions, remembering that these actions need to work together to change the hearts and minds of many different people.

Company-wide minimum DOD includes “User Stories must be Tested”. All teams must have a DOD that includes the ‘minimum DOD’; they are free to build upon it if they wish.

Company-wide training which emphasizes
  • Teams succeed or fail as a whole
  • The whole team is responsible for quality, not just the testers.
  • QA provide Test Thinking, however everyone in the team contributes to testing.
  • Value of completed stories over partially complete stories
  • WIP is waste
  • WIP reduces our ability to change direction
  • ATDD/BDD


Company-wide support for ATDD/BDD with
  • Tooling and environments
  • Expertise and coaching for the implementation
  • Specific training for QA to develop their automation skills


Coach Product Owners to
  • Value only completed stories.
  • Demand to see only completed stories in reviews/showcases
  • Demand to only see working software in reviews/showcases


Support team coaches/scrum masters to:
  • Reinforce the messages from the company-wide training
  • Establish Coder/QA pairing
  • Establish ATDD / BDD
  • Work with QA to create a prioritised automation testing backlog. This backlog can be worked on by Coders/QA during slack time. Over time it will reduce the demand for manual testing, freeing up the QA to focus on automation, exploratory testing and building quality in.
  • Run team exercises where team members learn more of the details of what each other does and how they can help each other.
  • Provide training to the coders on the basics of effective manual testing, so that they are better able to step in when needed.


Questions for you

  • What has your experience been with Coder vs. QA divides?
  • Have I missed any signs of the divide?
  • Have you taken different actions that worked well or taught you what not to do?

Image by Helen Wilkinson [CC BY-SA 2.0], via Wikimedia Commons


Sunday, April 23, 2017

The Fist of Five a voting and consensus building technique

The Fist of Five is a voting and consensus building technique that allows groups of people to quickly understand what they agree and disagree on. With a foundation built upon the agreements they do have, the group can focus their time and effort on resolving their differences. The simultaneous voting aspect of Fist of Five boosts the effectiveness of the group conversations by giving everyone an equal voice, i.e. the loud extroverts in the group no longer dominate the conversation. It only takes one minute to teach the Fist of Five to a new group of people and, considering its broad versatility, it is a collaborative technique well worth learning.

I am sure that you have been in a lengthy team discussion that is wrapped up by the lead saying, “so we all agree then?!”. The team responds with some half nods, some murmuring and plenty of silence. The lead moves on quickly and you are left confused about what was just agreed upon and how much agreement there really was. This, to me, is a failed attempt at consensus-based decision making. The Fist of Five can improve these situations in numerous ways with very little effort expended.

Benefits of the Fist of Five


  • Reveals hidden information: who agrees, who is sitting on the fence, who disagrees, and why they disagree.
  • Reduces me vs. them mentality: Participants are disagreeing with a statement not necessarily a person.
  • Builds consensus: quickly see where everyone agrees, then home in on the areas of disagreement, allowing for discussion to resolve these differences.
  • Saves time: prevents discussion around topics that are already agreed upon, speeds up the resolution of differences because the specifics of the disagreement are often clearer.
  • Provides more time to tackle the key issues: once the disagreements are clear, the group can focus their precious time on that item.


How to use the Fist of Five 


  1. The facilitator makes a statement, such as “The Sprint Backlog should include the seven User Stories that are underlined on the whiteboard” or “The new team name should be ‘High Five’”
  2. The facilitator counts down from three, holding their fist in the air. (They use that time to visually confirm that all participants are ready to vote; participants show their readiness by raising their own fist into the air.)
  3. At the end of the count down, all participants change their fist into their vote, as shown below.
  4. The votes are ‘read’ which leads to an ‘outcome’ as explained below. The outcomes include: Statement Accepted, Statement Rejected, and More Discussion is needed.



Participant voting


Participants show their agreement or disagreement with the statement by voting as follows:

  • 5 fingers: strongly agree / it is spot on / approaching perfect
  • 4 fingers: agree / it could be improved but I am happy with it
  • 3 fingers: neutral / will go with the majority
  • 2 fingers: disagree / the intent needs to be tweaked / the wording needs to change
  • 1 finger: strongly disagree / the intent is wrong / I do not support this


Reading the votes

  • Strong agreement: Everyone voted four or five.
  • Agreement: The majority voted four or five; there are no twos or ones.
  • Strong disagreement: There are only threes and below.
  • Disagreement: any other result; such as there are some ones or twos, and some fours or fives.


Outcomes

  • If Agreement or Strong Agreement is reached, the statement is accepted; the team has made a decision!
  • If there is Strong Disagreement the statement is rejected; the team has made a decision!
  • If there is Disagreement then more discussion is needed. One at a time, those who voted two or one explain their point of view to the group, then others in the group join the conversation. The facilitator guides the discussion before deciding what to do next. Usually some changes are made to the statement, followed by a revote.
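The reading and outcome rules above are mechanical enough to capture in a short sketch (a hypothetical Python helper, simply restating the rules, not part of any toolkit):

```python
def fist_of_five_outcome(votes):
    """Classify one round of Fist of Five votes (integers 1-5)."""
    high = sum(1 for v in votes if v >= 4)  # count of fours and fives
    has_low = any(v <= 2 for v in votes)    # any twos or ones?

    if high == len(votes):
        return "accepted"         # Strong Agreement: everyone voted 4 or 5
    if high > len(votes) / 2 and not has_low:
        return "accepted"         # Agreement: majority 4/5, no 2s or 1s
    if all(v <= 3 for v in votes):
        return "rejected"         # Strong Disagreement: only 3s and below
    return "more discussion"      # Disagreement: a mix of low and high votes
```

For example, votes of [5, 4, 4, 3, 3, 5] from six participants count as Agreement and the statement is accepted, while [2, 3, 4, 3, 3, 4] sends the group back to discussion.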



When to use the Fist of Five

The Fist of Five is surprisingly versatile, primarily because there are so many different situations where teams need to agree, or at least understand what consensus exists within the team. Some situations where I have found the Fist of Five to be highly effective:

  • Choosing a team name
  • Choosing a name for a project
  • Agreeing on a Sprint backlog – which user stories to include
  • Deciding on the scope of a project – which scope items to include.
  • Agreeing on a Vision statement – which intentions to include and the specific wording of the sentence(s).
  • Deciding on the objectives for a community, such as Scrum Master Community of Practice - which objectives to include and the specific wording of each objective.
  • Deciding on a set of team values – which values to include and the specific wording of each value.


How to use the Fist of Five on multiple items

Sometimes your team will have brainstormed many competing items. The Fist of Five is still very effective in this situation, either to decide on one winner or to select multiple items. The basic usage is the same as described above. The key difference is to vote on each item, and record those votes, before discussing any item in detail. As you vote on each item, note down all the votes against that item (e.g. Jimmy votes 4, Bob votes 3, Sally votes 5 and Dianne votes 2 could be recorded as 4352). This allows the group to assess the overall field of options, quickly ruling out some options as well as locking in some clear winners. The team can then look to combine items before focusing their discussion on those items that did not have clear consensus.


Example of choosing a Project Name

What follows is the list of project names we brainstormed, along with the Fist of Five votes. Items without a recorded vote received Strong Disagreement and were immediately discounted. There were six people voting. In this situation we only wanted one name for the project, so “Project New Hope” was the winner.

  • 323244 ProtoFNX (This item received two votes of 2 fingers, two votes of 3 fingers and two votes of 4 fingers)
  • Proton and FNX Foundation
  • Joint FNX & Proton
  • 234334 A new hope
  • 332244 FNXP
  • Return of the Mortar
  • Proton strikes back
  • 233323 Proton - A new hope  (This result is also Strong Disagreement)
  • Galactic War
  • Clone Wars
  • Death Star
  • Project JAM
  • JAM Session
  • Proton JAM
  • 544335 Project New Hope
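As a sketch (hypothetical code, reusing the digit-string convention and the reading rules from earlier in this post), the recorded votes can be parsed and each item classified automatically:

```python
# Votes recorded as one digit per voter, e.g. "544335" means
# votes of 5, 4, 4, 3, 3 and 5 from the six participants.
recorded = {
    "ProtoFNX":            "323244",
    "A new hope":          "234334",
    "FNXP":                "332244",
    "Proton - A new hope": "233323",
    "Project New Hope":    "544335",
}

def classify(record):
    votes = [int(d) for d in record]
    if all(v >= 4 for v in votes):
        return "strong agreement"
    if sum(v >= 4 for v in votes) > len(votes) / 2 and min(votes) >= 3:
        return "agreement"      # majority of 4s/5s, no 2s or 1s
    if max(votes) <= 3:
        return "strong disagreement"
    return "disagreement"

for name, record in recorded.items():
    print(f"{name}: {classify(record)}")
```

Only “Project New Hope” reaches Agreement, matching the winner above, while the votes of 233323 confirm the Strong Disagreement noted against “Proton - A new hope”.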


Sunday, February 5, 2017

The company that takes lunch together, succeeds together

Most of the companies I have worked in have some kind of flexible working arrangement, ranging from choice over your break times through to hot-desking with infrastructure that makes working from home almost seamless. So you can imagine my surprise when I joined my latest engagement and found that everyone takes lunch at the same time! Everyone also starts and finishes at the same time, with only a handful of exceptions. Initially I thought it was weird, even backwards, when close to one hundred people downed tools and headed off for their lunch break. However, the many benefits this provides quickly became apparent and I am now a convert.

To support these fixed times, the company has a suitably relaxed approach to staff taking time away from the office when life demands it, e.g. a delivery can only be made between 8 AM and 12 PM, or your dog has a bad back and needs to go to the vet. So for the most part everyone is at work during the set hours; however, there is enough flexibility to live our lives.


The four primary benefits of fixed Start, Lunch and Finish times are:


  1. Increased social interaction, building up a sense of community and company.
  2. More time available for collaboration and face to face work activities.
  3. Encourages people to rarely do overtime.
  4. Increased efficiency 


Benefits related to increased social interaction


  • More random social interactions occur at lunch time.
  • Easier to arrange lunch with people outside of your team, because you all have lunch break at the same time.
  • Group lunch activities are easier for individuals to plan and attend; hence more activities are run, and more regularly.  Some of the regular activities include:
    • Futsal
    • Board games
    • Co-op multiplayer (i.e. Rocket League, Fifa )
    • Art excursions


Increased collaboration time

The fixed times create more time for collaboration in day-to-day work, i.e. everyone is available to collaborate from the Start time all the way through to the Finish time. No more waiting until ‘Core Hours’ to be able to talk to someone in your own team.

Rarely do overtime

With everyone up and leaving at the same time, it sends a clear signal that overtime is not the norm here.

Benefits related to increased efficiency


  • Easier scheduling of meetings because you know when everyone is available.
  • Team daily cadence aligned.
  • Team cadence can be fine-tuned.
  • Companywide issues/opportunities can be resolved faster.
  • Company half day celebrations are easier to plan, and will not cut into productive time.


Drawbacks to fixed Start, Lunch and Finish times


  • Prevents regular commitments outside of those start and finish times. i.e. pick up kids from child care. This can turn away some prospective hires.
  • I am sure there are more; I just don’t know what they are…


What are your thoughts?


  • Have you had similar experiences? I would love to hear about them, especially if they are from different industries.
  • Have you had different experiences to this? If so, please let me know how it was different and what we can learn by contrasting the two experiences.



Photo by: Juhan Sonin