⚖️ Three things I’ve learned the hard way:
We cannot manage what we cannot measure.
What we measure, we improve.
Measuring Software Engineering performance is a delicate topic...
We’ve all had a master class in measurement since early 2020 with COVID-19 infection rates, and later vaccination rates. Imagine trying to manage the pandemic without the daily updates, trends, and model predictions.
Software delivery is a very expensive endeavour, so organisations focus on two questions:
Are we building the right thing?
Are we building the thing right?
We ignore either at our peril.
To choose the right thing, we turn to an array of measures across research, market data, usage trends, and, of course, customer interviews.
What about helping build the thing right? The measures here are just as important; however, the methodologies are, generally speaking, ‘still in beta’.
Thankfully, research programs like DORA (think DevOps Research and Assessment, not the explorer), Site Reliability Engineering, and Resilience Engineering are helping us understand our teams and systems better than ever before. After some trial and error, this is the list of items I recommend we strive to measure.
📉 Strong signals - easy(ish) to measure and benchmark (a rough sketch follows this list):
> Delivery Lead time - how long does it take for a new code change to get from keyboard to customer?
> Deployment Frequency - how often do we deploy a change to production?
> Service Level Objectives and Indicators - what is the quality of service our customers are experiencing?
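To make these signals concrete, here is a minimal sketch in Python, assuming we can pull a log of deployments where each record carries a commit timestamp and a production timestamp. The field names (commit_at, deployed_at), the sample data, and the request counts are made-up illustrations, not any particular tool’s output.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records: each deploy notes when the change was
# committed and when it reached production. Field names are illustrative.
deployments = [
    {"commit_at": datetime(2021, 6, 1, 9, 0), "deployed_at": datetime(2021, 6, 1, 15, 30)},
    {"commit_at": datetime(2021, 6, 2, 11, 0), "deployed_at": datetime(2021, 6, 3, 10, 0)},
    {"commit_at": datetime(2021, 6, 7, 14, 0), "deployed_at": datetime(2021, 6, 8, 9, 0)},
]

# Delivery lead time: keyboard (commit) to customer (production), per change.
lead_times = [d["deployed_at"] - d["commit_at"] for d in deployments]
print("Median delivery lead time:", median(lead_times))

# Deployment frequency: deploys per week over the observed window.
first = min(d["deployed_at"] for d in deployments)
last = max(d["deployed_at"] for d in deployments)
weeks = max((last - first) / timedelta(weeks=1), 1e-9)  # avoid divide-by-zero
print(f"Deployments per week: {len(deployments) / weeks:.1f}")

# A simple availability SLI measured against a 99.9% SLO (numbers made up).
good_requests, total_requests = 998_700, 1_000_000
sli = good_requests / total_requests
slo = 0.999
print(f"Availability SLI: {sli:.4%} vs SLO {slo:.1%} -> met: {sli >= slo}")
```

In practice these figures would come from CI/CD and observability tooling rather than hard-coded lists; the point is that each signal reduces to a small, repeatable calculation we can baseline and trend.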
🔎 Weaker signals
Are the team engaged?
Are we clear on what we need to build and why?
When something breaks, are we the first to know?
What is the team’s capacity and capability to deal with an incident or failure?
Do incident learnings lead to people or system changes?
⚡ Caution: measurement can go wrong!
It’s very easy to overcook measurement. Even for weight loss, we’re told not to chase the number on the scales, because chasing it leads to unhealthy diet behaviours that won’t last.
Teams are rightly fearful of being treated like machines in a stopwatch culture that oversimplifies what it takes to build and run software systems. We must choose our measures carefully to avoid unintended consequences like burnt-out teams. There’s no point in doubling throughput if we also double the support work.
The goal is to have a baseline and benchmark of what matters, so we can see our delivery system and its constraints. Just as with dieting, the first thing a nutritionist will do is use our weight and height to calculate our body mass index.
Looking vs knowing
High-performing teams don’t necessarily talk about measuring delivery performance. However, when asked, they always know instinctively what their baselines are, and if something begins impacting them, the team acts.
Focused teams continuously measure to:
Prioritise their resources.
Validate their actions.
Stay motivated.
Originally published on LinkedIn.