Now comes the crucial feedback loop we need if we are to make progress: Build/Measure/Learn, then rinse and repeat.
Of course, this is easy to write, but it is the hardest thing to actually crack. At this point at EE, we still haven’t cracked it well enough to get real data on the majority of our APs. So what follows is more a statement of where we are aiming than of what we have accomplished!
In our Advice Process template (see the AP Housekeeping section), we have an area titled success metrics. The purpose of this is threefold:
Improve the quality of the thinking that goes into making the decision.
Provide a starting position, or anchor, for the build / measure / learn loop.
Help to de-bias future decisions, based on the result of this decision.
A significant part of the individual and organisational learning we get from a decision comes from quantifying its impact. To help do this, we ask for the key hypothesis (or objective) and its associated tests (or key results) to be defined in each AP. We can use these either to test the outcome directly or (more likely) to show progress towards the outcome by measuring leading indicators.
Be specific about:
What metrics will quantify the (positive) impact of this decision?
What are the thresholds for success and failure?
When and how will you take measurements?
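To make this concrete, here is a minimal sketch of how such success metrics might be captured and checked. This is not EE's actual template: the `SuccessMetric` structure, the thresholds, and the example AP about deployment frequency are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SuccessMetric:
    """One measurable test (key result) attached to an AP's hypothesis."""
    name: str
    success_threshold: float   # at or above this value, count as success
    failure_threshold: float   # at or below this value, count as failure
    measure_after_days: int    # when the measurement will be taken

    def evaluate(self, measurement: float) -> str:
        # Between the two thresholds the result is inconclusive:
        # gather more data rather than declare victory or defeat.
        if measurement >= self.success_threshold:
            return "success"
        if measurement <= self.failure_threshold:
            return "failure"
        return "inconclusive"

# Hypothetical AP: "adopting trunk-based development will increase
# deployment frequency". Leading indicator: deployments per week.
deploy_frequency = SuccessMetric(
    name="deployments per week",
    success_threshold=10,
    failure_threshold=4,
    measure_after_days=30,
)

print(deploy_frequency.evaluate(12))  # success
print(deploy_frequency.evaluate(6))   # inconclusive
```

The point of writing the thresholds down before the decision is made is that it anchors the build/measure/learn loop: you have committed, in advance, to what would count as evidence for or against the hypothesis.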
Beware “resulting”: correlation does not imply causation. Don’t create an overly tight relationship between results and decision quality. Just because you had a good (or bad) outcome, it doesn’t follow that it was a good (or bad) decision. It could just be down to dumb luck (see that earlier reference to poker).
Break down the decision into a series of small steps, then measure as you go. At decision time, you’re often information poor, but more often than not you can break a big decision down into a series of smaller ones. If new information leads you to abandon or pivot, congratulations – you’ve just made a good decision.