A couple of months back, I wrote that it is important to shorten the results cycle - the time in which teams accomplish business-meaningful things - before shortening the reporting cycle - the time in which we report progress. Doing the opposite generates more heat than light: it increases the magnification with which we scrutinize effort while doing nothing to improve the frequency or fidelity of results.
But both the results cycle and reporting cycle are important to resiliency in software delivery.
A lot of things in business are managed for predictability. Predictable cash flows lower our financing costs. Predictable operations free the executive team to act as transformational, rather than transactional, leaders. Predictability builds confidence that our managers know what they're doing.
The emphasis on predictability hasn't paid off well for IT. If the Standish (and similar) IT project success rate numbers are anything to go by, the only thing IT delivers predictably is underperformance.
When we set out to produce software, we are exposed to significant internal risks: that our designs may not be functionally accurate, that our task definitions may be incomplete, that our estimates may be information deficient, that we may lack the necessary skills and expertise within the team to develop the software, and so forth. We are also subject to external risks. These include micro forces, such as access to knowledge of upstream and downstream systems, and macro forces, such as technology changes that render our investments obsolete (e.g., long-range desktop platform investments make little sense when the user population shifts to mobile) and labor market pressures that compel people to quit.
We can't prevent these risks from becoming real. Estimates are informed guesses and will always be information deficient. Two similarly skilled people will solve technical problems in vastly different ways owing to differences in their experience. We have to negotiate availability of experts outside our team. People change jobs all the time.
Any one of these can impair our ability to deliver. More than one of these can cause our project to crater. Unfortunately, we're left to self-insure against these risks, to limit their impact and make the project whole should they occur. We can't self-insure through predictability: because these risks are unpredictable, we cannot be prepared for each and every eventuality. The pursuit of predictability is a denial of these risks. We need to be resilient to risks, not predictable in the face of them.
This brings us back to the subject of result and reporting cycles: the shorter they are, the more resilient we are to internal and external risks.
Anchoring execution in results makes us significantly less vulnerable to overstating progress. Not completely immune, of course: even with a short results cycle we will discover new scope, which may mean we have more work to do than we previously thought. But scope is an outcome, and outcomes are transparent and negotiable. By comparison, effort-based execution is neither: "we're not done coding despite being at 125% of projected effort" might be a factual statement, but it is opaque and not negotiable.
In addition, a short results cycle makes a short reporting cycle more information rich. That, in turn, makes for more effective management.
But to be resilient, we need to look beyond delivery execution and management. A steady diet of reliable data about the software we're developing, and about how its delivery is progressing, allows a steering committee to continuously and fluidly perform its governance obligations. Those obligations are to set expectations, invest the team with the authority to act, and validate results.
When project and asset data are based in results rather than effort, it is much easier for a steering committee to fulfill its duty of validating results. It helps with the other two obligations as well. We can scrutinize which parts of the business case are taking greater investment than we originally thought, and whether those investments are still prudent to pursue while we are still making them. We can also see whether we are taking technical shortcuts in the pursuit of results, and assess the long-term ramifications of those shortcuts nearer the time they are made. We are therefore forewarned that an investment is in jeopardy long before financial costs and technical debt rise, and can change or amplify our expectations of the team as we need to. This, in turn, gives us the information to act on the remaining obligation of investing the team with the authority to act: we change team structure, composition and even leadership, and work with the investment committee to maintain the viability of the investment itself.
A short results cycle, reporting cycle and governance cycle make any single investment more resilient. They also enable a short investment cycle, which makes our entire portfolio more robust. From an execution perspective, we can more easily move in and out of projects. Supported by good investment flow (new and existing opportunities, continuously reassessed for timeliness and impact), hedging (to alleviate risks of exposure to a handful of "positions" or investments), and continuous governance assessing - and correcting - investment performance, we can make constant adjustments across our entire portfolio. This makes us resilient not only at an individual investment level, but at a portfolio level, to micro and macro risks alike.
IT has historically ignored, abstracted away or discounted the risks to delivery. Resiliency is the antidote to that starry-eyed optimism, which lies at the core of IT's chronic underperformance. That makes IT a better business partner.