The longer an IT project is expected to take, the greater the risk of moral hazard: that is, the risk that IT will provide poor information to its business partners, or will have an incentive to take unusual risks to complete delivery.
This is not born of maliciousness. People on an IT project are not necessarily out to defraud anybody. It may simply be that people scope the work incompletely, make assumptions about skills and capabilities, or are overly optimistic in their estimates. This creates misleading project forecasts, which, in turn, lead to a disappointing asset yield.
This is the raison d'être for the rules-based approach to IT: improve rigour in scoping, estimating and role definition, it is argued, and projects will be on time and on budget. Unfortunately, this won't accomplish very much: the moral hazard that plagues any IT project is not a product of poor practice, but of behaviours.
Rules-based IT planning assumes that each person on a team has an identical understanding of the project and its tasks, and is similarly invested in its success. It ignores that any given person may misunderstand or outright disagree with a task, a technology choice or a work estimate. These differences amplify as people exit and join a project team: those who are present when specific decisions are taken – technical or business – have a context for those decisions that newcomers will not. The bottom line: there is a good chance that any given action by any given person will not contribute to the success of the project.
Complicating matters is the ambiguous relationship of the employee to the project. The longer a project, and the larger a team, the more anonymous each individual's contribution. This gives rise to IT's version of the tragedy of the commons: because everybody is responsible for the success of a project, nobody takes responsibility for its success. The notion that "everybody is responsible" is tenuous in any case: the success or failure of the project may have no perceived bearing on any individual's standing as an employee. And, of course, people advance their careers in IT by changing companies more often than they do through promotion.
But by far the biggest single contributing factor to moral hazard is the corporate put option. There's a long history of companies stepping in to rescue troubled IT projects. This means people come to expect that some projects are too big or too important to fail, and that the business will bail out a project to get the asset.
All told, this means that the people working in a traditionally managed IT project may not understand their tasks, may perceive no relationship between project success and job or career, and may believe that the company will bail out the project no matter what happens. There might be a lot of oars in the water, but they may not be rowing in the same direction, if at all.
Especially for high-end IT solutions, the rules-based approach to IT is clearly a fallacy: any "precise" model will fail to identify every task (we cannot task out solutions to problems not yet discovered) or every risk (project plans fail to consider external forces, such as dynamics in the labour market). Rules feign control and create false confidence because they assume task execution is uniform. They deny the existence of the behavioural factors that make or break a project.
A rules-based approach actually contributes to moral hazard, because the tasks people perform become ends in and of themselves. To wit: writing requirements to get past the next “phase gate” in the project lifecycle is not the same as writing actionable statements of business need that developers can code into functionality.
Work done in IT projects can end up being no different from the bad loans originated to feed the demand for securitised debt. At the time development starts in a traditionally managed project, all we know is that there are requirements to code (e.g., mortgage paper to securitise). Further downstream, all we know is that there are components to assemble from foundation classes (e.g., derivatives to create). Nobody touching the details of the project has responsibility for its end-to-end lifecycle; once a detailed artifact clears the phase gate, that person is done with it. This is compounded by misguided governance: the quality and completeness of intermediate deliverables aren't reconciled to a working asset, but to an abstraction of that asset, the project plan.
Just as we don't discover defaults until long after the bad paper has entered the securitisation process, we similarly don't discover problems with specifications or foundation code until late in the delivery cycle. There's typically only a minor provision (in IT terms, a "contingency"), meaning we can absorb only a small amount of "bad paper" in the project. And because the discovery comes so late in the cycle, the unwind is devastating.
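The arithmetic here is unforgiving. As a minimal sketch – with entirely invented figures, including the assumed cost multiple of fixing a defect late rather than early – consider how quickly late-discovered "bad paper" exhausts a typical provision:

```python
# A hypothetical illustration of why late discovery overwhelms a
# typical contingency. All figures are invented for the example.
budget = 1_000_000            # total project budget
contingency = 0.10 * budget   # a "minor provision" of 10%

spec_cost = 300_000           # spend on specifications and foundation code
defective_share = 0.25        # fraction that turns out to be "bad paper"
late_fix_multiplier = 5       # assumed cost multiple of fixing late vs. early

rework = spec_cost * defective_share * late_fix_multiplier

print(f"Contingency available: {contingency:,.0f}")  # 100,000
print(f"Late-cycle rework:     {rework:,.0f}")       # 375,000
```

In this sketch the rework is nearly four times the provision; the project must either be bailed out or unwound.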
This does not mean that IT professionals are untrustworthy. What it does mean is that there must be a short impact horizon for every decision and every action. Our top priority in managing IT projects must be to minimise the time between the moment a requirement is articulated and the moment it is in production. That means the cycle time of execution – detailing requirements, coding, testing and releasing to production – should be measured in days, not months and years. This way, the results of each decision are quickly visible in the asset to everybody on the project.
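What does a short impact horizon look like as a measure? Here is a minimal sketch in Python, with invented requirement names and dates, of how cycle time might be tracked per requirement:

```python
from datetime import date

# Hypothetical log of when each requirement was articulated and when
# it reached production. Names and dates are invented for illustration.
requirements = [
    ("accept-payment", date(2024, 3, 4),  date(2024, 3, 8)),
    ("refund-order",   date(2024, 3, 6),  date(2024, 3, 13)),
    ("export-report",  date(2024, 3, 11), date(2024, 3, 15)),
]

for name, articulated, live in requirements:
    horizon = (live - articulated).days
    print(f"{name}: impact horizon of {horizon} days")

# Governance flag: any horizon measured in months, not days, is a warning.
stale = [name for name, articulated, live in requirements
         if (live - articulated).days > 30]
if stale:
    print("Long impact horizons:", ", ".join(stale))
```

The measure deliberately ends at production, not at a phase gate: clearing an intermediate gate proves nothing about the asset.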
Short impact horizons align behaviour with project success. Each person sees evidence of their contribution to the project; they do not simply pass the work downstream. A project may still go off course, but it won't do so for very long; a small correction is far less costly than a major unwind. And, of course, we can extract better governance data from an asset than we can from a plan.
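The contrast between plan-derived and asset-derived governance data can be made concrete. A minimal sketch, again with invented numbers: the plan-based view counts tasks ticked off; the asset-based view counts requirements demonstrably working in production.

```python
# Plan-based status: tasks marked complete against the project plan.
# All numbers are hypothetical.
tasks_complete, tasks_total = 45, 60
plan_status = tasks_complete / tasks_total

# Asset-based status: requirements running in production, evidenced
# by passing automated acceptance tests.
requirements_live, requirements_total = 12, 60
asset_status = requirements_live / requirements_total

print(f"The plan reports {plan_status:.0%} complete")    # 75%
print(f"The asset shows {asset_status:.0%} delivered")   # 20%
```

The plan can report 75% complete while the asset shows 20% delivered; only the second number says anything about yield.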
Best of all, we’re not backstopping the project with the unwritten expectation that the business may need to exercise its put option.