Friday, December 28, 2007

Mitigating Capability Risk

With the cost of capital on the rise, the need to focus on returns is much more acute. Unfortunately, IT has not traditionally excelled at maximising returns. Industry surveys consistently show that a third to a half of all IT projects fail outright or significantly exceed their cost estimate.1 Delays are costly: IRR craters by 25% if a $5mm / 12 month project with an estimated annual yield of $30mm is 4 months late. Monte Carlo simulation that factors in the most common project risks, including schedule, turnover, and scope inflation, will consistently show that the probability of delivery being 3 or more months late is greater than the probability that delivery will occur early, on time, or within one month of the planned date.2
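
To make the arithmetic behind that claim concrete, here is a minimal sketch assuming a simple monthly cash-flow model, a three-year horizon and a bisection-based IRR solver; all of these are assumptions of the illustration, not figures from the surveys cited.

    # Illustrative only: a monthly cash-flow model of the project described above
    # ($5mm spent evenly over a 12 month build, $30mm/year of yield once delivered).
    # The 36 month horizon and the bisection IRR solver are assumptions of this sketch.

    def npv(rate, cashflows):
        """Net present value of monthly cashflows at a monthly discount rate."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
        """Monthly IRR found by bisection (assumes a single sign change)."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, cashflows) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    def project_cashflows(months_late, build_months=12, cost=5_000_000,
                          annual_yield=30_000_000, horizon_months=36):
        """Spend evenly during the build, earn the monthly yield once delivered."""
        flows = [-cost / build_months] * build_months
        flows += [0.0] * months_late
        flows += [annual_yield / 12] * (horizon_months - build_months - months_late)
        return flows

    annualise = lambda monthly: (1 + monthly) ** 12 - 1
    print("on time:       %.1f%% annualised IRR" % (100 * annualise(irr(project_cashflows(0)))))
    print("4 months late: %.1f%% annualised IRR" % (100 * annualise(irr(project_cashflows(4)))))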

Given the significant contribution of technology to just about every business solution, IT risk management is a critical practice. But IT risk management practices are not mature. Planning models tend to be static representations of a project universe, regardless of the time horizon. Risks are managed as exceptions. When things change, as they inevitably do, we try to force exceptions back into compliance with the plan. Given all the variables that can change – core technologies and compatibilities, emergent best practices, staff turnover, and a business environment that can best be described as “turbulent” – traditional approaches of “managing to plan” have a low risk tolerance.

To manage risk in our environment, we must first understand the nature of risk. Market risk offers the possibility of returns for invested capital. The yield depends on a lot of factors which an investor may influence, but over which the investor likely has little control: that a market materialises for the offering, that the company is not outmaneuvered by competitors, and so forth. Some market risks have the potential to generate breakaway returns – yields well above a firm’s cost of capital. These opportunities represent the most strategic investments to a firm. IT doesn’t face market risk; it faces primarily execution risk: the risk that it cannot deliver solutions in accordance with feature, time, cost and quality expectations. Execution risk factors are substantially within the control of the investing company, because it has far more direct control over them than it does over the market.

Execution risk is the risk of committing an unforced error. Poor execution depresses returns (again, consider the impact on IRR of a late delivery), whereas competent execution does little more than maintain expected returns. Maximising execution can amplify yield: using the example above, making incremental deliveries beginning at 3 months can increase project IRR by between 5 and 10%. This is, obviously, a significant competitive weapon. But this capability can be monetised only if it can be exploited by the business itself. This, then, is the impact of IT on returns: highly capable execution can create extraordinary returns, but only if the business can put it to use, and the market opportunity exists in the first place. The yield ceiling is dictated by the potential in the business opportunity itself, not by how it is executed. Execution risk, then, is a threat to returns, not an enabler of them.
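
Extending the same hedged sketch, incremental delivery can be modelled by letting a fraction of the annual yield begin as each slice goes live; the slice schedule below is invented purely for illustration.

    # Illustrative only, using the same assumptions as the earlier sketch: $5mm
    # spent over a 12 month build, $30mm/year of total yield, a 36 month horizon.

    def npv(rate, flows):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

    def irr(flows, lo=-0.99, hi=10.0, tol=1e-7):
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
        return (lo + hi) / 2

    def incremental_flows(slices, build_months=12, cost=5_000_000,
                          annual_yield=30_000_000, horizon=36):
        """slices maps a delivery month to the fraction of total yield it unlocks."""
        flows = [-cost / build_months] * build_months + [0.0] * (horizon - build_months)
        for month, fraction in slices.items():
            for t in range(month, horizon):
                flows[t] += fraction * annual_yield / 12
        return flows

    annualise = lambda monthly: (1 + monthly) ** 12 - 1
    big_bang    = incremental_flows({12: 1.0})
    incremental = incremental_flows({3: 0.25, 6: 0.25, 9: 0.25, 12: 0.25})
    print("single delivery at 12 months: %.1f%%" % (100 * annualise(irr(big_bang))))
    print("slices from 3 months onward:  %.1f%%" % (100 * annualise(irr(incremental))))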

Execution risk is not simply the risk that things don’t get done, e.g., that excessive days out of office prevent people from performing tasks by specific dates. It is the risk that the organisation lacks the fundamental capability to identify, understand and solve the problems and challenges it faces in realising a solution. This means that execution risk is substantially capability risk: the risk that IT does not bring the right level of capability to bear to minimise the chance of execution failure and thus maximise returns.

Breakaway market opportunities present the greatest challenges to fulfillment. They involve things that haven’t been done before: a product, service, or business competence that doesn’t currently exist within a firm or even an industry. The business processes that need to be defined, modeled and automated to fulfill that market opportunity will not be established at the front end of a project. They will change significantly over the course of fulfillment as they become better understood. Breakaway opportunities tend also to be highly sensitive to non-functional requirements, such as performance, scalability and security. It is consequently highly likely that there will be new or emergent technologies applied, if not outright invented, over the course of delivery. Taken together, this means that the problem domain will be complex and dynamic. These are not problem domains that lend themselves to a divide-and-conquer approach; they are domains that require a discover-collaborate-innovate approach. This calls for people who are not only intelligent, but strong, open-minded problem-solvers with a predisposition to work collaboratively with others. It isn’t a question of engaging experienced practitioners; it is a question of engaging high-capability practitioners.

If we fail to understand the capability demands of breakaway opportunities, and similarly fail to recognise the capability of the people we bring to bear to fulfill them, we amplify capability risk. Consider what happens under the circumstances described above if we take a “mass production” approach to delivery. We define a static set of execution parameters for a largely undefined domain. We make a best-effort decomposition of an emergent business problem into compartmentalised task inventories. We then look to fulfill these using the lowest cost IT capacity that can be sourced, grading it on a single dimension of capability – experience – which constitutes the extent of our assessment of team strength. Because the situation requires a high degree of problem solving skills and collaboration, this approach quickly over-leverages the most capable people. It leaves the mass of executors wasting effort on misfit solutions, or it leaves them idle, waiting for orders. A recent quote from Shari Ballard, EVP of Retail Channel with Best Buy, highlights this:

  • 'Look at why big companies die. They implode on themselves. They create all these systems and processes – and then end up with a very small percentage of people who are supposed to solve complex problems, while the other 98% of people just execute. You can’t come up with enough good ideas that way to keep growing.'3

Because capability isn’t present in the decision frame, we run a significant risk of defaulting into a state of capability mismatch. This obliterates any possibility of cost minimisation (we overrun even the mass-production cost model) and jeopardises the business returns.

IT is a people business, as opposed to an asset or technology business. The assets produced by IT – that is, the solutions bought by the business – are the measurable results produced by capability. Capability risk management is a byproduct of effective IT Governance. While it has a stewardship responsibility for the capital with which it is entrusted, IT Governance is primarily concerned with sourcing, deploying and maturing capability to maximise business returns. It looks to trailing indicators – which with Agile practices can be made “real time” indicators – that evaluate the quality of assets produced and the way in which those assets are produced. These allow it to determine whether current capability delivers value for money, and delivers solutions in accordance with expectations. It must also look to leading indicators that assess the skills, problem solving abilities and collaborative aptitude of its people, no matter how sourced: employee, contractor, consultant or outsourcer. By so doing, IT becomes a better business partner as it can unambiguously assess and improve its ability to maximise returns.


1There is the classic mid-90s Chaos Report by the Standish Group that posited that as many as 50% of all IT projects fail. See also “Reduce IT Risk for Business Results,” Gartner Research, 14 October 2003.

2The seminal work in this area is Waltzing with Bears by Tom DeMarco and Tim Lister. They published a Monte Carlo method in a spreadsheet called Riskology that allows you to explore risk factors and tolerances and their impact on a project forecast.

3Ms. Ballard was quoted by Anders, George. “Management Leaders Turn Attention to Followers” The Wall Street Journal, 24 December 2007.

Sunday, November 25, 2007

Market Power Increases Exponentially with IT Velocity

Bernoulli’s Theorem holds that the potential power that can be produced by a turbine or rotor is proportional to the cube of the velocity of the wind driving it, expressed simply as Power ∝ Velocity³. A basic concept of wind energy systems, it is increasingly relevant in commercial building architecture: specifically, if wind velocity can be increased through building design, the potential power that a building can derive from wind energy is considerably greater. This means that a building can be designed such that it generates a non-trivial portion of its electrical power from wind energy.1
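
A trivial numerical illustration of that cubic relationship (the percentages below are just arithmetic, not data from the building studies):

    # Power scales with the cube of velocity, so modest gains in velocity yield
    # disproportionately large gains in available power.
    for increase in (0.10, 0.25, 0.50, 1.00):
        power_gain = (1 + increase) ** 3 - 1
        print("velocity +%3.0f%%  ->  power +%4.0f%%" % (increase * 100, power_gain * 100))
    # velocity +10% -> power +33%; +25% -> +95%; +50% -> +238%; +100% -> +700%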

The exponential relationship of power to velocity is similarly evident in the relationship between business competitiveness and IT application development. Specifically, market power should increase exponentially with increases in IT velocity.

We can define velocity as the measure of the rate of delivery, expressed as the time it takes for a finely grained business need to go from idea to implemented solution. Here, we are interested in assessing the rate at which functionality is delivered: a dozen features2 delivered in a 6 month time frame have an average velocity of 6 months, not 0.5 months – 12 features delivered in 6 months does not mean 0.5 months to produce one feature; each feature’s time to production is 6 months. Restated, we measure the time it takes for each new feature to go from end to end of the delivery pipeline.
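
A minimal sketch of this measurement, with invented dates: velocity here is the end-to-end time from idea to production for each feature, which is quite different from dividing elapsed calendar time by feature count.

    from datetime import date

    # Hypothetical features: (date the idea was raised, date it reached production).
    features = [
        (date(2007, 1, 15), date(2007, 7, 15)),
        (date(2007, 2, 1),  date(2007, 7, 15)),
        (date(2007, 3, 10), date(2007, 7, 15)),
    ]

    # Velocity is assessed per feature, end to end of the delivery pipeline.
    lead_times = [(done - idea).days / 30.0 for idea, done in features]
    print("average idea-to-production time: %.1f months" % (sum(lead_times) / len(lead_times)))

    # The naive figure below is the one the text warns against: elapsed calendar
    # time divided by feature count says nothing about how long any single
    # business need waited to reach production.
    print("misleading 'per feature' figure: %.1f months" % (6 / len(features)))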

The power derived from this velocity is the ability of the company to exert itself in the marketplace. That is, a company has power in the market if it attracts customers, employees, partners and investors through execution of its strategy; it also has power if it forces competitors to react if they are to retain what they have. This could be anything from having a lower cost footprint, to features and functionality in solution offerings that competitors simply don’t have, to having the best tools or solution offering that attracts the top talent. The more change that a firm creates in its market, the more influence it exerts over an industry: competitors will be forced to spend resources reacting to somebody else’s strategy, not pursuing their own.

In the aggregate, power is abstract in this definition. An economic model that assesses the extent to which a firm has market power would be substantially an academic exercise. There are, however, tangible indicators of market power that are worthy of mention in the annual report: net customer acquisition, relative cost footprint, and competitive hires and exits are all hard measures of market power. These are real and significant business benefits: indeed, making competitors react by destabilising their agenda is of exponentially greater value than the innovations themselves.

Because all of these can be enabled or amplified by IT, velocity is subsequently a key measure of IT effectiveness. It is a particularly critical concept for IT in both Governance and Innovation.

Velocity is a key metric of the first of our two Governance questions: are we getting value for money? Many companies’ market offerings and cost competitiveness are rooted in applied technology. It stands to reason that the rate at which functionality is delivered increases business competitiveness either by constantly adding capability or by aggressively reducing costs. Sustainable IT velocity maintains market power; an increase in this velocity increases market power. Velocity, then, is a key indicator of the first governance question in that it provides a quantified assessment of IT’s value proposition to an organisation.

It is also an indicator of how effectively IT drives innovation. Business innovation is the consistent, rapid and deliberate maturation of products, services, systems and capabilities. As businesses are, again, increasingly dependent on technology for capability and cost, the rate at which IT delivers functionality will be an indicator of how effective IT is as an enabler of business innovation. Although power is an intangible concept, velocity allows IT to position itself as a driver of business innovation and not simply a utility of technology services.

This is not simply a question of delivering IT solutions, but of how those solutions are consumed by the business. IT may make frequent deliveries, but if they are not consumed, organisational velocity is reduced. This is different from what happens in the market: the opportunity to exploit a delivered innovation or solution may simply not materialise there, in which case the potential market power achievable by IT velocity will not be realised. If, however, solutions are delivered by IT but not consumed by the business, velocity is never truly maximised. This is an important distinction, because IT is not governed exclusively by how it delivers; it is governed by how effectively it is consumed. Ignoring the “buy side” makes it too easy for an IT organisation to create false efficiencies or meaningless business results because it is knowingly or otherwise out of alignment with its host organisation. This lack of alignment doesn’t leave power potential unrealised; it undermines velocity.

This is actionable market behaviour with historical precedent. General George S. Patton understood the need to constantly bring the fight to the enemy. “Patton… clearly appreciated the value of speed in the conduct of operations. Speed of movement often enables troops to minimise any advantage the enemy may temporarily gain but, more important, speed makes possible the full exploitation of every favorable opportunity and prevents the enemy from readjusting his forces to meet successive attacks. Thus through speed and determination each successive advantage is more easily and economically gained than the previous one. … [R]elentless and speedy pursuit is the most profitable action.”3 Inciting market change, then, determines whether you follow your strategy or react to another’s.

The ability to disrupt a market by introducing change allows a company to execute its strategy at the expense of its competitors. Business execution, increasingly rooted in technology, thus derives a great deal of its competitive advantage from the rate at which a company can change its technology and systems. Velocity, the sustained rate at which business needs mature from expression to implemented solution, is therefore a key IT governance metric. It is, in fact, an expression of IT’s value proposition to its host organisation.



1 I am indebted to Roger Frechette for introducing me to this element of Bernoulli’s theorem. There are a number of articles highlighting his work on the Pearl River Tower, which when completed will be a remarkable structural and mechanical engineering achievement.
2 In this context, a feature is the same as an Agile story: "simple, independent expressions of work that are testable, have business value, and can be estimated and prioritised."
3 Eisenhower, Dwight D. Crusade in Europe Doubleday, 1948. p. 176.

Sunday, October 28, 2007

IT Governance Maximises IT Returns

In recent years, Michael Milken has turned his attention to health and medicine. Earlier this year, the Milken Institute released a report concluding that 7 chronic illnesses – diabetes, hypertension, cancer, etc. – are responsible for over $1 trillion in annual productivity losses in the United States. They go on to report that 70% of the cases of these 7 chronic illnesses are preventable through lifestyle change: diet, exercise, avoiding cigarettes and what not.1 In a recent interview on Bloomberg Television, Mr. Milken made the observation that because of the overwhelming number of chronic illness cases, medical professionals are forced to devote their attention to the wrong end of the health spectrum in the US. That is, instead of creating good by increasing life expectancy and enhancing quality of life through medical advancement, Mr. Milken argues that the vast majority of medical professionals are investing their energy into eliminating bad by helping people recover from poor decisions made. It is obviously a sub-optimal use of medical talent, and through sheer size it is showing signs of overwhelming the medical profession. It is a problem that will spiral downward until the root causes are eradicated and new cases of “self-inflicted” illness abate.

This offers IT a highly relevant metaphor.

Many of the problems that undermine IT effectiveness are self-inflicted. Just as lifestyle decisions have a tremendous impact on quality of life, how we work has a tremendous impact on the results we achieve. If we work in a high-risk manner, we have a greater probability of our projects having problems and thus requiring greater maintenance and repair. Increased maintenance and repair will draw down returns. The best people in an IT organisation will be assigned to remediating technical brownfields instead of creating an IT organisation that drives alpha returns. That assumes, of course, that an IT organisation with excessive brownfields can remain a destination employer for top IT talent.

This suggests strongly that “how work is done” is an essential IT governance question. That is, IT governance must not be concerned only with measuring results, but also with knowing that the way in which those results are achieved is in compliance with practices that minimise the probability of failure.

This wording is intentional: how work is performed reduces the probability of failure. If, in fact, lifestyle decisions can remove 70% of the probability that a person suffers any of 7 chronic conditions, so, too, can work practices reduce the probability that a project will fail. Let’s be clear: reducing the probability of failure is not the same as increasing the probability of success. That is, a team can work in such a way that it is less likely to cause problems for itself, by, e.g., writing unit tests, having continuous integration, developing to finely grained statements of business functionality, embedding QA in the development team, and so forth. Doing these things isn’t the same as increasing the probability of success. Reducing the probability of failure is the reduction of unforced errors. In lifestyle terms, I may avoid certain actions that may cause cancer, but if cancer is written into my genetic code the deck is stacked against me. So it is with IT projects: an extremely efficient IT project will still fail if it is blindsided because a market doesn’t materialise for the solution being developed. From a solution perspective, we can do things to control the risk of an unforced error. This is controllable risk, but it is only the risk internal to the project.

This latter point merits particular emphasis. If we do things that minimise the risk of an unforced error – if we automate a full suite of unit tests, if we demand zero tolerance for code quality violations, if we incrementally develop complete slices of functionality – we intrinsically increase our tolerance for external (and thus unpredictable) risk. We are more tolerant of external risk factors because we don’t accumulate process debt or technical debt that makes it difficult for us to absorb risk. Indeed, we can work each day to maintain an unleveraged state of solution completeness: we don’t accumulate “debt” that mortgages our future with downstream effort (such as “integration” and “testing”) still required to finish a partial solution that is alleged to be complete. Instead, we pull downstream tasks forward to happen with each and every code commit, thus maintaining solution completeness with every action we take.
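
One hedged illustration of pulling downstream tasks forward is a commit gate that refuses a change unless the work traditionally deferred to later phases passes immediately; the build targets below are placeholders for whatever build, test and quality tooling a team actually uses.

    import subprocess
    import sys

    # Placeholder commands: substitute the team's real build, test and quality
    # invocations. The point is that all of them run on every commit, so
    # "integration" and "testing" never accumulate as deferred, debt-like effort.
    COMMIT_GATE = [
        ["ant", "clean", "build"],      # full integration build
        ["ant", "test"],                # complete automated test suite
        ["ant", "quality-checks"],      # static analysis / coverage thresholds
    ]

    def run_gate(steps):
        for step in steps:
            print("running:", " ".join(step))
            if subprocess.call(step) != 0:
                print("commit rejected: the solution is no longer complete")
                return 1
        print("commit accepted: solution completeness maintained")
        return 0

    if __name__ == "__main__":
        sys.exit(run_gate(COMMIT_GATE))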

One of our governance objectives must be that we are cognisant of how solutions are being delivered everywhere in the enterprise, because this is an indicator of their completeness. We must know that solutions satisfy a full set of business and technical expectations, not just that solutions are “code complete” awaiting an unmeasurable (and therefore opaque) process that makes code truly “complete.” These unmeasurable processes take time, and therefore cost; they are consequently a black box: we can time-box them, but we don’t really know the effort that will be required to pay down any accumulated debt. This opacity of IT is no different from opacity in an asset market: it makes the costs, and therefore the returns, of an IT asset much harder to quantify. The inability to demonstrate the functional completeness of a solution (e.g., because it is not developed end-to-end) as well as its technical quality (through continuous quality monitoring) creates uncertainty that the asset is going to provide a high business return. This uncertainty drives down the value of the assets that IT produces. The net effect is that it drives down the value of IT, just as the same uncertainty drives down the value of a security.

If the governance imperative is to understand that results are being achieved in addition to knowing how they are being achieved, we must consider another key point: what must we do to know with certainty how work is being performed? Consider three recent news headlines:

  1. Restaurant reviews lack transparency: restaurateurs encourage employees to submit reviews to surveys such as Zagat, and award free meals to restaurant bloggers who often fail to report their free dining when writing their reviews.2

  2. Some watchmakers have created a faux premium cachet: top watchmakers have been collaborating with specialist auction houses to drive up prices by being the lead bidders on their own wares, and doing so anonymously. The notion that a Brand X watch recently sold for tens of thousands of dollars at auction increases the brand’s retail marketability by suggesting it has investment-grade or heirloom properties. That the buyer in the auction might have been the firm itself would obviously destroy that perception, but that fact is hidden from the retail consumer.3

  3. The credit ratings of mortgage-backed securities created significant misinformation about risk exposure. Clearly, a AAA rated CDO heavily laden with securitised sub-prime mortgages was never worthy of the same investment grade as, say, GE corporate bonds. The notion that what amounted to high-risk paper could be given a triple-A rating implied characteristics of the security that weren’t entirely true.

Thus, we must be very certain that we fully understand the facts about how work is being done. Do you have a complete set of process metrics established with your suppliers? To what degree of certainty do you trust the data you receive for those metrics? How would you know if they’re gaming the criteria that you set down (e.g., meaningless tests are being written to artificially inflate the degree of test coverage)? We must also not allow for surrogates: we cannot govern effectively by measuring documentation. We must focus on deliverables, and the artifacts of those deliverables, for indicators of how work is performed. A quote dating to the early years of CBS News is still relevant today: “everybody is entitled to their own opinion, but not their own facts.”4 Thus, IT governance must not only pay attention to how work is being done, it must take great pains to ensure that the sources of data that tell us how that work is being done have a high degree of integrity. People may assert that they work in a low-risk manner, but that opinion may not withstand the scrutiny of fact-based management. As with any governance function, the order of the day is no different from the administration of nuclear arms treaties: “trust, but verify.”
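
As a concrete, entirely hypothetical example of gaming the criteria: the first test below executes every line of the function, so coverage figures rise, yet it asserts nothing about behaviour. Coverage numbers have to be read alongside the tests that produce them.

    import unittest

    def settlement_fee(notional):
        """Imaginary production code: a five basis point settlement fee."""
        return notional * 0.0005

    class MeaninglessTest(unittest.TestCase):
        def test_settlement_fee(self):
            # Executes the code (so coverage rises) but asserts nothing, so a
            # defect in the calculation would never be caught.
            settlement_fee(1_000_000)

    class HonestTest(unittest.TestCase):
        def test_fee_is_five_basis_points(self):
            self.assertAlmostEqual(settlement_fee(1_000_000), 500.0)

    if __name__ == "__main__":
        unittest.main()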

This entire notion is a significant departure from traditional IT management. As Anatole France said of the Third Republic: “And while this enfeebles the state it lightens the burden on the people. . . . And because it governs little, I pardon it for governing badly.”5 On the whole, IT professionals will feel much the same about their host IT organisations. Why bother with all this effort to analyse process? All anybody cares about is that we produce "results" - for us, this means getting software into production no matter what. This process stuff looks pretty academic, a lot of colour coded graphs in spreadsheets. It interferes with our focus on results.

Lackadaisical governance is potentially disastrous because governance does matter. There is significant data to suggest that competent governance yields higher returns, and similarly that incompetent governance yields lower returns. According to a 2003 study published by Paul Gompers, buying companies with good governance and selling those with poor governance from a population of 1,500 firms in the 1990s would have produced returns that beat the market by 8.5% per year.6 This suggests that there is a strong correlation between capable governance and high returns. Conversely, according to this report, there were strong indicators in 2001 that firms such as Adelphia and Global Crossing had significant deficiencies in their corporate governance, and that these firms represented significant investment risk.

As Gavin Anderson, chairman and co-founder of GovernanceMetrics International recently said, “Well governed companies face the same kind of market and competitor risks as everybody else, but the chance of an implosion caused by an ineffective board or management is way less.”7 The same applies to IT. Ignoring IT practices reduces transparency and increases opacity of IT operations, reducing IT returns. Governing IT so that it minimises the self-inflicted wounds, specifically through awareness of “lifestyle” decisions, creates an IT capability that can drive alpha returns for the business.


1DeVol, Ross and Bedroussian, Armen with Anita Charuworn, Anusuya Chatterjee, In Kyu Kim, Soojung Kim and Kevin Klowden. An Unhealthy America: The Economic Burden of Chronic Disease -- Charting a New Course to Save Lives and Increase Productivity and Economic Growth October 2007
2McLaughlin, Katy. The Price of a Four Star Rating The Wall Street Journal, 6-7 October 2007.
3Meichtry, Stacy. How Top Watchmakers Intervene in Auctions The Wall Street Journal, 8 October 2007.
4Noonan, Peggy. Apocalypse No The Wall Street Journal, 27-28 October 2007.
5Shirer, William L. The Collapse of the Third Republic Simon and Schuster, 1969. Shirer attributes this quote to Anatole France, citing as his source Histoire des littératures, Vol. III, Encyclopédie de la Pléiade.
6Greenberg, Herb. Making Sense of the Risks Posed by Governance Issues The Wall Street Journal, 26-27 May 2007.
7Ibid.

Wednesday, September 26, 2007

Investing in Strategic Capability versus Buying Tactical Capacity

US based IT departments are facing turbulent times. The cost efficiencies achieved through global sourcing face a triple threat to their fundamentals:
  1. The USD has eroded in value relative to other currencies in the past 6 years1 – this means the USD doesn’t buy as much global sourcing capacity as it did 6 years ago, particularly vis-à-vis its peer consumer currencies.

  2. The increase in global IT sourcing is outpacing the rate of development of highly-qualified professionals in many markets2 – salaries are increasing as there are more jobs chasing fewer highly-qualified candidates, and turnover of IT staff is rising as people pursue higher compensation.
  3. Profitability growth at the high end of the IT consumer market remains strong – Goldman Sachs just had the 3rd best quarter in its history3 – so demand for highly capable people will intensify.
This could significantly change labour market dynamics. Since the IT bubble, the business imperative has been to drive down the unit cost of IT capacity (e.g., the cost of an IT professional per hour). This has been achieved substantially through labour arbitrage – sourcing IT jobs from the lowest cost provider or geography. However, the reduced buying power of the USD, combined with increasing numbers of jobs chasing fewer people, plus an increase in demand at the high end of the labour market, means that simple labour arbitrage will have less impact on the bottom line. As IT costs change to reflect these market conditions, US-based IT organizations will face an erosion of capability.

In one sense, labour is to the IT industry as jet fuel is to the airline industry: IT is beholden to its people, just as airplanes don’t fly without fuel. For quite some time, we’ve attempted to procure labour using a commodity approach: somebody estimates they have x hours of need, which means they need y people, who will then be globally sourced from the least expensive provider. The “unit cost optimisation” model of pricing IT capability defaulted into success because of the significant cost disparity between local and offshore staff. The aforementioned market trends suggest that the spread may narrow. If it does, a number of the underlying assumptions are no longer valid, and the fundamental flaws in most labour arbitrage models are exposed: specifically, the assumptions that IT needs are uniform, and that IT capabilities are uniform and can be defined as basic skills and technical competencies.

Unlike jet fuel, labour isn’t a commodity. Not every hour of capacity is the same. There are grades of quality of capability that defy commoditisation. This means there is a quality dimension that is present yet substantially invisible when we assess capacity. Macro-level skill groupings are meaningless because they’re not portable (e.g., one organisation’s senior developer is another’s junior). They also fail to account for labour market trends: if the population of Java coders increases in a specific market but new entrants lack aptitude and experience and their training is inferior, we have a declining capability trend that is completely absent from our sourcing model. Nor is capacity linear – two people of lower capability will not be as effective as one person of high capability, and too many low-capability people create more problems than they solve. An IT organisation which has stabilised around simple unit-cost optimisation will find itself at the mercy of a market which it may not fully understand, with characteristics which haven’t been factored into its forecasts.

The commodity model also ignores how advanced IT systems are delivered. High-return business solutions don’t fit the “mass production” model, where coders repetitively apply code fragments following exacting rules and specifications. Instead, business and IT collaborate in a succession of decisions as they navigate emerging business need whilst constantly integrating back to the tapestry of existing IT components and business systems. This requires a high degree of skill from those executing. It also requires a high degree of meta knowledge or “situational awareness,” that is, domain knowledge and environmental familiarity necessary to deliver and perpetuate these IT assets. This includes everything from knowing which tools and technology stacks are approved for use, to how to integrate with existing systems and components, to what non-functional requirements are most important, to how solutions pass certification. Combined, this meta knowledge defines the difference between having people who can code to an alleged state of “development complete” versus having people who can deliver solutions into production.

Because the assets that drive competitiveness through operations are delivered through a high-capability IT staff, unit cost minimisation is not a viable strategy if IT is to drive alpha returns. Strategic IT is therefore an investment in capability. That is, we are investing not just in the production of assets that automate operations, we are investing in the ability to continuously adjust those IT assets with minimal disruption, such that they continue to support evolving operational efficiencies. This knowledge fundamentally rests with people. The value of this knowledge is completely invisible if we’re buying technology assets based on cost.

This brings us back to current market conditions. At the moment, tactical cost minimisation works against the USD denominated market competitor. The EUR, CHF, AUD, CAD or GBP competitor can afford to increase salaries wherever sourced without as much bottom-line impact as their USD competitors. They subsequently have an advantage in attracting new talent, and are better positioned to lure away highly capable people from US based competitors. In addition, the increased cost of IT for the US based competitor might mean more draconian measures, such as staff reductions, to meet budget expectations. To avoid the destruction of capability, a US IT organisation may look to simply shift sourcing from international to local markets. But this shift is not without its risk in durability (will the USD rise again to match historical averages?), competitive threat (other firms will follow the same strategy and drive up local market salaries), or cost of change (nothing happens in zero time, and the loss / replacement of meta knowledge comes at a cost.) Clearly, global sourcing is no longer a simple cost equation. It is complex, involving a hedge on investing in sustainable capability development relative to competitive threats and exchange rate fluctuations.

Responding to this challenge requires that the IT organisation have a mature governance capability. Why governance? Because surviving the convulsions in the cost of the “jet fuel” of the IT industry requires that we frame the complete picture of performance: that value is delivered, and that expectations (ranging from quality to security to regulatory compliance) are fully satisfied. IT doesn’t do this especially well today. It suffers no shortage of metrics, but very few are business-facing. The scarcity of business-oriented metrics gives “cost per hour” that much more prominence, and fuels the unit cost approach to IT management.

Breaking out of this requires assessing the cost of throughput of IT as a whole and of teams in particular, not of the individual. IT is only as capable as the productivity of its cross-functional execution; specifically, how effectively IT teams steer business needs from expression to production, subject to all the oddities of that particular business environment. If the strength of currently sourced teams can be quantitatively assessed, the organisational impact of a potential change in IT sourcing can be properly framed. The lack of universal capability assessment, and the immaturity of team-based results analysis, mean that an IT governance function must define these performance metrics for itself, relative to its industry, with cooperation and acceptance from its board. Without it, IT will be relegated to a tactical role, forever chasing the elusive “lowest unit cost” and perpetually disappointing its paymasters, struggling to explain the costs of execution which cannot be accounted for in a unit cost model.

If an IT organisation is focused on team throughput and overall capability, it can strategically respond to this threat. Just as jet fuel supply is secured and hedged by an airline, so must labour supply be strategically managed by IT. This means managing the labour supply chain4 to continuously source high capability people, as opposed to recruiting to fill positions as they become vacant. This requires managing supply and demand by doing such things as anticipating need and critically assessing turnover, creating recruiting channels and candidate sources, identifying high-capability candidates, rotating people through assignments, understanding and meeting professional development needs, setting expectations for high-performance, providing professional challenges, offering training and skill development, critically assessing performance, managing careers and opportunities, correcting poor role fits and bad hiring decisions, and managing exits.

Doing these things builds a durable and resilient organisation – attributes that are invisible in a cost center, but critical characteristics of a strategic capability. This is, ultimately, the responsibility of an IT organisation, not an HR department. HR may provide guidelines, but this is IT’s problem to solve; it cannot abdicate responsibility for obtaining its "raw materials." Clearly, building a labour pipeline is a very challenging problem, but it's the price of admission if you're going to beat the market.

IT drives alpha returns not just through the delivery of strategic IT assets, but by investing in the capability to consistently deliver those assets. If capability moves with the labour market, an IT organisation will yield no better than beta returns to the business. Current market indicators suggest that it will be difficult for US based firms to maintain their current levels of capability, thus the business returns driven by an IT capability that moves with the market are likely to decline. Tactical buyers of IT are facing a cost disparity, and will have few cards to play that don't erode capability. Strategic investors in IT can capitalise on these trends to intensify strengths, and even disrupt competitors, through aggressive management of their labour pipelines.


1 Comparing August 2001 to August 2007 monthly averages, the USD declined 28% to the GBP, 34% to the EUR, 28% to the CHF, 37% to the AUD, 31% to the CAD, 13% to the INR, 8% to the CNY. Exchange rate data was pulled from Oanda.

2 Technology job growth and salaries are on the rise worldwide. Two recent articles highlight India and the US. Also, I’ve referenced the following two previously, but Adrian Wooldridge makes a compelling argument for the increased competition for talent, and there’s ample data on the gap between job growth and the volume of new entrants. There are some recent articles evaluating the quality of talent but I don’t have those handy.

3 Profitability among market leaders and overall technology sector growth continues to be strong globally.

4 I am indebted to Greg Reiser for this term.

Friday, August 24, 2007

Good Management Can Work Miracles

Pharmaceutical companies do not successfully deliver drugs just because they hire a lot of highly skilled researchers in lab coats. They deliver drugs because they have people to secure funding for research of some drugs over others, to take them through lab and clinical trial, to steer them through regulatory approval, to manufacture, to make doctors aware of them, to distribute them to hospitals and pharmacies, and to follow-up on results. When it all comes together, the overall benefit of the resulting system can be incredible, such as an increase in life expectancy. As Thomas Teal wrote succinctly, “Good management works miracles.”1

The same applies to an Information Technology organisation that is core to achieving alpha returns: it needs the management practices to match high-capability people. Consider Xerox PARC. In the 1970s it spawned tremendous innovation in personal computing technology. From those innovations came products, solutions, and even categories that didn’t previously exist. Billions of dollars of revenue and profitability were generated – for everybody but Xerox. Why? Bright people were inventing and innovating, but nobody was there to monetise their work.2

In general, IT has structural and social deficiencies that conspire against the development of a management capability. Technical IT jobs offer individual satisfaction on a daily basis, with frequent feedback and acknowledgement of success at solving a very specific problem or challenge. IT reward systems are also often based on individual performance tied to granular and highly focused statements of accomplishment. This is contrary to IT management positions, where the results of decisions made may not be realised for weeks or months, measures of success are often less tangible, and rewards are based on the performance of a large collection of individuals. It is also not uncommon for a company to belittle the value of management by defining it as a collection of supervisory or hygienic tasks, e.g., ensuring everybody on a team has taken requisite compliance training each quarter and submits their timesheet each week. In other instances, line management may simply have all decision-making denied it, relegated to polling subordinates to find out what’s going on, summarising it and reporting upwards, while also being given messages from higher up to deliver to the rank and file. These aren’t management practices, they’re tasks of administrative convenience masquerading as management.

These deficiencies are supplemented by an IT industry that, on the whole, doesn’t place much value on management. Technical people promoted into management positions will very often resist management responsibilities (e.g., electing to continue to perform technical tasks), and also be highly unleveraged in a team (one manager to dozens of direct reports). There is more precise definition and greater mobility of technical jobs than of management jobs. The same is true for skill acquisition: there is a wealth of IT oriented books, magazines, journals and publications on very specific technologies or applications of technologies, but not nearly the same depth on matters of management. And all too often, what passes for management guidance in books and publications are lightweight abstractions of lessons learnt wrapped in layers of cheerleading and self-esteem building directed at the reader, not principles of sound management science. No surprise, then, that it can be far more attractive for an IT professional to prefer a career path of “individual contributor” over “manager.” Collectively, this does nothing to swell the ranks of capable managers.

One of the root causes is that IT is considered a technology-centric business, as opposed to a people-centric one.3 By definition, IT solutions – what the business is paying for – are not delivered by technology, but by people. The misplaced orientation toward technology as opposed to people means that management tends to focus not on what it should – “getting things done through people” – but on what it should not – the “shiny new toys” of technology assets. This misplaced focus creates a deficiency in management practices, and widens the capability gulf relative to the demands being placed on IT today. It is a structural inhibitor to success in delivering solutions.

Top-down, high-control management techniques have proven to be a poor fit for the IT industry. Amongst other things, this is because of a high concentration of highly-skilled and creative (e.g., clever at design, at problem solving, and so forth) people who don’t respond to that style of management. It is also because top-down practices assume an unchanging problem space, and thus an unchanging solution space. Unfortunately, this is not a characteristic of much of what happens in technology: IT solutions are produced in a dynamic, creative solution space. For one thing, most projects are in a constant state of flux. The goal, the defined solution, the people, and the technologies constantly change. This means day to day decisions will not move in lock step with initial assumptions. For another, in most projects there is not a shared and fully harmonised understanding of the problem: if anybody on the team has a different understanding of the problem, if there is latency in communicating change to each member of the team, or if somebody simply wants to solve an interesting technical problem that may or may not be of urgent relevance to the business problem at hand, decisions on the ground will be out of alignment with overall business needs.

Each IT professional takes dozens of decisions each day. The principal objective in managing a highly-capable organisation is not to take decisions for each person, but to keep the decisions people take in alignment. Management of a professional capability is thus an exercise in making lots of adjustments, not pushing blindly ahead toward a solution. But facilitating lots of adjustments isn’t a simple problem, because management doesn’t just happen by itself in a team. As Dr. George Lopez found out, changing to self-managed work teams isn’t a turnkey solution:

  • With no leaders, and no rules, "nothing was getting done, except people were spending a lot of time talking." After about a year and a half, he decided teams should elect leaders.4

But that, too, isn’t enough. The basic management tools and structures must be present or this simply doesn’t work. Consider this description of Super Aguri’s F1 pre-race meetings:

  • "The process is very formal," says Super Aguri sporting director Graham Taylor. "I chair our meetings and ask the individuals present to talk in turn. Anyone late gets a bollocking, only one person is allowed to talk at a time, and it absolutely is not a discussion. It … is absolutely the best way to share a lot of information in a short passage of time. … If you don’t need to be there, you’re not."


  • … While the briefing process has always been with F1, the structure has evolved to absorb the increasing complexity of cars and the concomitant increase in team size. "When I started in F1 there were 50 people in the team, now there are 500," says [Sam] Michael, […Williams technical director]. "If you don’t have a firm hold, not everyone gets all the information. You need to have structure.”5


Consider how many layers a team, or even an individual developer, may be removed from a large IT programme. The lack of upward transparency from the individual and downward visibility from the programme management makes it nearly impossible to keep decisions aligned. This is where good management delivers tremendous value: the basic, focused structures of Agile project management – the daily stand-up, the iteration planning meeting, the retrospective, the iteration tracking report, the release plan – provide focus, structure, and discipline that facilitate maintaining alignment of individual and group goals. A salient point in the example from Super Aguri is that greater team capability makes the need to do these things that much more acute: the higher the degree of professional capability in the team, the more adjustments there can be to make, if for no other reason than the rapidity with which the highly capable will work a problem space.

“We are not a family,” Robert Lane, the CEO of Deere & Co, recently said.6 “What we are is a high performance team.” If IT is to deliver high-performance results – to be an alpha capability that contributes to alpha business results – it requires not only high-capability people but the management practices to match.

1 Teal, Thomas. "The Human Side of Management." Harvard Business Review. April, 1996. I highly recommend reading this article. He deftly summarises the gap between the impact management has and the investment made in developing management talent. It has arguably become more pronounced in the 11 years since this article first appeared.
2 Gross, Daniel. “How Xerox Failed to Copy Its Success.” Audacity. Fall, 1995. Audacity, the business journal, has long since ceased publication, which is a shame: it provided excellent snippets on business decisions, both successful and unsuccessful, in the broader context of their market conditions.
3 The change in moniker from “Information Systems” to “Information Technology” was, arguably, a step in the wrong direction. The word “systems” is expansive, and extends to solutions with a partial technology component to them - or none whatsoever. “Technology” narrows the remit, and thus the business impact, of what we now call “IT.”
4 White, Erin. “How a Company Made Everyone a Team Player” The Wall Street Journal, Monday 13 August 2007.
5 "Grand Prix Tune-Ups" F1 Racing Magazine. August 2007. As perhaps the first among high-performance industries, it’s interesting to see what sounds very much like a stand-up meeting being core to race day execution - and how it has allowed them to scale.
6 Brat, Ilan and Aeppel, Timothy. “Why Deere is Weeding Out Dealers Even as Farms Boom” The Wall Street Journal, Tuesday, 14 August 2007.


Sunday, July 29, 2007

Alpha Returns Require an Alpha IT Capability

Demand for IT in business continues to rise. Looking backward, over the last 10 years the IT market has absorbed the new capacity in Asia and South America, yet still we find global and national/regional IT employment is up since 2000.1 Looking forward, all indications are that demand will continue to rise. More importantly, there are very strong indicators that IT will increasingly be a strategic capability: the forecasted increase in worldwide investable assets is creating demand for new sell-side financial products;2 fact-based management is increasingly being applied to sweat assets or improve the competitiveness of operations, which in turn demands increasing amounts of data about specific businesses and processes;3 and the re-emergence of high-risk capital (the recent downturn in credit markets notwithstanding) is funding start-up companies.


This presents both a dilemma and a competitive opportunity for companies today.


IT is, fundamentally, a people business. While the systems and solutions it produces might automate tasks of the business, not to mention allow for complex tasks not otherwise practical to be manually executed, the production of those systems is a people-centric process. It stands to reason also that the more complex the solution, the more skilled the people behind the solution. The challenge facing an increasingly strategic IT isn’t a capacity question, but a capability question. Skills and capabilities are not commodities. It takes a great deal of individual skill to solve business problems through IT systems, specifically to model, code and construct solutions that scale, are secure, and are reliable. The highly-capable IT professional, one who has the ability to perform technical tasks as well as understand the business context, is already rare. But as IT becomes a driver of strategic imperative, these professionals will be in that much greater demand. The problem facing IT, then, is that the increase in demand for strategic business solutions that have a significant IT component will outpace the arrival rate of new highly-capable IT professionals, making the highly-capable IT professional that much more scarce.


This is evident in the example verticals posited above: an increase in the development of sell-side products, or an increase in the demand for greater and more accurate data on the business (and business processes), are clear examples of companies trying to achieve returns that beat the market by making use of a significant IT component. The trick to yielding higher ROI through such strategic IT solutions is not to reduce the “investment” in the IT component. Such strategic solutions can’t be sourced based on cost-of-capacity; they need to be sourced specifically on the capability of those delivering the solution. Capacity - the time available to work on developing these solutions - will not alone deliver the solution, as emergent products and greater business insight are arrived at iteratively, and through business-IT collaboration. Looking simply at IT capacity to do such tasks is to hold skills as a constant. In a people-centric business, skills are not constant. The trick to yielding higher ROI through strategic IT solutions is to achieve “alpha” IT capability relative to the market – that is, to have an IT capability that beats the average, or “beta,” market IT capability. Specifically, sourcing an IT capability and allowing it to improve at the same rate as the overall market (beta) isn’t going to be sufficient if IT is to be a driver of above-average returns (alpha). To drive above-average market returns, that IT capability must itself be above average.


Being “above average” or “below average” is difficult to assess because there is no “IT capability index” and thus no baseline for IT in general, let alone within an industry. Consequently, any assessment of IT capability is likely to be laden with assertion. Worse, we have often allowed “results” to act as a surrogate for an assessment of IT effectiveness, but looking exclusively at results is often incomplete, and there is a high degree of latency between results data – the quality of delivered IT systems – and capability development. It is possible, though, to take an objective assessment of IT capability. We can look to a wealth of indicators – development excellence measures such as code quality and delivery transparency, staff peer reviews, customer satisfaction, and business value delivered – to create a composite of IT effectiveness. In some cases, we may need to have relative baselines: e.g., we measure over time against an initially assessed state of something like customer satisfaction. In other cases, we can identify strong industry baselines, such as code quality metrics run against the source of open source projects such as CruiseControl, JBoss, Waffle, PicoContainer and many, many others, to provide an indicator of achievable code quality.
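
A hedged sketch of what such a composite might look like: a handful of the measures mentioned above, each normalised against a chosen baseline (an open source codebase, or an initially assessed state) and combined into a single indicator. Every metric, weight and baseline value here is invented for illustration.

    # All figures and weights are invented; the point is the shape of the
    # calculation, not the numbers.

    baselines = {                        # e.g. from open source codebases, or an initial assessment
        "test_coverage":         0.70,
        "cyclomatic_complexity": 8.0,    # lower is better
        "customer_satisfaction": 3.5,    # survey score out of 5
        "on_time_delivery":      0.60,
    }

    current = {
        "test_coverage":         0.82,
        "cyclomatic_complexity": 6.5,
        "customer_satisfaction": 3.9,
        "on_time_delivery":      0.75,
    }

    weights = {metric: 0.25 for metric in baselines}
    lower_is_better = {"cyclomatic_complexity"}

    def relative_strength(metric):
        """Greater than 1.0 means better than the baseline; less than 1.0, worse."""
        if metric in lower_is_better:
            return baselines[metric] / current[metric]
        return current[metric] / baselines[metric]

    composite = sum(weights[m] * relative_strength(m) for m in baselines)
    for m in sorted(baselines):
        print("%-24s %5.2f" % (m, relative_strength(m)))
    print("%-24s %5.2f" % ("composite vs baseline", composite))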


Gaining a sense of relative strength is important because it provides context for what it means to be high-capability, but it doesn't define what to do. Clearly, there must be things a strategic IT organisation must do to become high-capability, and remain so over time. And this isn’t just a money question: while compensation is a factor, in an increasingly competitive market you quickly end up with a dozen positions chasing the same person, all the time.4 The high-capability IT organisation must offer more than comp if it is to be durable. It must be a “destination employer” offering both skill and domain knowledge acquisition as well as thought leadership opportunities. This requires investing in capability development: skills, specialisation, innovation, and so forth. Going back to our examples, the development of cutting-edge sell side products (e.g., structured credit products) will require business domain fluency as well as a high degree of technology skill. Similarly, if companies are to “sweat the assets” of a business to their limits, they require a very low noise-to-signal ratio in their business intelligence; that in turn requires IT to be highly fluent in business process and business needs. Companies seeking alpha through IT cannot obtain this capability through beta activities such as the acquisition of new technology skills through natural turnover and changes in the overall market capability, or the introduction to the business domain through immersion in an existing codebase. Indeed, companies relying on beta mechanisms may find themselves underperforming dramatically in an increasingly competitive market for capability. Instead, to be drivers of strategic solutions and thus alpha results, capability must rise from strategic investments.


The successful development of this capability turns IT into a competitive weapon in multiple ways. The business context is obvious: e.g., new sell-side products will clearly allow a trading firm to attract investment capital. But it goes beyond this. In a market with a scarce volume of high-performance people, being the destination employer for IT professionals in that industry segment can deprive the competition of those people. Hindering a competitor from developing an alpha IT capability will undermine their ability to achieve alpha returns. This makes IT both a driver of its host firm’s returns as well as an agent that disrupts the competition. This clearly demarcates the difference between beta reactions, such as adjusting recruiting efforts in anticipation of IT demographic changes, and alpha actions that create those shifts through recruiting and retention activities that force competitors to react.


This does not apply to all IT organisations. Those that are strictly back-office processing are simply utilities, and can, with minimum risk, trail the market in capability development. But businesses with significant portions of alpha returns dependent on IT systems development require a similarly alpha IT capability. If they follow the utility model, firms dependent on strategic IT are going to post sub-optimal returns that will not endear them to Wall Street. If instead they build a high-capability captive IT organisation, or partner with a firm that can provide that high-capability IT for them, and build out the governance capability to oversee it, they’ll not just satisfy Wall Street, they’ll be market leaders with sustainable advantage.


1The Department of Computing Sciences at Villanova University has published some interesting facts and links.
2"Get Global. Get Specialized. Or Get Out." IBM Institute for Business Value, July 2007.
3"Now, It's Business By Data, but Numbers Still Can't Tell Future." Thurm, Scott. The Wall Street Journal, 23 July 2007.
4"The Battle for Brainpower." The Economist, 5 October 2006.

Sunday, June 24, 2007

Strategic IT Does More than Assume Technology Risk, it Mitigates Business Risk

Risk management, particularly in IT, is still a nascent discipline. Perhaps this is because there are an overwhelming number of cultural norms that equate “risk management” to “defeatism.” To wit: “Damn the torpedoes, full speed ahead!” is intuitively appealing, offering a forward-looking focus without the encumbrance of consideration for that which might go wrong. This notion of charging ahead without regard to possible (and indeed likely) outcomes all too often passes for a leadership model. And poor leadership it is: sustainable business success is achieved when we weigh the odds, not ignore them entirely. Thus leadership and its derivative characteristics - notably innovation and responsiveness - are the result of calculated, not wanton, risk taking.

How risk is managed offers a compelling way to frame the competitiveness of different business models. A recent report by the IBM Institute for Business Value1 forecasts that Capital Markets firms engaged in risk mitigation have better growth and profit potential than firms engaged in risk assumption. According to this report, risk assumers (such as principal traders or alternative asset managers) may do a greater volume of business, but it is the risk mitigators (notably structured product providers and passive asset managers) who will have greater margins. Since Capital Markets firms are leading consumers of IT, and as IT tends to reflect its business environment, this has clear implications for IT.

Traditionally, IT has been a risk assumer. It provides guarantees for availability, performance and completeness for delivery of identified, regimented services or functionality. This might be a data centre where hardware uptime is guaranteed to process transactions, a timesheeting capability that is available on-demand, or development of a custom application to analyse asset backed securities. Being asked to do nothing more than assume risk for something such as availability or performance of technology is appealing from an IT perspective because it’s a space we know, or at least we think we know. It plays directly to that which IT professionals want to do (e.g., play with shiny new toys) without taking us out of our comfort zone (the technology space versus the business space). Worrying about the technology is familiar territory; worrying about the underlying business need driving the technology is not.

IT can only provide business risk mitigation if it is partnering with the business for the delivery of an end-to-end business solution. If it is not – if IT maintains the arms-length “you do the business, we do the technology” relationship - IT assumes and underwrites technology risk, nothing more. The trouble is, this doesn’t provide that much business value. Technology itself doesn’t solve business problems, so the notion of managing technology – be it to optimise cost, availability, performance or completeness – is no different from optimising, say, the company's power consumption.

This defines IT's business relevance. Being a provider of utilities is not a strategic role. Energy offers a compelling parallel: while it is important for a business to have electricity, most businesses don’t think of the power company as a strategic partner, they think of the power company as just “being there.” The concern is far more utilitarian (e.g., are we turning off the lights at night) than it is strategic (e.g., nobody measures whether we're maximising equity trades per kilowatt hour). Worse still, businesses don’t think about their utilities at all. They're aware of them only when they're not there. Awareness is negative: at best an escalation, at worst a disruption to the utility’s cash flow.

In any industry, risk assumption is the purview of utility providers. Software development capacity, IT infrastructure, and software as a service are all examples of risk assumption. They are useful services to be sure, but of low business value. They compete on cost (through, for example, labour arbitrage or volume discounts) as opposed to differentiation on value (that is, as drivers of innovation or invention.) For IT as a whole to have business value it must not be viewed as a technology risk assumer but a mitigator of business risk.

The latter role has been historically de facto granted because business systems are substantially technology systems, hence IT has had a direct (if unforeseen and unofficial) hand in business process re-engineering, partner management, and compliance certification. In the future, defaulting into this role is not a given: while business solutions have a significant technology component they are not solved entirely by technology. The actual business solution is likely to involve increased regulatory compliance, complex process changes, constant training, ongoing supplier and customer management and integration, and so forth, all of which are increasingly complex due to multiplicity of parties, tighter value chain coupling, and geographic distribution, amongst other factors. Clearly, the technology component is but one part of an overall business solution.

Here lies the point-of-pivot that defines IT as strategic or tactical. If IT subordinates itself to the role of technology sourcer, abdicating responsibility for the success of the end-to-end business solution, so shall the business cast IT as nothing more than an interchangeable component of the solution to the business problem. Conversely, when the business and IT both recognise that the technology piece is the make-or-break component of the overall business solution (that is, the technology bit is recognised as the greatest single controllable factor in determining success), IT has strategic footing. It achieves this footing because it mitigates risk of the business solution in business terms, not because it assumes risk for services that the business can competitively source from any number of providers.

Being a mitigator of business risk does in fact require that IT has robust risk management of its own capabilities. That is, internally, IT effectively and competently ensures delivery. Here, we run directly into the fact that risk management is not a particularly strong suit of IT, and for the most part is primitively practiced in the IT space. Certainly, it is not simple: executive IT management must be able to analyse risk factors of individuals and teams (e.g., staff turnover, knowledge continuity, individual skill, interpersonal factors). It must do so across a broad spectrum of roles (quality engineer, developer, DBA, network engineer) with regard to a wide variety of factors (e.g., are we a destination employer / destination account for our suppliers, are we giving people stretch roles with proper mentoring or simply tossing them in the deep end to fill out an org chart?) These people factors are critical, but are only one source of risk: IT must be able to manage service availability, requirements accuracy, security and performance across software and infrastructure. Furthermore, this risk spectrum must be presented in a portfolio manner that assesses risk factors on in-flight and potential fulfillment alternatives, both at this moment and forecast over time. A trivial task it is not.

Regardless of the complexity, technology risk assumption does not provide business value. In Herzberg’s terms, keeping one’s house in order does not provide satisfaction to one’s partners, even if the absence of order creates genuine dissatisfaction with one’s partners. Successful assumption of risk – maintaining uptime within tolerance, or being “on time and on budget” – is nothing more than basic hygiene. We have allowed this to masquerade as providing “business value.” It doesn’t. The absence of hygiene – reliability, performance, continuity, completeness – relegates IT to a tactical role because it gives the appearance that IT is incapable of keeping its house in order. At the same time, the presence of hygiene – making deliveries on schedule, or meeting conditions of SLAs – does not entitle IT to a strategic role, it merely contains dissatisfaction.

To become a strategic capability, IT must offer motivators to the business. To accomplish this, IT must focus specifically on activities that mitigate business risk. The core opportunity lies with people. IT is still very much a people-based business; that is, code doesn’t write itself, projects don’t manage themselves, network topologies don’t materialise, solution fitness doesn't just happen, etc. A key differentiator in what makes IT strategic versus tactical is the extent to which people are leveraged to create business impact: the developer who creates a clever solution, the analyst who connects a complex series of support issues and expressed business requirements, the project manager who brings business solutions to fruition. This requires an outward view that includes domain knowledge and business intimacy on the part of IT professionals. Making a greater, outward-looking context core to each person’s day-to-day work is how IT provides satisfaction to the business. The absence of this – reverting to the “you do the business we’ll do the technology” approach – relegates IT to a utility service, at best a department that doesn't let the business down, at worst something that does. Conversely, an outward-looking, business-engaged capability that is focused on the business problems at hand is what distinguishes a strategic as opposed to a tactical IT.

An efficient, risk-assuming IT capability is a superb utility that contains cost. It is well regarded by the business until a less expensive alternative presents itself, at which time that same IT capability becomes an under-achiever, even a nuisance.2 By comparison, an effective, expansive and business-risk-mitigating IT is a superb driver of business value, in touch with the environment such that it anticipates change and adjusts accordingly. In so doing, IT is not in the minimising business – minimising downtime, minimising cost, minimising catastrophic failures – but in the maximisation business – specifically, maximising business returns. A risk-assuming IT defines tactical IT; a risk-mitigating IT defines strategic IT.



1IBM's Institute for Business Value offers a number of interesting research papers. On the whole they have much more of a consumer-oriented view of IT and offer different market perspectives on the role of IT. Look for another Capital Markets paper in July 2007.
2Michael Porter made the case in Competitive Strategy that competition on price isn't sustainable; we should expect nothing different for an IT utility.

Monday, May 28, 2007

Just as Capital Has a Static Cost of Change, So Must IT

The global economy is awash in cash. We’ve experienced unprecedented profitability growth for the past 16+ quarters, the cost of capital is low, investment risk is more easily distributed, and companies find themselves with strong cash balances. Increasingly, though, we're seeing companies being taken private and their cash taken out by new ownership, or companies buying back their own stock.


This has implications for IT, as it competes for this same investment dollar on two fronts. First, if the executive decision is to concentrate equity or engage in M&A, it inherently means that these types of investments are expected to provide greater return than alternatives, notably investments in operations. IT projects, being operations-centric, are losing out. Second, when companies are taken private, it’s often with the expectation that they’ll be flipped in a short period of time; to maximise return, operations will be streamlined before the company is taken public again. This means private capital will scrutinise the business impact of not only new projects, but existing spend.


To win out, IT has to change the way it communicates. It must think and report more in terms common to capital, less in terms common to operations.


This means that IT has to show its business impact in a portfolio manner. For every project, there must be some indication of business impact, be it reduction of risk, reduction of cost of operations, revenue generation, and so forth. This is not a natural activity for IT because, for the most part, IT solutions don’t themselves provide business return; they do only as part of larger business initiatives. As a result, IT often abdicates this responsibility to a project’s business sponsor. As stewards of spend and beneficiaries of budgets, IT cannot afford to be ignorant of or arrogant to the larger context in which its solutions exist.


IT's effectiveness depends on its ability to maximise use of people and resources. This means taking decisions across multiple initiatives, which can bring IT into conflict with the rest of the business, especially with sponsors of those initiatives that are deprioritised. Business issues are not universally understood by IT's business partners. For example, Accounting and Finance people may recognise the need for systems that reduce restatement risk, whereas operations people may see systems and processes designed to reduce restatement risk as contributing only to operational inefficiency. Communicating business imperative, and then people and resource decisions in a business-priority context, makes IT decisions less contentious. It also makes IT more of a partner, and less of a tool.


This is not to say that business impact and return offer a universal language for business projects. Not every dollar of business value is the same: an hour of a person’s work reduced is not the same as reduced energy consumption of fewer servers is not the same as a reduction in restatement risk is not the same as new revenue. However, always framing the project in its business context makes both needs and decisions unambiguous, and gives us the ability to maximise return on technology investment.


Because the business environment changes, so do returns. As a result, assessing business impact is an ongoing activity, not a one-off done at the beginning of a project. Over the life of any project it must be able to show incremental returns. The further out that returns are projected, the more speculative they are, if for no other reason than the changes in the business environment. Capital is impatient, and can find faster returns that provide greater liquidity than long-term programmes. If the business itself is providing quarterly returns, so must any IT project.


Operating and measuring an IT project in the context of its business impact is a fundamental shift for IT. The purpose of continuing to spend on a project is to achieve a business return; we don't continue to spend simply because we think we’ll continue to be “on-time and on budget.” The latter is irrelevant if what we're doing – on time or otherwise – has zero or even negative business impact. Measuring to business impact also allows us to move away from a focus on sunk costs. Sunk costs are irrelevant to capital, but all too often are front-and-centre for operations-centric decision-making: e.g., the criterion for keeping a project going is often “we’re $x into it already.” This inertia is, of course, the classic “throwing good money after bad.” We forget that it’s only worth taking the next steps if the benefits outweigh the remaining costs.
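As a trivial worked illustration of that decision rule (all figures are invented):

# Minimal sketch of the "ignore sunk costs" rule described above. Figures are hypothetical.

def should_continue(remaining_cost: float, expected_benefit: float) -> bool:
    """Continue only if the benefits still ahead outweigh the costs still ahead.
    Money already spent (sunk cost) is deliberately not a parameter."""
    return expected_benefit > remaining_cost

# A project that is "$4mm into it already" but needs another $2mm to earn $1.5mm
# should stop, regardless of the $4mm already spent.
print(should_continue(remaining_cost=2.0, expected_benefit=1.5))  # False -> stop
print(should_continue(remaining_cost=0.5, expected_benefit=1.5))  # True  -> continue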


Managing to business impact requires perspective and visibility outside the IT realm. The actual business impact made must be followed up and assessed, and all stakeholders – especially business sponsors – must be invested in the outcome. That might mean a budget reduction with the successful delivery of a solution, or a bonus for greater revenue achieved. Whatever the case, these expected returns must factor into the budgets and W-2s of the people involved. This orients everybody to the business goals, rather than to micro-optimisation of their particular area (which may be orthogonal to the business goal).


To execute on this, the quality of IT estimating must also be very high. When the business does a buy-back or engages in M&A, it has a clear understanding of the cost of that investment, an expectation of returns, and the risks to the investment. IT projects must be able to express, to the greatest extent possible, not only expected costs but the risks to those costs. Over time, as with any business, it must also be able to explain changes in the project’s operating plan – e.g., changing requirements and how those requirements will meet the business goal, missed estimates and the impact on the business return model. This creates accountability for estimating and allows a project’s business case to be assessed given historical estimate risk. It also improves the degree of confidence that the next steps to be taken on a project will cost as expected, which, in turn, improves our portfolio management capability.


Estimation must also go hand-in-hand with different sourcing models. Very often, projects assume the best operating model for the next round of tasks is the operating model used to date. We often end up with the business truism: “when the only tool you have is a hammer, every job looks like a nail.” Estimates that do not consider alternative sourcing models – different providers, COTS solutions, open source components, etc. – can entrap the business and undermine IT effectiveness. Continuous sourcing is an IT governance capability that exists at all levels of IT activity: organisational (self-sourcing, vendor/suppliers), solutions (COTS, custom), and components (open-source, licensed technologies, internally developed IP). The capability to take sourcing decisions in a fluid and granular manner maximises return on technology investment.


In this approach, we can also add a dimension to our portfolio management capability: attracting the high-risk capital of the business. Every business has any number of potential breakaway solutions in front of it, not all of which can be pursued due to limited time and capital, not to mention the need to do the things that run the business. In addition to offering potential windfall benefits to the business, these are most often the things that provide the most interesting opportunities and outlets for IT people, necessary if an IT organisation is to be competitive for talent as a “destination employer” for the best and brightest. They are impossible to charter and action in an IT department managing expectations to maintain business as usual. It becomes easier to start up, re-invest and unwind positions in breakaway investment opportunities – and the underlying IT capability that delivers them – if they’re framed in a balanced technology portfolio.


By doing these things, we are better able to communicate in a language more relevant to the business: that of Capital. The behaviour of IT itself is also more consistent with Capital, with a static, as opposed to an exponential, cost of change. Such an IT department is one that can compete for business investment.

Saturday, April 28, 2007

Patterns and Anti-Patterns in Project Portfolio Management

A critical component of IT governance is Project Portfolio Management (PPM). Effective portfolio management involves more than just collecting status reports of different projects at specific dates; it also involves projecting the delivery date, scope and cost that each project is trending towards and the impact of that projected outcome on the overall business. Without this latter capability, we may have project reporting, but we are not able to take true portfolio decisions – such as reallocating investment from one project to another, or cancelling underperforming projects – to maximise return on technology investment.

As with IT governance as a whole, many PPM efforts belie the fact that the discipline is still very much in its infancy. We see around us a number of practices executed in the name of PPM that are in effect PPM anti-patterns.

Anti-Pattern: Managing by the Fuel Gauge

Traditional or Waterfall project planning defines a definitive path by which milestones of vastly different nature (requirements documentation, technical frameworks, user interface design, core functionality, testing activity and so forth) can be completed in an environment (team composition, team capability and requirements) projected to be static over the life of a project. This definitive path, defined to the granular “task” level in traditional project planning, creates a phase, task and subtask definition within the GL account code(s) that track spend against the project budget. When wed to the IT department’s time tracking system – which tracks effort to the subtask level – it is not uncommon for people to draw the mistaken conclusion that the total cost expended against the budgeted amount represents how complete the overall project is.

This is akin to “navigating the car by the fuel gauge” – the amount of time spent in each task is assumed to be an indicator of delivery progress because the plan itself is held out to be fact. Unfortunately, the environment is not static, and the different nature of project milestones makes the project prediction highly suspect. The car could be heading toward a completely different destination than originally envisioned, and in fact could be going in circles. This granular level of data does not translate into meaningful PPM information.
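A minimal, invented example makes the gap plain: the figures below are hypothetical, but the divergence they show between budget burned and value delivered is the whole problem with the fuel gauge.

# Minimal sketch contrasting "percent of budget burned" with "percent of value
# delivered" for a hypothetical project. All figures are invented.

budget = 1_000_000.0          # approved project budget
spend_to_date = 600_000.0     # hours booked against the GL codes, costed

stories_planned = 40          # independently valuable units of scope
stories_accepted = 12         # actually delivered and accepted

burn = spend_to_date / budget                 # what the "fuel gauge" shows: 60%
value = stories_accepted / stories_planned    # what has actually been delivered: 30%

print(f"budget burned:   {burn:.0%}")
print(f"value delivered: {value:.0%}")
# Reading the first number as the second is the anti-pattern: the project has
# consumed 60% of its fuel but covered only 30% of the journey.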

Anti-Pattern: Navigating through the Rear or Passenger Windows

Another approach to portfolio management is to survey every project at regular intervals to ascertain where it stands relative to its original, deterministic plan, and, for those “off course,” to report what will be necessary to restore the project to plan.

In its simplest form, this is akin to “navigating the car out the rear window.” Surveying projects to ascertain their overall percent complete is a backward looking approach that is easily – and not infrequently – gamed (e.g., how many projects quickly report “90% complete” and stay there for weeks on end?) In its slightly more complex form – communicating the gap between current and projected status and reporting a detailed set of deliveries a team hopes to make by a particular date – it is akin to “navigating the car out the passenger window.” It assumes that the original project plan itself is the sole determinant of business value, and is the basis of control of projects.

These are anti-patterns because they miss the point of PPM. The objective of portfolio management is to maximise return for the business, not maintain projects in a state of “on time and on budget.” Those time, scope and budget objectives, which might have been set months or even years ago, lose relevance with changing business conditions: market entrants and substitutes come and go, regulation changes frequently, and the sponsoring business itself changes constantly through M&A. These factors – not “on time and on budget” to an internally defined set of objectives – are what determine a maximum return on technology investment. In addition, this approach substitutes “sunk costs” for “percent of value delivered.” Sunk costs are irrelevant to business decisions; it is the remaining cost of completion relative to value achieved that matters.

Anti-Pattern: Breaking Every Law to Reach our Destination

An unintended consequence of PPM is the distortion of organisational priority. A culture of results can quickly morph into a culture of “results at any cost.” This, in turn, may mean that in the process of traveling to a destination, we commit multiple moving violations, burn excess fuel, and pollute excessively simply so that we appear to have “met our numbers.”

This is not typically considered part of PPM as much as it’s really a question of our overall IT governance. Still, it’s relevant to PPM: knowing that our investments are performing as they purport to be performing is important protection for our returns. To wit: through the 1990s Parmalat and Enron may have been strong performers in an equity portfolio, but gains were obliterated once their results were shown to have been misrepresented all along. It must be remembered that project portfolio management relies on good governance and, in fact, exists as a component of it. Reaching our destination might make an initial delivery appear to be successful, but any returns achieved for reaching the destination might be completely obliterated by the cost of remediating the brownfield we created. Maximising return on technology investment is concerned with the total asset life, not just achieving a goal at a specific point in time.

Characteristics of a Pattern

We don’t yet have – and should not expect to have – “GPS-style” navigation systems for individual projects that can feed our PPM. Because we cannot predict the future, any “map” is a fallacy. But we do have the tools by which we can "navigate through the front windshield," and do so without leaving destruction in our wake. We can do this if:


  • We have a fact-based manner by which to assess whether a project can achieve its business goals in a time and for a cost acceptable to its business objectives.

  • We have detailed visibility into whether energies are being directed toward high-priority work.

  • We have current, meaningful indicators of the completeness of the work being done – that we are working in such a way that we are maximising objectives under the circumstances, and that work declared to be “complete” is a matter of fact, and not opinion.

Agile project management is uniquely capable of bringing this about. Inclusive, independent statements of scope allow the path of system delivery to adjust to changes in priority, changes in capacity, changes in the understanding of requirements, and the experience of the team. Instead of relying on a prescriptive path, we have unadulterated transparency that exposes to everybody whether the best decisions relative to the business objectives are being taken given current information at the moment of decision.

These constructs provide the foundation for a fact-based, forward-looking PPM capability, because they enable informed “what if” scenario building across a portfolio of projects. Using these practices, we can develop meaningful, time-sensitive models, founded in fact, that allow us to forecast the impact of changes to team capacity (e.g., through turnover or reassignment), priority (through changing business environment) or scope (through expansion or better understanding) on our total portfolio. This isn't “project tracking” masquerading as “portfolio management;” it is the foundation of true portfolio management that maximises return on technology investment.
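As an illustration of the kind of “what if” model this enables, the sketch below forecasts completion for two hypothetical projects from remaining scope and velocity, then replays the forecast after a staffing reallocation. Names, figures and the simple arithmetic are all invented; a real model would also carry priority, value and risk.

# Minimal sketch of portfolio "what if" scenario building. Project names and
# figures are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    remaining_points: int
    velocity: int  # points per iteration at current staffing

def iterations_to_complete(p: Project) -> int:
    """Forecast how many more iterations the project needs at its current velocity."""
    return math.ceil(p.remaining_points / p.velocity) if p.velocity else 10**9

portfolio = [
    Project("pricing-engine", remaining_points=120, velocity=20),
    Project("client-portal",  remaining_points=80,  velocity=10),
]

print("baseline forecast:")
for p in portfolio:
    print(f"  {p.name}: {iterations_to_complete(p)} iterations")

# What if we move staff from client-portal to pricing-engine?
portfolio[0].velocity += 5
portfolio[1].velocity -= 5
print("after reallocation:")
for p in portfolio:
    print(f"  {p.name}: {iterations_to_complete(p)} iterations")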

Thursday, March 29, 2007

Agile under Sarbanes-Oxley

The business cycle of most firms is cash-driven: work is performed, invoiced at completion, and collected on negotiated payment terms. Obviously, cash flow is important to the business as it affects our ability to do the things to run the business, like meet payroll and pay expenses. Cash flow isn’t revenue, however. To recognize work as revenue, it must be delivered and unambiguously accepted by the client.

This is a priority, particularly for publicly traded firms. As the stock price usually trades at a multiple of income (or, in the case of many Nasdaq companies, a multiple of revenue in lieu of earnings), revenue recognition is critical to Wall Street. For engagements that span many months, this can mean that revenue recognition is deferred for many reporting quarters. We can end up in a situation where cash flow is consistent and healthy, but net income is variable and frequently weak.

Amongst other things, Sarbanes-Oxley (a.k.a. Sarbox, or SOX) establishes compliance guidelines for publicly traded companies so that revenue isn’t gamed. The intent is to define clear guidelines for accounting for the facts of what operations have delivered or not delivered. As simple as that may seem, the pressure in the executive offices to recognize revenue is quite real, and the software industry in particular is rife with examples of companies gaming revenue numbers with incomplete deliveries.

The rules under Sarbox governing revenue recognition are explicitly defined. The governance mechanism for this under Sarbox is a “proof of completion” certificate. This is a simple document that serves as the client’s acknowledgement that specific functionality was delivered to satisfaction by the supplying vendor. This document must be received in the reporting period in which the revenue is to be recognized; e.g., if we’re going to recognize revenue in Q3, the proof of completion must be received by the supplier in Q3.

The capability for operations to deliver what they forecast will go a long way to letting the air out of the results bag. Of course, it’s not so easy. The ability for ops to deliver isn’t purely an internal function. Factors outside the control of a company’s internal ops, such as customer staff turnover or a change in a customer’s business direction, can impair execution of even the best laid plans. Thus no matter how strong the internal operational performance, external factors will significantly affect results. Still, the ability to forecast and respond to this change in a timely fashion will go a long way to meeting revenue targets and goals, and to reducing the risk posed by change in the business environment.

Our traditional ways of working in these environments are often based in hope and unnecessarily produce a lot of uncertainty and inconsistency of our own making. We set our forecasts based on individual “quarterly completion commitments” and “business feel” based on what we see in the sales pipeline. As we approach quarter-end, we swarm disproportionate numbers of people on specific projects to drive to what amounts to an internal definition of “complete,” only to then plead with the customer to accept. The pursuit of a mythical number given at the beginning of a quarter in the vain hope of making it a reality through the fortunate combination of contracts, capacity and capability coming into alignment is a primitive practice. This ultimately results in a mad scramble at quarter-end to complete deliveries, introducing operational risk. For example, if a delivery proves to be more complex than originally thought, or if people are not available, or if some customer deliveries are prioritised at the cost of others, quarterly ops revenue contribution is at the mercy of things substantially out of our control. Without mitigating this risk – or indeed providing visibility into it in the first place – we increase the probability of a disappointing quarter.

In fact, these practices stifle operational maturity. In this model, operations are at best a hero-based delivery group that relies on a few talented individuals making Herculean efforts 4 times a year (that is, at quarter end). At worst, they’re an under-achieving function that requires a high degree of tactical supervision. In either scenario, operations are reactive, forever executing to a mythical, primitive tactical model, never rising to become strategic contributors to the business.

Because they bring operations into alignment with regulatory requirements in a non-burdensome manner, Agile management practices are especially valuable in Sarbox or similarly regulated business environments. There are several Agile practices we can bring to bear in this environment:


  • Instead of defining large, monolithic packages of delivery, we can decompose client deliverables into independent, uniform statements of business functionality or Agile Stories. Each of these Stories will have an associated revenue amount, specifically the amount the customer pays for its delivery. This gives us a granular unit of work with economic properties.

  • Each of these requirements can have an associated Proof of Completion document. This provides tangible affirmation that client acceptance criteria have been met.

  • We can define the fiscal quarter as an Agile “release” divided into 13 iterations of 1 week each. This gives us time-boxes around which we can construct a release plan (see the sketch after this list).

  • We can forward plan our capacity by taking a survey of known downtime (vacations, holidays, etc.).


By executing to these, we yield significant financial and operational benefits.

  • We accelerate revenue recognition. Granular, federated expressions of business requirement can be developed, delivered and accepted incrementally by the customer. This will yield faster revenue recognition than highly-coupled requirements made in one delivery.

  • We reduce the risk of not being able to recognize revenue. Incremental customer acceptance reduces the risk to revenue recognition inherent in a single large delivery. For example, suppose a sea change within a customer threatens revenue from projects. If we have granular delivery and acceptance we can recognize the revenue for deliveries made to date. If we don’t, we lose revenue from the entire project, making both the revenue and the effort to date a business write-off.

  • We have more accurate forecasts of revenue capacity and utilization. By planning capacity, and taking into account our load factor, we can assess with greater accuracy what our remaining quarterly capacity looks like. Expressing this in revenue terms gives us a realistic assessment of our maximum revenue capacity. From this we can take investment decisions – such as increasing capacity through hiring – with greater confidence.

  • We have more accurate revenue reporting. Each POC received creates a revenue recognition event in that specific iteration. This gives us a “revenue burn-up chart” for the quarter (sketched after this list). In tracking actuals, we can show our revenue recognition actual versus our burn-up. This means revenue forecasting and reporting is based more in fact than in hope.

  • We have more accurate revenue forecasting. By forming a release plan that includes the complete cycle of fulfillment stages for each customer requirement – analysis, development, testing, delivery and POC/acceptance – we have a clear picture of when we expect revenue to be realized. As things change over the course of the quarter – as stories are added or removed, or as capacity changes – the release plan is modified, and with it the impact on our revenue projection is immediately reflected.

  • We have transparency of operations that enables better operational decisions. Following these practices we have a clear picture of completed, scheduled, open and delayed tasks, an assessment of remaining capacity, and visibility into a uniform expression of our backlog (i.e., a collection of requirements expressed as stories). With this we have visibility into delayed or unactioned tasks. We can also take better scheduling and operating decisions that maximize revenue contribution for the quarter.

  • We have transparency of operations that reduces surprises. The release plan tells us and our customers when we expect specific events to take place, allowing us to schedule around events that might disrupt delivery and acceptance. For example, we may expect to make a delivery in the last week of the quarter, but if the person with signature authority on the POC is unavailable, we’ll not recognize the revenue. Foreknowledge of this allows us to plan and adjust accordingly.

  • Acceptance Criteria are part of everything we do. The Proof of Completion document builds acceptance criteria into everything that we do. We think of completion in terms of delivery, not development. This makes everybody a driver of revenue.
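The revenue burn-up chart mentioned above can be as simple as two running totals, one planned and one actual, per iteration. A minimal sketch with invented figures:

# Minimal sketch of a revenue burn-up: planned cumulative revenue per iteration
# versus revenue actually recognized as POCs arrive. All figures are hypothetical.
from itertools import accumulate

# Revenue expected to become recognizable at the end of each of 13 weekly iterations.
planned = [0, 120_000, 0, 60_000, 0, 0, 90_000, 0, 150_000, 0, 0, 80_000, 0]

# Revenue actually recognized so far (POCs received); we are 6 iterations in.
actual = [0, 120_000, 0, 0, 60_000, 0]

planned_burnup = list(accumulate(planned))
actual_burnup = list(accumulate(actual))

for week, planned_total in enumerate(planned_burnup, start=1):
    actual_total = actual_burnup[week - 1] if week <= len(actual_burnup) else None
    actual_str = f"{actual_total:>9,.0f}" if actual_total is not None else "        -"
    print(f"week {week:>2}: planned {planned_total:>9,.0f}  actual {actual_str}")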


In sum, Agile practices professionalize operations management. By being complete in definition, being fact-based, providing operational transparency and exposing and mitigating risk consistently throughout a reporting period, they align execution with governance. This results in non-burdensome compliance that actually improves the discipline – and therefore the results – of business operations.