Friday, December 24, 2010

Heathrow Mess is Explained by Taleb

Like many thousands of other people, I was forced to stay in London for an extra few days because weather-related factors caused Heathrow and other UK airports to close. Nearly a week after it began, thousands remain stranded.

Most analyses of why this happened have looked at how supply-side factors such as additional snowplows or seat capacity on contract could lessen the impact of an event like this. Hugo Dixon, writing in Reuters Breakingviews, suggested that a better demand management mechanism - one that creates a more efficient market for seat demand under circumstances where seats are at a premium - is just as important to consider.

I took a different perspective, which Breakingviews was kind enough to publish. Here's the abstract and a link to my letter:

Heathrow mess is explained by Taleb.
Europe's main airport is a highly inefficient market with poor and asymmetric information. Snow turned it from a utility to a casino. This wasn't a Black Swan event. But it reflects Taleb's argument that optimized systems are vulnerable to catastrophic failure.

Vulnerability to catastrophic failure makes abundantly clear we would be better served by demanding and rewarding robustness over optimization, especially from utilities.

Tuesday, November 30, 2010

Regulatory Capture and IT Governance

Industries are regulated by governments so that companies don't compromise the public interest. Regulatory agencies usually grab headlines because most regulation comes in response to nefarious actions, but that isn't always the case: people in a company may conduct their affairs in what they believe to be a perfectly justifiable manner, only for their actions to have unintended consequences for consumers or society.

In the same way, we have governance within businesses to make sure that management doesn't compromise the interests of investors. And just as it is with businesses in a regulated industry, management of a well-governed business may have a set of priorities that are perfectly justifiable in management's context, but are orthogonal to investors' interests.

Industrial regulation and business governance are both poorly understood and poorly practiced. Each is also easily compromised. John Kay provided a fantastic example of how easily governance is compromised earlier this month in the FT, describing a phenomenon he referred to as "regulatory capture":

Regulatory capture is the process by which the regulators of an industry come to view it through the eyes of its principal actors, and to equate the public interest with the financial stability of these actors.


Let's think about this in the IT governance context.  We may have good governance instrumentation and a governing body that meets consistently.  But it's still easy for our governance infrastructure to be co-opted by the people it's supposed to be governing.  Mr. Kay explains how:

[T]he most common form of capture is honest and may be characterised as intellectual capture. Every regulatory agency is dependent for information on the businesses it regulates. Many of the people who run regulated companies are agreeable, committed individuals who are properly affronted by any suggestion that their activities do not serve the public good. ... It requires a considerable effort of imagination to visualise that any industry might be organised very differently from the way that industry is organised now. So even the regulator with the best intentions comes to see issues in much the same way as the corporate officers he deals with every day.


In IT governance, management provides and frames governance data. Overtly or covertly, it imposes structural limitations on the presentation of that data. People in governance roles are all too often lulled into a sense of complacency because the integrity of the messenger - management in this case - isn't in doubt.

Yet one of the most critical expectations we have of people in governance roles is that they have a broader picture than management of what should be happening, and how it should be taking place.  Perhaps management doesn't want to look bad, or they're not comfortable delivering bad news.  And all too often, management can do no better than to play the cards they're dealt (e.g., people, scope, technology or something else).  Whatever the underlying situation, we need a governing body that doesn't look at the cards in hand, but at the cards they can get out of the deck.  There's no mechanical process that enables this; it all comes down to having the right governors.

Which leads to Mr. Kay's next point, where he provides some important insight into the characteristics of a good regulator that are very much applicable to somebody in an IT governance role:

You require both an abrasive personality and considerable intellectual curiosity to do the job in any other way.


IT governance requires activist investors: people who will ask challenging and uncomfortable questions, reframe the data provided by management, and propose completely different solutions. This is a specific behavioral expectation, and a high one at that.  But, as Mr. Kay points out:

[T]hese are not the qualities often sought, or found, in regulators.


Sadly, this is all too true for IT governance as well.  

The value of governance is realized by its professional detachment. Whether you're recruiting a board for an IT investment or evaluating the people you have in one today, think very hard about their ability to act independently.

Sunday, October 31, 2010

Restructuring IT: First Steps

In this last post in the series on restructuring IT, we'll take a look at some things we can do to get going on a restructure.

The place to start is to establish a reason for restructure that everybody inside and outside the organization can understand. Tech is inherently optimistic, and we have short memories. As a result, we don't have very good self-awareness. So it's worth performing a critical analysis of our department's success rate. That means looking at how successful we are at getting stuff into production. What is our batting average? How does it stack up against those Standish numbers? But it isn't enough to look at the success of delivery alone; we must also look at its impact. Is there appreciable business impact from what we deliver? Is the business better because of the solutions we put in production? These questions aren't so easily answered, because IT departments often don't retain project performance history and we very often don't have business cases for our projects. A critical self-assessment, while valuable, may not be all that easy to perform.

Of course, the point isn't just to assess how we have performed, but to look at how ready we are for the future. What will be expected of us in 2 years? What business pressures will build up between now and then? How ready are we to deal with them?

To properly frame performance, we need to have a very firm definition of what “success” means. I worked with a firm that had a very mature phase-gated governance system. With maturity came complexity. With complexity came loopholes and exceptions. Whenever a project was at risk of violating one of the phase gates such as exceeding rate of spend or taking too long, somebody would invoke an exception to change the project control parameters to prevent the project from defaulting. As a result, they could report an extraordinarily high rate of success – upwards of 99%. But a 99% success rate of changing the problem to fit their reality is a dubious record of achievement.

In addition to scrutinizing results, take a look at the top 5 constraints you believe your IT organization faces today. Of those top 5 things, very likely most (if not all) will be rooted in some behavioural misalignment with results. One quick way to get the contours of the impact of these misalignments is to bring business and IT people into a short, facilitated workshop to focus on the mechanics of delivery. This will reveal how people react to those constraints (working within them, working around them, or doing things that reinforce them), and subsequently the full effect that they have on delivery.

Finally, get a professional assessment of your organization, looking at the behaviours and practices behind what gets done and what doesn't get done. It's also important to engage business partners in this process. While we very often find IT organizations being outpaced by their business partners, it's been our experience that with a little bit of concentrated effort, it doesn't take much for IT to outpace its host business. That isn't healthy either. Ultimately, we need a firm peer relationship between the business and IT: a symbiotic relationship in which responsiveness is both mutual and durable.

Doing these things will give you the classic look in the mirror, a critical assessment of the "lifestyle" decisions that your organization is making today. That will allow you to speak in firm facts of why the organization needs to change, set the bar for what is acceptable and what is not, and define a target and a set of action items that will create change.

One parting thought.

I hope this series on IT restructure has crystallized some of the thoughts and experiences you have had as an IT professional, and that it gives you perspective on the impact of industrialization on the IT industry, particularly in the people you interview and the skills and experiences they have.

Hopefully, you’ll go back to your office and think, “yeah, I remember being in a team of professional workers, and now I’m with industrial workers.” And you might think being the lone change agent is too much. Maybe if you were more senior, you could pull it off. But as a [insert your title here], you just don’t feel you can pull it off.

Quick story to tell about a securities firm I worked with. Some years ago, the CIO was dev lead of a dozen-person team that created the next-gen platform for their trading operations. He now had 2,000 people in NY and India and wondered, "why is it so difficult to get things done around here?" He stepped into a conference room at one point where he'd sequestered the team and got a bit misty-eyed about "the good old days." Here's the guy running the IT department - in fact, he created most of it - feeling that same sense of frustration with the industrial model. And, ultimately, he felt trapped by it.

This is coming from the CIO.

The frustration is there, from top to bottom in IT. The people who can make change are the people who recognize the difference between industrial and professional IT. It will take time. It will take a lot of convincing and explaining. But it's there to be done. The time is now. And you're not alone.

Wednesday, September 29, 2010

Restructuring IT: Guiding Principles

This is a continuation of a series I left off in December 2009 on Restructuring IT. This post presents a few guiding principles to understand before undertaking a restructuring exercise.

First, don't fool yourself about your ambitions. Come to grips with what you think you want to be: a demonstrably world-class organization, or just less bad at what you do. The former is easy to say but hard to achieve. If you're wed to any aspect of your current organization, if you think process will make your business better, or if you're concerned about making mistakes or losing staff, you're really no more ambitious than being less bad. There's nothing wrong with that. Minor restructure can have a beneficial impact. Just don't mistake "sucking less" for "software excellence".

Second, be aware of your level of commitment. Change is hard. As liberating an experience as you hope it will be, the people in your organization will find restructuring invasive, divisive and confusing. Some people will resist, some will revert. Some will exit of their own volition, and some you'll have to invite to leave. Change is tiring and frustrating. Staying the course of change through the "valley of despair" requires a deep personal commitment, often in the absence of compelling evidence that the restructure is going well and frequently against a chorus of voices opposed. Fighting the tide is never easy, even from a leadership position.

Third, don’t expect that you’re going to restructure with tools, training and certification. That won’t change behaviours. If you believe change comes about through tools and training, you should release 80% of your staff and hire brand new people every year: just put them in training and give them tools to run your business. Of course you wouldn’t do that, because you’d lose all the experience. So it is with this restructure: you’re developing new experience. Tools can make good behaviours more efficient, but tools alone don’t introduce good behaviours in the first place.

Finally, be less focused on roles, titles and hierarchy and focus instead on what defines business success and what actually needs to get done to achieve it. Tighten up governance scrutiny to verify that people are working to a state of “demonstrably done" and not just "nobody can tell me I'm not done". And prioritize team over the individual. Don't privatize success while socializing failure: incentivize the team, not each person. People focused on a team accomplishment are less concerned with individual accolade. Culturally, make clear that an outstanding individual performance is a hollow (and dubious) victory in a failed team.

The final installment in this series will cover some immediate actions you can take today to restructure.

Tuesday, August 31, 2010

One-Way Risk and Robustness of IT Projects

Writing in the FT's Long View column, James Mackintosh makes the point that hedge fund managers “appeared smarter than they really were, because they were taking a risk they did not recognize.” That’s an apt description for a lot of what goes on in IT, too.

Despite all of the risks that commonly befall an IT project, we still deal with IT planning as an exercise in deterministic forecasting: if these people do these things in this sequence we will produce this software by this date. The plan is treated as a certainty. It then becomes something to be optimized through execution. As a result, management concerns itself with cost minimization and efficiency of expenditure.

Trouble is, an operations plan isn't a certainty. It's a guess. As Nassim Taleb observed in Errors, Robustness and the Fourth Quadrant:

Forecasting is a serious professional and scientific endeavor with a certain purpose, namely to provide predictions to be used in formulating decisions, and taking actions. The forecast translates into a decision, and, accordingly, the uncertainty attached to the forecast, i.e., the error, needs to be endogenous to the decision itself. This holds particularly true of risk decisions. In other words, the use of the forecast needs to be determined – or modified – based on the estimated accuracy of the forecast. This, in turn creates an interdependency about what we should or should not forecast – as some forecasts can be harmful to decision makers.

In an IT project context, the key phrase is: “This holds particularly true of risk decisions.” We take thousands of decisions over the course of an IT project. Each is a risk decision. Yet more often than not, we fail to recognize the uncertainty present in each decision we make.

This comes back to the notion that operations plans are deterministic. One of the more trite management phrases is “plan your work and work your plan.” No matter how diligently we plan our work in IT, we are constantly under siege while “working our plan”. Developers come and go. Business people come and go. Business needs change. The technology doesn’t work out as planned. The people responsible for the interfaces don’t understand them nearly as well as they believe they do. Other business priorities take people away from the project. Yet we still bake assumptions about these and many other factors into point projections – as opposed to probabilistic projections – of what we will do, when we will be done and how much it will cost.
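The gap between a point projection and a probabilistic one is easy to demonstrate. Here is a minimal sketch in Python: the task list and its three-point estimates are entirely made up for illustration, but the shape of the result is the point. Summing "most likely" estimates produces a plan that a Monte Carlo simulation over the same estimates shows to be optimistic more often than not.

```python
import random

# Hypothetical tasks: (optimistic, most likely, pessimistic) days.
# The numbers are illustrative, not from any real project.
tasks = [(5, 10, 30), (3, 8, 25), (10, 15, 45), (2, 5, 20)]

# The deterministic "plan": sum the most-likely estimates.
point_estimate = sum(ml for _, ml, _ in tasks)

# A probabilistic projection: Monte Carlo over triangular distributions.
def simulate(tasks, trials=10_000, seed=42):
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.triangular(lo, hi, ml) for lo, ml, hi in tasks))
    totals.sort()
    return totals

totals = simulate(tasks)
p50 = totals[len(totals) // 2]          # median outcome
p90 = totals[int(len(totals) * 0.9)]    # 90th percentile outcome

print(f"point estimate: {point_estimate} days")
print(f"median: {p50:.0f} days, 90th percentile: {p90:.0f} days")
```

Because each task's pessimistic tail is far longer than its optimistic one, the median simulated duration lands well above the point estimate, and the 90th percentile further still. The plan wasn't a forecast; it was a best case wearing a forecast's clothes.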

Our risk management practices should shed light on this. But risk management in IT is typically limited to maintaining a “risks and issues” log, so it’s never more than an adjunct to our plan.

That most IT projects have only rudimentary risk management is quite surprising given the one-way nature of risks in IT. One-way risks are situations where we have massive exposure in one direction, but only limited exposure in another. Taleb gives the example of trans-Atlantic flight times. It’s possible for an 8-hour flight to arrive 1 or possibly 2 hours early. It can’t arrive 6 hours early. However, it can arrive 6 hours, or 8 hours, a day or even several days late. Clearly, the risks to flight duration are substantially in one direction. IT risks are much the same: we may aggressively manage scope or find some efficiency, but by and large these and many other factors will conspire to delay our projects.
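The flight-time analogy can be sketched numerically. In this toy model (the distribution parameters are assumptions chosen purely for illustration), early arrival is capped at about an hour while lateness has a heavy, effectively unbounded tail:

```python
import random

# Toy model of one-way risk: upside is bounded, downside is not.
# Parameters are made up for illustration; only the asymmetry matters.
def simulated_durations(planned_hours=8.0, trials=10_000, seed=1):
    rng = random.Random(seed)
    durations = []
    for _ in range(trials):
        slack = rng.uniform(-1.0, 0.0)             # at most 1 hour early
        delay = rng.lognormvariate(0.0, 1.5) - 1.0 # heavy right tail
        durations.append(planned_hours + slack + max(delay, 0.0))
    return durations

durations = simulated_durations()
print(f"best case:  {min(durations) - 8.0:+.1f} hours vs plan")
print(f"worst case: {max(durations) - 8.0:+.1f} hours vs plan")
```

Run it and the best outcome sits within an hour of plan while the worst is many multiples of it. A symmetric risk log treats both directions alike; the distribution says they are nothing alike.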

The fact that risk in IT is substantially one-way brings a lot of our management and governance into serious doubt. Having a project plan distinct from the risk log makes the hubristic assumption that we will deliver at time and cost, so we must pay attention to the things that threaten the effort. Given that our risk is substantially one-way, we should make a more humble assumption: odds are that delivery will occur above our forecast time and cost, so what do we need to make sure goes right so that we don't? While such a pessimistic perspective may be in direct contrast to the cheerleading and bravado that all too often pass for "management", it makes risk the core activity of management decision making, not a peripheral activity dealt with as an exception.

In Convexity, Robustness and Model Error in the Fourth Quadrant, Taleb makes the point that one-way risk is best dealt with by robustness – for example, that we build redundancies into how we work. Efficiency, by comparison, makes us more vulnerable to one-way risk by introducing greater fragility into our processes. By way of example, think of the "factory floor" approach to IT, where armies of people are staffed in specialist roles. What happens to the IT "assembly line" when one or more role specialists exit, depriving the line of their situational knowledge? Without redundancy in capability, the entire line is put at risk.

Common sense and statistical analysis both conclude that an optimized system is sensitive to the tiniest of variations. This means that when risks are predominantly one-way – such as in IT projects – it behooves us to err on the side of robustness.

Robustness is the antithesis of efficiency. Maximum efficiency of execution against a plan calls for the fewest people delivering the most output to a predetermined set of architectural decisions. Building in robustness – for example, redundancy of people so that skills and knowledge aren’t resident in a single person, pursuing multiple technical solutions as a means of mitigating non-functional requirements, etc. – will not come naturally to managers with a singular focus on minimizing cost, especially if, like the hedge fund managers James Mackintosh was referring to, they’re blissfully unaware of the risks.

So, what can we do?

First, we have to stop trafficking in the false precision of IT project management. This is no easy task, particularly in a business culture rooted in fixed budgets and rigid planning cycles, buyers of industrial IT expecting that technology labor is interchangeable, and so forth. We won’t change the landscape all at once, but we can have tremendous influence with current business examples that will be relevant to sponsors and investors of IT projects. If we change the expectations of the people paying for IT projects, we can create the expectation that IT should provide probabilistic projections and take more robust – and therefore one-way risk tolerant – solution paths.

Second, we can introduce risk management that is more sophisticated than what we typically do, yet still easy to understand. If you haven’t read the book, or haven’t read it for a while, pick up Waltzing with Bears by DeMarco and Lister. Their statistical model for risk profiling is a good place to start, quick to work with and easy to understand. Nothing stops us from using it today. Now, the act of using the tool won’t make risk management the central activity of project managers or steering committees, but adding a compelling analysis to the weekly digest of project data will shift the balance in that direction. That, in turn, makes it easier to introduce robustness into IT delivery.

On that subject of robustness, Taleb observed:

Close to 1000 financial institutions have shut down in 2007 and 2008 from the underestimation of outsized market moves, with losses up to 3.6 trillion. Had their managers been aware of the unreliability of the forecasting methods (which were already apparent in the data), they would have requested a different risk profile, with more robustness in risk management …. and smaller dependence on complex derivatives.

Given the success rate of IT projects – still, according to the research organizations, less than 40% - IT project managers should similarly conclude that more robustness in risk management would be appropriate.

Friday, July 09, 2010

Separating Utility from Value Add

One of the more hotly debated subjects in the recent debate on financial services reform has been the reintroduction of Glass-Steagall. Enacted in 1933, it was intended in part to prevent banks from financing speculative investments with money obtained through deposits and lending. Because of the importance of commercial banking to the stability of the economy (and, arguably, society), it was deemed unacceptable to make it easy for a bank to take imprudent risks with money for which it has a stewardship responsibility. The law was substantially repealed in the 1990s. Quite a few people have suggested that it be brought back.

Whether it's appropriate or not for banking isn't the purpose of this blog post. But there is some thinking behind the separation of business activity that's worth considering in the IT context.

Retail banking serves a largely utilitarian purpose in an economy. Deposits give banks the capital to make loans to small businesses, write mortgages, and so on. This banking infrastructure allows a community to pool its resources to grow and flourish as it likely could not otherwise. It also provides new businesses with capital at startup, and stabilizing cash through business cycles. Still, you don't loan out money to everybody who asks for some. If a bank makes loans to people and companies that aren't creditworthy, it puts deposits at risk. Needless to say, commercial banks have (historically, anyway) held high lending standards because they are expected to be highly risk averse. With low risk appetite come low returns.

While low returns aren't all that exciting, there's an argument to be made that low returns are just fine for this kind of banking. The mission of a commercial bank isn't to produce outsized returns; the mission is to be a financial utility, to be stable and consistent. With stability comes confidence in the financial system (a confidence underwritten by federal deposit insurance), and that confidence is a pillar of a strong society.

Investment banks are vastly different. They are, by definition, far more risk prone. While there are conservative investment banks - banks that engage largely in advisory and research and do a minimum of trading - there is an expectation that bulge bracket investment banks will produce an outsized return by taking outsized risks. They trade their clients' capital as well as their own using complex strategies specifically to generate high yield.

Instead of producing large returns, of course, investment banking can produce large losses. Because a lot of investment banks make proprietary investments with borrowed capital (that is, they make leveraged investments), a projected windfall can quickly become a bottomless pit.

Hence one of the reasons for separating investment and commercial banking. The utility functions of commercial banking provide a fat pile of capital that can be leveraged for investment banking activity. Trouble is, there's no upside for the utility side of the bank if it allows its deposits to be exposed to outsized risk. It still pays the same rate to depositors, still collects the same rate from borrowers. For the utility, there's only downside: in a universal bank, a severe loss in investment banking puts commercial deposits at risk. Putting explicitly risk-averse capital at high risk undermines the stability of the financial system.

So, that's banking. What does any of this have to do with IT?

Just like the banking system, IT has two sides to its house: a utility side and an investment side. Commingling them hasn't done us much good. If it's done anything, it's confused the business mission of IT. We should separate them into independently operating business units.

A significant portion - maybe 70%+ - of IT spend is on utility services, things that keep a business operating. This includes things like data storage, servers, e-mail, office productivity applications, virus protection, security and so forth. Obviously, business is largely conducted electronically today, so a business needs these things. Restated, there's a lot of business that we simply can't conduct today without it.

These utilities don't provide return in and of themselves. They're so ubiquitous in nature, and so fundamental to how business is done, it's not an option to try to operate without them. They're the information technology equivalent of electricity or tap water. A firm does not derive competitive advantage from the type of electricity it uses. Nor do we measure return on tap water.

And like electricity or tap water, you don't typically provide your own. You plug into a utility service that provides it for you. Every volt of electricity and every gallon of water are the same.

It actually would seem a bit strange for most businesses to be providers of their own utilities. Still, most companies are in the business of providing their own IT utilities.

One reason they do is the sheer inertia of IT. We've injected technology into companies through captive IT departments. Nobody questions "why do we obtain these services this way", because technology has "always been provided this way."

Another is that IT services have complex properties to them that other utilities don't. Every volt of electricity is the same, but not every byte of e-mail is the same. Some contain proprietary, confidential or sensitive information. It's not enough for a firm to outsource responsibility for the protection of that data. If data confidentiality is compromised, the firm contracting for the utility is compromised. All the commercial and service level agreements in the world won't undo the damage.

Of course, these complex properties don't make them "high value added" services. They're still utilities. They're just a bigger pain in the neck than things like electricity.

It's very likely that a lot of what we do in captive IT today will be obtained as a utility service in the future. We'll buy it like tap water, metered and regulated. Obviously, this is the business model of SaaS and outsourced services. While they're still not robust enough for every business, we're seeing advances in things like networking and encryption technology that provide a greater level of accessibility and assurance. We're getting close to (if not already well past) the inflection point where it's less attractive to underwrite the risk of providing these things captively than to get them metered.

But not everything done by captive IT is utility. The remaining 30% of today's IT spend is investment into proprietary technology that amplifies the performance of the business to increase yield. This is "high value added" because it provides unique, distinct competitive advantage to the host business. Investing in these things is one way we build our businesses, and make life difficult for the competition.

Which brings us back to Glass-Steagall: just as those two forms of banking are vastly different, so are these two forms of IT.

Dividing IT along "utility" and "value added" lines is a departure from where we are today. We've put everything from disks to development under the heading of "technology" in most companies, because we've had no other way of looking at it. Technology is still in its infancy, is still relatively foreign to most people, and we're still figuring out how to apply it in business. So anything involving technology is considered foreign to a business, and attached to it as an appendix, or a tumor.

Nor is the common division of IT into "infrastructure" and "application development" the dividing line between utility and value-add. Not all infrastructure is utility, and not all app dev is value add. Firms dependent on low latency for competitive edge are not likely to get competitive advantage by hosting their applications in the cloud. Similarly, payment processing is perhaps not something that a retail site wants to invest money into development of, so it contracts to get those services.

This is not a separation of IT by the nature of the technology, but into what technology does for the host business. That portion of the business that provides outsized return - the "investment banking" portion - is what should remain captive. The rest - the "utility banking" - should be part of facilities or operations management. The expectation must also be that this division is dynamic: today's captive data center may be tomorrow's CPU cycles obtained through the cloud if there's no performance or reliability to be gained from providing it captively.

Separating utility from value add will make IT a better-performing part of the business. Because they're commingled today, we project characteristics of "investment" into what are really utilities, and in the process we squander capital. Conversely, and to IT's disadvantage, we project a great deal of "utility" into things that are really investments, which impairs returns.

As a business function, IT has no definition on its own. It only has definition as part of a business, which means it needs to be run as a business. The risk tolerance, management, capabilities, retention risks, governance and business objectives of these two functions are vastly different. Indeed, the "business technologist" of value added IT needs a vastly different set of skills, capability, and aptitude than she or he generally has today. Clearly, they're vastly different businesses, and should be directed accordingly.

Separating the utility from the value add allows us to reduce cost without jeopardizing accessibility to utility functions, and simultaneously build capability to maximize technology investments. Running them as entirely different business units, managed to a different set of hiring expectations, performance goals, incentive and reward systems, will equip each to better fulfill the objectives that maximize their business impact.

Saturday, June 26, 2010

A Portfolio Perspective on Requirements

A software application is not a collection of features that create business value. It is a portfolio of business capabilities that yield a return on the investment made in creating them.

This isn't semantics. There's a big difference between "business impact" and "financial returns."

Some software requirements have a direct business impact. But not all of them do, as we'll explore in a little bit. As a result, the justification for and priority of a lot of requirements are not always clear, because the language of "business value" is one-dimensional and therefore limiting. "Financial returns" is a far more expansive concept. It clarifies - in business terms - why we have to fulfill (and, for that matter, should not fulfill) far more requirements. Thinking about "returns" is also more appropriate than "value" for capital deployment decisions, which is what software development really is.

Why is software development a "deployment of capital"? Because a company really doesn't need to spend money on technology. When people choose to spend on software development, they're investing in the business itself. We elect to invest in the business when we believe we can derive a return that exceeds our cost of capital. That's why we have a business case for the software we write. That business case comes down to the returns we expect to generate from the intangible assets (that is, the software) we produce.

This should affect how we think about requirements. As pointed out above, a lot of requirements have a clear and direct business impact. A business requirement to algorithmically trade based on fluctuations in MACD, volume weighted average price and sunspot activity has a pretty clear business value: analysis before we code it tells us some combination of market and cosmic events leads to some occasional market condition that we expect we can capitalize on. And after the fact, we know how much trading activity actually occurs on this algorithm and how successfully we traded.
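To make the trading example concrete, here's a minimal sketch of such a signal. The rule, the thresholds, and the helper functions are invented for illustration - this is not the actual algorithm described, just a toy version of one whose business impact could be measured after the fact:

```python
# Toy sketch of an indicator-based trading signal. The rule and all
# thresholds are invented for illustration only.

def ema(prices, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1)
    value = prices[0]
    for p in prices[1:]:
        value = alpha * p + (1 - alpha) * value
    return value

def macd(prices):
    """MACD line: 12-period EMA minus 26-period EMA."""
    return ema(prices, 12) - ema(prices, 26)

def vwap(prices, volumes):
    """Volume-weighted average price over the window."""
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

def should_buy(prices, volumes, last_trade):
    """Invented rule: momentum is positive and the last trade is cheap
    relative to the volume-weighted average."""
    return macd(prices) > 0 and last_trade < vwap(prices, volumes)
```

The point isn't the rule itself; it's that a requirement like this produces a measurable result - how often the signal fires, and how profitably we traded on it.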

But not all requirements fit the business impact definition so nicely. We fulfill some requirements to avoid paying a penalty for violating regulations. Others increase stability of existing systems. Still others reduce exposure to catastrophic events.

This is where "business value" loses integrity as an index for requirements. Calling one activity that increases revenue equivalent to another that reduces exposure to catastrophic loss is comparing apples to high fructose corn syrup. They're sweet and edible, but that's about it.

As anybody who has ever run a business knows, not every dollar of revenue is the same: some contracts will cost more to fulfill, will cause people to leave, will risk your reputation, etc. The same is true in "business value": not every dollar of business value is the same. Translating all economic impact into a single index abstracts the concept of "business value" to a point of meaninglessness. Making matters worse, it's not uncommon for IT departments to sum their "total business value" delivered. Reporting a total value delivered that eclipses the firm's enterprise value impeaches the credibility of the measure.

Business value is too narrow, so we need to have a broader perspective. To get that, we need to think back to what the software business is at its core: the investment of capital to create intangible assets by way of human effort.

The operative phrase here isn't "by way of human effort", which is where we've historically focused. "Minimizing cost" is where IT has put most of its attention (e.g., through labour arbitrage, lowest hourly cost, etc.). In recent years, there's been a movement to shift focus to "maximize value". The thinking is that by linking requirements to value, we can reduce waste by not doing the things that don't have value. There's merit in making this shift, but "maximize value" and "minimize cost" are essentially both effort-centric concepts. Effort does not equal results. The business benefits produced by software don't come down to the efficiency of the effort. They come down to the returns produced in the consumption of what's delivered.

Instead of being effort-centric, our attention should be requirements-centric. In that regard, we can't be focused only on a single property like "value." We have to look at a fuller set of characteristics to appreciate our full set of requirements. This is where "financial returns" gives us a broader perspective.

When we invest money to create software, we're converting capital into an intangible asset. We expect a return. We don't get a sustainable return from an investment simply if it generates revenue for us, or even if we generate more revenue than we incur costs. We get a sustainable return if we take prudent decisions that make us robust to risk and volatility.

Compare this to other forms of capital investment. When we invest in financial instruments, we have a lot of options. We can invest at the risk-free rate (traditionally assumed to be US Treasurys). In theory, we're not doing anything clever with that capital, so we're not really driving much of a return. Alternatively, we can invest it in equities, bonds, or commodities. If we invest in a stock and the price goes up or we receive a dividend, we've generated a return.

But financial returns are at risk. One thing we generally do is spread our capital across a number of different instruments: we put some in Treasurys to protect against a market swoon, some in emerging market stocks to get exposure to growth, and so forth. The intent is to define an acceptable return for a prudent level of risk.

We also have access to financial instruments to lock in gains or minimize losses for the positions we take. For example, we may buy a stock and a put option to limit our downside should the stock unexpectedly freefall. The put option we purchased may very well expire unexercised. That means we've spent money on an insurance policy that wasn't used. Is this "waste"? Not if circumstances suggest this to be a prudent measure to take.

We also have opportunities to make reasonable long-shot investments in pursuit of outsized returns. Suppose a stock is trading at $45 and has been trading within a 10% band for the past 52 weeks. We could buy 1,000,000 call options with a $60 strike. Because these options are out of the money, they won't cost us that much - perhaps a few pennies each. If the stock rises to $70, we exercise the calls, and we'll have made a profit of $10m less whatever we paid for the 1m options. If the stock stays at $45, we allow the options to expire unexercised, and we're out only the money we spent on them. This isn't lottery investing; it's Black Swan investing - betting on extreme events. It won't pay off all that often, but when it does, it pays off handsomely.
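The arithmetic in that example can be checked in a few lines. The $0.05 premium below is an assumed stand-in for "a few pennies each"; the other figures come from the paragraph:

```python
# Check the out-of-the-money call example from the text.
# A premium of $0.05 per option is assumed ("a few pennies each").

def call_payoff(spot, strike, contracts, premium_per_option):
    """Net profit of holding call options at expiry."""
    intrinsic = max(spot - strike, 0)  # value of exercising one option
    return contracts * (intrinsic - premium_per_option)

contracts = 1_000_000
premium = 0.05  # assumed

# Stock rises to $70: exercise at the $60 strike.
win = call_payoff(70, 60, contracts, premium)   # ~$10m less the premium paid

# Stock stays at $45: the options expire worthless; we lose only the premium.
loss = call_payoff(45, 60, contracts, premium)  # -$50,000
```

The asymmetry is the whole point: a small, known cost against a large, unlikely payoff.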

These examples - insurance policies and Black Swans - are apt metaphors for a lot of business requirements that we fulfill.

For example, we need to make systems secure against unauthorized access and theft of data. The "value" of that is prevention of loss of business and reputational damage. But implementing non-functional requirements like this isn't "value", it's insurance. The presence of it simply makes you whole if it's invoked (e.g., deters a security threat). This is similar to a mortgage company insisting that a borrower take out fire insurance on a house: the fire insurance won't provide a windfall to the homeowner or bank, it'll simply make all parties whole in the event that a fire occurs. That insurance is priced commensurate with the exposure - in this case, the value of the house and contents, and the likelihood of an incendiary event. In the same way, a portfolio manager can take positions in derivatives to protect against the loss of value. Again, that isn't the same as producing value. This insurance most often goes unexercised. But it is prudent and responsible if we are to provide a sustainable return. To wit: a portfolio manager is a hero if stock bets soar, but an idiot if they crater and he or she failed to have downside protection.

We also have Black Swan requirements. Suppose there is an expectation that a new trading platform will need to support a peak of 2m transactions daily. But suppose that nobody really knows what kind of volume we'll get. (Behold, the CME just launched cheese futures - with no contracts on the first day of trading.) So if we think there's an outside chance that our entering this market will coincide with a windfall of transactions, we may believe it's prudent to support up to 3x that volume. It's a long shot, but it's a calculated long shot that, if it comes to pass and we're prepared for it, provides an outsized yield. So we may do the equivalent of buying an out-of-the-money call option by creating scalability to support much higher volume. It's a thoughtful long shot. A portfolio manager is wise for making out-of-the-money bets when they pay off, but a chump if all of his or her positions aligned with conventional wisdom and a market opportunity was missed.
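A back-of-the-envelope sketch shows how such a calculated long shot can be weighed. All of the figures below are invented for illustration:

```python
# Is over-provisioning to 3x expected peak volume worth it?
# All figures are invented for illustration.

def expected_value(extra_capacity_cost, p_surge, surge_profit):
    """Expected net benefit of building headroom now.

    extra_capacity_cost: incremental cost of scaling to the higher volume
    p_surge:             estimated probability the surge materializes
    surge_profit:        profit captured only if the capacity is there
    """
    return p_surge * surge_profit - extra_capacity_cost

# An estimated 5% chance of a $20m windfall makes $500k of extra
# capacity a positive-expectation bet.
ev = expected_value(extra_capacity_cost=500_000,
                    p_surge=0.05,
                    surge_profit=20_000_000)
```

Like the option, the bet usually expires worthless - but at these (assumed) numbers, it's a prudent one to place.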

Neither of these examples fit the "value" definition. But they do fit well into a "portfolio" model.

Of course, just as determining the business value of each requirement isn't an exact science, neither is defining a projected investment return. Even if we ignore all the factors that impact whether returns materialize or not (largely what happens after the requirement is in production), the cost basis is imprecise. We have precise pricing on liquid financial instruments such as options. We don't have precise pricing in IT. The reason goes back to the basic definition of software development: the act of converting capital into intangible assets by way of human effort. That "human effort" will be highly variable, dependent on skills, experience, domain complexity, domain familiarity, technology, environment, etc. But this isn't the point. The point isn't to be precise in our measurement to strain every ounce of productivity from the effort. We've tried that in IT with industrialization, and it's failed miserably. The point is to provide better directional guidance that maximizes returns on capital, to place very well informed bets and protect the returns.

It's also worth pointing out that going in pursuit of Black Swans isn't license to pursue every boondoggle. Writing the all-singing, all-dancing login component in this iteration because "we may need the functionality someday" has to withstand the scrutiny of a reasonable probability of providing an outsized return relative to the cost of investment. Clearly, most technology boondoggles won't pass that test. And all our potential boondoggles are still competing for scarce investment capital. If the case is there, and it seems a prudent investment, it'll be justified. If anything, a portfolio approach will make clearer what it is people are willing - and not willing - to invest in.

Because it gives multi-dimensional treatment to the economic value of what we do, "portfolio" is a better conceptual fit for requirements than "value." This helps us to frame better why we do things, and why we don't do things, in the terms that matter most. We'll still make bad investment decisions: portfolio managers make them all the time. We'll still do things that go unexercised. But we're more likely to recognize exposure (are you deploying things without protecting against downside risk?) and more likely to capitalize on outsized opportunities (so what happens if transaction volume is off the charts from day one?) It's still up to us to make sound decisions, but a portfolio approach enables us to make better informed decisions that compensate for risk and capitalize on the things that aren't always clear to us today.

Friday, June 11, 2010

Short Run Robustness, Long Run Resiliency

There is no such thing as a "long run" in practice --what happens before the long run matters. The problem of using the notion of "long run", or what mathematicians call the "asymptotic" property (what happens when you extend something to infinity), is that it usually makes us blind to what happens before the long run. ...
[L]ife takes place in the pre-asymptote, not in some Platonic long run, and some properties that hold in the pre-asymptote (or the short run) can be markedly divergent from those that take place in the long run. So theory, even if it works, meets a short term reality that has more texture. Few understand that there is generally no such thing as a reachable long run except as a mathematical construct to solve equations - to assume a long run in a complex system you need to assume that nothing new will emerge.

Mr. Taleb is commenting on economists and financial modelers, but he could just as easily be commenting on IT planning.

Assertions of long-term consistency and stability are baked into IT plans. For example, people are expected to remain on the payroll indefinitely; but even if they don’t, they’re largely interchangeable with new hires. Requirements will be relatively static, specifically and completely defined, and universally understood. System integration will be logical, straightforward and seamless. Everybody will be fully competent and sufficiently skilled to meet expectations of performance.

Asserting that things are fact doesn’t make them so.

Of course, we never make it to the long run in IT. People change roles or exit. Technology doesn't work together as seamlessly as we thought it would. Our host firm makes an acquisition that renders half of our goals irrelevant. Nobody knows how to interface with legacy systems. The historically benign financial instruments we trade see a sudden 10x increase in volume, and volatility goes off the charts. A key supplier goes out of business. Our chief rival just added a fantastic new feature that we don't have.

Theoretical plans will always meet a short-term reality that has more texture.

* * *

After the crisis of 2008, [Robert Merton] defended the risk taking caused by economists, giving the argument that “it was a Black Swan” simply because he did not see it coming, hence the theories were fine. He did not make the leap that, since we do not see them coming, we need to be robust to these events. Normally, these people exit the gene pool –academic tenure holds them a bit longer.
- ibid.

The long-term resiliency of a business is a function of how robustly it responds to and capitalizes on the ebbs and flows of a never-ending series of short runs. The long-term resiliency of an IT organization is no different.

This presents an obvious leadership trap, the “strategy as a sum of tactical decisions” problem. Moving with the ebb and flow makes it hard to see the wood for the trees. An organization can quickly devolve into a form of organized chaos, where it reacts without purpose instead of advancing an agenda. Reacting with purpose requires continuous reconciliation of actions with a strong set of goals and guiding principles.

But it also presents a bigger, and very personal, leadership challenge. We must avoid being hypnotized by the elaborate models we create to explain our (assumed) success. The more a person invests in models, plans and forecasts, the more they will believe they see artistic qualities in them. They will hold the models in higher esteem than the facts around them, insisting on reconciling the irrational behavior of the world to their (obviously) more rational model. This is hubris. Obstinately defending what is theoretically right but factually wrong is a short path to a quick exit.

Theoretical results can't be monetized; only real results can.

Wednesday, May 19, 2010

Webinar: Being an Activist Investor in IT Projects

Please join me on 26 May for a webinar on Activist IT Investing.

An ounce of good governance is worth a pound of project rescue. Agile practices, with their emphasis on transparency, business alignment and technical completion, are enablers of better IT governance. But all the transparency and alignment in the world isn't going to do us any good if we're not equipped to pay attention and act on it.

An Agile organization needs a new approach to governance, one that makes everybody think not as caretakers of a project but investors in a business outcome. This presentation explores the principles of Agile governance, and what it means to be an activist IT investor in a Lean-Agile world.

What you will learn

  • What are the principles of IT governance?
  • What kind of governance does Agile enable and demand?
  • How do we create a culture of activist investors in IT projects?
I hope you can join me on the 26th. Click here to register.

Friday, May 07, 2010

Digital Squalor

In the not too distant past, storage was limited and expensive. As recently as 1980, 1 megabyte of disk storage cost $200. But this is no longer the case. Today, you can buy 8,000 megabytes (a.k.a. 8 gigabytes) for $1. Storage capacity is now so abundant and compact that you can record every voice conversation you’ll ever have in a device that can fit into the palm of your hand.
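Using the figures in the paragraph, the drop in cost per megabyte works out to a factor of about 1.6 million:

```python
# Cost per megabyte then and now, using the figures from the text.
cost_per_mb_1980 = 200.0           # $200 bought 1 MB of disk in 1980
cost_per_mb_today = 1.0 / 8_000    # $1 buys 8,000 MB (8 GB) today

# Storage is roughly 1,600,000 times cheaper per megabyte.
drop_factor = cost_per_mb_1980 / cost_per_mb_today
```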

What this means is that storage is no longer a physical (capacity) challenge, but a logical (organization) one. We’re maximizing the former, storing everything we can digitize. Unfortunately, we’re not really making a lot of progress on the latter, as “intelligence” eludes us in an ever-expanding swamp of “data.”

Let’s think about the characteristics of data, just on a personal level.

  • We have data everywhere. E-mails contain data. So do documents and spreadsheets. So do various applications, such as a local contact manager. So do subscription services, such as Salesforce.com. So do financial management tools (be it Quickbooks or Oracle Financials). So does Twitter. So do digital photos. So do news feed subscriptions. So do voicemails. So do Podcasts and webinars, for that matter.
  • We have a lot of redundant data. How many different booking systems have your frequent flier numbers, know that you prefer an aisle to a window, and know that you prefer a vegetarian meal on long-haul flights? And how much of that has changed since you last edited your profile in each of those systems? Or, think about contact information. How many places do you have your co-worker's (multiple) contact details spread out: in your mobile phone? Corporate directory? Google contacts? Personal e-mail box?
  • There is data in the inter-relationships among data. This document references this spreadsheet, and both were discussed in this meeting on this date with these people. Copies of drafts under discussion at the time may be attached or referenced to the meeting invitation.
  • Our data is inconsistent. We have full contact information for some people who attended a meeting because they’re in the company directory, but perhaps we have only personal data for some because we’re connected to them via LinkedIn, and still for others all we have is an e-mail address.
  • Data has different meaning depending on the context. A contract from 2005 between one firm and another is a binding legal document in the context of that relationship. But that document is also a source of language that might be useful when we are drawing up a contract with the same people in that firm, with different people in that firm, or with a different firm altogether. Or a specific presentation from 5 years ago may have referenceable content, but at the moment we're only interested in the fact that it encapsulates a template with elements we want to re-use.
  • We lug this data around with us. Some of it we carry around with us in the file system paradigm, moving it from laptop to laptop. Some we have in our smart phones and media players. Some is stored in a managed service like LinkedIn. Some is managed for us in a service like iTunes. There have been attempts to corral and manage slices of this data: for example, consolidating contact details, e-mail history, proposals in a single CRM system. None have been runaway successes. They’re either incomplete, inadequate, or simply too much work to sustain.

And that’s just a recon of our personal data. The scope of this is amplified several orders of magnitude on a corporate and societal level. To wit: marketing departments seem perpetually engaged in contact list consolidation and clean-up. Then there are all those automatic feeds set up to get everything from bond prices to today’s weather to city council meeting notes.

The fact is, we already live in digital squalor. In a relatively short period of time, we’ve gone from having very little digitally stored, to having a lot digitally stored. Only, along the way we didn’t give much thought to maintaining good hygiene of it all. We have data everywhere. Some structured, some not. Some readily accessible, some long forgotten, and some we’re not entirely certain have integrity any longer. And the bad news is, we’re accumulating data at an exponentially increasing rate.

We tame the data monster through our mental memories and our synaptic processes. A memory or an idea triggers a recollection, so we know to go look for something and roughly where we might find it. Sometimes we're able to pull together distinct pieces of data - possibly squirreled away over a period of several years - to derive some useful information. But not all data is created equally, so when we go mining through data, we have to judge whether it has sufficient integrity for our purpose. Is it current enough? Is it from a credible source? Is it a final version or a draft? The bottom line is, it’s human intervention that allows us to bring order out of ever-increasing data chaos.

We're going to be living in digital squalor for quite some time. There are some interesting conclusions we can draw from that.

Our principal tool for managing the data bloat is search. Search is a blunt instrument. Search is really a simple attribute-based pattern matching tool that abdicates results processing to the individual. Meta-tagging is limited and narrow, so we don’t really have much in the way of digital synaptic processes. As the data behemoth grows, search will be decreasingly effective.

But as our digital squalor expands, it presents opportunity for those who can produce a clear, distinct signal from so much noise, e.g., by bringing data and analyses to bear on problems in ways never previously done. One example is FlightCaster, which applies complex analytics on publicly available data (such as the weather, and current flight status and historical flight data) to advise whether you should switch flights or not. It's a decision support tool providing an up-to-date analysis at the moment of decision where none existed previously.

This marks a significant change in IT. We've spent most of the past 60 years in technology creating tools to automate and digitize tasks and transactions. We now have lots of tools. Because of the tools, we also have lots of data. For the first time in history, we can get powerful infrastructure with global reach for ridiculously little capital outlay:

  • the internet allows us to access vast amounts of specialized data;
  • cloud computing gives us virtually unlimited, pay-as-you-go computing power to analyze it;
  • smartphones on the mobile internet give us a ubiquitous means to deliver our analyses.

Historically, Information Technology has focused on the "technology". Now, it's focused on the "information".

Digital squalor gives us the first broad-based tech-entrepreneurial opportunity of the 21st century. We're now able to pursue information businesses that wouldn't have been viable just a few years ago. We’re limited only by our imagination: what would I really like to know at a specific decision-making moment?

Answer that, and you've found your next start-up.

Monday, April 26, 2010

Mitigating Corporate Financial Risks of Lean IT

It's pretty well established that Agile and Lean IT are more operationally efficient than traditional IT. Agile teams tend to commit fewer unforced errors, and don't defer work. This results in fewer surprises - and with it, fewer surprise costs - in the final stages of delivery. Agile practices unwind the “requirements arms race” between business and IT, while Lean practices reduce waste throughout the delivery cycle. And Agile teams are organized around results as opposed to effort, which enables more prudent business-IT decisions throughout delivery.

This operational efficiency generally translates into significant bottom line benefits. From a financial perspective, Agile IT:
  • Can capitalize a greater proportion of development costs
  • Consumes less cash and manages cash expenditure more effectively
  • Has higher yields and offers better yield protection on IT investments
  • Is less likely to experience a catastrophic correction that takes everybody by surprise (e.g., appear to be a Black Swan event)

While all this sounds good, there’s no such thing as a free lunch. No surprise, then, that fully agile IT brings a new set of risks. A leaned-out IT organization capitalizing a significant proportion of its discretionary spend is highly susceptible to a perfect storm of (a) SG&A contraction, (b) IT project write-off, and (c) suspended IT investments.

The "Perfect Storm" of a Lean-IT Financial Crisis

The meteorology of this perfect storm happens more often than you might think. Consider the following scenario. Suppose at the end of the first half of a fiscal year (H1), we face the following:

  • SG&A spend on data center operations runs higher than forecast because more maintenance work is done than planned, and contractor costs rise unexpectedly
  • Effort has to be written off a capital project because the project team didn't achieve anything meaningful (and therefore there's no asset to capitalize)
  • An early stage investment is suspended pending a re-examination of business benefits

Then the CEO pops by to say that H1 results are disappointing, and asks us to cut the IT SG&A budget significantly for H2.

We now have a lot of things competing for our reduced SG&A budget in H2. Meanwhile, our capital investment portfolio is underperforming.

Our Lean IT organization faces a two-phase exposure similar to the credit crisis that struck Wall Street in 2007. Initially, we face a liquidity crisis. This quickly gives rise to a solvency crisis.

Phase 1: Liquidity Crisis

A liquidity crisis is triggered by the contraction of SG&A (also known as Operating Expense, or OpEx). Whether our business is governed by GAAP or IAS, the rules that govern capitalization of intangible assets such as software require us to define our investment intention. That is, we need to be able to explain that we're making a capital investment in software because we expect to achieve this return or business benefit. In IT, investment intention is defined by a project inception phase of some kind. Accounting rules dictate that inception has to be funded out of SG&A. What this means is that before we can spend out of a capital budget, we must spend some SG&A money first. It's also important to bear in mind that the same is true at the other end of the delivery cycle: last mile tasks such as data migration can't be capitalized; they also must be funded out of SG&A.

In effect, our SG&A budget is leveraged with capital expense (CapEx). A contraction of OpEx proportionally reduces the CapEx accessible to us. This puts IT capital investments at risk. If we have less OpEx to spend, we may not be able to start new projects because ramp-up activities like project inception must be funded out of OpEx. We also may not be able to get capital investments into production because the things we need to do to get them across the finish line must be funded out of OpEx. Depending on how highly we're leveraged, even a small loss of OpEx may create a liquidity seizure of our IT portfolio. This will force us to make difficult investment decisions to defend our portfolio returns.
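A toy model makes the leverage concrete. The assumption that every dollar of capital work needs ten cents of OpEx for inception and last-mile tasks is invented for illustration:

```python
# Toy model of OpEx/CapEx leverage. The 10% OpEx-per-CapEx-dollar
# ratio is an assumed figure for illustration only.

def accessible_capex(opex_budget, opex_per_capex_dollar=0.10):
    """CapEx we can actually deploy, given that every dollar of capital
    work requires some OpEx spend for inception and last-mile tasks."""
    return opex_budget / opex_per_capex_dollar

# A $2m OpEx budget unlocks up to $20m of capital work...
before_cut = accessible_capex(2_000_000)

# ...but a 25% OpEx cut ($500k) strands $5m of accessible CapEx,
# not $500k - that's the leverage.
after_cut = accessible_capex(1_500_000)
```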

Phase 2: Solvency Crisis

Just as happened on Wall Street, a liquidity crisis soon becomes a solvency crisis. In IT, “solvency” is capability. IT departments invest in people to learn how to get stuff done both in the business context (what are the critical business drivers?) and the IT context (how do all these systems actually work together?) IT people master the situational complexity necessary to keep the systems that run the business humming. They know not only which bit to twiddle, but why.

With this in mind, think back to our two funding sources: OpEx and CapEx. Capitalizing development of IT assets is an exercise in funding salaries and contractor costs out of CapEx budgets. As described above, an IT department that experiences a liquidity seizure loses access to its capital budgets. With capital funds inaccessible for payroll, an IT department faces very uncomfortable staff retention decisions. The people who know how to get stuff done may have to be released. If that happens, the very solvency – that is, the ability of the IT department to meet business demands – is in jeopardy.

While transferring from one budget to another may appear to be a simple way to protect IT solvency, it’s an option of last resort. Capitalization distributes the cost of an investment over many years, because the asset is depreciated. Depreciation increases corporate profitability for the year in which an IT investment is made because costs are deferred. Conversely, expensing recognizes the cost of an investment as it occurs, which decreases profitability for the year in which an investment is made. Moving money from CapEx to OpEx, then, will have a negative impact on current FY profitability. “IT Impairs Earnings” is not the sort of headline most CIOs aspire to see in the annual report. In fact, going hat in hand to the CFO is a career limiting move for the CIO.
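A quick sketch shows why the transfer hurts current-year profit. Straight-line depreciation and all of the figures are assumed for illustration:

```python
# Why moving spend from CapEx to OpEx impairs current-year earnings.
# Straight-line depreciation and all figures are assumed.

def yearly_expense_if_capitalized(cost, useful_life_years):
    """Straight-line depreciation spreads the cost over the asset's life."""
    return cost / useful_life_years

investment = 1_000_000
useful_life = 5

# Capitalized: only $200k of depreciation hits this year's P&L.
capitalized_hit = yearly_expense_if_capitalized(investment, useful_life)

# Expensed: the full $1m hits this year's P&L.
expensed_hit = investment

# Expensing reduces current-year profit by $800k more than capitalizing.
extra_profit_impact = expensed_hit - capitalized_hit
```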

Mitigation

This “perfect storm” is more common than you might think. Mitigating exposure is done through a variety of different mechanisms.

One is to hedge the project portfolio by bringing several investments into the early stages of delivery and then putting them into operational suspense. This makes a deliberate OpEx expenditure on multiple project inceptions at the beginning of a fiscal cycle (before risks of OpEx impairment are realized over the course of the year), then renders some of those investments dormant. This diversifies the IT project portfolio, allowing IT capability to shift among projects should one or more of them be cancelled.

Another is to align the project financing window with the Agile delivery window. A lot of this risk results from the incongruity of a long budgeting cycle that is matched with short Agile delivery windows. Following the lead of the sustainable business community, businesses should move to adopt micro-finance for IT projects. This is very difficult to achieve. Among other things, it requires an active investment authority (e.g., an investment committee actively involved in portfolio reviews) as well as a hyper-efficient inception process.

Yet another is to encourage people to think like “active traders” as opposed to “passive investors”. Each person must look for “trades” that will improve the team's overall position. This can be anything from changing the priority of specific requirements to changing the composition of the team (e.g., aggressively rooting out net negative contributors).

Finally, and most importantly, we’ve learned that Agile IT requires Agile governance. We may have all the data in the world, but optimal investment decisions don’t happen of their own volition. Just as Agile delivery teams require tremendous discipline, so, too, does the Agile executive. Liquidity and solvency crises are averted not through mechanical processes, but through meta-awareness of the state and performance trajectory of each investment in the portfolio.

Sunday, March 21, 2010

Supplying IT Mercenaries

Last month we took a look at the different types of staffing in IT, using Machiavelli’s book The Prince as a guide.

Buyers of forces, be they military or IT, have long been advised against employing mercenaries. Strangely enough, nobody has paid this counsel much mind. The buy side still buys mercenaries, more than ever. Just have a look at your own sales lead list. Lots of demand for short-term specialists.

So what’s an IT supplier to do?

Let’s look at this from the perspective of a “supplier of forces” that wishes to be a sustainable business, one that aspires to do business for many years. In the parlance of Machiavelli, we want to look at this from the perspective of a firm that wishes to be an independent “state.”

Supplying Mercenary Forces

For IT firms, there are always mercenary opportunities, because there are always buyers looking to fill highly specialized roles for some period of time: an Oracle financials expert, an iPhone app developer, an interim project manager, a Sharepoint specialist.

To the supplier of forces, a mercenary opportunity might look attractive because it appears to offer short-term placement, outsized income, and few strings attached. This is almost always illusory. In fact, mercenary work can cause more harm to the supplier than it’s worth.

Mercenary work is income, not wealth.

When a supplier firm is in need of income, mercenary work can appear to be especially attractive. But income is not wealth.

Income pays the bills, but most income streams are not sustainable. Wealth sustains a business. The sell-side firm accumulates wealth in many ways, including intellectual property, a well-honed social system for delivery, people who are highly capable or have deep industry knowledge, and referenceable, long-term clients. Mercenary jobs do not contribute to the development of any of these. This stands to reason, as the mercenary buyer is looking to exploit expertise, not contribute to its development. As a result, mercenary opportunities don’t contribute to wealth. They are income, and little else.

Income can be useful depending on the needs that the supplier has. However, it must be recognized for what it is, and not confused with wealth.

The cost of mercenary income is very high.

Buyers of mercenaries rent knowledge and expertise in pursuit of their own agenda. In so doing, they exploit, but do not build, the wealth of suppliers.

Let’s think back to Machiavelli for a moment. Machiavelli wrote in terms of princes and the states they govern. In Machiavellian “sovereign state” terms, if a state dispatches too many of its best forces on mercenary missions, it will be unable to defend its home and advance its agenda. The same applies to the would-be sustainable IT supplier.

Suppose a sell-side firm chooses to retain its best and supply untrained or unskilled forces into a mercenary mission. The buyer of mercenaries will recognize the skill level of the forces supplied and conclude they are not getting value for money. The buyer will complain and even threaten the supplier (seeking damages, seeking not to pay, etc.). This means mercenary work draws down senior staff almost exclusively.

Being forced to dispatch senior staff is costly as it stunts the development of the sell-side business. Mercenary engagements rarely offer opportunities for the supplier to incubate new capability as can be done when deploying on “one’s own” missions. By extension, because they’re on mercenary missions, senior staff are unavailable to the sell side firm for the “one’s own” missions – developing deep customer relationships, industry knowledge or intellectual property – that build a business. This makes the sell-side business vulnerable and weak.

Sacrificing business development to accept jobs on offer is to trade wealth for income. That, by definition, makes it expensive income. It is dangerous for the supplier to get caught up in the attractiveness of income, especially if they lose sight of wealth-producing activities.

The worst mercenary engagements are destroyers of wealth.

The people you send into a mercenary mission may not return from the mission. For example, they may elect to quit and work for somebody else. Re-acquiring that capability is not inexpensive as you can’t just hire senior staff off the street: it takes time to recruit and train, build experiences, and mature somebody considered to be among the most senior staff.

But losing somebody in a mercenary mission is not just a loss of capability, it’s an erosion of wealth. Losing senior staff in which you’ve invested undermines the fabric of the supplying firm, especially if that person was a cultural icon, a strong leader, had many years of accumulated experiences, or was woven into the synaptic social processes of the organization itself. This form of wealth destruction in mercenary missions is particularly damaging because it results not from the pursuit of the supplier’s agenda, but from the pursuit of somebody else's. That is, it’s not lost in pursuit of wealth, it’s lost in pursuit of income. Income doesn’t compensate for a destruction of wealth.

If you are going to put capability at risk (e.g., put people in a situation where they may be pushed beyond their limits), it is far better to have wealth to show for it.

Mercenary engagements encourage defection among the ranks.

Mercenary engagements favour the independent contractor more than they favour a firm that supplies mercenaries.

Suppose a member of your forces recognizes a mercenary situation for what it is, and furthermore that it’s likely to be a long-term mission. Since the situation is not likely to change, he or she might as well make the best of it for themselves. By becoming an independent contractor, they can negotiate more favourable terms with the buyer. This is usually nothing more complex than offering themselves at a lower cost to the buyer than their current host firm - that is, your firm - bills them out for. As an independent contractor, they’ll be well positioned to strike this bargain, especially once the mission is well under way and they’re already part of it.

The individual has the upper hand over the supplier firm because realpolitik trumps principles. Even with contractual covenants designed to prevent a person from going independent and taking a customer with them, the supplying firm will typically not enforce them in mercenary circumstances. The sell-side firm has already sacrificed pursuit of its agenda in pursuit of income. Also, the sell-side firm won't value mercenary work as highly as it will "one's own" or "auxiliary" work. Consequently, the sell-side firm won't risk future income from the buyer by playing the "principles of conduct" card with the would-be independent contractor. And both the independent contractor and the buyer know this.

Mercenary engagements can quickly become high maintenance.

If you send inferior or inexperienced forces, the buyer is likely to reject them, demand replacements, and possibly seek damages. The cost can range from refusal to pay for those initially fielded to expenses for any transition work undertaken.

You may staff experienced forces, but they may run into factors that curtail their effectiveness: the work environment, the tools they must use, or the aptitude of those with whom they must work. The buyer of mercenaries is usually not inclined to respond to the demands of the mercenary, so those demands are ignored. The mercenary, unhappy and restless, may cause all kinds of disruption directly to you, or to the buyer (which will find its way to you). The mercenary may also resign themselves to the situation and mentally check out. This will undermine the sense of value the buyer believes they are acquiring ("I expected more leadership from your people..."). In any case, you will spend a lot of time managing the people deployed in a mercenary situation. On top of it, the buyer will be reluctant to pay, which will delay cash flow.

Alternatively, you may staff an experienced person, but one with the wrong personality. Personality conflicts are often not recognized for what they are, and personality problems completely obfuscate the landscape. So whether suitable to task or not, the buyer will claim he or she is not receiving value for money. In this case, the buyer may ask the supplier to intervene in integrating the mercenary, and will often communicate barbs or jabs stated by his or her "own" forces to undermine the credibility of the mercenary. The buyer may seek to renegotiate terms.

All of these increase the supplier’s cost of doing business, which erodes the margin on the mercenary income.

Mercenary engagements are very rarely closed-ended as promised.

Rarely does a patient tell a doctor the prognosis, what procedures to perform, what staff to have, what medications and treatments to prescribe, and so forth. In fact, in medicine, we do the opposite: doctors provide independent assessments, and proceed accordingly with the patient.

Yet mercenary engagements are defined by the buyer, not by the supplier. Mercenaries are rarely granted an opportunity to assess the battlefield, because they’re simply signing up to fight in somebody else’s battle.

Successful, clean extraction from the mercenary mission requires that the mercenary perform relative to the buyer's expectation of the mission, and that the mercenary can explain how he or she has delivered consistent with that expectation. This requires the buyer to accurately and completely articulate the problem space and how the mercenary will contribute to its resolution. Given the intrinsic optimism and selective amnesia that buyers of mercenaries typically have, it’s a stretch to assume that the buyer’s definition of the mission will have any bearing on the reality of what can, let alone what should, be done. The buyer has self-prescribed treatment, and most often, the prescription is well wide of the mark. This will obfuscate the definition of “success” of the mercenary mission, and make it difficult for the sell-side firm to conclude and collect.

The buy side may also wish to retain a mercenary for indefinite periods. People often buy mercenaries because their own forces are not performing well. The buyer can become unusually attached to a mercenary because this is the one person who speaks with clarity and authority and gets things done. The buyer may be unwilling to let the mercenary go, offering both extension and increased income. They may drag their feet on extraction procedures such as hand-off of mercenary work to their own, even if this comes at a cost to their own business. This positive vibe with a buyer may make the “income” more palatable, but it remains income just the same. And that vibe can just as quickly turn toxic due to some change in the buyer's situation.

Regardless the circumstance, a clean extraction from a mercenary mission is the exception rather than the rule.

Very few IT firms succeed as purely suppliers of mercenary forces.

Gaining a reputation as a “good mercenary” is not necessarily wealth-building. It may create more mercenary opportunities. It may allow you to increase your fees. But it will not create wealth-generating opportunities.

Mercenary skills tend to be highly specialized. The mercenary must be very fluent with a specific technology, a specific technique, or the specific nuances of how a particular company works. But as technology goes through cycles of obsolescence, as management fads come and go, and as companies are bought and sold, mercenary skills are of high value for relatively short periods of time. The mercenary must therefore keep skills current and be in a position to influence technology cycles and management fads.

Conversely, there are few opportunities for generalist mercenaries. There may be people who can figure out a technology or collection of systems given time, but the buyer of mercenaries isn’t looking for generalist skills. They’re generally looking to have a very specific problem (e.g., involving a very specific collection of technologies) solved.

As mentioned above, mercenary work lacks characteristics of sustainability. Mercenaries are brought in to perform closed-ended jobs. While a job may command a high income, it is temporary by definition. The mercenary is removed from the situation at the first opportunity. Using the parlance of Machiavelli, once the battle is over, the mercenary will not be invited to be a “colonist.” The mercenary must therefore have a secure home to which to return and the opportunity to ply his or her trade elsewhere.

This means that a mercenary must always be on the lookout for the next opportunity, which usually means a different buyer. Because mercenary work tends to be very challenging and consuming, the mercenary can usually only go looking for new opportunities once the job at hand is completed. A mercenary is truly fortunate if he or she has a strong enough network that opportunities seek them out. On top of it, a mercenary must constantly evolve his or her skills to remain current, something difficult to do when deployed as a mercenary as opposed to an “auxiliary” or “one’s own” force.

Didn't the Swiss make it work?

What of Switzerland? Famous throughout the centuries as suppliers of mercenary forces (it wasn’t uncommon for Swiss forces to be engaged on opposite sides of the same battlefield), the Swiss were notoriously good mercenaries and converted mercenary income to wealth. Do they not offer a model for the would-be supplier of mercenaries? The Confederatio Helvetica benefits from a natural defensive geography. It is difficult for an invading force to mount an offensive as it’s tough to win an uphill battle, and then have sufficient forces remaining to sustain the victory. Switzerland has abundant natural resources, notably water, meaning an opposing force isn't going to win a war of attrition. A victor would have the unenviable challenge of administering a government over the fiercely independent Swiss. All in all, the Swiss could commit a greater percentage of their population to mercenary pursuits without putting their own agenda (rather, the agenda set by each Canton) at great risk. Arguably, this was how the Swiss did advance their agenda.

While this model worked, it was also highly situational. There weren't a lot of other regions of Europe that had so many factors creating a natural invulnerability, let alone a population that sought principally to form a sustainable and symbiotic relationship with it. While it's not out of the realm of the possible, there aren't too many businesses today benefiting from market forces that make them similarly sustainable.

The sell-side has to know what it’s buying in a mercenary opportunity.

Clearly, the sell side buys into a mercenary situation, just as the buyer is buying mercenary forces.

The buyer of forces will often try to represent a mercenary opportunity as something other than “mercenary” to the supplier of forces. This can be innocuous: perhaps they don’t understand the distinction themselves, or this is how procurement has taught them services are contracted. It can also be malicious: a buyer can willfully deceive a supplier, perhaps because they’re engaged in a political battle with other people in their business and simply wish to deceive.

Mercenary work is still the most prevalent form of demand on the landscape. There is internal pressure to enter into these engagements as many people in the sell-side business will champion them for reasons ranging from hitting quarterly numbers to the brand association of doing business with those firms. Just ask yourself going in: where's the line between getting a “foot in the door” to build a business relationship and simply opening a short-term income spigot?

Throughout time, the sage advice to those on the buy side wishing to press forward an agenda as an independent "nation state" has been to avoid mercenaries. This advice is just as applicable to those on the sell side who aspire to be sustainable businesses pursuing an agenda of their own.

Tuesday, February 09, 2010

Mercenaries, Auxiliaries, and how we Staff IT

I've spent a lot of time reflecting on Brad Cross' blog post on Wages. It got me thinking specifically about Machiavelli's book, The Prince.

Niccolò Machiavelli made some important observations regarding the conduct of a prince, making specific recommendations for what a prince must do to maintain the integrity of a state. Among other things, he wrote about the composition of forces necessary to both defend and advance the interests of a principality.

While military metaphors are in many ways inappropriate for business (business situations are nowhere near as serious as armed conflict), military campaigns are among the oldest of human endeavours and provide a rich history of organization and social dynamics. Just as there have been business books that derive lessons from such diverse personalities as Sun Tzu and Attila the Hun, we can similarly draw some interesting lessons from Signore Machiavelli.

Machiavelli observed there are four compositions of troops:
  1. One's own forces (i.e., drawn from the population of the principality)
  2. Mercenaries
  3. Auxiliaries
  4. Mixed from any combination of the above.
This provides an apt metaphor for how technology “campaigns” are commonly staffed.

On the buy side, most businesses aspire to staff with “one’s own forces.” This provides a perception of control, specifically with regard to compensation, promotion, and even daily work activity. Many firms believe they have "one's own" because they staff with direct hires. But there's more to having “one’s own forces” than putting people on the payroll. The interests of the employees (or in military terms, the conscripts) must be fully aligned with the unit: either the unit (e.g., a project team or an army) is victorious, and with it the sponsoring organization (business or state) and the individual, or the unit fails, and both the sponsoring organization and the individual lose out.

If an individual is pursuing a path of personal success that is not aligned with the success of their unit, they are not, from the perspective of the principality, “one’s own forces.” In fact, they fall into one of the next two categories, that of mercenary or auxiliary. In technology, we see this all the time. For example, freshly minted university graduates are typically looking first to acquire skill and experience in a specific technology, expecting to change employer many times in the pursuit of that skill, as opposed to being motivated to fulfill a business mission enabled by the project they’re working on.

By the same token, most businesses engage mercenaries. Mercenaries are often independent contractors, loyal to no leader, only to themselves. While there can be exceptions, according to Machiavelli they're not good troops in the end because they are not dependable in battle. Specifically, they're the first to desert since they have little on the line. Also, their interests are not aligned once the battle is over. According to Machiavelli, the risk here is "dastardy," that is, cowardice. Mercenaries draw income from battle, not wealth from victory. It is in their interest to seek - or to create - conflict. And this is common in the IT industry. How often have we seen an important business initiative beholden to a technology component that is laden with technical debt, the delivery of which is in the hands of mercenaries who are motivated neither by the business impact of the solution nor its reliability and affordability of maintenance? By contrast, how often do we see people develop career "annuities" for themselves, drawing an income for an extended period of time as caretakers of a business critical application that is heavily laden in situational complexity?

Auxiliaries, according to Machiavelli, are particularly dangerous. In military terms, the independent auxiliary unit seeks its own opportunities precisely because it is in a position to create attractive circumstances. In business, if a host company gives a contract firm autonomous leadership on an initiative, the host firm will find a significant portion of its forces are loyal to another. According to Machiavelli, the risk with auxiliaries is "valour". Auxiliaries are motivated by the opportunity to derive wealth from adventurism. If a buy side firm contracts with a sell side firm for an independently-functioning team, it is not out of the question that under competent leadership that sell-side firm will look to strike its own bargain with the market. It may develop expertise, capability, intellectual property or domain knowledge that it can monetize elsewhere. Alternatively, it may hold a customer firm hostage for better terms for some of these things.

The last category of forces - mixed - is in fact the most common in technology. Most firms don’t have a homogeneous staffing model. They rely on contractors and outside firms for significant portions of their staff. Also, as mentioned above, there will be inconsistencies in the motivations of their FTEs. “One’s own” forces must be motivated by the wealth of the business (however “wealth” in the situation is defined) as opposed to personal income, to have it at risk, and to have a direct influence on the outcome. Very often, that isn’t the case among badged employees, which means a fair number will be mercenaries. They'll be motivated by individual interests that are not fully aligned, and may even come at the expense of their paymasters. There are also cases where there are auxiliary units lurking about in a business, especially technology: people who have worked together for many years for several different organizations, who form a shadow unit and eventually move to strike their own bargain.

People on the buy side and sell side of technology generally don’t recognize these distinctions. Generally speaking, people on the buy side (a) don’t recognize that their staff are mixed and not "one's own", even in the absence of contractors; (b) don’t understand the consequences of their staffing mix (e.g., what does it mean from a retention perspective that people aren't first-and-foremost die-hard employees committed to the success of the business imperative?); (c) fail to understand the nuances of how to deal with each group specifically (e.g., in project crises, we tend to see a lot of rah-rah managerial fluff spewed forth from buy-side leadership to sell-side people who are completely disconnected from the business situation at hand); and (d) have no idea of the risks and opportunities of each group.

By the same token, the sell side doesn't approach the market appropriately. Sell-side firms that aspire to be "auxiliary forces" talk as if they can be "one's own" (the ubiquitous word "partners” is usually invoked), yet they usually sell themselves as mercenaries to the classic formula of people x time x rate. In this effort-based formula the risk is borne by the buyer, but the impact of mercenary pricing cuts both ways. Sell-side firms very often end up in mercenary situations yet fail to price and structure them to reflect the “income” they really are, mistakenly believing they're developing a vehicle for “wealth” (i.e., a sustainable business opportunity). Finally, sell-side firms typically fail to understand the full terms and conditions required of both parties for the supplier-vendor relationship to be fully aligned to perform as “one’s own” for the business mission at hand. Very often, they export the problems they have as a sell-side business (such as hitting revenue targets or maintaining margin) into the value proposition they offer the buyer. It comes as no surprise that in the end, they default into a cost or "mercenary" engagement model.
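The people x time x rate formula can be made concrete with a small, purely hypothetical calculation (the headcount, duration, and rate below are invented): under effort-based pricing the supplier bills for effort expended regardless of outcome, so a schedule overrun lands on the buyer, while the supplier trades any share of the outcome's upside for predictable income.

```python
def effort_based_fee(people, weeks, weekly_rate):
    """Classic mercenary pricing: people x time x rate.

    Revenue scales with effort expended, not with the value delivered,
    so delivery risk (overruns) sits with the buyer.
    """
    return people * weeks * weekly_rate

# Hypothetical engagement: 5 people quoted at 12 weeks, $8,000 per person-week.
quoted = effort_based_fee(5, 12, 8_000)   # the buyer's expected cost: 480,000
actual = effort_based_fee(5, 18, 8_000)   # a 50% overrun; the supplier still bills it

overrun_borne_by_buyer = actual - quoted  # 240,000, all on the buyer's side
```

This is why the essay calls such revenue "income" rather than "wealth": the formula has no term for the business outcome, so nothing in it compounds for the supplier beyond the hours sold.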

The nature of forces – and the consequences – haven’t changed all that much since the dawn of time. The lessons of Machiavelli for the state apply to both technology buyer and seller alike.

The buy side sources staff from a market overwhelmed with sell-side firms offering technology specialists, vertical market expertise and bodies in bulk. The buy side needs to have the right expectation for the right type of firm. The large-scale staff-aug firm has some of the characteristics of an auxiliary, but when it comes right down to it aren't they really just a large mercenary outfit? And does a firm with deep vertical "practices" have aspirations to monetize (possibly to our disadvantage) the expertise derived from the solution we're engaging them for? Alternatively, are we trying to engage a genuine partner firm in a mercenary model? For that matter, do we do things on the buy side to create the circumstances that allow us to engage "one's own" forces? Perhaps most importantly, do we recognize the fundamental characteristics that make a person or a firm suitable for that kind of engagement?

The buy side also needs to consider the project mission. Perhaps the organizational politics have compelled us to go in pursuit of some boondoggle but we know that sooner or later, people will come to their senses and cancel the project. Or perhaps we have a situation where we need to prop up a project deliverable only until we complete negotiations for an alternative solution. In such cases, the buy side firm would be foolish to allocate "one's own" forces, and would be better off to engage mercenaries or auxiliaries.

Curiously enough, a lot of business press has covered this from the perspective of the buy side. It’s worth reflecting on this in greater detail from the perspective of the sell side, or the supplier of “forces.” That will allow sell-side firms to recognize the right characteristics, opportunities, and circumstances for deploying capability across their customer portfolio.

Firms on the buy and sell sides – from established to start-up – experience this phenomenon. Many buy side firms wish to engage, and many sell side firms wish to be engaged, as "one's own." To do that requires trust, which doesn't come easy and must be earned every day. It also requires some form of a shared opportunity model, and very, very few examples of this exist today. This isn’t to say that true partnerships can’t be forged, but instances of this are the exceptions and not the rule in technology. Auxiliary and mercenary buying patterns – transacting for specialized knowledge or bulk capacity (people x time x rate) – are the rule in IT.

A fully aligned partnership - that is, sourcing for "one's own" in Machiavelli's model - is possible only with like minded people who form deep relationships built on a firm foundation of trust and fundamentally aligned interests in achieving a business goal. Those like minded people on whom you can build that trust relationship are hard to find, and they're few and far between, but seek them out and you'll build mutually sustainable businesses.

Saturday, January 23, 2010

Sustainability versus Efficiency

My friend Bradford Cross posted an interesting blog on wages last week. It’s a great piece, particularly his comments on Henry Ford's approach to business profitability.

To a great extent, the Ford model Brad refers to depends on the combination of volume and productivity. That aspect of the model came to a screeching halt for Ford in the 1920s, when the Model T simply passed its "sell by" date. Once the product outlived its market, sales volume dropped. Not only did Ford discontinue production in response to growing inventories, they didn’t yet have their next product, the Model A, out of the design phase. They were forced to shut down the line for months. That put quite a dent in accumulated profitability. They also lost their lead in market share.

The focus on volume and productivity drives businesses to aggressively remove cost and increase productivity from repeatable processes to maximize profitability. In so doing, they're not focused on sustainability, they're focused on efficiency.

Sustainability requires constant change. We have to constantly think about the surrounding business conditions: labor patterns, competitive threats, customer needs and so forth. Sustainability requires us to be primarily concerned with where the business is going to be tomorrow. Efficiency requires everything to stay the same. We luxuriate in the simplicity of holding everything else constant when we focus solely on efficiency. When we pursue efficiency, we're focused on where the business is right now.

In the extreme, we optimize relative to the circumstances of this moment in the hope - the hope - that time will stand still long enough for us to draw an income, have 2-point-something kids, take a decent vacation every year, and accumulate sufficient wealth to retire.

Hope may be audacious, but it's a lousy strategy.

In efficiency-centric businesses, it’s not uncommon to find people doing substantially the same things that people were doing 10 years earlier. Because the definition of work is consistent, it’s repeatable, and that makes everybody's job that much simpler. That's true for everybody in the business: people on the line do the same tasks, people in HR recruit for the same positions, people in finance forecast costs in the same business model, and so forth. When things don't change all that much - markets, supply chains, etc. – a business can make a lot of money, and individual wage earners draw a steady income. But in an age when things change a lot, you can't make a lot of money this way for very long. A business optimized relative to a set of circumstances that are artificially held constant is a business in a bubble. Production of any sort can't operate in a bubble. At least, it can't operate in one for long. The longer it does, the bigger the mess when the bubble bursts.

Brad mentions that Ford's model was more complete than volume and productivity. There's another dimension that, if executed, makes a business sustainable and less prone to seismic interruption: constant innovation in response to external factors. With that must also come invention, which is of course not the same thing as innovation. Nor is this the same as internally driven innovation. To wit: while shaving a few seconds off the time it takes to tighten a bolt might make bolt-tightening more efficient, it's useless if the market has switched to rivets in place of bolts.

If we aggressively evolve both what we make and how we make it, nobody in production will be doing what people were doing 10 years ago, because those jobs didn't exist 10 years ago. They won't exist 10 years from now, either. In fact, we don’t want entire categories of jobs that exist today to exist a decade from now. This means we have to be less focused on the known (what we’re doing) and more focused on the unknown (what should we be doing?) This makes work a lot harder.

Well, as it turns out, building a sustainable business is hard work.

In innovation-centric firms, production isn't in a bubble. In fact, it's very much integrated with its surroundings. That's where Brad's reference to “worker skill” comes into play. In technology, it’s more than just a question of skills: it’s a question of both capability and the passion to acquire more knowledge.

This may seem to be blatantly obvious: of course those are the workers we want. How hard can it be to hire them? It’s just a recruiting problem, right? Brad specifically makes the point that there’s a (wildly mistaken) school of thought that assumes we can get the best people by spraying a lot of money around.

If only it were so simple, as Brad points out. It's very difficult to succeed at this, not only because it requires a change in recruiting behaviours, but because it means significantly disrupting an internal business process. That's harder than you might think: the efficiency-centric mindset is firmly entrenched in business, government, and the universities that educate the management that run them both.

Efficiency-centric firms are process heavy. The people in these firms - badged employee, independent contractor and supplier alike - are very heavily invested in that firm's processes. Consequently, they resist change to the processes and practices that they have worked so hard at mastering and making “efficient.” This creates organizational headwinds strong enough to bend solid steel. Any "change initiative" that isn’t blown away by these headwinds is corrupted by them.

So, the boss says knowledge is power, and he's told us we’ve got to have the most knowledgeable people in the business? No problem: we’ll show how knowledge-hungry our people are. HR will set up some computer-based training and tie a portion of management bonuses to the number of training hours their people “volunteer” for. Managers will then measure their supervisors on the training hours their people receive. Supervisors will set a quota for laborers. Laborers will fill out the necessary forms to show they’ve hit their training quota, and circulate answer keys if there’s a test at the end so that nobody fails to meet it. The efficiency-centric system returns to equilibrium with no net impact: laborers aren’t inconvenienced, management receives its bonus, and the organization can now measure how much it "values knowledge." Everybody plays, everybody... well, it's complicated. Those who lead the initiative "win" because they can report that a measured improvement took place on their watch. Those who sponsored the initiative are "reaffirmed" because the firm can now prove it has knowledge-hungry people. The rest don't necessarily win, but they certainly "don't lose." And isn't that the point these days, to make sure everybody is a winner?

We see this pattern repeated with all kinds of well-intended initiatives, whether it be a mission for zero defects or a drive to be Agile. People will do everything they can to sustain that which they have already mastered, even to the point - misguidedly or maliciously - of giving the appearance of innovating and changing. Efficiency-centric organizations have a stationary inertia that is extremely resistant to internally-initiated change. Only when an external event trumps every other priority - and most often it has to be a seismic event at that, such as the complete evaporation of revenue - will a bubble burst.

This kind of industrial thinking has made its way into IT. We assume our external environment (the labor market, technology and so forth) is static, so we stand up a big up-front design, put together a deterministic project plan, and staff at large scale to deliver. We also see it more subtly when people look to code "annuities" for themselves: business-critical systems that they can caretake for many years. This creates an expectation of job security and therefore of recurring income. It isn’t just a behavior of employees or contractors looking for stable employment: there are consulting businesses built around this model.

Going back to Brad's blog post, this creates a wage discrepancy and, with it, a bubble. People who accept the annuity make the erroneous assumption that the rising tide of inflation will sustain their income levels. It’s actually just the opposite: the minute somebody is working in one of those annuities, their skills are deflating because they're not learning and accumulating new knowledge. So is the asset value of the thing they're caretaking. People in this position misread the market (e.g., they assume the asset has an outsized value to the host firm) and consequently misunderstand the sustainability of their wage. The resultant wage bubble lasts until the "market" catches up: either the host firm takes cost out of maintenance (e.g., by labor replacement) or retires the asset. The person who was earning an outsized income inside this bubble faces the same seismic correction Ford did in the 1920s if they're not prepared with their own "Model A" of what they're going to do next.

The poor fit of the industrial model in technology is that industrialization makes no provision for capability: every person is assumed to be the same, the only difference being that each is more or less productive than the average, and indexed accordingly. That completely ignores the destabilizing impact of the changes people make in what they do and how they do it. Disruptive, externally-driven innovation should be the rule, not the exception. Of all lines of business, this should especially be true of technology. And with the right group of people, it is.

Disruptive innovation pops a bubble. A popped bubble threatens entrenched interests (e.g., those who have mastered life inside the bubble). But disruptive innovation is what makes a company sustainable.


1 I am indebted to my colleague Chereesca Bejasa for using the term "bubble" to describe a team operating to a different set of processes and behaviours within such an environment. Just as a team can be in a bubble relative to the host organization, the host itself can be in a bubble relative to its market.