Saturday, June 26, 2010
A Portfolio Perspective on Requirements
This isn't semantics. There's a big difference between "business impact" and "financial returns."
Some software requirements have a direct business impact. But not all of them do, which we'll explore in a little bit. As a result, the justification for and priority of a lot of requirements are not always clear, because the language of "business value" is one-dimensional and therefore limiting. "Financial returns" is a far more expansive concept. It brings clarity - in business terms - to why we should fulfill (and, for that matter, should not fulfill) far more requirements. Thinking about "returns" is also more appropriate than "value" for capital deployment decisions, which is what software development really is.
Why is software development a "deployment of capital"? Because a company really doesn't need to spend money on technology. When people choose to spend on software development, they're investing in the business itself. We elect to invest in the business when we believe we can derive a return that exceeds our cost of capital. That's why we have a business case for the software we write. That business case comes down to the returns we expect to generate from the intangible assets (that is, the software) we produce.
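To make that concrete - a minimal sketch with made-up figures, not a prescribed model - the decision is the same discounted-cash-flow test we'd apply to any other deployment of capital:

```python
# Minimal sketch, with hypothetical figures: does a proposed software investment
# clear the firm's cost of capital?

def npv(cash_flows, discount_rate):
    """Net present value of a series of annual cash flows, year 0 first."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

cost_of_capital = 0.12                            # hurdle rate the return must beat
build_cost = -500_000                             # capital converted into the intangible asset (year 0)
expected_returns = [200_000, 250_000, 250_000]    # returns expected from consuming what's delivered

business_case = npv([build_cost] + expected_returns, cost_of_capital)
print(f"NPV at a {cost_of_capital:.0%} hurdle: {business_case:,.0f}")
# Deploy the capital only if the expected return exceeds the cost of capital (NPV > 0).
```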
This should affect how we think about requirements. As pointed out above, a lot of requirements have a clear and direct business impact. A business requirement to algorithmically trade based on fluctuations in MACD, volume weighted average price and sunspot activity has a pretty clear business value: analysis before we code it tells us some combination of market and cosmic events leads to some occasional market condition that we expect we can capitalize on. And after the fact, we know how much trading activity actually occurs on this algorithm and how successfully we traded.
But not all requirements fit the business impact definition so nicely. We fulfill some requirements to avoid paying a penalty for violating regulations. Others increase stability of existing systems. Still others reduce exposure to catastrophic events.
This is where "business value" loses integrity as an index for requirements. Calling one activity that increases revenue equivalent to another that reduces exposure to catastrophic loss is comparing apples to high fructose corn syrup. They're sweet and edible, but that's about it.
As anybody who has ever run a business knows, not every dollar of revenue is the same: some contracts will cost more to fulfill, will cause people to leave, will risk your reputation, etc. The same is true in "business value": not every dollar of business value is the same. Translating all economic impact into a single index abstracts the concept of "business value" to a point of meaninglessness. Making matters worse, it's not uncommon for IT departments to sum their "total business value" delivered. Reporting a total value delivered that eclipses the firm's enterprise value impeaches the credibility of the measure.
Business value is too narrow, so we need to have a broader perspective. To get that, we need to think back to what the software business is at its core: the investment of capital to create intangible assets by way of human effort.
The operative phrase here isn't "by way of human effort", which is where we've historically focused. "Minimizing cost" is where IT has put most of its attention (e.g., through labour arbitrage, lowest hourly cost, etc.). In recent years, there's been a movement to shift focus to "maximize value". The thinking is that by linking requirements to value we can reduce waste by not doing the things that don't have value. There's merit in making this shift, but "maximize value" and "minimize cost" are still both effort-centric concepts. Effort does not equal results. The business benefits produced by software don't come down to the efficiency of the effort. They come down to the returns produced in the consumption of what's delivered.
Instead of being effort-centric, our attention should be requirements-centric. In that regard, we can't be focused only on a single property like "value." We have to look at a fuller set of characteristics to appreciate our full set of requirements. This is where "financial returns" gives us a broader perspective.
When we invest money to create software, we're converting capital into an intangible asset. We expect a return. We don't get a sustainable return from an investment simply if it generates revenue for us, or even if we generate more revenue than we incur costs. We get a sustainable return if we take prudent decisions that make us robust to risk and volatility.
Compare this to other forms of capital investment. When we invest in financial instruments, we have a lot of options. We can invest at the risk-free rate (traditionally assumed to be US Treasurys). In theory, we're not doing anything clever with that capital, so we're not really driving much of a return. Alternatively, we can invest it in equities, bonds, or commodities. If we invest in a stock and the price goes up or we receive a dividend, we've generated a return.
But financial returns are at risk. One thing we generally do is spread our capital across a number of different instruments: we put some in Treasurys to protect against a market swoon, some in emerging market stocks to get exposure to growth, and so forth. The intent is to define an acceptable return for a prudent level of risk.
We also have access to financial instruments to lock in gains or minimize losses for the positions we take. For example, we may buy a stock and a protective put to limit our downside should the stock unexpectedly freefall. That put option may very well expire unexercised. That means we've spent money on an insurance policy that wasn't used. Is this "waste"? Not if circumstances suggest this to be a prudent measure to take.
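A minimal sketch of that position, with hypothetical prices, shows why the unexercised premium isn't waste; the point of the put is to cap the downside, not to pay off:

```python
# Minimal sketch, with hypothetical prices: a stock position hedged with a protective put.

def protected_position_pnl(entry_price, exit_price, put_strike, put_premium, shares=1_000):
    """P&L of holding shares alongside a protective put; losses below the strike are capped."""
    stock_pnl = (exit_price - entry_price) * shares
    put_payoff = max(put_strike - exit_price, 0) * shares  # exercised only if the stock falls below the strike
    return stock_pnl + put_payoff - put_premium * shares

# Benign case: the put expires worthless and we're out only the premium.
print(protected_position_pnl(entry_price=45, exit_price=50, put_strike=40, put_premium=0.75))
# Freefall case: the put caps the loss instead of letting it run.
print(protected_position_pnl(entry_price=45, exit_price=20, put_strike=40, put_premium=0.75))
```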
We also have opportunities to make reasonable long-shot investments in pursuit of outsized returns. Suppose a stock is trading at $45 and has been trading within a 10% band for the past 52 weeks. We could buy 1,000,000 call options with a $60 strike. Because these options are out of the money, they won't cost us that much - perhaps a few pennies each. If the stock rises to $70, we exercise the calls, and we'll have made a profit of $10m less whatever we paid for the 1m options. If the stock stays at $45, we allow the options to expire unexercised, and we're out only the money we spent on them. This isn't lottery investing; it's Black Swan investing - betting on extreme events. It won't pay off all that often, but when it does, it pays off handsomely.
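Using the numbers above, and assuming a hypothetical premium of a few cents per option, the asymmetry of the bet is easy to see:

```python
# Minimal sketch of the out-of-the-money call example above (the premium is a guess).

def otm_call_pnl(spot_at_expiry, strike, premium_per_option, options=1_000_000):
    """Profit on a block of call options held to expiry."""
    payoff = max(spot_at_expiry - strike, 0)  # worth something only if the stock clears the strike
    return (payoff - premium_per_option) * options

print(otm_call_pnl(spot_at_expiry=70, strike=60, premium_per_option=0.03))  # ~$10m less the premium paid
print(otm_call_pnl(spot_at_expiry=45, strike=60, premium_per_option=0.03))  # the long shot misses: out only the premium
```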
These examples - insurance policies and Black Swans - are apt metaphors for a lot of business requirements that we fulfill.
For example, we need to make systems secure against unauthorized access and theft of data. The "value" of that is prevention of lost business and reputational damage. But implementing non-functional requirements like this isn't "value", it's insurance. Its presence simply makes you whole if it's invoked (e.g., it deters a security threat). This is similar to a mortgage company insisting that a borrower take out fire insurance on a house: the fire insurance won't provide a windfall to the homeowner or bank, it'll simply make all parties whole in the event that a fire occurs. That insurance is priced commensurate with the exposure - in this case, the value of the house and contents, and the likelihood of an incendiary event. In the same way, a portfolio manager can take positions in derivatives to protect against a loss of value. Again, that isn't the same as producing value. This insurance most often goes unexercised. But it is prudent and responsible if we are to provide a sustainable return. To wit: a portfolio manager is a hero if stock bets soar, but an idiot if they crater and he or she failed to have downside protection.
We also have Black Swan requirements. Suppose there is an expectation that a new trading platform will need to support a peak of 2m transactions daily. But suppose that nobody really knows what kind of volume we'll get. (Behold, the CME just launched cheese futures - with no contracts on the first day of trading.) So if we think there's an outside chance that our entering this market will coincide with a windfall of transactions, we may believe it's prudent to support up to 3x that volume. It's a long shot, but it's a calculated long shot that, if it comes to pass and we're prepared for it, provides an outsized yield. So we may do the equivalent of buying an out-of-the-money call option by creating scalability to support much higher volume. It's a thoughtful long shot. A portfolio manager is wise for making out-of-the-money bets when they pay off, but a chump if he or she keeps all positions aligned with conventional wisdom and a market opportunity is missed.
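Back-of-the-envelope, and with entirely made-up figures, the decision to overbuild looks like any other option purchase: a small, bounded premium paid against a low-probability, outsized payoff:

```python
# Minimal sketch, with made-up figures: extra scalability treated as an out-of-the-money option.

def headroom_expected_value(cost_to_overbuild, prob_of_surge, payoff_if_surge):
    """Expected value of building capacity well beyond the plan-of-record volume."""
    return prob_of_surge * payoff_if_surge - cost_to_overbuild

# A bounded premium (the extra engineering) against an outsized payoff if the surge materializes.
print(headroom_expected_value(cost_to_overbuild=150_000,
                              prob_of_surge=0.05,
                              payoff_if_surge=5_000_000))
```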
Neither of these examples fit the "value" definition. But they do fit well into a "portfolio" model.
Of course, just as determining the business value of each requirement isn't an exact science, neither is defining a projected investment return. Even if we ignore all the factors that affect whether returns materialize (largely what happens after the requirement is in production), the cost basis is imprecise. We have precise pricing on liquid financial instruments such as options. We don't have precise pricing in IT. The reason goes back to the basic definition of software development: the act of converting capital into intangible assets by way of human effort. That "human effort" will be highly variable, dependent on skills, experience, domain complexity, domain familiarity, technology, environment, etc. But this isn't the point. The point isn't to be precise in our measurement so we can strain every ounce of productivity from the effort. We've tried that in IT with industrialization, and it's failed miserably. The point is to provide better directional guidance that maximizes returns on the capital: to place very well informed bets and to protect the returns.
It's also worth pointing out that going in pursuit of Black Swans isn't license to pursue every boondoggle. Writing the all-singing, all-dancing login component in this iteration because "we may need the functionality someday" has to withstand the scrutiny of a reasonable probability of providing an outsized return relative to the cost of investment. Clearly, most technology boondoggles won't pass that test. And all our potential boondoggles are still competing for scarce investment capital. If the case is there, and it seems a prudent investment, it'll be justified. If anything, a portfolio approach will make clearer what it is people are willing - and not willing - to invest in.
Because it gives multi-dimensional treatment to the economic value of what we do, "portfolio" is a better conceptual fit for requirements than "value." It helps us better frame why we do things, and why we don't do things, in the terms that matter most. We'll still make bad investment decisions: portfolio managers make them all the time. We'll still do things that go unexercised. But we're more likely to recognize exposure (are you deploying things without protecting against downside risk?) and more likely to capitalize on outsized opportunities (what happens if transaction volume is off the charts from day one?). It's still up to us to make sound decisions, but a portfolio approach enables us to make better informed decisions that compensate for risk and capitalize on the things that aren't always clear to us today.
Friday, June 11, 2010
Short Run Robustness, Long Run Resiliency
There is no such thing as a "long run" in practice - what happens before the long run matters. The problem of using the notion of "long run", or what mathematicians call the "asymptotic" property (what happens when you extend something to infinity), is that it usually makes us blind to what happens before the long run. ...
[L]ife takes place in the pre-asymptote, not in some Platonic long run, and some properties that hold in the pre-asymptote (or the short run) can be markedly divergent from those that take place in the long run. So theory, even if it works, meets a short term reality that has more texture. Few understand that there is generally no such thing as a reachable long run except as a mathematical construct to solve equations - to assume a long run in a complex system you need to assume that nothing new will emerge.
- Nassim Nicholas Taleb, "Asperger and the Ontological Black Swan"
Mr. Taleb is commenting on economists and financial modelers, but he could just as easily be commenting on IT planning.
Assertions of long-term consistency and stability are baked into IT plans. For example, people are expected to remain on the payroll indefinitely; but even if they don't, they're largely interchangeable with new hires. Requirements will be relatively static, specifically and completely defined, and universally understood. System integration will be logical, straightforward and seamless. Everybody will be fully competent and sufficiently skilled to meet expectations of performance.
Asserting that things are fact doesn’t make them so.
Of course, we never make it to the long run in IT. People change roles or exit. Technology doesn't work together as seamlessly as we thought it would. Our host firm makes an acquisition that renders half of our goals irrelevant. Nobody knows how to interface with legacy systems. The historically benign financial instruments we trade see a sudden 10x increase in volume, with volatility off the charts. A key supplier goes out of business. Our chief rival just added a fantastic new feature that we don't have.
Theoretical plans will always meet a short-term reality that has more texture.
* * *
After the crisis of 2008, [Robert Merton] defended the risk taking caused by economists, giving the argument that "it was a Black Swan" simply because he did not see it coming, hence the theories were fine. He did not make the leap that, since we do not see them coming, we need to be robust to these events. Normally, these people exit the gene pool - academic tenure holds them a bit longer.
The long-term resiliency of a business is a function of how robustly it responds to and capitalizes on the ebbs and flows of a never-ending series of short runs. The long-term resiliency of an IT organization is no different.
This presents an obvious leadership trap, the “strategy as a sum of tactical decisions” problem. Moving with the ebb and flow makes it hard to see the wood for the trees. An organization can quickly devolve into a form of organized chaos, where it reacts without purpose instead of advancing an agenda. Reacting with purpose requires continuous reconciliation of actions with a strong set of goals and guiding principles.
But it also presents a bigger, and very personal, leadership challenge. We must avoid being hypnotized by the elaborate models we create to explain our (assumed) success. The more a person invests in models, plans and forecasts, the more they will believe they see artistic qualities in them. They will hold the models in higher esteem than the facts around them, insisting on reconciling the irrational behavior of the world to their (obviously) more rational model. This is hubris. Obstinacy in being theoretically right but factually wrong is a short path to a quick exit.
Theoretical results can't be monetized; only real results can.