Tuesday, December 31, 2013

The Corrosive Effects of Complexity

"Much complexity has been deliberately created, to encourage consumers to pay more than they need, or expected." John Kay, The Wrong Sort of Competition in Energy

Modern software assets are complex in both their technical composition and their means of creation.  They are built with multiple programming languages, are expected to conform to OO standards and SOA principles, make use of automated tests and a progressive build pipeline, require a diverse set of skills (UX, developers, QA analysts, etc.) to produce, are used on a multitude of clients (different browsers or native client apps on PC, tablet and smartphone form factors), and are deployed using automated configuration management languages to a combination of physical and virtual environments (cloud and captive data centers).  Software is more complex today than it was less than a generation ago.

Complexity compromises both buyers and sellers of technology services.

Buyers suffer an information gap.  Few managers and fewer buyers have first-hand experience in one, let alone all, of the technologies and techniques a team is - or should be - using.  This creates information asymmetry between buyer and seller, manager and executor.  The more diverse the technologies, the more pronounced the asymmetry.

Sellers suffer a skill gap. Because the demand for software is outstripping the supply of people who can produce it, experienced people are in short supply.  There are more people writing their first Android app than their second, more people making their first cloud-based deployment than their second.  There are more blog posts on Continuous Delivery than there are people who have practiced it.  There are more people filling the role of experience designer than there are people who have designed software that people actually use.  And while long-standing concepts like OO, SOA and CI might be better understood than they were just a few years ago, a survey of software assets in any company will quickly reveal that they remain weakly implemented.  In a lot of teams, people are learning what to do as much as, if not more than, doing what they already know.

"Such information asymmetry is inevitable. Phone companies have large departments working on pricing strategies. You have only a minute or two to review your bill."

Information asymmetry favours the seller.  Sellers can hide behind complexity more easily than buyers can wade through it: the seller can weave a narrative of technical and technology factors which the buyer will not understand.  The buyer must first disentangle what they've been told before they can begin to interpret what it actually means. This takes time and persistence that most software buyers are unwilling to invest. Even if the buyer suspects a problem with the seller, the buyer has no objective means of assessing the seller's competency to perform.  Where complex offerings are concerned, the buyer is at an inherent disadvantage to the seller.

"When you shop, you mostly rely on the reputation of the supplier to give you confidence that you are not being ripped off."

Technology buyers compensate by relying on proxies for competency, such as brand recognition and the professional references of a selling firm. But these are controlled narratives: brands are aspirational visions created through advertising (although "Go ahead, be a Tiger" didn't end well...), while references are compromised by selection bias.  A buyer may also defer judgment to a third party, hiring or contracting an expert to assess the seller on their behalf. In each case, the buyer is entering into a one-way trust relationship with somebody else to fulfill their duty of competency.

A buyer inexpert in technology can best compensate by staying focused on outcomes rather than means.

Match cash flows to the seller to the functionality of the asset they deliver. You're buying an asset.  Don't pay for promises, frameworks and infrastructure for months on end; pay for what you can see, use and verify.

Look under the hood.  There are plenty of tools available to assess the technical quality of just about any codebase that will provide easy-to-interpret analyses.  A high degree of copy-and-paste is bad.  So is a high complexity score.  So are methods with a lot of lines in them.
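The checks above can be approximated even without commercial tooling. Below is a minimal sketch over a Python module using only the standard library's `ast` module; the complexity proxy (branch points plus one) and the thresholds are illustrative assumptions, not industry standards.

```python
import ast

# Nodes counted as branch points for a naive cyclomatic-complexity proxy.
# This is an illustrative heuristic, not a standard metric definition.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.ExceptHandler, ast.BoolOp)

def function_metrics(source: str) -> dict:
    """Return {function_name: (line_count, complexity_proxy)} for a module."""
    tree = ast.parse(source)
    metrics = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            lines = node.end_lineno - node.lineno + 1
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            metrics[node.name] = (lines, 1 + branches)
    return metrics

def flag_risky(metrics: dict, max_lines: int = 30, max_complexity: int = 10) -> list:
    """Names of functions exceeding either (illustrative) threshold."""
    return [name for name, (lines, complexity) in metrics.items()
            if lines > max_lines or complexity > max_complexity]
```

A buyer doesn't need to read the code itself: a report of long, highly branched methods (or heavy copy-and-paste, which dedicated tools detect) is an easy-to-interpret warning sign.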

Spend time with the people developing your software.  Even if you don't understand the terminology, what people say and how they say it will give you a good indication as to whether you've got a competency problem or not.  Competency is not subject-matter-expertise: technology is a business of open-ended learning, not closed-ended knowledge.  But the learning environment has to be progressive, not blind guesswork.

Accept only black-and-white answers to questions.  Most things in software really are black and white.  Software works or it does not.  The build is triggered with every commit or it is not.  Non-definitive answers suggest obfuscation. "I don't know" is a perfectly valid answer, and preferable to a vague or confusing response.

An inexpert buyer is quickly overwhelmed by the seller's complexity. A buyer who stays focused on business outcomes won't dispel that complexity, but will tilt the information balance back in their own favor.

Saturday, November 30, 2013

Governing IT Investments

In the previous blog, we looked at common misconceptions of IT governance. We also looked at how corporate governance works to better understand what governance is and is not. In this blog, we'll look at how we can implement more comprehensive governance in tech projects.

For corporate tech investments, the topmost governing body is the firm's investment committee. In small companies, the investment committee and the board of directors are one and the same. In large companies, the investment committee acts on behalf of the board of directors to review and approve the allocation of capital to specific projects, usually on the basis of a business case. They also regularly review the investment portfolio and make adjustments to it as the firm's needs change.

The investment committee is composed of senior executives of the firm. Although executives are managers hired by investors to run the business, in this capacity they are making a specific allocation of capital that is generally of too low a level for board consideration. This is not a confusion of responsibilities. The board will have previously approved capital expenditure targets for the year as well as the strategy that makes the investment relevant, and any investment made by the investment committee has to stand up to board scrutiny (e.g., the yield should exceed the firm's cost of capital, or it should substantially remove some business operating risk). The investment decision is left to a capital committee composed of the firm's executives - who always have a fiduciary responsibility to shareholders - for the sake of expediency.
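The hurdle-rate test mentioned above (the yield should exceed the firm's cost of capital) can be expressed as a simple discounted cash flow check. This is a minimal sketch; the function names and figures are illustrative, not a prescribed evaluation method.

```python
def npv(rate: float, cash_flows: list) -> float:
    """Net present value; cash_flows[0] is the up-front outlay at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def clears_hurdle(cost_of_capital: float, cash_flows: list) -> bool:
    """True when the investment's yield exceeds the cost of capital,
    i.e. its NPV discounted at that rate is positive."""
    return npv(cost_of_capital, cash_flows) > 0

# Illustrative: spend 100 up front to get 60 back in each of two years.
flows = [-100.0, 60.0, 60.0]
# Viable at a 10% cost of capital; not viable at 15%.
```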

The individual shareholders of a company have multiple investments and have limited amounts of time, so they rely on a board of directors to act on their behalf. In the same way, the investment committee members are the shareholders of an IT investment. They invest the firm's capital in a large and diverse portfolio above and beyond just IT investments. They will not have time to hover over each investment they make. So, just as investors form a board to govern a corporation, the investment committee forms a board to govern an individual investment.

In technology projects, we usually associate a "steering committee" with the body that has governance responsibilities for a project. As mentioned in the prior blog, steering committees are too often staffed by senior delivery representatives. This is a mistake. People who govern delivery do so on behalf of investors, not delivery. They must be able to function independently of delivery.

We'll call our governing body a "project board" so as not to confuse it with a traditional "steering committee". A project board that represents investors is composed of:

  • a representative of the corporate investment committee (e.g., somebody from the office of the CFO)
  • a representative from the business organization that will be the principal consumer of the investment (e.g., somebody from the COO's organization)
  • a senior representative of the IT organization (e.g., somebody from the office of the Chief Information Officer or Chief Digital Officer)
  • at least one independent director with experience delivering and implementing similar technology investments.

The program manager responsible for delivery and implementation of the investment is the executive, and interacts with the project board in the same way that the CEO interacts with the board of directors.

Again, notably absent from this board are the delivery representatives we normally associate with a steering committee: technical architects, vendors, infrastructure, and so forth. They may be invited to attend, but because they represent the sell side of the investment and not the buy side, they have no authority within the board itself. Investing them with board authority invites regulatory capture, which undermines independent governance.

The project board has an obligation to make sure that an investment remains viable. It does this primarily by scrutinizing project performance data, the assets under development and the people responsible for delivery. In addition, the board is given some leeway by the investment committee to change the definition of the investment itself.

Let's first look at how the board scrutinizes performance. The board meets regularly and frequently, concentrating on two fundamental questions: will the investment provide value for money, and is it being produced in accordance with all of our expectations? The program executive provides data about the performance of the project and the state of the assets being acquired and developed. The board uses this data, and information about the project its members acquire themselves, to answer these two governance questions. It also reconciles the state of the investment with the justification that was made for it - that is, the underlying business case - to assess whether it is still viable or not. The project board does this every time it meets.

The project board is also granted limited authority to make changes to the definition of the investment itself. It does not need to seek investment committee approval for small changes in the asset or minor increases in the cash required to acquire it if they do not alter the economics of the investment. This enables the project board to negotiate with the delivery executive to exclude development of a relatively minor portion of the business case if the costs are too high, or approve hiring specialists to help with specific technical challenges. The threshold of the project board's authority is that the sum of changes it approves must not invalidate the business case that justified the investment.

Scrutinizing performance and tweaking the parameters of the investment are how the board fulfills the three governance obligations presented in the previous blog. It fulfills its duty of verification by challenging the data the executive provides it and asking for additional data when necessary. It also has the obligation and the means to seek its own data, by e.g., spending time with the delivery team or commissioning an independent team to audit the state of the assets. It fulfills its duty of setting expectations by changing the parameters of the investment within boundaries set by the investment committee (e.g., allowing changes in scope that don't obliterate the investment case). It fulfills its duty of hiring and empowering people by securing specialists or experts should the investment run into trouble, and changing delivery leadership if necessary.

If the board concludes that an investment is on a trajectory where it cannot satisfy its business case, the board goes to the investment committee with a recommended course of action. For example, it may recommend increasing the size of the investment, substantially redefining the investment, or suspending investment outright. The board must then wait for the investment committee decision. The presence of a member of the investment committee on the project board reduces the surprise factor when this needs to happen.

This model of governance is applicable no matter how the investment is being delivered. Teams that practice Agile project management, continuous integration and static code analyses lend themselves particularly well to this because of the frequency and precision of the data they provide about the project and the assets being developed. But any team spending corporate capital should be held to a high standard of transparency. Delivery teams that are more opaque require more intense scrutiny by their board. And, while this clearly fits well with traditional corporate capital investment, it applies to Minimum Viable Product investing as well. MVP investments are a feedback-fueled voyage of discovery to determine whether there is a market for an idea and how to best satisfy it. Without independent governance, the investor is at risk of wantonly vaporizing cash on a quixotic pursuit to find anything that somebody might find appealing.

This is the structure and composition of good governance of an IT investment. Good structure means we have the means to perform good governance. But structure alone does not guarantee good governance. We need to have people who are familiar with making large IT investments, how those investments will be consumed by the business, what the characteristics of good IT assets are, and above all know how to fulfill their duty of curiosity as members of a project board. Good structure will make governance less ineffective, but it's only truly effective with the right people in governance roles.

Thursday, October 31, 2013

Can we stop misusing the word "Governance"?

The word "governance" is misused in IT, particularly in software development.

There are two popular misconceptions. One is that it consists of a steering committee of senior executives with oversight responsibility for delivery; its responsibilities are largely super-management tasks. The other is that it is primarily concerned with compliance with protocols, procedures or regulations, such as ITIL or Sarbanes-Oxley or even coding and architectural standards.

Governance is neither of these things.

The first interpretation leads us to create steering committees staffed with senior managers and vendor reps. This is an in-bred political body composed of the most senior of the people expected to make good on delivery, not an independent body adjudicating (and correcting) the performance of people in delivery. By extension, this makes it a form of self-regulation, and defines governance as nothing more than a fancy word for management. This body doesn't govern. At best, it expedites damage-control negotiations among all participants when things go wrong.

The second interpretation relegates governance to an overhead role that polices the organization, searching for violations of rules and policies. This does little to advance solution development, but it does a lot to make an organization afraid of its own shadow, hesitant to take action lest it violate unfamiliar rules or guidelines. Governance is meant to keep us honest, but it isn't meant to keep us in check.

Well, what does it mean to govern?

Let's look at corporate governance. Corporations offer the opportunity for people to take an ownership stake in a business that they think will be a success and offer them financial reward. Such investors are called equity holders or stockholders. In most large corporations, stockholders do not run the business day-to-day. Of course, there are exceptions to this, such as founder-managers who hold the majority of the voting equity (Facebook). But in most corporations, certainly in most large corporations, owners hire managers to run the business.

The interests of ownership and the interest of management are not necessarily aligned. Owners need to know that the management they hired are acting as responsible stewards of their money, are competent at making decisions, and are running the business in accordance with their expectations. While few individual stockholders will have time to do these things, all stockholders collectively have this need. So, owners form a board of directors, who act on all of their behalf. The board is a form of representative government of the owners of the business.

Being a member of a corporate board doesn't require anything more than the ability to garner enough votes from the people who own the business. An activist investor can buy a large bloc of shares and agitate to get both himself and a slate of his choosing nominated to the board (Bill Ackman at JC Penney). People are often nominated to board membership for reasons of vanity (John Paulson has Alan Greenspan on his advisory board) or political connections (Robert Rubin at Citibank).

Competent board participation requires more than just being nominated and showing up. Board members should know something about the industry and the business, bring ideas from outside that industry, and have experience at running a business themselves. (As the financial crisis hit in 2008, it became glaringly obvious that few bank directors had any detailed understanding of either banking or risk.) Good boards also have independent or non-executive directors, people who have no direct involvement with the company as an employee or stockholder. Non-executive directors are brought on principally to advise and challenge on questions of risk, people, strategy and performance.

A board of directors has three obligations to its shareholders: to set expectations, to hire managers to fulfill those expectations, and to verify what management says is going on in the business.

The first - setting expectations - is to charter the business and approve the overall strategy for it. In practice, this means identifying what businesses the company is in or is not in; whether it wants to grow organically or through acquisition (or both), or put itself up for sale. The CEO may come up with what she thinks is a brilliant acquisition, but it is up to the board to approve it. By the same token, a board that wants to grow through acquisition will part ways with a CEO who brings it no deals to consider. The board may choose to diversify or divest, invest or squeeze costs, aggressively grow or minimize revenue erosion, or any number of other strategies. The CEO, CFO, COO and other executives may propose a strategy and figure out how to execute on it, but it is the board who must approve it.

The second - hiring and empowering managers - is the responsibility to recruit the right people to execute the strategy of the business. The board is responsible for hiring key executives - CEO, CFO, President - and possibly other executive roles like Chief Investment Officer, Chief Technology Officer, or Chief Operating Officer, depending on the nature of the firm. The board entrusts those people to build and shape the organization needed to satisfy the expectations set by the board. They serve at the board's discretion: they must perform and demonstrate competency. The board also approves the compensation of those executives, providing incentives to executives to stay and to reward them for the performance of the firm under their leadership. These divergent interests and obligations are why it is considered poor governance to have the same person be both Chairman of the Board and Chief Executive Officer.

The third - verification - is the duty of the board to challenge what they are being told by the people they have hired. Are management's reports accurate and faithful representations of what's going on in the business? We tend to think of business results as hard numbers. But numbers are easily manipulated. Informal metrics such as weighted sales pipelines are easily fluffed: 100 opportunities of $100,000 each at a 10% close probability yield a sales pipeline of $1,000,000 - but any opportunity without a signature on paper is, from a revenue and cash flow perspective, 0% closed. Formal (regulated) metrics such as profitability are accounting phenomena; it's easy to flatter the P&L with creative accounting. There is an abundance of examples of management misrepresenting results - and of boards that rubber-stamp what their hired management feeds them (e.g., Conrad Black's board at Hollinger).
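The pipeline arithmetic in that example is easy to make concrete. A minimal sketch, with illustrative function names, showing how a weighted pipeline diverges from revenue a board can actually rely on:

```python
def weighted_pipeline(opportunities) -> float:
    """Sum of deal value times estimated close probability."""
    return sum(value * probability for value, probability in opportunities)

def booked_revenue(opportunities) -> float:
    """Only fully closed deals count as revenue."""
    return sum(value for value, probability in opportunities if probability >= 1.0)

# 100 opportunities of $100,000 each at a 10% close probability:
pipeline = [(100_000, 0.10)] * 100
# weighted_pipeline(pipeline) reports a $1,000,000 pipeline,
# while booked_revenue(pipeline) reports $0.
```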

Compliance questions are relevant to fulfilling the duty of verification. Management that plays fast and loose with regulatory obligations creates risks that the board needs to be aware of, correct, and prevent from happening again (whether a rogue trader at UBS or violation of the Formula 1 sporting rules by employees of McLaren). But compliance is a small part of what Nell Minow calls a "duty of curiosity" that each board member has. The board - acting as a representative of investors - cannot take reported results at face value. It must investigate those results. And, the board must investigate alternative interpretations of results that management may not fully appreciate: an embedded growth business whose value is depressed by a slow-growth parent, a loss leader that brings in customers to the big revenue generator, a minor initiative that provides a halo to a stodgy business.

The confusion about governance in IT is a result of too narrow a focus. People in technology tend to be operationally as opposed to financially focused, so most cannot imagine a board consisting of people other than those with super-responsibilities for delivery, such as executives from vendor partners. Tech people also tend to be more interested in the technology and the act of creating it, rather than the business and its non-functional responsibilities. Regulations tend to take on a near-mystical quality with technology people, and are subsequently given an outsized importance in our understanding of governance.

Good corporate governance requires that we have an independent body that sets expectations, hires and empowers a management team, and verifies that they are delivering results in accordance with our expectations. Good IT governance requires the same. We'll look at how we implement this in IT in part 2.

Monday, September 30, 2013

The Management Revolution that Never Happened

In the 1980s, it seemed we were on the cusp of a revolution in management. American business exited the 1970s in terrible shape. Bureaucracy was discredited. Technocracy was, too: "best practice" was derived from people performing narrowly defined tasks in rigid processes that yielded poor quality products at a high cost. There was a call for more employee participation, engagement, and trust. Tom Peters was strutting the stage telling us about excellence and heroizing the empowered employee (you may remember the yarn about the FedEx employee who called in a helicopter to get packages out of a snowbound location). Behavioural stuff was in the ascendancy. We were about to enter a new era in management.

Until we weren't.

Behaviourally centric techniques - including Agile - are still fringe movements in management, not mainstream practice. The two Freds - Frederick the Great (his organization model for the Prussian military defines most modern organizations) & Frederick Taylor (scientific management) - still rule the management roost. Frederick the Great organized his military like a machine: a large line organization of specialists following standardized procedures using specialized tools, with a small staff organization of process & technical experts to make the line people more productive. Frederick Taylor defined and measured performance down to the task level. We see this today, even in tech firms: large, silo'd teams of specialists, sourced to lowest-common-denominator position specs, with their work process optimized by Six Sigma black belts. Large organizations are no different than they were 30, 50, 75 years ago.

What happened?

First, it's worth looking at what didn't happen.

1. The shift from manufacturing jobs to service jobs was supposed to give rise to networks of independent knowledge workers collaborating to achieve business outcomes. It's true that many modern service jobs require more intellectual activity than manufacturing assembly line jobs of the past. However, just like those manufacturing jobs, modern service jobs are still fragmented and specialized. Think about policy renewal operations at insurance companies, or specialized software developers working in silos: they are information workers, but they are on an information assembly line, doing piecework and passing it onto the next person.

"[Big companies] create all these systems and processes - and then end up with a very small percentage of people who are supposed to solve complex problems, while the other 98% of people just execute." Wall Street Journal, 24 December 2007.

The modern industrial service economy has a few knowledge workers, and lots and lots of drones. It's no different from the manufacturing economy of yore.

2. Microcomputing was expected to change information processing patterns of businesses, enabling better analysis and decision support at lower levels of the organization. Ironically, it had the opposite effect. Microcomputers improved the efficiency of data collection and made it easy to consolidate operational data. This didn't erode centralized decision making; it brought it to a new level.

Second, there are things that have reinforced the command-and-control style of management.

1. "Business as a machine" - a set of moving parts working in coordination to consistently produce output - remains the dominant organizational metaphor. If we're going to have organizations of networked information workers, we have to embrace a different metaphor: the organization as a brain. Machines orchestrate a handful of moving parts that interact with each other in predefined, repetitive patterns. Brain cells connect via trillions of synapses in adaptable and complex ways. The "networked organization" functions because its members develop complex communication patterns. Unfortunately, it is much harder to explain how things get done in a network organization than it is in a machine organization: general comprehension of neuroscience hasn't improved much in the past 25 years, whereas it is easy for people to understand the interplay of specialized components in a simple machine.

2. Service businesses grew at scale, and the reaction to scale is hierarchy, process, and command & control. As I've written previously, the business of software development hasn't been immune to these pressures.

3. The appetite for operational data has increased significantly. A 2007 column in the WSJ pointed out that management by objective and total quality management have been replaced by a new trend: management by data. Previous management techniques are derided by the data proponents as "faith, fear, superstition [or] mindless imitation".

4. Service businesses (e.g., business process outsourcing) moved service jobs to emerging market countries where, owing to economic and perhaps even cultural factors, command and control was easily applied and willingly accepted.

5. In the last 12 years, debt has been cheap - cheaper than equity. In 1990, 2 year and 10 year Treasurys were paying 8%. In 2002, the 2 year paid 3.5% and the 10 year paid 5%. Today (September 2013), they're paying < 0.5% and 2.64%, respectively. When debt is cheap, CFOs swap equity for debt. When we issue debt, we are committing future cash flows to interest payments to our bondholders. And unlike household debts, most corporate debt is rolled-over. To make the debt affordable we need to keep the interest rates low, which we influence by having a high credit rating. Stable cash flows and high credit ratings come from predictable business operations. As more corporate funding comes from debt instead of equity, it puts the squeeze on operations to be predictable. With predictability comes pressure for control. Those new management practices that emerged to empower individuals and teams advertise themselves as providing better "flexibility", not "control". They are anathema to businesses with financing that demands precise control.

6. In the past decade, corporate ownership has been concentrated in fewer and fewer hands. This has happened through equity buybacks (in 2005-8 this was usually funded with debt, since 2009 it's just as likely to be funded with excess cash flow) and dual-class share structures (Groupon, Facebook, News Corp, etc.)

7. The external concentration of ownership coincided with internal concentration of decision making. Speaking from experience, around 2006 the ceiling on discretionary spending decisions dropped precipitously in many companies. In most large companies, a v-level manager used to be able to make capital decisions up to $1m. Empirically, I've seen that drop to $100k in most large firms.

8. The notion of "best practice" has been in vogue again for at least a decade.

9. Recessions in 2001 (when businesses reined in unrestrained tech spending) and 2008 (when businesses reined in all spending) tightened belts and increased operational scrutiny.

10. I also suspect that business education has shifted toward hard sciences like finance and away from soft sciences like management. The study of org behaviours was core b-school curriculum in the 1980s. It appears this has moved into a human resources class, which emphasizes org structures. This treats organizational dynamics as a technical problem, not a behavioural one. I haven't done much formal research in this area, but if it's true, it means we've created a generation of business executors at the cost of losing a generation of business managers.

What does all this mean?

The Freds will continue to dominate management practice for the foreseeable future. Corporate profitability and cash flows have been strong, especially since the 2008 financial crisis. That, twinned with ownership and decision-making authority concentrated in fewer hands, means that there is no incentive to change and, more importantly, there is actually a disincentive to do so. Among middle managers, the machine metaphor offers the path of least effort and least resistance. It also means that when large companies adopt alternative approaches to management & organization at scale - for example, when large corporates decide to "go Agile" - the fundamental practices will be co-opted and subordinated to the prevailing command-and-control systems.

This isn't to say that alternative approaches to management are dead, or that they have no future. It is to say that in the absence of serious upheaval - the destabilization / disruption of established organizations, or the formation of countervailing power to the trends above - the alternatives to the Freds will thrive only on the margins (in pockets within organizations) and in emerging firms (e.g., equity-funded tech start-ups).

This leads to a more positive way of looking at it: it isn't that the day of post-Fred management & organization has come and gone, it's that it is yet to come. The increasing disruption caused by technology in everything from retail to education to NGOs will defy command and control management.

But that still begs the question: after the disruption, when the surviving disruptors mature and grow, will they eventually return to the Freds?

Friday, August 30, 2013

Conflict, Part II

The founder's optimism far exceeded his customer's interest. His aggressive expansion vapourized the cash he raised to start up the business. His options were to sell or liquidate.

The firm that bought them was a multi-line company of online advertising & e-commerce driven businesses. The new owners set about trying to make their new acquisition profitable. They looked for revenue synergies by cross-selling to existing accounts. They also reduced costs by closing the acquired company's office, integrating people into corporate reporting structures, consolidating overhead functions, and standardizing procedures with those of the parent company. By the time they were done, they had laid off more than half the staff. They also released the old management team, installing a salesperson from the acquired company as the Division Manager. He'd never run a business before, but the parent company expected that operations would sort themselves out, leaving the DM to develop a sales and client services team and concentrate on revenue.

The changes dragged on for months, leaving people confused and demoralized. Knowledge was fragmented: no one person knew the business from end to end. Business knowledge existed in pockets that occasionally contradicted one another. There were business-partner-specific code and processes of which nobody had detailed knowledge. The product administrator defaulted into the role of product manager. The few salespeople who survived the purge burrowed into long-standing customers as account managers.

To boost morale, and to discourage remaining employees from leaving, the parent company showered them with praise, promotions, pay, bonuses and benefits. Comp (including salary and stay bonuses) shot up by as much as 50%. Management also gave them the option to work from home, and set no expectation for in-office hours.

* * *

It didn't take long for the remaining staff to realize that their knowledge of the business gave them power. There was nobody with enough detailed business knowledge to challenge the new product manager: anything he said was treated as gospel truth. The e-commerce software was a mess: only somebody familiar with the code could make sense of it. The processes behind the business existed in the heads of a handful of people who worked independently of each other; processes from corporate were simply ignored.

The experience of having had their employer acquired, their co-workers canned, and foreign processes imposed on them, only to emerge as having "won the lottery" of compensation, benefits, title, independence, job security and power made the few surviving employees more than a little neurotic. Each jealously guarded the contextual knowledge they had. None would step outside the strict definition of what they understood their job requirements to be, even in situations where it was clear they were the only person with the business knowledge, client contacts or code familiarity to solve a problem. Knowledge sharing exercises were canceled due to last minute "customer emergencies". Account managers controlled and outright blocked access to customers. The DM made high drama when one of the legacy employees "rescued" a situation created by a corporate transplant, publicly hailing their efforts and creativity with code and client, while privately suggesting that the parent company people simply weren't competent.

People brought in from the parent company to work with this division took one of two paths. Senior employees quietly opted out of any significant involvement: it was a lot easier to work with those divisions of the company they were familiar with than to fight with entrenched legacy employees. Junior employees had no such luxury and soldiered on, trying to make the best of it. Although they learned little bits of the business every day, their long, steep learning curve on a blind journey of discovery made them prone to errors. Unfortunately, by continuing to give it their best shot, the juniors were complicit in reinforcing the narrative spun by the legacy staff; the senior staff, having self-selected out, offered them no reinforcement.

* * *

Two years after the acquisition, sales were a bit better and costs a bit lower, but they were nowhere near their profitability and cash flow goals. The company president couldn't understand why the acquired business still wasn't integrated with the rest of the company. He had faith in corporate staff, but despite their reassurances they still seemed a long way off from integrating the business. The legacy people intimated that the new people just weren't up to scratch, while the DM told him the legacy people were the only reason that business was still functioning.

But everybody reassured the president that they were committed to the company's goals.

* * *

Conflict arises when interests collide. The intensity of the conflict is a function of the degree to which people are committed to their objectives as opposed to a common or group objective.

This chart, adapted by Gareth Morgan from the work of K.W. Thomas, shows the intersection of the degree to which people pursue their goals versus group goals.

The Micro Level: Individual's Goals versus Organizational Goals

This is easy to understand at a micro level. For example, suppose a lead developer has had a vacation scheduled for a long time, but a business critical project this developer is working on is in trouble and the project manager believes it is the wrong time for the dev lead to be out of the office on vacation. How the actors deal with this conflict falls into one of five categories. They may avoid the confrontation, deferring any decision, waiting to see if the situation changes. For example, the project situation may improve to a point that the manager may not have to ask the lead to postpone vacation. They may become competitive with each other: the manager might call on an executive to reinforce how critical the project is to the dev lead, while the employee might make subtle threats to transfer or quit if his vacation is cancelled. They may completely accommodate the project goal, with the employee volunteering to defer vacation for the greater good. They may try to reach a compromise where the employee shortens the duration of the vacation. Or they may try to collaborate on a solution, where the employee takes the vacation but agrees to schedule a time each day for the team to reach out, and to participate in some team meetings.
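The five categories sit on the two axes of Thomas's chart: how assertively a party pursues its own goals, and how cooperatively it pursues the group's. A toy sketch of that matrix follows; the function name, score ranges and thresholds are illustrative inventions, not taken from the original chart.

```python
# Sketch of the Thomas conflict-mode matrix: classify a conflict-handling
# style from two 0.0-1.0 scores. Thresholds are illustrative only.

def conflict_mode(assertiveness: float, cooperativeness: float) -> str:
    """Map (own-goal assertiveness, group-goal cooperativeness) to a mode."""
    # Compromising sits in the middle of both axes.
    if 0.35 <= assertiveness <= 0.65 and 0.35 <= cooperativeness <= 0.65:
        return "compromising"
    hi_a, hi_c = assertiveness >= 0.5, cooperativeness >= 0.5
    if hi_a and hi_c:
        return "collaborating"   # pursue own AND group goals
    if hi_a:
        return "competing"       # own goals at the group's expense
    if hi_c:
        return "accommodating"   # subordinate own goals to the group's
    return "avoiding"            # disengage from both

# The vacation example: the dev lead who takes the trip but agrees to
# daily check-ins is being both assertive and cooperative.
print(conflict_mode(0.9, 0.8))  # collaborating
print(conflict_mode(0.9, 0.1))  # competing
print(conflict_mode(0.5, 0.5))  # compromising
```

The point of the sketch is simply that the modes are regions, not a ranking: "compromising" is a middling position on both axes, not a midpoint between winning and losing.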

Obviously, situations influence behaviours: a person might be passive about working late nights but very insistent about taking a vacation. And in any conflict, the actors may take multiple tacks through resolution: each party may start with avoidance to see how the situation develops, while one gradually shifts to a competitive posture and the other calls for accommodation.

The Macro Level: Collections of Individuals' Goals versus Organizational Goals

At a macro level, this model helps us recognize general behaviour patterns within an organization. Consider our case study. For a group of employees, the thrill of a start-up quickly changed to the fear of a retrenchment. That retrenchment created feelings of powerlessness while rupturing any trust they might have extended to the new owners. In the aftermath, however, power shifted from the acquirer to the acquired: the buyer was beholden to the contextual knowledge held by the remaining employees, and telegraphed that dependency by showering those who remained with compensation and titles. The acquirer has created a toxic sociological cocktail within the acquired business. Because of the retrenchment, each person will be looking out for themselves. Because of the cutbacks, business knowledge is fragmented and each person is deeply entrenched in a fragment. Because of the lavish awarding of titles and pay, people have personal interests to defend (e.g., they would not command these salaries elsewhere, nor would they hold these titles in similar firms). Because of the asymmetry in contextual knowledge - none held by the acquirer, all held by the acquired - the acquirer is at the mercy of the acquired, meaning the acquired have the power to place their own agendas first.

This creates a business stalemate. No one person knows the business from end-to-end. Each person defends their power position by inflating the value of their contextual knowledge and resisting attempts to share, dilute or obsolete it. Any new person brought into this business will subsequently be thwarted from learning it because of the extremes to which people will guard the information they possess. Management will be reduced to one of two states per the model: avoidance, or compromise. In our case study, the senior corporate employees choosing to stay away from this business are avoiding conflict. To them, it's more hassle than it's worth, the hassle being the constant need for compromise: whenever a corporate goal needs to be met, the parent company management has to negotiate for participation and responsibility from each person, and micro-manage each to verify that personal preferences and priorities don't impair or impede a group goal. Achievable corporate goals will be very, very modest compared with the effort expended.

Managing Through Stalemate

We can understand managing through this stalemate in light of the unitarist and pluralist philosophies presented in Part I of this series. As mentioned above, a unitarist philosophy in the presence of so many entrenched, individual interests will at best result in compromise. Group goals compete with individual goals; it is only through formal authority that the unitarist can press an agenda forward. But in our case study, that authority is limited by management's captivity at the hands of its own people. The result is compromise and, in our case study, chronic underperformance of the business in question.

But a pluralist will be similarly frustrated. The pluralist will expect people will want to collaborate to accommodate the greatest number of individual goals and needs while pursuing the organization's goals. But in our case study, people are already empowered and power shifts among actors depending on the situation (leaving the manager to constantly chase rather than lead), while individual interests are competing in such a way that stymies facilitation of a group outcome. Collaboration requires individual agendas to be out in the open and accepted by all members. When they are not, there is a fundamental lack of trust. In this case, there are many hidden agendas: people protecting titles and compensation, for example. Collaboration will not come naturally and, as it is a threat to the status quo, may not be coachable in the situation. All the process and facilitation in the world will not overcome institutionalized neuroses. Facilitation will devolve into interrogation to obtain facts and, at best, reach a compromise among competing interests.

The conflict model above helps us to recognize this as a behavioural problem, not a process or organizational structure problem. A patient manager, whether unitarist or pluralist, can break the stalemate by rebuilding the culture. She can do this by changing incentives so that legacy people aren't rewarded for preserving a perverted status quo, but are instead rewarded for group goals. This removes barriers to change. She can also bring in new people to reconstruct the business knowledge by engaging with clients and developing new software and services, gradually strangling and replacing legacy assets. This both captures and deprecates contextual familiarity, simultaneously rebuilding both the business and the culture from the outside-in. If successfully executed, these activities will break the stalemate. However, the prevailing organizational culture after the change - accommodating, collaborating, competing, avoiding or compromising - will still reflect the fundamental philosophy of the leader.
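The "gradually strangling and replacing legacy assets" tactic can be pictured as a routing facade: each capability is served by either the legacy implementation or its rebuilt replacement, and capabilities are cut over one at a time as the new code (and the new people) capture the contextual knowledge. The capability name and pricing functions below are invented for illustration.

```python
# Minimal sketch of a strangler-style migration facade. Capability names
# and implementations are hypothetical.

def legacy_quote(order):
    """Opaque legacy pricing logic, known only to entrenched staff."""
    return order["qty"] * 10

def new_quote(order):
    """Rebuilt, documented equivalent produced by the new team."""
    return order["qty"] * 10

class StranglerFacade:
    def __init__(self):
        # Every capability starts on the legacy path.
        self.routes = {"quote": legacy_quote}

    def cut_over(self, capability, replacement):
        """Redirect one capability to its replacement."""
        self.routes[capability] = replacement

    def handle(self, capability, request):
        return self.routes[capability](request)

facade = StranglerFacade()
order = {"qty": 3}
assert facade.handle("quote", order) == 30   # served by legacy code
facade.cut_over("quote", new_quote)
assert facade.handle("quote", order) == 30   # now served by the rewrite
```

The design point matches the management point: callers never depend on the legacy asset directly, so each fragment of hoarded knowledge can be captured and deprecated without a big-bang replacement.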

A Portrait of Disconnected Leadership

It takes a long time to change organizational behaviours, and longer still to see tangible results. It's difficult, if not impossible, to measure behaviour change. Committing to this kind of change requires executive understanding of businesses as social systems and executive commitment to a future state defined by behaviour patterns. In our case study, if the executives in the buying firm don't recognize the sociological toxicity they created through their business decisions, they're not likely to understand the need to reform the business completely. Nor will they likely be willing to make the investment.

This exposes a third way we can apply this model: how do different actors perceive the organization? In our case study, we have a leadership disconnect: the executive is unaware of the presence of individual agendas which are crowding out the group goal. He shows characteristics of a unitarist leader, unaware of how management fostered this situation, denying the relevancy or existence of individual agendas relative to group goals, and thinking about his under-performing division as an execution problem. This means he assumes the organization is in the "accommodating" quadrant of the matrix. Yet the presence of so many individual agendas being vigorously pursued puts the business squarely in the "competing" quadrant. The executive interprets this as an execution problem, when it is a sociological one.

Which brings us back to the subject of conflict. If conflict is the medium through which an organization maintains relevancy and currency, the prevailing organizational pathology is an indicator of its survival capability. A willingness to acknowledge and integrate multiple agendas, needs and goals makes for a turbulent organization, but one that constantly changes, innovates and evolves. Denial or suppression creates extremes: an "every person for themselves" competitive free-for-all that slowly erodes an organization with internal strife, or an organization that experiences a stable trajectory - possibly for a long period - that decays suddenly owing to its inability to impose its will forever. Organizations that avoid conflict or are constantly compromising with themselves are organizations adrift.

Not all conflict is healthy, but conflict is inevitable. The leader who chooses to embrace and harness conflict will have a dynamic, vibrant organization. The leader who chooses to suppress, ignore or avoid conflict will run the organization into the ground.

Choose wisely.

Tuesday, July 30, 2013

Conflict, Part I

Product management was never a formal responsibility; it just sort of happened.

Early on, it was driven by what the technical wizards came up with. But the magic left the development team years ago: it had been gutted by several rounds of staff cuts that took the garrulous personalities and innovative thinkers. It took the wind from development's sails: those who were still on the payroll were just happy to have kept their jobs. They adopted a "just tell us what you want us to do" attitude about the products.

At that point, product decisions were made by sales. The sales team was larger than it had ever been, but it had struggled. Each quarter they set lofty sales targets, and each quarter they fell short. This made them jittery about revenue: they spent more time in internal meetings discussing how not to lose revenue than they did meeting with clients to find ways to generate more. In addition, product training hadn't been refreshed in years, and they never formalized an onboarding process for new salespeople, so their grip on the products they were selling and the clients to whom they were selling was not that strong. The net effect was a chronic lack of confidence: the sales team automatically negotiated from a position of weakness and never met a dollar of revenue they didn't like. The impact on product was to pile on feature after feature and request after request.

Company leadership eventually formed a product management team, staffing it with existing employees. Although they had product responsibility, it was clearly going to take some time for this group to learn enough about the customers, users and the products to be effective in a product management role.

This created a product leadership vacuum that was partially filled by the graphic design team.

Like most software companies, user experience was originally driven by developers: screen layout, workflow, and even general appearance were byproducts of decisions made during development. Uncharitable customer feedback on usability of the software led to the creation of a small in-house graphic design team. They met with instant success: the polish they put on the software won praise from customers.

At some point, the graphic design team began to style themselves as a User Experience (UX) group. This was a logical evolution that would provide career development opportunities. It was also an essential competency that was conspicuous by its absence in a software company. Executive leadership was thrilled by what they saw as the design team stepping up to fill an essential need. They gave the team encouragement, and a slightly bigger budget.

As practiced, UX was largely an extension of graphic design work, creating more comprehensive interface design at the panel level. In one sense, this was pragmatic: because the company had not historically invested in skill acquisition and capability development, it was reasonable to assume UX skills would be acquired incrementally and the capability developed organically. In another, it was naive: the UX team weren't familiar with things like personas, value stream mapping and other essential inputs to software design. They were on a self-directed voyage of discovery, moving at a slow pace. Capability would trail aspiration for a long, long time.

This put UX and development into direct conflict with each other. UX felt that development was uncooperative with the design process: they saw development doing their own thing, or stonewalling, or being outright obstinate. Development had a different perspective. They saw design infringing on work that had always been the purview of development. They also felt it wasn't improving the development process: more detailed screen layouts without accompanying context about users and scenarios weren't all that helpful, and it was eating into the allotted time for coding and testing. Development felt increasingly frustrated, believing they had less time and less control over what they did.

One day, following a particularly contentious design meeting, the development director put his complaints, criticisms and demands into an e-mail to the head of design. He was tired of design impinging on development's turf, taking too much time to do inadequate work and leaving development to scramble to meet deadlines and struggling to work within a UX framework he felt was incomplete. The e-mail was not personally insulting, but his frustrations were obvious. And although he proposed some reassignment of responsibility, his main message to design was "back off".

The e-mail didn't just travel from the development director to the head of design. Each shared it with superiors and colleagues. It eventually found its way to the company president and CEO.

The president reacted by sending a one-line e-mail to the development director: "I won't tolerate behaviour like this in this company!"

* * *

For most managers, conflict is unwelcome. It creates work: the manager has to smooth over hurt feelings, come to the defense of an accuser or accused, and navigate the power politics that conflict stirs up. It exposes ambitions that set a tone of competitiveness, distrust or fear. It lays bare opinions, frailties and passions that many believe have no place in the workplace. It creates the appearance that the manager doesn't have control of his or her people.

But conflict in an organization is inevitable. Conflict arises from the collision of competing interests, and every company has a rich diversity of interests. There are the company goals (in effect, those set by leadership), such as revenue targets. There are objectives people have for their departments, such as to add new positions and hire more people. There are individual goals, such as career ambitions or control over what one works on. These interests will never be in perfect alignment. And they are always present: it is irrational to expect there will never be conflict, that company interests are the only ones that matter and that people will subordinate their interests to those of their employer.

Conflict is also healthy. It exposes fault lines, real and imagined, in an organization. It indicates engagement: when people don't feel motivated enough to enter into conflict, it suggests they're not optimistic that their goals can be met by the organization, or that they don't believe the organization's goals are worthwhile. It also fosters innovation: passionate, engaged people publicly advocating divergent opinions will come up with creative solutions to organizational problems. Lethargic passengers in an organization rife with groupthink will not.

As I wrote above, it is irrational for a leader to expect that there will never be conflict. Yet many leaders intrinsically assume that people subordinate themselves in service to the company. In doing so, leaders deny the relevancy and even the existence of interests outside of those held by the leadership. This is what Burrell and Morgan1 call a "unitary" philosophy of management. The organization is seen as being united under a common set of goals and uniformly pursuing them as a team. Authority is derived from the hierarchy and responsibilities are narrow: managers have the right to manage and employees the duty to obey, and everybody is expected to do no more and no less than the job to which they have been assigned. Unitary leaders regard conflict as uncommon and temporary, something to be removed by managers. Instigators of conflict are regarded as troublemakers.

The unitary philosophy aligns with traditional ways of seeing an organization: as a top-down, command-and-control, static operation. At first glance, unitary leadership would appear to be out of touch and out of date in the era of mobile and empowered knowledge workers in increasingly dynamic businesses. Still, it remains very appealing to leaders, managers and subordinates alike. For one thing, it deals with sociological complexities of the workplace by ignoring them. For another, it imposes a facade of unification under which symbols, rituals, process and reporting structure can be brought to bear to create organizational order. As a result, it reduces management to a technical problem of defining and tracking tasks, and measuring performance against those tasks. This is the key to its appeal: business is a whole lot easier when it's a one way street!

An alternative2 to the unitary philosophy, called pluralism3, conceptualizes the organization as a coalition of diverse people. To the pluralist, conflict in an organization is inherent and routine, and the pluralist leader looks at conflict as an opportunity for change and evolution. Authority in the organization is diversified and constantly changing, as are each person's scope of responsibilities. People are expected to seek ways to collaborate to achieve organizational goals. This final point is key to understanding the role of conflict. In the unitary organization, people are confined to roles and the manager creates order. In the pluralist organization, people constantly collaborate to create a new order.

Let's look at our case study from a pluralist leadership perspective. First, the pluralist must understand the other person before trying to be understood themselves. The development director may feel frustrated, but has to realize that development abdicated leadership years ago when it became a passenger in product management. The spat with design is simply the latest event in a long trend of largely self-inflicted decline. The development director must also recognize how the design team was trying to evolve a design capability. Similarly, the UX lead must understand the gap between interface design and UX and how development has historically bridged that gap while it develops. Next, the organization leader must then facilitate a collaborative outcome that will improve the state of the organization overall, not a compromise that subordinates one collection of objectives to another to expedite the task at hand. In this case, the company might hire in analysts or train people in developing personas and workflows and restructure the development cycle to incorporate more thorough user experience design activity.

The extent to which a leader accommodates versus imposes has a dramatic impact on the character of the organization he or she leads.

The pluralist organization is a "big tent" that welcomes diversity and divergent opinions and goals. It will be attractive to drivers (people who get stuff done) and creative thinkers. Each person must be a pluralist: they have to work in collaboration with other people to achieve the maximum superset of outcomes. As a group they must be highly skilled, but to a person it is less important to be skilled than it is to have the disposition to learn. This is because the capability of the group supersedes that of any one person, whereas the aptitude to learn and change makes for stronger interpersonal partnerships. The result is an organization of peers that is more flat than hierarchical, that moves in fits and starts but requires less hands-on, in-the-weeds management. Facilitators do well in the pluralist organization, task tyrants do not.

In comparison, the unitary organization does not accommodate alternative (let alone competing) goals to those set by the business leadership. As a result, it will attract passengers, people who are largely content to do as they are told. This means the unitary organization will be less diverse, more sullen, and more fragile than the pluralist organization. But that isn't to say that the unitary organization is staffed with soulless go-bots who do the bidding of their paymasters: every person in a unitary organization will still harbor personal objectives, some of which will be in perfect alignment with those of the organization, and some of which will be wildly divergent from them. Each person will still go in pursuit of their own objectives to one degree or another, but because a unitary organization has no tolerance for goals other than its own, people will pursue their objectives below the organization's radar to avoid conflict. The proliferation of all these invisible individual agendas reduces management to meticulous supervision of and continuous negotiation with each employee just to achieve the simplest of organizational objectives. Ambitious goals are out of the question.

The pluralist organization is more vibrant, diverse and robust than its unitary counterpart. These are important characteristics for professional, knowledge and creative businesses. They are also increasingly important to industrial firms as they become more knowledge-based businesses. At a macro level, businesses have to be environmentally aware and adaptive to a degree their industrial forebears did not have to be. At a micro level, highly skilled technical and knowledge workers are scarce, increasingly complex business context makes people far less interchangeable, and employee mobility has never been greater.

Being a pluralist is a struggle against entrenched patterns and expectations. Stories of leadership are dominated by people who espoused a unitary philosophy, from Julius Caesar to Steve Jobs. Unitarism is entrenched in business and even societal norms.

Still, it is a choice that every leader makes, and the decision to impose, deny and steamroll or accommodate, facilitate and explore will impact the nature and character of the organization. For the leader, one truth stands above all others: you get the organization you deserve.

Choose wisely.

 

 

1Morgan, Gareth. Images of Organization, page 189.

2There is another alternative to the one discussed here, called radicalism. We'll look at radicalism and why we need more radicals in the software business in a future post.

3Ibid.

Friday, June 28, 2013

Debt, Fear & Demographics: The Ingredients of Stagnation

You developed a simple tool for individuals and small groups. You never imagined it would be a runaway hit. Revenue poured in, would-be investors called constantly, The Wall Street Journal ran features on you twice, marketing chiefs from a dozen S&P 100 firms wanted you in workshops to figure out how to "partner", and your founder was presenting keynotes and on panels every few weeks. You were the classic tech success story.

Ideas poured in as fast as the revenue, and some copycat products sprung up. You knew you weren't keeping up, largely because under the hood, the product wasn't very flexible. When you started this thing, the goal was to get product out the door to see if people would like it, not design for every possible need. The R&D team (the original developers - a new team was hired to maintain the existing product) concluded they could build a much better product from scratch having learned so much the first time round. Many more layers of abstraction, more flexible and open architecture, and a DSL for creating plug-in apps to sit on top of what was really more of a Platform than just a Product. The existing software couldn't support that. So you gave them approval to make version 2 a complete rewrite. They went off to a special location, separate from everybody else, to develop The New Platform.

That decision was, in retrospect, what killed the company.

It wasn't so bad that the R&D team took twice as long as they had originally forecast. It's that what they delivered was so commercially bad: bug laden, unstable, unreliable. Customer service complaints skyrocketed & new product sales collapsed. You gradually pulled more and more people into fixing the code; before long, it was an operational black hole of time and effort. When, a year later, the product was stabilized, revenues had fallen by nearly 80%.

That was when the CFO sat down with the CEO, told her that cash reserves were all but depleted, and that operating costs needed to be culled.

* * *

A few months ago, I described zombie businesses as a financial phenomenon wrapped around an operating crisis. First, a financial crisis triggers deep operating cuts that gut a business of its capability. This leaves it operationally inert. Over time, revenues go down as products & services grow stale in the hands of rudderless development and depleted operations staff. At the same time, costs go up as golden handcuffs are slipped on to people to prevent employee defection. Long after the financial crisis, the business may still have a well established set of products, a bit of brand recognition, a blue chip client list and many years of history. But a zombie business is not a good investment because the cost to re-humanize it is prohibitive.

That's the financial perspective. The operating crisis at the core of this bears further examination.

Edward Hadas, a columnist at Breakingviews, recently wrote that a stagnant economy has 3 key characteristics: debt, fear, and demographics. Oppressive levels of debt devote money to interest payments, denying new and productive assets of it. Investors, gripped with fear, avoid making high risk investments. And when economies stagnate, people have fewer kids; this depletes the economy of its working age population, which slows growth and dynamism.

These same characteristics apply to a bombed-out tech business. Developers spend time servicing technical debt at the cost of developing new software assets. Executives are afraid to take the risk of new investments, and assets get stale. Managers cling to existing employees, who rob the company of growth and dynamism by clinging to existing code, concepts and techniques.

An economic crisis (e.g., the rapid evaporation of liquidity) can usher in a period of economic stagnation (little economic development), but crisis and stagnation are not one and the same. Nor does the former necessarily lead to the latter. An economic crisis can be cathartic, purging an economy of legacy constraints in a way that initiates a period of investment and development. The same is true of an operational crisis. A sudden loss of capability can trigger a period of stagnation, where what was once creative problem-solving work is reduced to checklists of tasks. Or it can trigger a period of dynamic change, where legacy business and technology constraints are thrown off and the company reinvents itself completely.

It isn't the crisis, but the reaction to it, that matters. The crisis is immediate; the stagnation settles in over a long period of time. The longer it has to settle in, the further assets and skills decay in an environment of timidity, old ideas, and increasing tech debt, the more costly it will be to revive the business.

It is an oversimplification to say that stagnation is a collection of choices people make, and that people are bound by limitations of their own making. Neither an economy nor a business can will itself out of stagnation, and not everybody has the leadership or appetite for risk to make bold decisions in a familiar (if decaying) environment. But how we react to a triggering financial crisis determines the fate of operations. If the reaction is to retrench and preserve, a financially-induced operational crisis will lead to stagnation. If the reaction is to reinvent and rebuild, it will not.

Friday, May 31, 2013

Self-Insuring Software Assets

When we buy a house, we also buy insurance on the house. Catastrophic events can not only leave the homeowner without shelter, they can be financially ruinous. In the event of a catastrophe, the homeowner has to be able to respond quickly to contain any problems, and be able to either enact repairs herself, or mobilize people and materials to fix the damage. Repairs require building materials and a wide variety of skills (electrician, mason, carpenter, decorator). Most homeowners don't have the construction skills necessary to perform their own repairs, so they would have to set aside large capital reserves as a "rainy day fund" to pay for such repairs. But homeowners don't have to do this, because we have homeowner's insurance. In exchange for a little bit of cash flow, insurance companies make policyholders whole in the event of a catastrophe by providing temporary shelter, and providing the capital and even the expertise to make repairs in a reasonable period of time.

When we build software, we don't always buy insurance on the software. For software we build, we underwrite the responsibility for the fact that it will be technically sound and functionally fit, secure and reliable, and will work in the client and server environments where it needs to operate. As builder-owners, we have responsibility for these things during the entire useful life of the software. This obligation extends to all usage scenarios we will encounter, and all environmental changes that could impair the asset. If regulation changes and we need to capture additional data, if a nightly data import process chokes on a value we hadn't anticipated, if our stored procedures mysteriously fail after a database upgrade, if the latest release of Firefox doesn't like the JavaScript generated by one of our helper classes, it's our problem to sort out. There is nobody else.

In effect, we self-insure software assets that we create. When we build software, we underwrite the responsibility for all eventualities that may befall it. Self-insuring requires us to retain people who have the knowledge of the technology, configuration and code; of the integration points and functionality; of the data and its structures; and of the business and rules. It also requires us to keep sufficient numbers of people so that we are resilient to staff turnover and loss, and also so that we can be responsive during periods of peak need (the technology equivalent of a bad weather outbreak). Things may be benign for most of the time, but in the event of multiple problems, we must have a sufficient number of knowledgeable people to provide timely responses so that the business continues to operate.

The degree of coverage that we take out is a function of our willingness to invest in the asset to make it less susceptible to risk (preventative measures), and our willingness to spend on retaining people who know the code and the business to perpetuate the asset and to do nothing else (responsiveness measures). This determines the premium that we are willing to pay to self-insure.

In practice, this premium is a function of our willingness to pay, not of the degree of risk exposure that we are explicitly willing to accept. This is an important distinction because this is often an economic decision made in ignorance of actual risk. Tech organizations are not particularly good at assessing risks, and usually take an optimistic line: software works until it doesn't. If we're thorough, previously unforeseen circumstances are codified as automated tests to protect against a repeat occurrence. If we're not, we fix the problem and assume we'll never have to deal with it again. Even when we are good at categorizing our risks, we don't have much in the way of data to shed light on our actual exposure since most firms don't formally catalogue system failures. We also have spurious reference data: just as a driver's accident history excludes near-miss accidents, our assessments will also tend to be highly selective. Similarly, just as an expert can miss conditions that will result in water in the basement, our experts will misjudge a probable combination of events that will lead to software impairment (who in 2006 predicted the rise in popularity of the Safari browser on small screens?). And on top of it all, we can live in a high-risk world but lead highly fortunate lives where risks never materialize. Good fortune dulls our risk sensitivity.

The result is that the insurance premium we choose to pay in the end is based largely on conjecture and feeling rather than being derived from any objective assessment of our vulnerability. Most people in tech (and outside of tech) are not really cognizant of the fact that we're self-insuring, what we're self-insuring against, the responsibility that entails, and the potential catastrophic risks that it poses. Any success at self-insuring software assets has little to do with thoughtful decision making, and more to do with luck. If operating conditions are benign and risks never manifest themselves, our premium looks appropriate, and even like a luxury. On the other hand, if we hit the jackpot and dozens of impairments affect the asset and we haven't paid a premium for protection, our self-insurance decision looks reckless.

Insuring against operating failures is difficult to conceptualize, more difficult to quantify, and even more difficult to pay for. We struggle to define future operating conditions, and the most sophisticated spreadsheet modeling in the world won't shed useful light on our real risk exposure. Willingness to pay a premium typically comes down to a narrative-based decision: how few people are we willing to keep to fix things? This minimal-cost approach is risk ignorant. A better first step in self-insuring is to change to an outcome-based narrative: what are the catastrophes we must insure against, and what is the income-statement impact should they happen? This measures our degree of self-insurance against outcomes, not costs.
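The outcome-based narrative can be reduced to simple arithmetic: name the catastrophes, estimate their odds and income-statement impact, and compare the expected annual loss to what we spend on the people we retain to self-insure. A minimal sketch, in which every probability, impact and staffing figure is an illustrative assumption rather than real data:

```python
# Hypothetical catastrophes we are self-insuring against:
# (description, annual probability, income-statement impact if it happens)
catastrophes = [
    ("core transaction system down for three days", 0.10, 1_200_000),
    ("regulatory change forces emergency rework", 0.25, 400_000),
    ("key integration partner changes its API", 0.50, 150_000),
]

# Expected annual loss: probability-weighted sum of the impacts.
expected_annual_loss = sum(p * impact for _, p, impact in catastrophes)

# The "premium": fully loaded annual cost of retaining two people who
# know the code, the data and the business, kept on for this asset alone.
premium = 2 * 180_000

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Self-insurance premium: ${premium:,.0f}")
```

The point is not the precision of the numbers - it's that the premium is now justified (or not) by named outcomes instead of by how few people we're willing to keep.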

Tuesday, April 30, 2013

Zombie Businesses

The Lehman bankruptcy is best known as the event that triggered a financial crisis. For many firms, it also sowed the seeds of an operating crisis.

Revenues plummeted at the end of 2008. Companies retrenched by laying people off. Managers coped with smaller staffs by asking employees to perform multiple jobs and to work longer hours. With remaining employees grateful to have kept their jobs, and with the economy leveling off rather than staying in freefall, corporate profitability rebounded as early as 2009.

Smart business leaders knew this wouldn't last, because you can't run a business for very long by running people into the ground. True, jobs weren't plentiful, and a depressed housing market meant employees weren't going to chase dream jobs. Plus, economic indicators gave no reason to believe things were going to improve any time soon. Still, the employer's risk of losing the people who held the business together increased every day that the "new normal" set in. Smart leaders got in front of this.

From early 2009, healthy companies boosted their capital spending.1 They used their capital in three ways.

The first was defensive. Managers classified as much work activity as "capital improvement" as they could. Doing so meant labor costs could be capitalized for up to 5 years. This let businesses retain people without eroding profitability. This prevented companies from losing employees with systemic knowledge of the business and intimate knowledge of customers.
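The P&L mechanics of that defensive move can be sketched in a few lines. Assuming a hypothetical $1M of annual labor reclassified as capital improvement and amortized over the 5 years mentioned above:

```python
# Illustrative effect of capitalizing labor rather than expensing it.
# All figures are hypothetical; the 5-year period follows the text.
labor_cost = 1_000_000       # annual labor spent on "capital improvement" work
amortization_years = 5

# Expensed: the full cost hits this year's income statement.
expensed_hit_year1 = labor_cost

# Capitalized: only one year's amortization hits the income statement;
# the rest sits on the balance sheet as an asset.
capitalized_hit_year1 = labor_cost / amortization_years

print(f"Expensed hit to year-1 profit: {expensed_hit_year1:,.0f}")
print(f"Capitalized hit to year-1 profit: {capitalized_hit_year1:,.0f}")
```

Same cash out the door, but an 80% smaller hit to current-year profitability - which is exactly what let firms keep people on the payroll without appearing to erode earnings.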

The second was offensive, investing in core business operations. Companies invested in technology2 to lock in those post-layoff productivity gains, and to improve customer self-service offerings since they had fewer employees to service customers. These investments made operations a source of competitiveness by lowering costs, increasing efficiency, and making businesses more responsive to customers. This made these firms better able to compete for market share - essential in a slow-growth world.

The third use was financial restructuring and building reserves. This meant issuing debt and retiring equity. Debt was cheap, as interest rates hit record lows during the crisis. Debt was also in demand, as market liquidity made a "flight to quality" and many corporations sported high credit ratings and had large cash balances that could comfortably cover interest payments. Debt-for-equity restructuring lowered the total cost of capital. It also benefited boards and CEOs by concentrating ownership of the company in fewer people's hands.

Smart business leaders responded to the financial crisis not only by protecting operations, but by improving and reforming them, taking advantage of cheap capital during the crisis to pay for it.

But many small businesses, high-risk businesses, and poorly capitalized businesses had neither capital cushions nor creative accounting to protect their operations. All they could do was cut costs and hope for the best. And cut they did. The people who got salary bumps in the boom years from 2006 through 2008 became "high-salary outliers" in 2009. It didn't matter that those were the people at the core of the company's capability and drive. When facing financial ruin, the CFO calls the shots, and the pricey people are the first to go regardless of the impact to operations.

These cuts may have staved off bankruptcy, but set the stage for an operating crisis by depleting firms of core operating knowledge and contextual business understanding. Cuts made at the beginning of the crisis left few people - and often no single person - fluent in the details of business operations. Those who remain can mechanically perform different tasks, but don't understand why they do the things they do. The business has continued to run, but it runs on momentum. It doesn't initiate change. It erodes a little bit here and there, as employees exit and clients find the offerings stale and go elsewhere. Costs increase as salaries and stay bonuses are showered on those with the most experience in the mechanics of the business. Pricing power decreases as employees lose the ability to articulate the value of what they provide to their customers. As more time passes, the more acute this crisis becomes: margins get squeezed while the business itself becomes operationally sclerotic.

Just as there are zombie loans (banks keep non-performing loans on their books because they don't want to take the writedown), there are zombie businesses. They transact business and generate revenue. They have years of history and long-standing client relationships. Such a business may look like it can be successful with some investment, but lacking that core operating knowledge, it's a zombie: animated but only semi-sentient.

These firms will only have attracted risk capital late in the post-Lehman investment cycle. Because they haven't been making investments in efficiency or customer service, their first use of fresh capital will be to hire new operational support people in an attempt to get caught up. That's costly and inefficient, and it just adds more people who can perform the same mechanical tasks. It won't change the fact that austerity depleted the firm of fundamental operating knowledge. New managers brought in with this investment will struggle to unwind the piled-on (and often undocumented) complications of the business, while new people in execution roles will get no further than replaying the mechanical processes for running the business.

Resuscitating these businesses - bringing much needed innovation and structural reform to a firm that has been starved of both for a long time - is a time-consuming and costly proposition. First, nobody has time to spare: they're constantly fighting fires, trying to control operations and contain the costs of the fragile machinery of the business. Second, they don't know what to do: because nobody knows the business context very well, there isn't anybody who can competently partner on initiatives to reform the business. Third, middle managers lack the will: the trauma of the cuts and years of thin investment will have rendered decision makers reactive (keep operational flare-ups under control) instead of aggressive (reinvent the business and supporting systems). Fourth, those same middle managers can only conceptualize the business as what it has always been; they'll lack the imagination to see what it could be.

Surviving a prolonged downturn is not necessarily the mark of a strong business. As Nassim Taleb pointed out in The Black Swan, a lab rat that survives multiple rounds of experimental treatment (say, exposure to high dosages of radiation) isn't "stronger" than rats that do not. The survivor will be in pretty bad shape for the experience. The heroic story of the tough and resourceful survivor isn't necessarily applicable to the business that survives tough times. The story of the zombie is a better fit.

1 Many businesses reported a significant uptick in capital spending from 2009-2011 compared to 2006-2008.

2 Strong corporate IT spending is one reason why the tech sector was counter-cyclical from 2009-2011.

Monday, March 18, 2013

The Return of Financial Engineering - and What it Means for Tech

Soon after the financial crisis began in 2008, companies shifted attention from finance to operations. Finance had fallen out of favour: risk capital dried up, asset prices collapsed and capital sought safe havens like US Treasurys. The crisis also ushered in a period of tepid growth undermined by persistent economic uncertainty. This limited the financial options companies could use to juice returns, so they focused on investing in operations to create efficiencies or fight for market share. In the years following the crisis, the "operations yield" - the return from investing in operations - outstripped the "financial yield" - the return from turning a business into a financial plaything.

Since the start of the crisis, central banks have pumped a lot of money into financial markets, principally by buying up debt issued by governments and financial assets held by banks. This was supposed to spur business activity, particularly lending and investing, by driving down the cost of both debt (lower lending rates) and equity (motivating capital to seek higher returns by pursuing riskier investments).

Whether lending and investing have increased as a result of these policies is debatable. One thing they have done is encourage companies to change their capital structure: low interest rates have made debt cheaper to issue than equity, so companies have been busy selling bonds and buying back their own stock. There are financial benefits to doing this, namely lowering capital costs. It benefits the equity financiers by concentrating ownership of the company in fewer hands. But beyond reducing the hurdle rate for investments by a few basis points - and not very much at that given low interest rates - this doesn't provide any operational benefit.
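A back-of-the-envelope weighted average cost of capital (WACC) calculation illustrates the point. The weights, rates and tax assumptions below are invented for illustration; in practice the benefit is smaller still, because the cost of equity tends to rise as leverage increases:

```python
def wacc(equity_weight, cost_of_equity, debt_weight, cost_of_debt, tax_rate):
    """Weighted average cost of capital; debt interest is tax-deductible."""
    return (equity_weight * cost_of_equity
            + debt_weight * cost_of_debt * (1 - tax_rate))

# Before: mostly equity-funded. After: buy back stock, issue cheap bonds.
before = wacc(0.80, 0.09, 0.20, 0.04, tax_rate=0.35)
after = wacc(0.70, 0.09, 0.30, 0.03, tax_rate=0.35)

print(f"WACC before: {before:.4f}")
print(f"WACC after:  {after:.4f}")
```

Even under these generous assumptions the hurdle rate moves by well under a percentage point - a financial nicety, not an operational improvement.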

What's not debatable is that it's brought about a return of financial engineering. CLOs and hybrids are back. Messrs. Einhorn and Buffett are engineering what amount to ATM withdrawals through preferred share offerings from Apple and Heinz, respectively. The OfficeMax / Office Depot merger is addition through subtraction: projected synergies exceed the combined market cap of the two firms.

When financing yields are higher than operating yields, business operations take a back seat to financial philandering. Consider Dell Computer. Dell's business lines are either in decline (services) or violently competitive (PCs and servers). Does it matter whether Dell is funded with public or private money? Will being a private firm make Michael Dell better able to do anything he cannot do today? It is hard to see how it does. But it does let him play tax arbitrage.

This suggests a bearish environment for investment in operations. The shift from equity to debt, the rise in share buybacks, dividend payouts and M&A suggest that companies have run out of ideas for how to invest in themselves. Cash flow that would otherwise be channeled into the business is betrothed to financiers instead. Debt yields are kept low and roll-overs made easier by stable and consistent cash flows, and equity is easier to raise when cash flows from operations are strong and consistent. This compels executives to set a "steady as she goes" agenda - not an aggressive investment agenda - for business operations.

A reduction in business investment is not particularly good news for tech firms selling hardware, software or services to companies. But it's not all bad news. Rewarding financiers by starving a business of investment makes a company sclerotic. That clears the way for innovators to disrupt and grow quickly. But until those innovators rise up, finance is positioned to stymie - not facilitate - business innovation.

Thursday, February 28, 2013

Investing in Software Through Experts, Analysis or Discovery

Whether investing in equities or in software, there are three distinct approaches to how we make an investment decision: based on our intuition or experience, based on our analysis of an opportunity, or based on our ability to rapidly discover and respond to market feedback. Let's look at each in turn.

When we invest based on experience, our decisions are rooted in the expertise we've gained from things we've done before. When based on intuition, our decisions are based on strongly held convictions. An equity investor familiar with retail might recognize demographic changes in specific geographic areas (e.g., migration of families with young children to Florida and Texas) and intuitively invest in firms exposed to those changes (such as commercial construction businesses operating in those areas). In the same way, somebody who has first hand experience can invest in developing a technology solution: the inspiration for Square, Inc. came from a small firm that couldn't close a sale because it couldn't accept credit cards. Although expert investing will likely involve a little bit of analysis, for the most part the investor relies on gut. Because we invest primarily on the courage of our convictions, capital controls on intuitive investments tend not to be very strict. In absolute terms, the capital commitment is generally small, although in relative terms the committed capital may be quite large especially if it is a private placement. As a result, the capital tends to be impatient: trust in an expert lasts only so long; investors will get cold feet quickly if there aren't quick results.

We can invest based on research and analysis. Value investors in equities study company fundamentals looking for firms with share prices that undervalue the assets or the durability of cash flows. In the same way, we can look for value gaps in business operations or market opportunities and identify ways that technology can deliver value to fill those voids. The foundation of the analysis is things such as value-stream mapping and competitive analyses of solutions developed by sector and non-sector peers. From this, we can produce a financial model and, ultimately, a business case. We need expertise to develop a solution, but by and large we make our investment decision based on the strength of our analysis. In absolute terms, the amount of committed capital can be quite large. But, having rationalized our way to making an investment, the capital controls tend to be strict, and the capital tends to be patient.

Finally, we can invest based on our ability to discover and adjust based on market or user feedback. Traders move in and out of positions, adjusting with changes in the market and hedging based on how the market might change. Over a long period of time, the trader hopes to end up with large total returns even if any given position is held for only a short period of time. We can do something similar with software, using approaches like Continuous Delivery and Lean Startup. In this approach, we aren't just continuously releasing, but rapidly and aggressively acquiring and interpreting feedback on what we've released. We can also use things like A/B testing to hedge investments in specific features. When we invest this way, we do so based not so much on our expertise or analysis, but based on our willingness and ability to explore for opportunities. Capital controls are strict because we have to explain what features we're spending money on and how we're protecting against downside risk of making a mistake. The capital backing a voyage of discovery will be impatient, wanting frequent assurances of positive feedback and results. But at any given time, the amount of committed capital is small, because investors continually evaluate whether to make further investment in the pursuit.
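One way to picture the hedging role of an A/B test: assign users to a variant deterministically, then let the conversion data decide whether the feature earns further investment. This is a minimal sketch; the function names, 50/50 split and sample data are all invented for illustration:

```python
import hashlib

def variant(user_id: str) -> str:
    """Stable 50/50 split based on a hash of the user id, so a user
    always sees the same variant across sessions."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "new_feature" if int(digest, 16) % 2 else "control"

def conversion_rate(events):
    """events: list of (user_id, converted) pairs for one variant."""
    converted = sum(1 for _, did_convert in events if did_convert)
    return converted / len(events) if events else 0.0

# Route users, collect outcomes per variant, and only commit further
# capital if the new feature outperforms the control.
assignments = {uid: variant(uid) for uid in ("u1", "u2", "u3", "u4")}
```

The investment discipline lives outside the code: each round of feedback is a decision point on whether to keep funding the feature or unwind the position.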

Each of these approaches makes it easy to answer "why" we are making a particular investment. Why should we part with cash for this particular feature? Go ask the expert, or see how it fits in the business case, or go get user feedback on it. "Why" should drive every decision and action made in pursuit of an investment. Without the "why" there is no context for the "what". In its absence, the "what" will suffer.

No approach is universally superior to another. The approach we take has to play to our strengths and capabilities. Either we have people with expertise or we don't. Either we have people with analytic minds and access to data or we don't. Either we have the ability to rapidly deliver and interpret feedback or we don't. The approach we take must also be suitable to the nature of the investment we're making. A voyage of discovery is well suited for developing a product for a new market, but not for an upgrade to a core transactional system. The business case for investing in a customer self-service solution is going to be much more compelling than a business case for developing a product for an emerging market segment.

Just because we take one of these approaches is no guarantee of success. Not all investments are going to pay off: our experts may turn out to have esoteric tastes that aren't appealing to a mass audience. Our thoroughly researched market analysis might very well miss the market entirely. We might deliver lots of features but not find anybody compelled to use them.

Worse still, each of these approaches can be little more than a veneer of competency over unprofessional investing. A hired-in expert may be a charlatan. Many a manager has commissioned a model that inflates benefits to flatter an investment - only for those benefits never to be realized. Just because we can get continuous feedback from users does not mean that we can correctly interpret what that feedback really means.

Most of the time, of course, we take a hybrid approach to how we invest. We supplement expertise with a business case, or we charter an investment with a business case but use Continuous Delivery to get feedback. However we go about it, we need to get the essential elements of the approach right if we're to have any chance of success. Otherwise, we're just unprofessional investors: investing without experience, thoughtful analysis or an ability to respond quickly is reckless use of capital.

Entirely too much software investing fits this bill.

Thursday, January 31, 2013

Sector Seven Is Clear

Many years ago, there was a television ad that showed an intruder being chased through a building by two security guards. The guards chase him from room to room, and ultimately down a long hallway. At the mid-point of the hallway, there's a line painted on the floor and wall. On one side of the line is a large number 7, on the other is the number 8. The intruder runs down the hallway, over the line. The two security guards come to a sudden stop right at the line. They watch as the intruder continues to run down the hallway into some other part of the building. After a pause, one of the security guards grabs his walkie-talkie and announces: "sector seven is clear". The intruder is still in the building, but the security guards no longer consider him to be their responsibility.

Every now and again I reference this ad in a meeting or presentation, in the context of Industrial IT. I've been reminded of it again recently.

Industrial IT encourages specialization: it is easier for HR to hire, staff and train people in specialist roles, and easier for procurement to contract for them. And specialization is comfortable for a lot of people. Managers - particularly managers who have a poor grip on the work being done - like specialization because it is easier to assign and track tasks done by specialists. People in execution roles take comfort in specialization. It's easy to become proficient in a narrow field, such as a specific database or programming language. Given the slow rate of change of any given technology, you don't have to work too hard to remain acceptably proficient at what you do. The only threat you face is obsolescence, and a commercial technology with sufficient market share and investment mitigates that risk to the individual.

Specialization means the most critical information - systemic understanding of how a solution functions and behaves from end-to-end - will be concentrated in a few people's heads. This knowledge asymmetry means those few people will be overwhelmed with demands on their time, creating a bottleneck while others on the team are idle. There will be a lot of hand-offs, which increases the risk of rework through misunderstanding. Because no single specialist can see a solution through to completion, nobody will have ownership for the problem. At least, not beyond making sure Sector 7 is clear.

I've written about it many times before, but Industrial IT prioritizes scale over results, specialists over professionals, predictability over innovation, and technology over value. Industrial IT is large but fragile: it struggles to get anything done, there aren't enough heroes to go around, its delivery operations are opaque, and it produces high-maintenance assets.

Even when there is executive commitment to change, it takes a long time and concentrated effort to change the industrial mind-set at a grass roots level.

We have to reframe problems above the task level. Everything we do should be framed as a meaningful business result or outcome, complete with acceptance criteria against which we can verify success in the business context. For example, the problem isn't to fix the payload of a specific webservice, the problem is to allow multiple systems to integrate with each other so that sales transactions can be exchanged. Agile Stories are particularly helpful for defining actions this way, whether for a new feature or a defect. Stories make it possible for each person to explain why something is important, why something is valuable, why they are working on it. Back to our example: I'm fixing this webservice because until I do, there won't be order flow from a principal commercial partner. Stories are also helpful because they let us measure the things we do in terms of results, not effort.

But there's more to this than process. Each person must feel personal ownership for the success of their actions. The job isn't to code to a specification, or to test against a test case. The job is to create a solution that enables a business outcome. Each person must ask questions about the completeness of the solution, and be motivated to verify the answers in the most complete manner possible.

Which makes the point that this is, fundamentally, a people challenge. We're asking people to change how they understand the problems they're solving and what "done" means. We're asking them to change their behaviours in how they investigate, test and verify what they do. More broadly, we're asking them to build a contextual understanding for the work they do, and more importantly why they are doing it. And we are asking them to take responsibility for the outcome: I will know this is complete when ...

Do not under-estimate the challenge this represents. The transition from industrial to professional is amorphous. There are false positives: people who sign up for this too quickly, without understanding what's going to be required of them. It isn't long before the never-ending chorus of "I don't" starts: I don't know how to do something, I don't have that information, I don't know how to find that out. And we can't take anything for granted. We must constantly challenge people's contextual understanding: can they explain, in a business context, why they are working on something, who it benefits, and why it is important?

Not everybody will make this transition. For some, because this isn't their calling: not all assembly line workers can become craftsmen. Others will self-select out, preferring the comforts afforded by a specialist's cocoon.

All of these things - changes in process, practice, behaviours and people - require a tremendous amount of intestinal fortitude. The would-be change agent must be prepared for a frustratingly slow rate of change, and to invest copious amounts of time into people to help them develop new context and new muscle memories. On top of it, leaders are in short supply and mentors are even scarcer in Industrial IT shops. Legacy assets and systems (and their legacy of patchwork integration, bandaged maintenance and situational knowledge) will slow the rate at which you can make new hires productive.

The benefits of changing from industrial to professional are obvious. While the destination is attractive, the journey is not - be under no illusion otherwise. But who we work with, how we work, and what we get done in a professional IT business make it worth doing.