Saturday, November 30, 2024

Industrial firms are struggling with policy change. They can be designed to respond to change.

News media have been trying to interpret the economic and commercial ramifications that will come about as a result of the US elections earlier this month. How will tariffs be used in policy, and what will that mean for consumer goods prices and manufacturing supply chains? What are the risks to industrial contractors of anticipated cuts in federal government spending? How will regulations change in areas like telecommunications and emissions? How will bond markets price 10 year Treasurys?

No doubt, industrial firms are facing highly disruptive policy changes. But if we zoom out for a minute, highly disruptive policy changes are the norm. Emissions, finance, energy, telecommunications, trade, healthcare and lots of other areas have been subject to significant regulatory change in the last two decades. To wit: when adding 70,000 pages to the Federal Register is considered a light year for new regulations, policy change is the norm, not the exception. Add to that non-policy sources of volatility - labor strikes, electrical blackouts, markets that failed to materialize, armed combat - and it is accurate to say that industrial firms have been subject to non-stop, if not increasing, volatility in their operating environments.

* * *

Wall Street rewards consistency in free cash flows above all else. Consistency in cash flows mollifies bond markets, which gives equity investors confidence that there will be ample cash for distributions through buybacks and dividends.

In manufacturing companies, strong operating cash flows are achieved through highly efficient production processes, from supply chain to transportation. Just-in-time inventory management is one of these practices. JIT flatters the balance sheet by minimizing cash tied up in raw materials inventory and in Property, Plant and Equipment (warehouse space) to hold that inventory. As implemented, though, JIT creates tight coupling within a production system: a hiccup in fulfillment from a supplier interferes with the efficiency of the entire production process (e.g., Boeing parking work-in-process in what is actually an employee car park due to a lack of fasteners earlier this year).
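
A back-of-envelope sketch in Python of the balance sheet effect; every number here is invented for illustration. Cash tied up in raw materials scales with days of inventory on hand, so cutting days on hand releases working capital:

```python
# Hypothetical figures, purely illustrative: cash tied up in raw materials
# is roughly daily materials spend times days of inventory on hand.

daily_materials_spend = 2_000_000   # $ of raw materials consumed per day
days_on_hand_buffered = 45          # traditional buffer stock
days_on_hand_jit = 5                # just-in-time replenishment

def cash_tied_up(days_on_hand: int) -> float:
    return daily_materials_spend * days_on_hand

freed = cash_tied_up(days_on_hand_buffered) - cash_tied_up(days_on_hand_jit)
print(f"Working capital released by JIT: ${freed:,.0f}")  # $80,000,000
# The trade: with 5 days on hand, any supplier delay longer than 5 days
# stops the line -- the tight coupling described above.
```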

In short, industrial firms can throw off copious amounts of cash, but their processes - implemented as tightly integrated, complex systems - are fragile. Nassim Taleb pointed out this same phenomenon in financial markets: interlocking dependencies create systemic fragility. By way of example, the grounding of the Ever Given looked like a black swan event, but it was not: the problem wasn’t global transportation, but a lack of robustness in end-to-end production processes themselves.

* * *

The more rigid the underlying processes, the more acute the need for external stability. Right now, uncertainty about policy change is creating external instability, rendering internal decisions about supply chain, shop floor, distribution and capital investment difficult to model, let alone make.

If constant volatility from one source or another is the new norm, "optimization" in manufacturing is no longer as simple as securing timely delivery of raw material inputs, squeezing labor productivity, and designing production plans around cheaper energy prices. Nor is optimization easily protected through crude contingency plans like holding excess raw materials as a hedge against supply chain disruption. An optimized production system must be not just tolerant of but accommodative of volatility.

Contemporary manufacturing operating systems solve for this.

  • Digital twins enable production modeling, simulation of disruptive events, and modeling of production responses to combinations of disruptive events (a minimal sketch of the idea follows this list).
  • Adaptive manufacturing - software defined production that integrates design with digital printing and robotic assembly - accelerates research and development and reduces friction created by new product introduction (NPI).
  • Flexline manufacturing allows Porsche to switch from making a combustion vehicle to an electric vehicle to a hybrid vehicle, in any sequence, all on the same line. The line is orchestrated with autonomous guided vehicles and does not require retooling or reconfiguration.
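
To make the first bullet less abstract, here is a minimal sketch of the digital twin idea: a toy production-line model (not any vendor's product; the probabilities and buffer sizes are invented) that simulates combinations of disruptive events against different buffer policies:

```python
# Toy production-line twin: invented parameters, illustrative only.
# Simulate supplier misses and machine outages against buffer policies.
import random

def simulate(days=250, buffer_cap=2, p_supplier_miss=0.05, p_outage=0.02, seed=7):
    random.seed(seed)
    buffer_stock = buffer_cap            # start with a full buffer
    produced = 0
    for _ in range(days):
        delivery = 0 if random.random() < p_supplier_miss else 1
        available = buffer_stock + delivery
        line_up = random.random() >= p_outage
        if line_up and available >= 1:   # line runs only if parts are on hand
            available -= 1
            produced += 1
        buffer_stock = min(buffer_cap, available)
    return produced / days

for cap in (0, 1, 2, 5):
    print(f"buffer = {cap} days -> utilization {simulate(buffer_cap=cap):.1%}")
# A real twin models routings, takt times and repair curves, but the use is
# the same: rehearse disruptions in software before they happen in steel.
```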

“Optimization” in a volatile world prioritizes resiliency over efficiency.

* * *

Wall Street gives a pass to companies when operations underperform due to external forces, because external forces are outside the control of the company. CEOs are graded on how well the company reacted to external disruption. But at some point, equity analysts and activist investors will figure out that manufacturing operations are unnecessarily vulnerable to external shocks. Why is the company not sufficiently resilient to take more of these changes in stride? At how many AGMs will we hear the same excuses?

There is need and opportunity to invest, but the climate isn’t conducive to investment. These are tech-heavy investments, and tech is still paying for largesse during the immediate pre-COVID years, when CEOs were fired for showing insufficient imagination for how to spend cheap capital to digitally disrupt their industry. Unfortunately, a post-mortem analysis of that era exposes that not only did too many of the investments made during that time come to naught, but the propensity to use contract labor and subsequent employee turnover meant no intangible benefit like institutional learning materialized (even of the "we know what doesn’t work" variety). They were just boondoggles that vaporized cash.

Tech has a bruised reputation and capital is pricier now, just in time for manufacturing to find itself at a crossroads. The intrinsic sclerosis of legacy manufacturing operations forces industrial firms to react to external changes. If they had intrinsic flexibility, they could respond rather than be forced to simply react. With volatility the new norm, tech investments into modern manufacturing processes and technology are a pretty good bet.

A good bet, but with competing gamblers. Tech ("with your money, and our ability to spend your money…") and legacy manufacturing (fixed production) have to figure out how to partner with capital (10 year Treasurys are north of 400 bps) to make it a profitable bet. There’s a visible win, but the CIO, CTO, COO and CFO have to get out of the way.

Thursday, October 31, 2024

Bosses at troubled companies say they want growth through innovation. They prefer growth through girth.

The headlines are heavy with iconic companies that have hit the skids recently, from Starbucks to Boeing to Intel. All have relatively new CEOs, each of whom has said that their company's path to salvation lies in returning to its roots, to once again be a coffee shop or an engineering firm. The assertion is that by going back to basics, they can regain the crown of leadership in their respective markets.

The problem is, there is no going back. The combination of circumstances that created the conditions for rapid growth and market dominance is long gone. The socio-economic factors have changed. The regulatory environment has changed. The key technologies have changed. The supply chain has changed. The competitive landscape has changed. Don't bother with the flux capacitor.

What those bosses are really saying, of course, is “we need a do-over.” But there is no do-over. Not only are years of financial engineering not easily undone, they created a financial burden that operations have to carry by generating copious free cash flow. Sorry, but no matter how desperate the situation, the new CEO is not the William Cage figure in Edge of Tomorrow.

While the Starbucks and Boeings of the world grab the headlines at the moment, there are many once-iconic companies that backed themselves into a corner by starving operations to feed the balance sheet. Time ran out, old management was shown the door, new management was ushered in.

This is the playbook that new management follows.

The first step is to alleviate immediate financial pressures, starting with the balance sheet. This means any or all of: maxing out credit facilities, selling assets, raising cash through equity sales, taking the company private, breaking up the company, and, in the extreme, filing for bankruptcy protection. Investors may win (a company breakup can work out well), investors may lose (dilution of equity), and investors may get wiped out (common equity is valueless in bankruptcy).

Fixing the balance sheet buys time, but not a lot of time, so the income statement needs to be shored up as well. Quality problems? Losing customers? Margins too thin? The playbook here is well established: simplify operations, promote quality above all other metrics, reduce contractors and staff, cut discretionary costs, slash prices, over-reward customer loyalty, etc. The good news is, customers mostly win. The bad news is, employees mostly lose, as they will face perpetual cost cutting efforts ranging from the structural (opportunistic terminations) to the petty (workplace surveillance).

Which brings us back to the CEO statement that “we need to become who we once were”.

Unless there are high-value trophy assets, or a large portion of the debt can be saddled onto divisions spun off, the financial stabilization effort will leave a balance sheet that is out of proportion with the income statement. That is, the balance sheet is structured for a company with higher sales growth and stronger cash earnings. Before it does anything else, the company must face the fact that financial stabilization isn’t going to return the capital structure to what it was before all of the financial engineering happened.

That has serious repercussions for operations. Recapturing lost competence only solves the growth problem if the core business can once again be a growth business. This is the fundamental hypocrisy of CEOs trafficking in "back to the future" statements: the core business has to be a growth business or returning to core competencies solves nothing. Absolutely nothing. And they know it. Being good at what you used to do well will stanch decline, but it offers no guarantee of a return to growth if there’s not much growth to go round. It is also worth pointing out that the “we need to become what we once were” statement masks the actual objective: it isn’t to be the pre-eminent firm in the market where the company made its name, as much as it is to resuscitate the income statement to a point that the balance sheet makes sense again.

If the core market is slow- or ex-growth, the go-to strategy is to capture a greater share of total customer spend, a.k.a. “revenue grab”. The opportunities here include extending the brand by selling adjacent products and services (e.g., as GE famously expanded from selling nuclear power plant technology to servicing nuclear power plants in the 1980s). Another is selling against type: where a company once differentiated from its competition by refusing to sell what it derided as low-value offerings, it will cheerfully sell anything and everything because every dollar of revenue is now the same. Yet another opportunity is predatory monetization: charging for things that were previously given away for free (e.g., whereas once upon a time all economy seats were priced the same, now those near bulkheads or the aisle cost more than seats in the middle).

None of these alternatives are revolutionary. All have the advantage of not being “bet the business” pursuits. Which is helpful, because the balance sheet limits the investment the company can make in pursuing any growth opportunities. All of these can be pursued through partnership and small acquisition, as opposed to organic development; this jumpstarts capability, minimizes cash outlay, and promises faster time to return.

Yet none of these are truly growth strategies in that they open new markets or compel buyers to spend where they had not spent before. They are only growth strategies because they assume that customer income statements are growing: growth in wallet share creates exposure to growth in customer income statements. Restated, as aggregate customer topline grows, aggregate customer expenses grow, and a greater capture of customer spend leads to growth. It’s growth by girth, not by invention, innovation or creativity.

The intermediate-term success of this strategy is a return to modest growth and cash flows sufficient to make interest payments and pay modest - if only occasional - dividends. These are not steps that will help the company regain its lost edge. Just the opposite: they signal capitulation that it will never regain that edge. Its long-term success is either to be acquired (a possibility because inflation increases revenue and simultaneously reduces the debt burden), or to be in a position to grab more revenue should a competitor stumble or outright fail.
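
That parenthetical about inflation is worth a line of arithmetic. A rough sketch with invented figures: debt is fixed in nominal dollars while revenue rides inflation, so leverage drifts down even when nothing operational improves:

```python
# Invented numbers, illustrative only: inflation quietly deleverages.
debt = 12_000_000_000       # nominal debt, assumed flat (no paydown)
revenue = 4_000_000_000     # year-0 revenue
inflation = 0.04            # assume 4%/yr, fully passed through to prices
                            # unit volume flat: girth, not invention

for year in (0, 5, 10):
    rev = revenue * (1 + inflation) ** year
    print(f"year {year:2d}: debt/revenue = {debt / rev:.2f}x")
# 3.00x -> ~2.47x -> ~2.03x. Nothing operational changed, yet leverage
# "improved" -- one reason a retrenched company becomes acquirable in time.
```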

To become what the company once was - a company that made its own growth by obsoleting its own products and services (why keep that 286 PC when you can have a Pentium? Why fly a fleet of 707s when you could fly a fleet of 747s? Why keep using that old phone since it can’t keep a charge anyway?) - requires both a capacity for innovation and a growth market. Yes, I just wrote “the company makes its own growth” because it does so by proxy. Technological advances bring new consumers into the market: personal computer manufacturers expected to sell more PCs once they had more powerful CPUs that could solve more complex problems; airlines expected to fly more passengers once they had more planes and larger planes to drive down costs in the post-deregulation airline industry.

A cash-strapped, debt-laden business is an unlikely candidate to invent market-creating technologies of the kind that brought it to prominence in the first place, if for no other reason than a company pursuing revenue grab is focused on yesterday’s growth markets and is therefore poorly attuned to tomorrow’s. That is not to say it is utterly hopeless, but the path back to growth markets isn’t going to be self-directed. Finding the way back into a growth market requires that the company be quick to recognize, implement and operationalize something new. But that, too, requires balance sheet leeway; depending on how dire the straits the company finds itself in at the start of the retrenching cycle, the balance sheet will limit its ability to spend on the R&D necessary to innovate. This is the conundrum of retrenchment: while fortifying the balance sheet is unavoidable, the resulting fortress imprisons cash flows and, ultimately, the income statement. Add - well, subtract - labor severed during retrenchment and the disengaged labor that remains, and it is highly unlikely that a fallen icon will return to the vim and vigor of its go-go days.

The retrenching company needs balance sheet and operational restructuring. It will then look for easy money anywhere that it can extend its brand and monetize its offerings. It will hope to buy time for inflation to work its magic on the debt burden and the topline, and to be ready for a competitor to stumble. That is the definition of success.

“We need to become who we once were” is a nice sentiment, but nothing in this playbook benefits from the business returning to what it used to be. Because success for the grafted-on rescue management isn't "to win", it is "not to lose."

Monday, September 30, 2024

It isn’t “return to office.” It’s “malicious destruction of trust.”

Return-to-office mandates continue to trickle in, and every now and again a prominent employer makes a headline-grabbing decision that all employees must be back in office. The news articles focus on the obvious impact of RTO mandates: the threat to dual income families successfully balancing school and daycare with career-advancing employment; the loss of quality of life to nonproductive commute time. These are real. But RTO mandates also indicate something else: they are a public acknowledgement of an erosion of trust within a company.

Every company is a society unto itself, with values and social norms that determine how people behave and interact. There are high-integrity and low-integrity workplaces, the distinguishing characteristic being the extent to which people are “free from corrupting influences or motives”. Integrity manifests itself in how people interact with one another, in commitment to craft, and in the administration of the business itself.

First is interpersonal integrity, things that define whether the company is a toxic or fulfilling place to work. Do people take credit for the work of others? Do people want to look good to a point they will make others look bad? Is it safe for a person to acknowledge things they do not know, or to accept responsibility for a mistake?

Second is operational integrity, things that define a firm’s commitment to excellence. Does recruiting pursue competent candidates, or is it just filling vacancies? Do salespeople inflate the pipeline with low-probability or worthless leads? Do colleagues complete their work without taking shortcuts that could impair results? Does finance send invoices in the hope the payer will pay without scrutinizing the bill?

Third is administrative integrity. Administrative systems indicate whether a workplace is high- or low-integrity because they communicate the extent to which trust is extended to individuals. Are the administrative systems an enabling mechanism for labor or a control mechanism over labor? For example, how highly restricted are employees in how they incur travel expenses? Is performance measurement designed to drive good practice, or to confirm adherence to practice? Are annual reviews designed for personal development and advancement, or as a means of gathering structured data to rank order the workforce?

Societies work best when they run on trust. Companies cannot escape the need to spend money to demonstrate or investigate compliance - violations of trust are unfair and can be expensive when they occur - but it is not value-generative expenditure. The more a company invests in controls and surveillance to compensate for a lack of trust, the higher the operating costs of the business. Conversely, the lighter the controls on labor, the lower the administrative burden, and the greater the productive and creative output from labor. A company with few - ideally no - bad actors has little real reason to incur cost to compensate for a trust void.

Which brings us back to the RTO mandates.

One of the primary justifications given for RTO is to increase worker collaboration with an eye toward driving creativity and innovation. That sounds plausible, as most of us have had an experience where a creative solution came together quickly because of high-bandwidth, in-person collaboration. But if in-person collaboration is so compelling, then globally sourced teams and departments are an impairment accepted for convenience, not a strategic advantage for sourcing best-in-class capability. Nor would firms severely cut employee travel budgets in the face of declining revenues - isn’t that precisely the time a company needs more innovation? Toss in the free productivity harvested from the individual laborer through flexible working, and justifying RTO with a “paucity of innovation” is a bit of a stretch.

The more likely explanation is a desire for greater workforce control. Working from home proved that a lot of jobs can be done from anywhere. Knowledge is transferable. It isn’t a big stretch that jobs that can be done from anywhere can be done by anyone from anywhere. Physical supervision does not improve management’s ability to provide higher fidelity performance profiles, but it does allow management to assess performance with less friction. Is this person executing at the highest level of throughput, or are they dogging it? Is that person really an expert with deep knowledge, or are they expert at gaming the system? Spot productivity audits are a lot easier in cubeland than in Teamsland. If - when - the edict comes that we have to contract operational labor spend, middle managers may not have better data than they would have with a distributed workforce, but they'll have no excuse for not having it.

Why a push to increase control now? Because corporate income statements are still being buffeted about. Interest rates have cooled off but remain generationally high, depressing corporate capital spending. Price increases have masked drops in unit sales volume. Cumulative inflation has increased input costs. Management has little control over the topline, so it must exercise what control it can over the bottom line.

Labor is a big input cost, and labor working from home is an invisible workforce. Income statement pressures twined with future economic uncertainty make that invisible workforce an easy target. When asked about the number of people who work at The Vatican, Pope John XXIII is credited - probably erroneously - with having replied “about half”. It’s not hard for a COO to be cynical about labor productivity given supply chain, labor, price and cost roller coasters of the last four years.

Before RTO came shelter-in-place necessitating work from home. A lot of people who had never worked in a distributed fashion figured out how to make it work. They’re certainly not heroes in an altruistic sense as they were motivated by self-interest: preserving the company preserved the job. Still, this cohort kept the internal workings of the business functioning during a period of unprecedented uncertainty. That, in turn, merited an increase in operational trust (they responded with excellence) and interpersonal trust (they will do the right thing). RTO negates all of that earned trust.

Saturday, August 31, 2024

For years, tech firms were fighting a war for talent. Now they are waging war on talent.

In the years immediately following the dot-com meltdown, there was more tech labor than there were tech jobs. That didn’t last long. By 2005, the tech economy had bounced back on its own. After that, the emergence of mobile (a new and lucrative category of tech) plus low interest rate policy by central banks fueled demand for tech. Before the first decade of the century was out, “tech labor scarcity” became an accepted norm.

The tech labor market heated up even more over the course of the second decade of the century. Rising equity valuations armed tech companies with a currency more valuable than cash, a currency those companies could use to secure labor through things like aggressive equity bonuses or acqui-hires. COVID distorted this overheated tech labor market even further, as low interest rates for longer, a massive fiscal expansion, and even more business dependency on tech spurred demand. Growth was afoot, and this once-in-a-lifetime growth opportunity wasn’t going to be won with bog standard ways of working: it was going to be won with creativity, imagination and exploration. The tech labor pool expanded as tech firms actively recruited from outside of tech.

The point of this brief history of the tech labor market in the 21st century is to point out that it went from cold to overheated over the span of many years. Not suddenly, and not in fits and starts. And yes, there were a few setbacks (banks pulled back in the wake of the 2008 financial crisis), but in macro terms the setbacks were short lived. It was a gradual, long-lived, one-way progression from cold to super hot.

Then the music stopped, abruptly. COVID era spending came to an end, inflation got out of hand, and interest rates soared. Almost instantly, tech firms of all kinds went from growth to ex-growth. Unfortunately, they built businesses for a market that no longer exists. With capital markets unwilling to inject cash, tech companies need to generate free cash flow to stay afloat. Tech product businesses and tech services firms - those that haven’t filed for bankruptcy - as well as captive IT organizations all tightened operations and shed costs to juice FCF. (Tech firms and tech captives are also in mad pursuit of anything that has the potential to drive growth - GenAI, anyone? - but until or unless that emerges as The Rising Tide That Lifts All Tech Boats, it will not change the prevailing contractionary macroeconomic conditions tech is facing today.)

The operating environment has changed from a high tolerance for failure (where cheap capital and willing spenders accepted slipped dates and feature lag) to a very low - if not zero - tolerance for failure (fiscal discipline is in vogue again). Gone is the license to spend freely in pursuit of hoped-for market opportunities through tech products; tech must now operate within financial constraints - constraints for which there is very, very little room for negotiation. Everybody’s gotta hit their numbers.

While preventing and containing mistakes staves off shocks to the income statement, it doesn’t fundamentally reduce costs. Years of payroll bloat - aggressive hiring, aggressive comp packages to attract and retain people - make labor the biggest cost in tech. Wanton labor force expansion during the COVID years was done without a lot of discipline. Filling the role was more important than hiring the right person. A substantial number were “snowflakes”: people staffed in a role for an intangible reason, whether potential to grow into the role, possession of skills or knowledge adjacent to the position into which they were staffed, or appreciation for years of service - essentially, something other than demonstrable skill derived from direct experience. That means getting labor costs under control isn’t a simple matter of formulaic RIFs and opportunistic reductions with a minor reshuffling of the rank and file.

Tech companies must first commoditize roles: define the explicit skills and capabilities an employee must demonstrate, revise the performance management system to capture and measure on structured evaluation data, and stand up a library of digital training to measure employee skill development and certification specifically in competencies deemed relevant to the company’s products and services. Standardizing roles, skills and training makes the individual laborer interchangeable. Every employee can be assessed uniformly against a cohort, where the retention calculus is relative performance versus salary. This takes all uncertainty out of future restructuring decisions - and as long as tech firms lurch between episodic cost cutting and bursts of growth, there will in fact be future restructuring decisions. For management, labor standardization eliminates any confusion about who to cut. The decision is simply whether to cut (based on sales forecasts) and when to cut (systemically or opportunistically to boost FCF for the coming quarter).
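
To see how cold that retention calculus gets, here is a sketch with hypothetical names and numbers; "score" stands in for whatever the standardized performance system emits once roles are commoditized:

```python
# Hypothetical cohort: (employee, standardized performance score, salary $k).
cohort = [
    ("A", 82, 210),
    ("B", 78, 135),
    ("C", 90, 260),
    ("D", 71, 120),
]

# The interchangeable-labor view: rank by performance per salary dollar.
ranked = sorted(cohort, key=lambda e: e[1] / e[2], reverse=True)
for name, score, salary in ranked:
    print(f"{name}: {score / salary:.2f} score per $k of salary")

# Cuts come from the bottom of this list -- which is where the most
# expensive people tend to sit. Note what the inputs exclude: mentorship,
# community, institutional memory. The "X factor" has no column.
```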

Of course, companies can reduce their labor force through natural attrition. Other labor policy changes - return to office mandates, contraction of fringe benefits, reduction of job promotions, suspension of bonuses and comp freezes - encourage more people to exit voluntarily. It’s cheaper to let somebody self-select out than it is to lay them off. FCF is a math problem.

These are clinical steps intended to improve cash generation so that a company can survive. While the company may survive, these steps fundamentally alter the social contract between labor and management in tech.

* * *

A lot of companies in tech used what they called “the war for talent” as marketing fodder, in both sales and recruiting. You should buy Big Consulting because it employs engineers a non-tech firm will never be able to employ on its own. Come to work for Big Software and get the brand on your resume. Every war has profiteers.

Small and mid sized tech has always had to be clever in how it competes for labor. Because it couldn’t compete with outsized comp packages, small tech relied on intangible factors, such as flexible role definitions and strong, unique corporate cultures.

The former meant the employee would not only learn more because they had the opportunity to do more, they weren’t constrained by a RACI and an operating model that rewarded the employee for “staying in their lane” over doing what was necessary, best, right. This was a boon to the small tech employer, too, because one employee was doing the job of 2, 3, or even 8 employees at any other company, but not for 2x, 3x or 8x the comp.

The latter meant that by aggressively incubating well-defined corporate norms and values, a smaller tech firm could position itself as a “destination employer” and compete for the strata of people it most wanted to hire. That might be a culture that values, say, engineering over sales. That might be a purpose-driven business prioritizing social imperatives over commercial imperatives. Culture was a material differentiator, and it’s fair to say that these values had some footing in reality: tech firms on the smaller end of the business scale had to mostly live their values or they wouldn’t retain their staff for very long given the increasing competition for tech labor. There was some “there” there to the culture.

Small and mid sized tech carved out a niche, but even these firms caught the growth bug. Where growth was indexed to labor, small and mid sized tech also went on a hiring spree. Again, where growth was the imperative, hiring lacked discipline. Bloated payrolls meant new people needed a corporate home; shortly after a hiring binge, the company was staffing twenty people to do the work of ten. In came the RACI, out went the self-organizing team. The erosion of culture - the move away from execution representative of core values - was accelerated (if not initiated) by undisciplined hiring twined with natural attrition of long-timers during the go-go years for tech labor. Like it or not, the pursuit of growth is a factor in redefining culture: even if a growth agenda by itself injects no definitive identity, it has a dilutive effect on established identity. To wit: new employees did not find the strong values-based culture described during the interview process, and long-time employees saw their values-based practices marginalized, because too many new hires - with no first-hand experience of cultural touch points to lean on - were staffed on the same team. Culture devolves into a free-for-all that favors the newbie with the strongest will. The culture is dead, long live the growth agenda.

As mentioned above, the music stopped, and the company has to prioritize FCF. Prioritized over growth, because growth is somewhere between non-existent and just keeping pace with inflation. Prioritized over culture, because the culture prioritized people, and people are now a commodity.

Restated, labor gets the short end of the stick.

Employees recruited in more recent years from outside the ranks of tech were given the expectation that “we’ll teach you what you need to know; we want you to join because we value what you bring to the table.” That is no longer applicable. Runway for individual growth is very short in zero-tolerance-for-failure operating conditions. Job preservation, at least in the short term for this cohort, comes from completing corporate training and acquiring professional certifications. Training through community or experience is not in the cards.

For all employees, it means that the intangibles a person brings cannot be codified into a quarterly performance analysis, and are therefore completely irrelevant. The “X factor” a person has that makes their teams better, the instinct a person has for finding and developing small market opportunities, the open source product with the global community of users this person has curated for years: none of these are part of the labor retention calculus. It isn’t even that your first bad quarterly performance will be your last, it’s that your first neutral quarterly performance could very well be your last. The ability to perform competently in multiple roles, the extra-curriculars, the self-directed enrichment, the ex-company leadership - none of these matter. The calculus is what you got paid versus how you performed on objective criteria relative to your cohort. Nothing more. That automated testing conference for practitioners you co-organized sounds really interesting, but it doesn’t align with any of the certifications you should have earned through the commoditized training HR stood up.

Long time employees - those who joined years ago because they had found their “destination employer” - hope that “restructuring” means a “return to core values”. After all, those core values - strongly held, strongly practiced - are what made the company competitive in a crowded tech landscape in the first place. Unfortunately, restructuring does not mean a return to core values. Restructuring to squeeze out more free cash flow means bloodletting of the most expensive labor; longer tenured employees will be among the most expensive if only because of salary bumps during the heady years to keep them from jumping ship.

Here is where the change in the social contract is perhaps the most blatant. In the “destination employer” years, the employee invested in the community and its values, and the employer rewarded the loyalty of its employees through things like runway for growth (stretch roles and sponsored work innovation) and tolerance for error (valuing demonstrable learning over perfection in execution). No longer.

“Culture eats strategy for breakfast” is relevant when labor has the upper hand on management because culture is a social phenomenon: it is in the heads and hearts of humans. When labor is difficult to replace, management is hostage to labor, and culture prevails. But jettisoning the people also jettisons the culture. Deliberately severing the keepers of culture is not a concession that a company can no longer afford to operate by its once-strongly-held values and norms; it is an explicit rejection of those values and norms. By extension, that is tantamount to a professional assault on the people pursuing excellence through those values and norms.

Tech firms large and small once lured labor by values: who you are not what you know makes us a better community; how we work yields outcomes that are better value for our customers; how we live what we believe makes us better global citizens. Today, those same tech firms can’t get rid of the labor that lives those values fast enough.

Wednesday, July 31, 2024

US automakers are struggling with electrification. They won’t have that luxury bringing autonomy to market.

Four years ago today, I blogged about the difficulty automakers faced in transitioning to electric vehicles, specifically that there were consequences to transitioning too soon or too late. Here we are, four years later, and US automakers are in a tight place. Manufacturers invested heavily in factories, only for sales to stall right when OEMs needed them to soar. EV product discounts are eroding margins. Legacy US automaker losses on EVs have been papered over by strong sales of products in the combustion portfolio. They are deferring investments in PPE and new models.

It’s not just the legacy automakers that are finding the electric vehicle business difficult. Lordstown filed. Fisker filed (different legal entity, same outcome). Rivian - losing money making vehicles - needs VW’s cash lifeline as much as VW needs software to sort out its struggles creating an EV platform.

Regulation requires automakers to make more EVs but does not obligate consumers to buy them. Range limitations, inadequate charging infrastructure, power loss in cold weather, higher repair costs, higher insurance costs and the occasional fire are turning out to be disincentives that overwhelm Treasury’s tax credit incentive.

Four years on, the electric future is still the future.

My point then was that making a one-way, all-in bet is a risky strategy. While the future may be a legislated certainty, the path to that future is not. The best way to deal with transitional uncertainty is to “muddle through” with policies that enable adaptability, attentiveness, and awareness. Toyota’s preference for hybrids over pure electric, and Porsche’s and Mercedes-Benz’s investments in flexline manufacturing, are examples of maintaining optionality while transitioning product and operations.

With the all-in strategy stalling, automakers are pulling back on electric vehicle production and lobbying Congress to ease the timing of electrification mandates. If they are successful, it will buy automakers more time but create more market confusion. How committed are regulators to the transition? Will consumers be forced to buy EVs? Will suppliers extend production of parts to keep older model combustion vehicles roadworthy for longer?

Transitory states do not pander to human impatience because they create the appearance of extended transition. But transitory states give OEMs and their suppliers, dealers, lenders, insurance companies, consumers and regulators the opportunity to learn and adjust. And in this case, the counterfactual - that bringing EVs to market in large numbers will result in a rapid transition of the fleet - is known to be untrue.

Smart strategy is transitory and adaptive, not all-in. That is just as true today as it was four years ago, as it has been for all of human existence.

* * *

Automobiles are in a multistage transition. Along with electrification, automakers must transition from building human-operated vehicles to building autonomous ones.

The theory supporting aggressive investment in electrification by incumbents and new entrants is that the new regulatory regime will create conditions for a financial windfall through share capture during transition. As pointed out above, disjointed public policy has not created those conditions in the US, and an investment frenzy has yielded an abundance of EVs which, in turn, has depressed returns for OEMs. As mentioned previously, any changes (i.e., relaxation) in public policy will only create more uncertainty that further threatens returns.

Autonomy is a much different opportunity. A bet on autonomy is a bet on the belief that autonomy brings entirely new and different use cases into the transportation sector. For example, airlines stand to lose passenger volume on short haul flights to autonomous vehicles available through transportation-as-a-service. The payoff for autonomy is much, much larger than for electrification.

The prize is bigger, the price is bigger. Electrification has gobbled up billions of dollars; autonomy will gobble up even more. The technology is more complex, the liability (for passenger, pedestrian and property) is greater, and the business models that exploit it are as yet unknown and unproven. Not to mention, autonomy will become even more complex once there is critical mass of autonomous vehicles on the road as the fleet can be made to behave collectively, not just individually.

Automakers hope the regulatory clock slows down to give them time to sort out electrification. Meanwhile, the race - a higher stakes race - is on for autonomy. There turned out to be first mover advantage in electrification in the US, as Tesla is still the sales leader by a wide margin. There will be first mover advantage in autonomy if only because being first to offer “free time for all the humans” will capture a lot of unit sales.

But the financial windfall from autonomy won’t come from vehicle sales: it will come from the services built around autonomous transportation. Those transformative services - everything from design to offer to pricing to availability - will emerge through discovery and thoughtful experimentation, organizational learning, adaptability and attentiveness. They will emerge by muddling through, not grand design.

The race to provide autonomy-based services starts once comprehensive autonomy is in-market. Coming in a distant runner-up in the race for autonomy will be very costly indeed.

Sunday, June 30, 2024

The yield curve is inverted. Tech's problem is asset price inflation.

The business of custom software development is, at its core, an asset business. Software development is the business of converting cash to intangible assets by way of human effort. Plenty of people opine about how important human labor is to software, and of course it is. Good development practices reduce time to delivery and create low-maintenance, easy-to-evolve software. What labor does and does not do is extremely important to the viability of software investments.

But software is an asset, not an operating expense. If there is no yield on a software asset, investing in software is a bad use of capital. No yield, no capital, no cash for salaries for people developing software. Money matters, whether or not we like to admit it.

This is a stark reversal for tech. When money was cheap and abundant as it was for over a decade, tech had the opposite problem: no yield, no problem! When capital wasn’t a constraint, the investment qualification wasn’t “what is this asset going to do for us” but “what are we denying ourselves if we don’t try to do something in this area.” Trying was more important than succeeding.

There are those who want to believe that financial markets are unemotional, but they are not. Momentum is a crucial factor in finance. Momentum is what gets investors to pile into the same position. Momentum turns a $100k plot of land into a $2m real estate “investment”. Momentum is an emotional justification in that the rationalization is hope, not fundamentals.

Tech rode momentum for a long, long time. Before COVID, the story that built momentum for tech was disruption. During COVID, the story was tech as a commercial coping mechanism. Momentum put abundant amounts of cash into the tech sector. Abundant cash inflated more than just salaries: it also inflated technical architectures and solution complexity. Money distorts.

That momentum has run its course. Tech is reaching - grasping - for any growth story. To wit: GenAI here, there, everywhere.

There are two winning hands in momentum trades: “hold to maturity” and “greater fool theory”. The former requires a lot of intestinal - not to mention free cash flow - fortitude. The latter requires finding somebody foolish enough to spend as much (and ideally more). Nearly two years of contraction in the tech sector indicates a shortage of greater fools. Yes, some subsets of tech still command premium pricing; suffice to say there is no rising tide lifting all boats, and has not been for quite some time.

Tech rode the wave of price inflation. The yield curve indicates that the wave has crested.

Friday, May 31, 2024

I can explain it to you, but I can't comprehend it for you

I’ve given my share of presentations over the years. I am under no illusions that I am anything more than a marginal presenter. My presentations are information dense, a function of how I learn. Many years ago, I realized that I learn when I’m drinking from the fire hose, not when content is spoon fed to me. I am focused and engaged with the former; I become uninterested and disengaged with the latter. Of all the recommended presentation styles I’ve been exposed to over the years, I find the “tell them what you’re going to tell them / tell them / tell them what you just told them” pattern intellectually insulting. I prefer to treat my audience with respect and assume they are intelligent, so I err on the side of content density.

For this style to be effective, the audience has to also want to drink from the fire hose. If they do not, you won’t get past the first couple of paragraphs. But in over 30 years in the tech business, I find tech audiences generally respond to high-content-density presentations.

As the person leading a briefing or presentation, it is your responsibility to connect with the audience. However, there are limitations. The content as prepared is only as good as the guidance you’ve received to shape the subject and depth of detail. A presenter with subject matter expertise isn’t (or at any rate, should not be) wed to the content and can generally shift gears to adjust when there is a fidelity mismatch between content and audience. But being asked, even ordered, to explain even a moderately advanced concept in a limited amount of time to an audience lacking the subject matter basics is going to fall flat every single time.

* * *

People buy things - large capital things - for which they have few or no qualifications other than the fact that they have money to spend. Few people who buy houses are carpenters, plumbers or electricians. Few people who buy used cars are mechanical engineers.

This expertise disconnect plagues the software business. There are, unfortunately, times when contracts for custom software development are awarded by individuals or committees who have (at best) limited understanding of the mechanics of software delivery. And there are times when contracts for software product licenses are awarded by individuals or committees who have (at best) limited understanding of the complexity of the domain into which the licensed product must work.

An egregious, although not atypical, example is an 8-figure custom software development contract with payouts indexed to “story points developed”. Not “delivered”, not “in production.” The delivery vendor tapped the contract for cash by aggressively moving high-point story cards to “dev complete”. Never mind that nothing had reached production, never mind that nothing had reached UAT. By the time I got a look at it (they were looking - hoping - for process improvements that would yield deployable software rather than deplorable software), the average story had 7 open defects with a nearly 100% reopen rate. And yes, smart reader, the apparent currency was “story points,” but it was not the real one. The currency was the cash value of the contract; story points were simply a proxy for extracting that cash. Ironically, the buyer thought the arrangement was shrewd because it tied cash to work. Sadly, it failed to tie cash to outcomes. In the event, the vendor held the buyer hostage: there were no clawbacks, so the buyer would either have to abandon the investment or have to sign extension after extension in the hopes of making good on it.
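
The incentive problem is plain arithmetic. A toy model of such a contract - every figure invented - paying on “story points developed” rather than on software in production:

```python
# Invented figures: payout indexed to points at "dev complete", not delivery.
rate_per_point = 15_000   # $ per story point, hypothetical contract rate

stories = [  # (points, reached_production, open_defects)
    (13, False, 9),
    (8,  False, 7),
    (13, False, 11),
    (5,  False, 4),
]

invoiced = sum(pts * rate_per_point for pts, _, _ in stories)
delivered = sum(pts * rate_per_point for pts, in_prod, _ in stories if in_prod)

print(f"vendor invoices:  ${invoiced:,}")   # $585,000
print(f"working software: ${delivered:,}")  # $0
# The rational vendor play is exactly what happened: inflate and "complete"
# high-point cards. Index payment to production outcomes (with clawbacks)
# and the same arithmetic points the other way.
```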

Licensed software products are no different. I’ve seen too many occasions where a buyer entered into a license agreement for some product without first mapping out how to integrate that product into their back office processes. When the buyer doesn’t come to the table prepared with a detailed understanding of their as-is state, they default into allowing the vendor to take the lead in designing solution architecture for the to-be state based entirely on generic and simplistic use cases, with disastrous outcomes for the buyer. Licensed products tend not to be 100% metered cost, and the vendor sales rep has a quota to meet and a commission to earn, so the buyer commits to some minimum term-based subscription spend with metered usage piled on top of that. In practice this means the clock is ticking on the buyer to integrate the licensed product the second the ink dries on the contract. Finding out after the contract is signed that the intrinsic complexity of the buyer environment is many orders of magnitude beyond the vendor-supplied architecture is the buyer’s problem, not the vendor’s.

To level this information asymmetry between buyer and seller, buyers have independent experts they can call on to give an opinion of the contract or product or vendor or process. But of course there are experts and there are people with certifications. An expert in construction can look beyond things like surface damage to drywall and trim and determine whether or not a building is structurally sound. Then there are the “certified building inspectors” who look closely at PVC pipe covered in black paint and call it “cast iron plumbing.” All the certification verifies is that once upon a time, the certificate bearer passed a test. What is true in building construction is equally true in software construction. Buyers have access to experts but that doesn’t do them a bit of good if they don’t know how to qualify their experts.

Of course there’s a little more to it than that. Buyers have to be able to qualify their experts, want their expertise, and be willing and able to act on it. I’ve advised on a number of acquisitions. No person mooting an acquisition wants to hear “it’s a bad acquisition at any price”, especially if their job is to identify and close acquisitions. Years ago, I was asked to evaluate a company that claimed to have a messaging technology that could be used to efficiently match buyers and sellers of digital advertising space. They had created a messaging technology that was different from JMS only in that (a) theirs was functionally inferior and (b) it was not free. Instead of expressing relief at avoiding a disastrous deployment of capital, the would-be investor was desperate for justification that would overshadow these… inconveniences. As the saying goes, “you cannot explain something to somebody whose job depends on not understanding it.”

* * *

I have been fortunate to have worked overwhelmingly with experts and professional decision makers over the years, people who have been thoughtful, insightful, willing to learn, and who in turn have stretched me as well. I sincerely hope I have done the same for them.

Unfortunately, I have had a few brushes with those who fell irreconcilably short. The CTO of a capital markets firm who requested an advanced briefing on the mechanics of how distributed ledger technology could change settlement of existing complex financial products and open the door for new ones, but had done nothing before the briefing to learn the basics of what “this blockchain thing” is. The mid-level VP leading an RFP process who derailed a vendor presentation because she simply could not fathom how value stream mapping of business operations exposes inefficiencies that have income statement ramifications.

When we fail to connect with an audience, we have to first internalize the failure and look for what we might have done differently: what did we hear but not process at the time, what question should we have asked to clarify the perspective of the person asking. What is spoken is less important than what is heard.

At a certain point, though, responsibility for understanding lies with the listener. The audience member adamantly demanding further explanation may be doing so for any number of reasons, ranging from simple neglect (a failure to have done homework on the basics) to a deliberate unwillingness to understand (i.e., cognitive dissonance).

Which is where the title of this blog comes in. It’s a comment Ed Koch, the 105th mayor of New York, made to a constituent who demanded to know why the mayor’s office was introducing policy to lighten taxes, some time after New York had financially imploded and was still hemorrhaging high-income earners and businesses. “I can explain it to you” he told this constituent, “but I can’t comprehend it for you.”

Tuesday, April 30, 2024

The era of ultra low interest rates is over. Tech has painful adjustments to make.

Interest rates have been climbing for two years now. The Wall Street Journal ran an article yesterday with the headline that the days of ultra low interest rates are over. Tech will have to adjust. It’s going to be painful.

When capital is expensive, we measure investments against the hurdle rate: the rate of return an investment must exceed to be a demonstrably good use of capital. When capital is ridiculously cheap, we no longer measure investment success against the hurdle rate. In practice, cheap capital makes financial returns no more and no less valuable than other forms of gain.
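
In code, the hurdle rate test is a single discounting loop. A minimal sketch with hypothetical cash flows: an investment clears the hurdle if its net present value, discounted at the hurdle rate, is positive:

```python
# Hypothetical project: $1M out today, cash back over four years.
def npv(rate: float, cashflows: list[float]) -> float:
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

project = [-1_000_000, 250_000, 300_000, 350_000, 350_000]  # years 0..4

for hurdle in (0.02, 0.08, 0.15):
    value = npv(hurdle, project)
    verdict = "fund" if value > 0 else "pass"
    print(f"hurdle {hurdle:.0%}: NPV ${value:,.0f} -> {verdict}")
# At a 2% cost of capital nearly everything funds; at 15% the same project
# does not. That, in one loop, is the adjustment tech has to make.
```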

There are ramifications to this. As fiduciary measures lapse, so does investment performance. We go in pursuit of non-financial goals like "customer engagement rate". We get negligent in expenditure: payrolls bloated with tech employees, vendors stuffing contracts with junior staff. We get lax in our standard of excellence as employees are aggressively promoted without requisite experience. We get sloppy in execution: delivery as a function of time is simply not a thing, because the business is going to get whatever software we get done when we get it done.

Capital may not be 22% Jimmy Carter era expensive, but it ain’t cheap right now. Tech has to earn its keep. That means a return to once familiar practices, as well as change that orchestrates a purge of tech largesse. Business cases with financial returns first, non-financial returns second. Contraction of labor spend: restructuring to offload the overpromoted, and consolidation of roles or lower compensation for specialization. Transparency of what we will deliver when for what cost, and what the mitigation is should we not. An end to vanity tech investments, because the income statement, much less the balance sheet, can no longer support them.

Some areas of the tech economy will be immune to this for as long as they are thematically relevant. AI and GenAI are TINA (there is no alternative) investments: a lot of firms have no choice but to spend on exploratory investments in AI, because Wall Street rewards imagination and will reward the remotest indication of successful conversion of that imagination that much more. Yet despite revolutionary implications, AI enthusiasm is tempered compared to frothy valuations for tech pursuits of previous generations, a function of investor preference for, as James Mackintosh put it, profits over moonshots. Similarly, in businesses where a tech arms race is on because innovation offers competitive advantage, such as in-car software, it will be business as usual. But these arms races will end, so it will be tech business as usual until it isn’t. (In fact, in North America, this specific arms race may not materialize for a long, long time as EV demand has plateaued, but that’s another blog for another day.)

Tech has had the luxury of not being economically anchored for a long time now. If interest rates settle around 400 bps as the WSJ speculated yesterday, those days are over. The adjustment to a new reality will be long and painful because there’s a generation of people in tech who have not been exposed to economic constraints.

This is the Agile Manager blog, as it has been since I started it in 2006. Good news, this change doesn’t mean a return to the failed policies of waterfall. Agile had figured out how to cope with these economic conditions. Tech may not remember how to use those Agile tools, but it has them in the toolkit. Somewhere.

That said, I also blog about economics and tech. If the Fed funds rate lands in the 400 bps range, tech is in for still more difficult adjustments. More specifically, the longer tech clings to hopes for a return to ultralow interest rates, the longer the adjustment will last, and the more painful it will be.

The ultralow rate party is over. It’s long past time for tech to sober up.

Sunday, March 31, 2024

Don’t queue for the ski jump if you don’t know how to ski

I’ve mentioned before that one of my hobbies is lapidary work. I hunt for stone, cut it, shape it, sand it, polish it, and turn it into artistic things. I enjoy doing this work for a lot of reasons, not least of which is that I approach it every day not with an expectation of “what am I going to complete” but “what am I going to learn.”

As a learning exercise, it is fantastic. I keep a record of what I do on individual stones, on how I configure machines and the maintenance I perform on them, and for the totality of activities I do in the workshop each day. I do this as a means of cataloging what I did (writing it down reinforces the experience) and reflecting on why I chose to do the things that I did. Sometimes it goes fantastically well. Sometimes it goes very poorly, often because I made a decision in the moment that misread a stone, misinterpreted how a tool was functioning, or misunderstood how a substance was reacting to the machining.

My mistakes can be helpful because, of course, we learn from mistakes. I learn to recognize patterns in stone, to recognize when there is insufficient coolant on a saw blade, to keep the torch a few more inches back to regulate the temperature of a metal surface.

But mistakes are expensive. That chunk of amethyst is unique, once-in-a-lifetime; cut it wrong and it’s never-in-a-lifetime. If there isn’t coolant splash over a stone you’re cutting, you’re melting an expensive diamond-encrusted saw blade. Overheat that stamping to a point where it warps, or cut that half hard wire to the wrong length, and you’ve just wasted a precious metal that costs (as of today’s writing) $25+ per ounce for silver, $2,240+ for gold.

Learning from a video or a website or a good old-fashioned book is wonderful, but that’s theory. We learn through experience. Whether we like to admit it or not, a lot of experiential learning results in, “don’t do it that way.”

Learning is the human experience. Nobody is omniscient.

But learning can be expensive.

* * *

A cash-gushing company that has been run on autopilot for decades gets a new CEO, who determines that it employs thousands doing the work of dozens. Since most of these people can’t explain why they do what they do, the CEO concludes there is no reason why, and spots an opportunity to squeeze operations to yield even better cash flows. Backoffice finance is one of those functions, and that’s just accounting, right? That seems like a great place to start. Deploy some fintech and get these people off the payroll already.

Only, nobody really understands why things are the way they are; they simply are. Decades of incremental accommodation and adjustment have rendered backoffice operations extremely complicated, with edge cases to edge cases. Call in the experts. Their arguments are compelling. Surely we can get rid of 17 price discounting mechanisms and have only 2? Surely we can have a dozen sales tax categories instead of 220? Surely we can get customers to pay with a tender other than cash or check? All plausible, but nobody really knows (least of all Shirley). Nobody on the payroll can explain why the expert recommendations won’t work, so the only way to really find out is to try.

Out comes a new, streamlined customer experience with simplified terms, tax and payments. Only, we lose quite a lot of customers to the revised terms, either because (a) two discounting mechanisms don’t really cover 9x% of scenarios like we thought or (b) we’re really lousy at communicating how those two discounts work. We lose transactions beyond that because customers have trust issues sharing bank account information with us. And don’t get me started on the sales tax remittance Hell we’re in now because we thought we could simplify indirect tax.

Ok, we tried change, and change didn’t quite work out as we anticipated. It took us tens of millions of dollars in labor and infrastructure costs to figure out whether these changes would actually work. Bad news is, they didn’t. Good news is, we know what doesn’t work. Hollow victory, that: a lot of money spent to learn what won’t work, which by itself doesn’t get us close to what will. Oh, and by the way, we spent all the money; can we please have more?

Let’s zoom out for a minute. How did we get here? Since the employees don’t really know why they do what they do, and since all this activity is so tightly coupled, what is v(iable) makes the m(inimum) pretty large, leaving us no choice but to run very coarsely grained tests to figure out how to change customer-facing operations in ways that translate into backoffice efficiencies. Those tests have limited information value: they either work or they do not. Without a lot of post-test study, we don’t necessarily know why.
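To make the information-value point concrete, here is a toy sketch in Python with entirely hypothetical transaction data (the log format, field names and numbers are all invented for illustration). A coarse-grained read of a pilot yields a single pass/fail observation; instrumenting the same pilot at the level of individual discounting mechanisms localizes the failure, which is at least the beginning of a “why”:

    from collections import Counter

    # Hypothetical transaction log from the pilot. Each record notes which
    # discounting mechanism the customer used and whether checkout completed.
    # (Illustrative data only; a real log would have thousands of rows.)
    transactions = [
        {"discount": "volume", "completed": True},
        {"discount": "volume", "completed": True},
        {"discount": "loyalty", "completed": False},
        {"discount": "loyalty", "completed": False},
        {"discount": "none", "completed": True},
    ]

    # Coarse-grained read: did the pilot "work"? One pass/fail data point.
    overall = sum(t["completed"] for t in transactions) / len(transactions)
    print(f"Pilot conversion: {overall:.0%}")  # tells us whether, not why

    # Finer-grained read: conversion per discounting mechanism. The failure
    # now localizes to one mechanism, which is a hypothesis we can test.
    attempts, completions = Counter(), Counter()
    for t in transactions:
        attempts[t["discount"]] += 1
        completions[t["discount"]] += t["completed"]

    for d in attempts:
        print(f"{d}: {completions[d] / attempts[d]:.0%} conversion")

The point of the sketch is only that the same experiment, instrumented one level deeper, can distinguish “two discounts don’t cover the scenarios” from “we communicate the discounts badly,” rather than collapsing both into a single failed test.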

This is not to say these coarse tests are without information value. With more investment of labor hours, we learn that there are really four discounting mechanisms, three of which need optional variants because of nuances in the accounting treatment our customers have to deal with. That’s not two, but it’s still better than the seventeen we started with. And it turns out that with two-factor authentication we can build enough trust with customers to share their banking details, so we can get out of the physical cash business. Indirect tax? Well, that was a red herring: the 220 categories previously supported are more accurately 1,943 under the various provincial and state tax codes. The good news is, we have a plan for scaling up (more scenarios) and scaling down (not losing too much money on a sales tax scenario of one).

Of course, we’ll need more money to solve for these things, now that we know what “these things” are.

That isn’t a snarky comment. These are lessons learned after multiple rounds of experiments, each costing 7 or 8 figures, and most of them commercially disappointing. We built it and they didn’t come; they flat out rejected it. We got it less wrong the second, third, fourth, fifth time around, and eventually we unwound decades of accidental complexity that had become the operating model of both backoffice and customer experience, but that nobody could explain. Given unlimited time and money, we can successfully steer the back office and customers through episodic bouts of change.

Given unlimited time and money. Maybe it took five tries, or seven, or only three. None was free, and each cost seven to eight figures.

* * *

There are a few stones I’ve had on the shelf for many, many years. They are special stones with a lot of potential. Before I attempt to realize that potential, I want to achieve sufficient mastery: to develop the right hypothesis for what blade to use and what planes to cut, for what shape to pursue, for what natural characteristics to leave unaltered and what surfaces to machine. Inquisitiveness (beginner’s mind) twinned with experience on similar if more ordinary stones has led me to start shaping some of those special ones, and I’m pleased with the results. But I didn’t start with those.

Knowledge is power as the saying goes, and “learn” is the verb associated with acquiring knowledge. But not all learning is the same. The business that doesn’t know why it does what it does is in a crisis requiring remedial education. There is no shame in admitting this, but of course there is: that middle manager unable to explain why they do the things they do will feel vulnerable because their career has peaked as the “king of the how in the here and now.” Lessons learned from being enrolled in the master class - e.g., being one of the leads in changing the business - will be lost on this person. And when the surrogate for expertise is experimentation, those lessons are expensive indeed.

Leading change requires mastery and inquisitiveness. The former without the latter is dogma. The latter without the former is a dog looking at a chalkboard of quantum physics equations: it’s cute, as Gary Larson pointed out in The Far Side, but that’s the best that can be said for it. When setting out to do something different, map out the learning agenda that will put you in the position of “freely exercising authority.” But first, run some evaluations to ascertain how much “(re-)acquisition of tribal knowledge” needs to be done. There is nothing to prevent you from enrolling in the master class without fluency in the basics, but it is a waste of time and money to do so.

Thursday, February 29, 2024

Patterns of Poor Governance

As I mentioned last month, many years ago I was toying around with a governance maturity model. Hold your groans, please. Turns out there are such things. I’m sure they’re valuable. I’m equally sure we don’t need another. But as I wrote last month, there seemed to be something in my scribbles. Over time, I’ve come to recognize it not as maturity, but as different patterns of bad governance.

The worst case is wanton neglect, where people function without any governance whatsoever. The organizational priority is on results (the what) rather than the means (the how). This condition can exist for a number of reasons: because management assumes competency and integrity of employees and contractors; because results are exceedingly good and management does not wish to question them; because management does not know the first thing to look for. Bad things aren’t guaranteed to happen in the absence of governance, but very bad things can indeed (Spygate at McLaren F1; rogue traders at Société Générale and UBS). Worse still, the absence of governance opens the door to moral hazard, where individuals gain from risk borne by others. We see this in IT when a manager receives quid pro quo - anything from a conference pass to a promise of future employment - from a vendor for having signed or influenced the signing of a contract.

Wanton neglect may not be entirely a function of a lack of will, of course: turning a blind eye equals complicity in bad actions when the prevailing culture is “don’t get caught.”

Distinct from wanton neglect is misplaced faith in models, be they plans or rules or guidelines. While the presence of things like plans and guidelines may communicate expectations, they offer no guarantee that reality is consistent with those expectations. By way of example, IT managers across all industries have a terrible habit of reporting performance consistent with plans: the “everything is green for months until suddenly it’s a very deep shade of red” phenomenon. Governance in the form of guidelines is often treated as “recommendations” rather than “expectations” (e.g., “we didn’t do it that way because it seemed like too much work”). A colleague of mine, on reading the previous post in this series, offered up that there is a well-established definition of data governance (DAMA). Yes, there is. The point is that governance is both a noun and a verb; governance “as defined” and “as practiced” are not guaranteed to be the same thing. Pointing to a model and pointing to the implementation of that model in situ are entirely different things. The defining characteristic of this pattern is that governance goes little beyond having a model that communicates expectations for how things get done.

Still another pattern of bad governance is governance theater, where there are governance models and people engaged in oversight, but those people do not know how to effectively interrogate what is actually taking place. In governance theater, some governing body convenes and either has the wool pulled over their eyes or simply lacks the will to thoroughly investigate. In regulated industries, we see this when regulators lack the will to investigate despite strong evidence that something is amiss (Madoff). In corporate governance, this happens when a board relies almost exclusively on data supplied by management (Hollinger International). In technology, we see this when a “steering committee” fails to obtain data of its own or lacks the experience to ask pertinent questions of management. Governance theater opens the door to regulatory capture, where the regulated (those subject to governance) dictate the terms and conditions of regulation to the regulators. When governance is co-opted, governance is at best a false positive: a signal that controls are exercised effectively when they are not.

I’m sure there are more patterns of bad governance, and even these patterns can be further decomposed, but these cover the most common cases of bad governance I’ve seen.

Back to the question of governance “maturity”: while there is an implied maturity to these - no controls, aspirational controls, pretend controls - the point is NOT to suggest a progression: i.e., aspirational controls are not a precursor to pretend controls. The point is to identify the characteristics of governance as practiced to get some indication of the path to good governance. Where there is governance theater, the gap is reform of existing institutions and practices. Where there is misplaced faith in models, the gap is creation of institutions and practices: entirely new muscle memory for the organization. Each represents a different class of problem.

The actions required to get into a state of good governance are not, however, an indication of the degree of resistance to change. Headstrong management may put up a lot of resistance to reform of existing institutions, while inexperienced management may welcome creation of governance institutions as filling a leadership void. Just because the governance gap is wide does not inherently mean the resistance to change will be as well.

If you’re serious about governance and you’re aware it’s lacking as practiced today, it is useful to know where you’re starting from and what needs to be done. If you do go down that path, always remember that it’s a lot easier for everybody in an organization - from the most senior executive management to the most junior member of the rank and file - to reject governance reform than to come face to face with how bad things might actually be.

Wednesday, January 31, 2024

Governance Without Benefit

I’ve been writing about IT governance for many years now. At the time I started writing about governance, the subject did not attract much attention in IT, particularly in software development. This was a bit surprising given the poor track record of software delivery: year after year, the Standish CHAOS reports drew attention to the fact that the majority of IT software development investments wildly exceeded spend estimates, fell short of functional expectations, and were plagued with poor quality, and as a result quite a lot of them were canceled outright. Drawing attention to such poor results gave a boost to the Agile community, who were pursuing better engineering and better management practices. Each is clearly important to improving software delivery outcomes, but neither addresses contextual or existential factors affecting investments in software. To wit: somebody has to hold management accountable for keeping delivery and operations performing within investment parameters and, if performance falls outside those parameters, either fix it (with or without that management) or negotiate a change in parameters with investors. Governance, not engineering or management, is what addresses this class of problem.

If IT governance was a fringe activity twenty years ago, it is everywhere today: we have API governance and data governance and AI governance and on and on. Thing is, there is no agreement as to what governance is. Depending on who you ask, governance is “the practice” of defining policies, or it “helps ensure” things are built as expected, or it “promotes” availability, quality and security of things built, or it is the actual management of availability, quality and security. None of these definitions are correct, though. Governance is not just policy definition. Terms like “promote” and “helps ensure” are weasel words that imply “governance” is not a function held accountable for outcomes. And governance intrinsically cannot be management because governance is a set of actions with concomitant accountability that are specifically independent of management.

That governance is still largely a sideline activity in IT is no surprise. For years, ITIL was the go-to standard for IT governance. ITIL defines consistent, repeatable processes rooted in “best practices”. The net effect is that ITIL reduces governance to “compliance”. As long as IT staff follow ITIL-consistent processes, IT can’t be blamed for any outcome that results from its activity: they were, after all, following established “best practices.” As there is no natural path from a self-referential CYA function to an essential organizational competency, it is unrealistic to expect that IT governance would have found one by now.

I’ve long preferred applying the definition of corporate governance to IT governance. Corporate governance boils down to three activities: set expectations, hire managers to pursue those expectations, and verify results. When expectations aren’t met, management is called to task by the board and obliged to fix things. If expectations aren’t met for a long period of time, the managers hired to deliver them have to go or the expectations have to go. And if expectations aren’t met after that, the board goes. Before it gets to anything so drastic, governance has that third obligation, to “verify results.” Good governance sources data independently of management by looking directly at artifacts and constructing analyses on that data. In this way, good governance has early warning as to whether expectations are in jeopardy or not, and can assess management’s performance independently of management’s self-reporting. Governance is not “defining policies” or “helping to ensure” outcomes; governance is actively involved in scrutinizing and steering and has the authority to act on what it has learned.
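As a hypothetical illustration of what “verify results” looks like in practice (the file, column names, plan figure and threshold below are all assumptions, not a prescription): a governance function computes delivery actuals directly from artifacts, say a ticket-system export, rather than consuming management’s self-reported status. A minimal sketch:

    import csv

    # Hypothetical export of delivery artifacts from a ticketing system.
    # Assumed columns: id, committed ("y"/"n"), done_date (ISO date, empty if open).
    PLAN_SCOPE = 120  # items committed for the quarter (assumed expectation)

    done = committed_done = 0
    with open("tickets.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["done_date"]:  # the artifact itself, not a status report
                done += 1
                if row["committed"] == "y":
                    committed_done += 1

    # Compare actuals against the expectation that was set, independently
    # of management's self-reported RAG status.
    pct = committed_done / PLAN_SCOPE
    print(f"Committed scope delivered: {committed_done}/{PLAN_SCOPE} ({pct:.0%})")
    print(f"Total items completed (including uncommitted work): {done}")
    if pct < 0.7:  # arbitrary illustrative threshold
        print("Early warning: delivery is well short of expectations")

The particulars matter far less than the sourcing: the analysis is constructed from artifacts governance obtained on its own, which is what makes it an independent check rather than a restatement of management’s narrative.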

Governance is concerned with two questions: are we getting value for money, and are we receiving what we expected? Multiple APIs that do the same thing, duplicative data sources that don’t reconcile, IT investments that steamroll their business cases: all make a mockery of IT governance. We’ve got more IT “governance” than we’ve ever had, yet all too often it just doesn’t do what it’s supposed to do.

I’m picking up the topic of IT governance again because it does not appear to me that the state of IT governance is materially better than it was two decades ago, and that deserves attention. Soon after I started down this path, I thought it would be helpful to have a governance “maturity model.” No, the world does not need another maturity model, let alone one for an activity that is largely invisible and conspicuous only when it fails or simply isn’t present. It doesn’t help that good governance does not guarantee a better outcome, nor that poor governance does not guarantee a bad outcome. Governance is a little too abstract, difficult to describe in simple and concrete terms, and consequently difficult for people to wrap their heads around. That, in turn, renders any “maturity model” an academic exercise at best.

Still, there is room for something that characterizes all this governance on an IT estate and frames it as an agent for good or ill. That is: in the as-practiced state, is governance of this activity (say, API or appdev) materially reducing or increasing exposure to a bad outcome? That’s a start.

* * *

Dear readers,

I took extended leave from work last year, and decided to also take a break from writing the blog. I’m back.

Also, I want to apologize that in all these years I’ve been unable to get this site to support https. It’s supposed to be a simple toggle in the Google admin panel, but for whatever reason it has never worked, which I suspect has to do with the migration of the blog from Blogger into Google. Despite admittedly tepid efforts on my part, I’ve not found a human at Google who can sort this out. I appreciate your tolerance.