Monday, July 31, 2023

Resistance

Organizational change, whether digital transformation or simple process improvement, spawns resistance; this is a natural human reaction. Middle managers are the agents of change, the people through whom change is operationalized. The larger the organization, the larger the ranks of middle management. It has become commonplace among management consultants to target middle management as the cradle of resistance to change. The popular term is “the frozen middle”.

There is no single definition of the frozen middle; in fact, the definitions on offer vary quite a lot. Depending on the source, the frozen middle is:

  • an entrenched bureaucracy of post-technical people with no marketable skills who only engage in bossing, negotiating, and manipulating organizational politics - change is impossible with the middle managers in situ today
  • an incentives and / or skills deficiency among middle managers - middle managers can be effective change agents, but their management techniques are out of date and their compensation and performance targets are out of alignment with transformation goals
  • a corporate culture problem - it’s safer for middle managers to do nothing than to take risks, so working groups of middle managers respond to change with “why this can’t be done” rather than “how we can do this”
  • not a middle management problem at all, but a leadership problem: poor communication, unrealistic timelines, thin plans - any resistance to change is a direct result of executive action, not middle management

The frozen middle is one of these, or several of these, or, just to cover all the bases, a little bit of each. Of course, in any given enterprise they’re all true to one extent or another.

Plenty of people have spent plenty of photons on this subject, specifically articulating various techniques for (how clever) “thawing” the frozen middle. Suggestions like “upskilling”, “empowerment”, “champion influencers of change”, “communicate constantly”, and “align incentives” are all great, if more than a little bit naive. Their collective shortcoming is that they deal with the frozen middle as a problem of the mechanics of change. They ignore the organizational dynamics that create resistance to change among middle management in the first place.

Sometimes resistance is a top-down social phenomenon. Consider what happens when an executive management team is grafted onto an organization. That transplanted executive team has an agenda to change, to modernize, to shake up a sleepy business and make it into an industry leader. It isn’t difficult to see that this creates tensions between newcomers and long-timers, who see one another as interlopers and underperformers. Nor is it difficult to see how this quickly spirals out of control: executive management that is out of touch with ground truths; middle management that fights the wrong battles. No amount of “upskilling” and “communication” with a side order of “empowerment” is going to fix a dystopian social dynamic like this.

What is interesting is that the management consultant’s advice is to align middle management’s performance metrics and compensation with achievement of the to-be state goals. What the consultants never draw attention to is executive management receiving outsized compensation for as-is state performance; compensation isn’t deferred until the to-be state goals are demonstrably realized. Plenty of management consultants admonish executives for not “leading by example”; I’ve yet to read any member of the chattering classes admonish executives to be “compensated by example”.

There are also bottom-up organizational dynamics at work. “Change fatigue” - apathy resulting from a constant barrage of corporate change initiatives - is treated as a problem created by management that management can solve through listening, engagement, patience and adjustments to plans. “Change skepticism” - doubts expressed by the rank-and-file - is treated as an attitude problem among the rank-and-file that is best dealt with by management through co-opting or crowding out the space for it. That is unfortunate, because it ignores the fact that change skepticism is a practical response: the long-timers have seen the change programs come and seen the change programs go. The latest change program is just another one that, if history is any guide, isn’t going to be any different from the last. Or the dozen that came and went before the last.

The problematic bottom-up dynamic to be concerned with isn’t skepticism, but passivity. The leader stands in front of a town hall and announces a program of change. Perhaps 25% will say, this is the best thing we’ve ever done. Perhaps another 25% will say, this is the worst thing we’ve ever done. The rest - 50% plus - will ask, “how can I not do this and still get paid?” The skeptic takes the time and trouble to voice their doubts; management can meet them somewhere specific. It is the passengers - the ones who don’t speak up - who represent the threat to change. The management consultants don’t have a lot to say on this subject either, perhaps because there is no clever platitude to cure the apathy that forms what amounts to a frozen foundation.

Is middle management a source of friction in organizational change? Yes, of course it can be. But before addressing that friction as a mechanical problem, think first about the social dynamics that create it. Start with those.

Friday, June 30, 2023

How Agile Management Self Destructs

I’ve been writing about Agile Management for over 15 years. Along the way, I’ve written (as have many others) about how to get Agile practices into a business, how to scale them, how to overcome obstacles to them, and so forth. I’ve also written about how Agile gets co-opted, and a few months ago I wrote about how Agile erodes through workforce attrition and lowered expectations. I’ve never written about how Agile management can self-destruct.

The first thing to go is results-based requirements. Stories are at the very core of Agile management because they are units of result, not effort. When we manage in units of result, we align investment with delivery: we can mark the underlying software asset to market, and we can make objective risk assessments and quantify not only mitigation but the value of mitigation. Agile management traffics in objective facts, not subjective opinion.
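By way of a rough illustration of the arithmetic that units of result make possible - a minimal sketch, in which the Story values, completion states, probabilities, and helper functions are all hypothetical assumptions of mine rather than anything prescribed by Agile management - consider:

```python
# Hypothetical sketch: when requirements are Stories (units of result), each
# completed Story carries a share of projected business value, so the
# in-flight software asset can be marked to market and a risk mitigation can
# be weighed against the value it protects. All figures are made up.

def mark_to_market(stories):
    """Sum the projected value of the Stories that are actually done."""
    return sum(s["value"] for s in stories if s["done"])

def value_of_mitigation(value_at_risk, p_loss_before, p_loss_after, cost):
    """Expected value protected by a mitigation, net of the mitigation's cost."""
    return value_at_risk * (p_loss_before - p_loss_after) - cost

stories = [
    {"name": "customer can pay by card",   "value": 120_000, "done": True},
    {"name": "customer can view invoices", "value": 80_000,  "done": False},
    {"name": "ops can reconcile payments", "value": 50_000,  "done": True},
]

asset_value = mark_to_market(stories)                                    # 170000
mitigation_value = value_of_mitigation(asset_value, 0.30, 0.05, 10_000)  # 32500.0
print(asset_value, mitigation_value)
```

Manage in units of effort - hours burned, tickets closed - and neither of those numbers can be computed at all.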

The discipline to capture requirements as Stories fades for all kinds of reasons. OKRs become soft measures. “So that” justifications become tautologies. Labor specialization means that no developer pair, let alone a single person, can complete a Story. Team boundaries become so narrow that teams end up solving technical rather than business problems. And, you know what, it takes discipline to write requirements in this way.

Whatever the reason, when requirements no longer have fidelity to an outcome, management is back to measuring effort rather than results. And effort is a lousy proxy for results.

The next thing to go is engineering excellence. Agile management implicitly assumes excellence in engineering: in encapsulation, abstraction, simplicity, build, automated testing, and so forth. Once managers stop showing an active interest in engineering discipline, the symbiotic relationship between development and management is severed.

The erosion of engineering discipline is a function - directly or indirectly - of a lapse of management discipline. Whereas a highly-disciplined team decides where code should reside, an undisciplined team negotiates who has to do what work - or more accurately, which team doesn’t have to do what work. This is how architectures get compromised, code ends up in the wrong place, and abstraction layers create more complexity than simplicity.

The loss of engineering excellence is traumatic to management effectiveness. How something is built is a good indicator of the outcomes we can expect. Is the software brittle in production? Expensive to maintain? Does it take forever to get features released? Management has to reinforce expectations, verify that things are being built the way it expects them to be built, and make changes if they are not. When excellence in engineering is gone, management is no longer able to direct delivery; it is instead at the mercy of engineering.

The third thing to go is active, collaborative management. I’ve previously described what Agile management is and is not, so I’ll not repeat it here. The short version is, Agile management is a very active practice of advancing the understanding of the problem (and solution), protecting the team’s execution, and adjusting the team as a social system for maximum effectiveness. Now, management can check out and just keep score even when there is engineering excellence and results-based requirements. But suffice it to say, when management is saddled with crap requirements and has become a vassal to engineering, it is reduced to the role of passenger. There is no adaptability, no responding to change, beyond adding more tasks to the list as the team surfaces them. Management is reduced to administration.

Agile requires discipline. It also requires tenacity. If management is going to lead, it has to set the expectations for how requirements are defined and how software is created and accept nothing less.

Wednesday, May 31, 2023

Give us autonomy - but first you've gotta tell us what to do

The ultimate state for any team is self-determination: they lead their own discovery of work, self-prioritize that work, self-organize their roles and self-direct the delivery.

Self-determination requires meta awareness. The team knows the problem space - the motivations of different actors (buyers, users, influencers), the guiding policies (regulatory and commercial preference), the tech in place, and so forth. As a result, they are not whipsawed by external forces. They do not operate at the whim of customers because they evolve their products across the body of need and opportunity. They do not operate at the mercy of regulation because they know the applicable regulation and how it applies to them. They know their integrated technology’s features and foibles and know what to code with and code around. They know where the bodies are buried in their own code. And the members of the team might not be friends or even friendly to one another, but they know that no member of the team will let them down.

In the early 1990s, a Chief Technology Officer I knew once replied to a member of his customer community during the Q&A at their annual tech meetup thusly: “there is nothing you can suggest that we’ve not already thought of.” Arrogance incarnate. But he and his entire team knew the product they were trying to build and the product they were not trying to build. They were comfortable with the tech trends they were latching onto and those they were not. They had not only customer intimacy but intimacy with prospective customers. They sold what they made; they did not make what they sold. Best of all, they’re still in business today, over 30 years later. They achieved a state of sustainable economic autonomy.

Freedom is most often associated with financial independence. There is a certain amount of truth to this. Financial independence means you can reach as far as “esteem” in Maslow’s hierarchy without doing much more than lavish spending. Unfortunately, money only buys autonomy for as long as the money lasts. History is littered with case studies where it did not. Sustained economic performance - through business cycles and tech cycles - yields the cash flow that makes self-determination possible. That requires evolution and adaptability, and those are functions of meta awareness.

Which brings us to the software development team that insists on autonomy. The team wants the freedom to tell their paymasters how a problem or opportunity space is defined, what to prioritize, how to staff, how much funding they need, and when to expect solution delivery to begin. That’s a great way to work. And, for decades now, management consultants have advised devolving authority to the people closest to the need or opportunity. But that proximity is only as valuable as the team’s comprehension of the problem space, familiarity with the domain, experience with similar engineering challenges, and the ability to think abstractly and concretely. When a team is lacking in these things, devolving authority will simply yield a long and expensive path of discovery while the team acquires this knowledge. The less a priori knowledge the team has, the less structured and more haphazard the learning journey; so when the problem space is complex, this becomes a very long and very expensive discovery path indeed - and one that sometimes never actually succeeds.

Autonomy increasingly became the norm in tech as a result of the shortage of capable tech people created by the combination of cheap capital and COVID fueling tech investments. Under those conditions, a long learning journey was the price of admission; meta awareness was no longer a requirement for autonomy. With capital a lot more expensive today, tech spending has cooled and returns on tech investments are under much tighter scrutiny, and the longer the learning journey, the less viable the tech investment. This is exposing friction between how tech expects to operate and how business buyers expect it to operate. Business buyers financing tech investments want tighter business cases that define returns and provide controls for capital spend. Tech employees want the space to figure out the domain, increasing expense spend and lengthening the time (and therefore cost) to deliver. With capital now having the upper hand, it is not uncommon for tech to simultaneously demand autonomy and demand to be told exactly what to do.

While autonomy is the ultimate state of evolution for any team, the prerequisites to achieve it are extraordinarily high. It doesn’t require omniscience, but it does require sufficient fluency with the tech and the domain to know the questions to ask, to make appropriate assumptions, to anticipate the likely risks, and to know the sensible defaults to make in design. Devolved authority is a fantastic way to work, but autonomy must be earned, never granted.

Sunday, April 30, 2023

Measured Response

Eighteen months ago, I wrote that there is a good case to be made that the tech cycle is more economically significant than the credit cycle. By way of example, customer-facing tech and corporate collaboration technology contributed far more to robust S&P 500 earnings during the pandemic than the Fed’s bond buying and money supply expansion. Having access to capital is great; it doesn’t do a bit of good unless it can be productively channeled.

Twelve months ago, I wrote a piece titled The Credit Cycle Strikes Back. This time last year, rising interest rates and inflation reminiscent of the 1970s cast a pall over the tech sector, most obviously with tech firms laying off tens of thousands. Arguably, it cast a pall over the tech cycle in its entirety, from households forced to consolidate their streaming service subscriptions to employers increasingly requiring their workforce to return to office. Winter had come to tech, courtesy of the credit cycle.

Silicon Valley Bank collapsed last month. The balance sheet, risk management, and regulatory reasons for its collapse are well documented. The Fed responded to SVB’s collapse by providing unprecedented liquidity in the form of 100% guarantees on money deposited at SVB. The headline rationales for unlimited deposit insurance - economic policy, political exigence - are also well documented elsewhere. Still, it is an economic event worth looking into.

An interesting aspect of the collapse of SVB is the role that social media played in the run on the bank. A recent paper presents prima facie evidence that the run on SVB was exacerbated by Twitter users. In a pre-social media era, SVB’s capital call to plug a risk management lapse might very well have been a business-as-usual event; that is, at least, what it appears SVB’s investment banking advisors anticipated. Instead, that capital call was a spark that ignited catastrophic capital flight.

If the link between Tweets and capital flight from SVB is real, the Fed’s decision looks less like a backstop for bank failures caused by poor risk management decisions, and more like a pledge to contain the impact of a technology cycle phenomenon on the financial system. As the WSJ put it this week, “… Twitter’s role in the saga of Silicon Valley Bank reiterated that the dynamics of financial contagion have been forever changed by social media.” Most banks had paid attention to the fact that Treasurys had declined in value and took appropriate hedge positions to protect their core business of maturity transformation. Based on fundamentals, it wasn’t immediately obvious there was a systemic crisis at hand. Yet the rapidity with which SVB collapsed was unprecedented. The Fed’s response to that rapidity was equivalent to Mario Draghi’s “whatever it takes” moment.

Social media-fueled events aren’t new in the financial system; meme stock inflation, by way of example. And assuming SVB’s collapse truly was a social media phenomenon, the threat was still at human scale: even if those messengers had a more powerful megaphone than the newspaper reporter of yore observing a queue of people outside a bank branch, it was a message propagated, consumed and acted upon by humans. Thing is, the next (or more accurately, the next after the next) threat will be AI-driven, the modern equivalent of the program trading that contributed to Black Monday in 1987. Imagine a deepfake providing the spark that triggers adjustments by like-minded algorithms spanning every asset class imaginable.

As tech has become an increasingly potent economic force, it has come to represent a bigger and bigger challenge to the financial system. To wit: eventually there will be a machine-scale threat to the financial system, and human regulators don’t have machine scale. As the saying goes, regulation exists to protect us from the last crisis - as in, regulations are codified well after the fact; the scale mismatch we’re likely to face implies a low tolerance for delay. The last line of defense is kill switches, and given the tightly coupled, interconnected, and digital nature of the modern financial system, orchestrating kill switches presents a machine-scale problem itself. The Fed, the Department of the Treasury, the OCC, the FDIC, the European Central Bank, and all the rest need new tools.

Let’s hope they don't build HAL.

Friday, March 31, 2023

Competency Lost

The captive corporate IT department was a relatively early adopter of Agile management practices, largely out of desperation. Years of expensive overshoots, canceled projects, and poor quality solutions gave IT not just a bad reputation, but a confrontational relationship with its host business. The bet on Agile was successful and, within a few years, the IT organization had transformed itself into a strong, reliable partner: transparency into spend, visibility into delivery, high-quality software, value for money.

Somewhere along the way, the “products not projects” mantra took root and, seeing this as a logical evolution, the captive IT function decided to transform itself again. The applications on the tech estate were redefined as products and assigned delivery teams responsible for them, with Product Owners in the pivotal position of defining requirements and setting priorities. Product Owners were recruited from the ranks of the existing Business Analysts and Project Managers. Less senior BAs became Product Managers, while those Project Managers who did not become part of the Product organization were either staffed outside of IT or coached out of the company. The Program Management Office was disbanded in favor of a Product Portfolio Management Office with a Chief Product Officer (reporting to the CIO) recruited from the business. Iterations were abandoned in favor of Kanban and continuous deployment. Delivery management was devolved, with teams given the freedom to choose their own product and requirements management practices and tools. With capital cheap and cashflows strong, there was little pressure for cost containment across the business, although there was a large appetite for experimentation and exploration.

As job titles with "Product" in them became increasingly popular, people with work experience in the role became attractive hires - and deep-pocketed companies were willing to pay up for that experience. The first wave of Product Owners and Managers were lured away within a couple of years. Their replacements weren't quite as capable: what they possessed in knowledge of the mechanical process of product management, they lacked in the fundamentals of Agile requirements definition. These new recruits also had an aversion to getting deeply intimate with the domain, preferring to work on "product strategy" rather than the details of product requirements. In practice, product teams were "long lived" in structure only, not in the institutional memory and capability that matter most.

It wasn't just the product team that suffered from depletion.

During the project management years of iterative delivery, something was delivered every two weeks by every team. In the product era, the assertion that "we deploy any time and all the time" masked the fact that little of substance ever got deployed. The logs indicated software was getting pushed, but more features remained toggled off than on. Products evolved, but only slowly.

Engineering discipline also waned. In the project management era, technical and functional quality were reported alongside burn-up charts. In the product regime, these all but disappeared. The assumption was that the quality problems had been solved with Agile development practices, that quality was an internal concern of the team, and that it was primarily the responsibility of developers.

The hard-learned software delivery management practices simply evaporated. Backlog management, burn-up charts, financial (software investment) analysis and Agile governance practices had all been abandoned. Again, with money not being a limiting factor, research and learning were prioritized over financial returns.

There were other changes taking place. The host business had settled into a comfortable, slow-growth phase: provided it threw off enough cash flow to mollify investors, the executive team was under no real pressure. IT had gone from justifying every dollar of spend based on returns to being a provider of development capacity at an annual rate of spend. The definition of IT success had become self-referential: the number and frequency of product deployments and features developed, with occasional verbatim anecdotes that highlighted positive customer experiences. IT's self-directed OKRs were indicators of activity - increased engagement, less customer friction - but not rooted in business outcomes or business results.

The day came when an ambitious new President / COO won board approval to rationalize the family of legacy products into a single platform to fuel growth and squeeze out inefficiency. The board signed up provided they stayed within a capital budget, could be in market in less than 18 months, and could fully retire legacy products within 24 months, with bonuses indexed to every month they were early.

About a year in, it became clear delivery was well short of where it needed to be. Assurances that everything was on track were not backed up by facts. Lightweight analysis led to analysis work being borne by developers; lax engineering standards resulted in a codebase that required frequent, near-complete refactoring to respond to change; inconsistency in requirements management meant there was no way to measure progress, or change in scope, or total spend versus results; self-defined measures of success meant teams narrowed the definition of "complete", prioritizing the M at the expense of the V - the minimum at the expense of the viable - to meet a delivery date.

* * *

The sharp rise in interest rates has made capital scarce again. Capital-intensive activities like IT are under increased scrutiny. There is less appetite for IT engaging in research and discovery and a much greater emphasis on spend efficiency, delivery consistency, operating transparency and economic outcomes.

The tech organization that was once purpose-built for these operating conditions may or may not be prepared to respond to these challenges again. The Agile practices geared for discovery and experimentation are not necessarily the Agile practices geared for consistency and financial management. Pursuing proficiency in new practices may also have come at the cost of proficiency in those previously mastered. Engineering excellence evaporates when it is deemed the exclusive purview of developers. Quality lapses when it is taken for granted. Delivery management skills disappear when tech's feet aren't held to the fire of cost, time and, above all, value. Domain knowledge disappears when it walks out the door; rebuilding it is next to impossible when requirements analysis skills are deprioritized or outright devalued.

The financial crisis of 2008 exposed a lot of companies as structurally misaligned for the new economic reality. As companies restructured in the wake of recession, so did their IT departments. Costly capital has tech in recession today. The longer this condition prevails, the more tech captives and tech companies will need to restructure to align to this new reality.

As most tech organizations have been down this path in recent memory, restructuring should be less of a challenge this time. In 2008, the tech playbook for the new reality was emerging and incomplete. The tech organization not only had to master unfamiliar fundamentals like continuous build, unit testing, cloud infrastructure and requirements expressed as Stories, but also improvise to fill in the gaps the fundamentals of the time didn't cover, things like vendor management and large program management. Fifteen years on, tech finds itself in similar circumstances. Mastering the playbook this time round is regaining competency lost.

Tuesday, February 28, 2023

Shadow Work

Last month, Rana Foroohar argued in the FT that worker productivity is declining in no small part because of shadow work. Shadow work is unpaid work done in an economy. Historically, this referred to things like parenting and cleaning the house. The definition has expanded in recent years to include tasks that used to be done by other people that most of us now do for ourselves, largely through self-service technology, like banking and travel booking. There are no objective measures of how much shadow work there is in an economy, but the allegation in the FT article is that it is on the rise, largely because of all the fixing and correcting that the individual now must do on their own behalf.

There is a lot of truth to this. Some of the incremental shadow work is trivial, such as having to update profile information when an employer changes travel app provider. Some is tedious, such as when people must patiently work through the unhelpful layers of primitive chat bots to finally reach a knowledge worker to speak to. Some is time consuming, such as when caught in an irrops (irregular operations) travel situation and needing to rebook travel. And some is truly absurd, such as spending months navigating insurance companies and health care providers to get a medical claim paid. Although customer self-service flatters a service provider’s income statement, it wreaks havoc on the customer’s productivity and personal time.

But it is unfair to say that automated customer service has been a boon to business and a burden to the customer. Banking was more laborious and inconvenient for the customer when it could only be performed at a branch on the bank’s time. And it could take several rounds - and days - to get every last detail of one’s travel itinerary right when booking a business trip through a travel agent. Self-service has made things not just better, but far less labor intensive for the ultimate customer.

It is more accurate to say that any increase in shadow work borne by the customer is less a phenomenon of the shift to customer self-service than an exposure of provider shortcomings that a large staff of knowledgeable customer service agents was able to gloss over.

First, a lot of companies force their customers to do business with them in the way the company operates, not in the way the customer prefers to do business. A retailer that requires its customers to place an order with a specific location rather than algorithmically routing the order for optimal fulfillment to the customer - e.g., for best availability, shortest time to arrival, lowest cost of transportation - forces the customer to navigate the company’s complexity in order to do business. Companies do this kind of thing all the time because they simply can’t imagine any other way of working.

Second, edge cases defy automation. Businesses with exposure to a lot of edge cases or an intolerance to them will shift the burden to customers when edge cases arise. The travel industry is highly vulnerable to weather and suffers greatly during extreme weather events. Airline apps have come a long way since they made their debut 15 years ago, but when weather disrupts air travel, the queues at customer service desks and phone lines get congested because there is a limit to the solutions that can be offered through an app.

Third, even the simplest of businesses in the most routine of industries frequently manage customer service as a cost to be avoided, if not outright blocked. A call center that is managed to minimize average call time as opposed to time to resolution is incentivized to direct the caller somewhere else or deflect them entirely rather than resolve the customer problem. No amount of self-service technology will compensate for a company ethos that treats the customer as the problem.

There is no doubt that shadow work has increased, but that increase has less to do with the proliferation of customer self-service and more to do with the limitations of its implementation and the provider’s attitude toward their customer.

Perhaps more important is what a company loses when it reduces the customer service it provides through its people: the ability to immediately respond humanely to a customer in need; the aggregate customer empathy that direct contact builds. This makes it far more difficult for a company to nurture its next generation of knowledge workers to troubleshoot and resolve increasingly complex customer service situations.

But of greater concern is that as useful as automation is from a convenience and scale perspective, its proliferation drives home the point that customers are increasingly something to be harvested, not people with whom to establish relationships. Society loses something when services are provided at machine rather than human scale. In this light, the erosion of individual productivity is relatively minor.

Tuesday, January 31, 2023

Relics

I recently came across a box of very old technology tucked away in my basement: PDAs, mobile phones, digital cameras and even a couple of old laptops, all over two decades old. It was an interesting find, if slightly disturbing to think this stuff has moved house a couple of times. Before disposing of something, I try to repurpose it if I can. That's hard to do with electronics once they're orphaned by their manufacturers. Still, electronics recycling wasn't as easy to do twenty years ago, so perhaps it's just as well that I held onto them until long after it was.

In addition to bringing back fond memories, finding this trove got me thinking about how rapidly mobile computing evolved. In the box from the basement were a couple of PDAs, one each by HP and Compaq; phones by Motorola, Nokia (including a 9210 Communicator) and Ericsson; and a digital video camera by Canon. The Compaq brand has all but disappeared; the makers of two of the three phones exited the mobile phone business years ago; the Mini-DV technology of the camcorder was obsolete within a few years of its manufacture.

There were also a couple of laptops in the box, one each made by Compaq and Sony. The interesting thing about the laptops is how little the form factor has changed. My first laptop was a Zenith SuperSport 286. The basic design of the laptop computer hasn't changed much since the late 1980s (although mercifully they weigh less than 17 lbs). The Compaq and Sony laptops in that box from the basement are not all that physically different from the laptops of today: the Sony had a square screen and lots of different ports, whereas a modern laptop has a rectangular screen and a few USB ports.

The laptop, of course, replaced the luggable computer of the late 1970s and early 1980s made by the likes of Osborne and Kaypro and Compaq. The luggable was a statement for the era: what compels a person to haul around disk drives, CPU, keyboard and a small CRT? Maybe it was the free upper-body workout. The laptop was a quantum improvement in mobile computing.

But once that quantum improvement happened, the laptop became decidedly less exciting. As the rate of change of capabilities in the laptop slowed, getting a new laptop became less of an event and more of a pain in the ass. Not to mention that, just like the PDA and phone manufacturers mentioned above, the pioneers and early innovators didn’t survive long enough to reap the full benefits of the space maturing.

And the same phenomenon happened in the PDA/phone/camera space. The quantum leap was when these converged in the original iPhone. Since then, a new phone has become less and less of an event. Yes, just like laptops, they get incrementally better. Fortunately, migration via cloud makes upgrading less of a pain in the ass.

The transition from exciting to ordinary correlates to the utility value of technology in our lives: in personal productivity, entertainment, and increasingly as the primary (if not only) channel for doing things. There are, of course, several transformative technologies in their nascent stages. Somehow, I don’t think any of them are spawning the Zenith Data Systems and Compaqs of tomorrow, making future relics that somebody someday will be slightly amused to find in a box in their basement.