AI as a Historically Significant Capability: Why Boards Should Reassess Organization Design Before They “Implement AI”
- Richard Diamond
- 5 days ago
- 5 min read

Organizations have always been shaped—sometimes quietly, sometimes violently—by the constraints of their era. When capital intensity demanded scale, firms built functional hierarchies. When markets expanded geographically, organizations aligned around regions. When product lines proliferated, divisional business units emerged to restore focus and accountability. When knowledge work and motivation became central, matrix structures and team-based operating models rose to prominence.
These were not stylistic shifts. They were structural adaptations—attempts to optimize efficacy and efficiency in response to a dominant constraint.
Artificial intelligence now belongs in that same lineage. Not as another productivity tool, and not as a digitization wave, but as a transformative capability enabler that materially changes the economics of coordination, expertise, and decision-making. That is why AI deserves something more fundamental than a use-case portfolio or a technology roadmap. It deserves a reassessment of organizational structure itself.
The hidden premise of today’s org charts
Most modern structures—functional departments, business units, matrices—were built around a set of implicit limits:
- Information was expensive to gather and slow to move.
- Expertise was scarce and unevenly distributed.
- Coordination required meetings, managers, and repeated handoffs.
- Oversight demanded human attention.
- Decisions needed escalation because context was fragmented.
Even the most sophisticated organizational models often assumed that leadership and managerial layers were necessary to aggregate information, create alignment, and coordinate work across boundaries.
AI attacks these constraints directly.
It compresses information into decision-ready summaries, distributes expertise broadly through on-demand assistance, automates much of the follow-up and coordination work that consumes managerial capacity, and can surface anomalies and exceptions faster than traditional reporting chains. The result is not merely “more efficiency.” It is a shift in what structure is for.
Structure started as purpose-driven—then became permanent
A key insight often overlooked in organization design is that structures originally formed to serve a purpose—and then hardened into permanence even when that purpose changed.
A functional organization may have emerged to drive economies of scale and deep specialization. A geographic organization may have been essential when proximity to markets was the primary driver of responsiveness. A divisional structure may have become the best way to manage product complexity and enforce accountability.
But once established, structure becomes embedded in careers, budgets, status, and institutional identity. It becomes a fixture. The organization stops asking “what is this grouping for?” and starts asking “who owns this?” Meanwhile, purpose continues to evolve—markets change, competitors shift, the basis of advantage moves. When purpose changes faster than structure, organizations accrue friction: more interfaces, more handoffs, more committees, longer decision cycles.
AI doesn’t create this misalignment. It reveals it—because it changes the cost of the very coordination mechanisms that legacy structures were built to manage.
Why this is different from earlier technology waves
Previous waves—enterprise IT, the internet, digital workflows—certainly improved speed and scale. But they largely preserved the underlying coordination logic: organizations still relied on managerial aggregation and structural boundaries to distribute information and coordinate action.
AI is different because it changes the coordination regime itself. The old tradeoffs—such as functional efficiency versus business-unit responsiveness—are no longer as structurally binding.
Historically, the functional model excelled at standardization and depth but struggled with market agility. The business-unit model excelled at market focus and accountability but duplicated expertise and raised overhead. The matrix attempted to reconcile the two—often by adding coordination cost.
AI reduces the cost of coordination and makes expertise more portable. That reopens a design space that organizations haven’t had access to before: functions can shrink from departments that “do work” into platforms that “own standards,” and market-facing units can become smaller, more fluid, and assembled from shared capabilities rather than permanently staffed fiefdoms.
This is why AI deserves an organizational reassessment. It changes what can be centralized, what can be distributed, what can be automated, and where judgment must remain deeply human.
Span of control is a consequence, not the starting point
A common but superficial conclusion is that AI simply increases a manager’s span of control: “Managers can handle 2–3× more direct reports now.” Sometimes that’s true. Often it’s not.
The deeper reality is that AI changes the nature of management work. It reduces the need for managers to act as routers of information and coordinators of routine tasks. It does not eliminate the need for judgment, conflict mediation, mentorship, sensemaking, ethical oversight, and decision accountability. In an AI-enabled organization, span of control becomes an emergent property of a redesigned system—not a lever pulled in isolation.
More fundamentally: AI increases the number of people who don’t need to be controlled at all. It shifts attention from routine supervision to exception handling and human-critical moments.

What boards should ask: magnitude, not incrementalism
For a board, the right question is not “What pilots are we running?” It is: what magnitude of organizational change might be possible if we fully adopted AI capabilities?
The credible answers are structural in nature:
- Management layers can compress because AI handles information synthesis and coordination.
- Large centralized functions can transform into governance-and-platform models rather than headcount-heavy service organizations.
- Business units can become less permanent and more modular as shared AI-enabled capabilities reduce duplication and speed reconfiguration.
- Decision cycles can collapse from weeks to days, or even hours, because information asymmetry declines and options can be modeled quickly.
None of this is guaranteed. But it is plausible—and it’s the scale of possibility boards must understand to govern responsibly.
Why the next step is not “a plan,” but an impact assessment
This is where many organizations go wrong. They jump from “AI is important” to “we need an AI transformation plan,” and they treat the organization chart as fixed. They produce use-case lists, pick vendors, and stand up centers of excellence. Those may deliver value—but they rarely answer the strategic and structural question: Is our current organization still fit for purpose under AI-enabled conditions?
Before committing to a definitive reorganization plan, what’s needed is a high-level AI Organizational Impact Assessment: one that is specific to the company, not generic, and oriented toward structural implications rather than tool deployment.
This is not a technology assessment. It is not a workforce reduction exercise. It is not a glossy innovation program. It is a disciplined effort to identify where AI alters the company’s coordination economics and therefore its structural logic.
What a high-level AI Organizational Impact Assessment should do
A credible assessment should answer four board-level questions:
1. Where does AI materially change coordination economics? Identify where information aggregation, monitoring, advisory work, and cross-functional handoffs are currently the binding constraints, and where AI could collapse them.
2. Where is the organization’s structure misaligned with its current purpose? Revisit the purpose of major groupings. If their original reason for existence has changed, name it plainly.
3. What magnitude of structural change is plausible (in ranges, not promises)? Boards need a credible “possibility space,” from conservative to aggressive scenarios, without pretending to forecast exact outcomes.
4. What should not change (yet), and why? Regulatory accountability, safety-critical operations, and trust-dependent roles require stability. Stating boundaries increases credibility and reduces fear.
The output should be concise and decision-oriented: a heat map of impact, a short list of structural pressure points, scenario bands for magnitude, and a set of design principles and decision options.
Why this sequence matters
AI makes redesign technically feasible. But organizations are not machines—they’re human systems embedded in law, incentives, careers, identity, and politics. Moving too fast creates churn and distrust. Moving too slowly allows legacy structures to become a strategic liability.
An impact assessment is the right bridge. It enables the organization and its board to see the shape of change before committing to it, to distinguish inevitable shifts from optional ones, and to sequence action without destabilizing the enterprise.
The takeaway
Organizations have repeatedly reinvented themselves in response to transformative forces—economies of scale, geographic expansion, specialization, and the demands of knowledge work. AI belongs in that category because it alters the economics of coordination, expertise, and decision-making.
That is why AI deserves an organizational reassessment. And that is why the responsible next move, especially from a board perspective, is not to rush into a definitive restructuring plan, but to conduct a high-level, company-specific AI Organizational Impact Assessment to understand the magnitude and locus of change before acting.
The organizations that treat AI as a historically significant capability shift will redesign themselves deliberately. The ones that treat it as an add-on will eventually be redesigned by the consequences.