Most conversations about artificial intelligence collapse into tooling debates, prompt tactics, or surface-level productivity gains. But what is actually unfolding inside companies is less about software and more about structural stress. AI is not arriving as a clean layer of efficiency — it is exposing where organizations are already misaligned, overloaded, or operating on assumptions that no longer hold. In that sense, the real story is not what AI can do, but what it reveals.
The discussion with Jill Delgado sharpens that distinction by reframing adoption as a human systems problem first, with technology as the secondary variable. When you look at it this way, the current wave of stalled implementations, abandoned pilots, and underutilized licenses stops being surprising and starts looking inevitable.
The Acquisition Model Is the Wrong Model
The prevailing approach most companies are following is deceptively simple: acquire tools, distribute access, and expect usage to follow. It is the same pattern used for SaaS over the last two decades, and it breaks almost immediately in the context of AI. The assumption is that if capability exists, behavior will adjust to meet it. In reality, behavior resists anything that introduces ambiguity, risk, or additional cognitive load — especially inside environments where employees are already operating beyond sustainable capacity.
Delgado's framing of teams running at 125% capacity is not exaggerated. It reflects a broader condition where incremental demands are layered continuously without corresponding reductions elsewhere. When AI is introduced into that environment without subtracting existing responsibilities, it does not feel like leverage. It feels like pressure.
That pressure manifests in subtle ways. Employees do not openly reject the tools. They nod, attend initial sessions, and then quietly deprioritize usage in favor of tasks that are directly tied to performance evaluation. This creates the illusion of adoption without the reality of integration. Metrics show access and initial engagement, but not sustained behavioral change. Leadership interprets this as a tooling issue or a training gap, when in fact it is a prioritization problem rooted in how work is structured. If AI usage is not directly connected to how success is measured, it will always lose to the existing system of incentives.
The Trust Fracture That Messaging Cannot Fix
The second layer of friction is trust, and this is where most strategies begin to fail in ways that are difficult to reverse. Over the past two years, a consistent pattern has emerged across industries: organizations announce investments in AI alongside workforce reductions. Whether the reductions are directly caused by AI or simply framed that way is almost irrelevant. The signal received by employees is clear — AI is associated with job loss.
Once that association is formed, every subsequent attempt to position AI as a supportive tool is filtered through that lens. Delgado points out that this creates a multi-level trust breakdown: individuals question their own long-term viability, teams question whether collaboration is still safe, and the organization's narrative loses coherence. Trust, once fractured at that scale, cannot be rebuilt through messaging alone. It requires consistent, observable alignment between what is said and what is done — over time, not over a single all-hands meeting.
What complicates this further is that many organizations are already discovering that their initial assumptions about AI-driven cost reduction were incomplete. There is a growing pattern of companies reducing headcount under the expectation that AI can absorb entire roles, only to encounter operational gaps that force them to rehire. The underlying issue is a misunderstanding of how work is actually composed.
Roles are bundles of tasks, and while AI can effectively handle a portion of those tasks, it does not inherently replace the full scope of responsibility. A junior analyst, for example, may spend a significant share of their time on data preparation or routine analysis — activities that AI can accelerate or partially automate — but the remaining work involves interpretation, context building, and iterative learning. Those elements are not just residual. They are developmental. They are what enable that individual to evolve into a more senior role over time.
The Succession Risk Nobody Is Talking About
When organizations remove entry-level functions entirely, they are not just reducing cost — they are interrupting the mechanism through which future capability is built. Delgado frames this as a succession risk rather than an efficiency gain, and it is a critical distinction. Without a clear pathway for skill development in an AI-augmented environment, companies risk creating a structural gap where mid-level and senior talent becomes increasingly scarce.
This is not a theoretical concern. It is already beginning to surface in environments where rapid automation has outpaced workforce planning. The organizations that move fastest to eliminate junior roles may find themselves, within three to five years, without the internal pipeline required to fill the roles that AI cannot perform — the roles that require judgment, institutional knowledge, and the kind of contextual reasoning that only develops through years of accumulated experience.
The more effective approach, and the one Delgado advocates, starts at a much lower level of abstraction. Instead of beginning with tools or platforms, it begins with work itself. The process involves breaking down roles into discrete tasks and identifying where friction exists within those tasks. This is not a high-level exercise. It requires understanding the actual sequence of actions people take throughout their day, from the moment they open their inbox to the way they process, interpret, and respond to information. Only after that mapping is complete does AI enter the conversation, positioned as a targeted solution to specific points of friction rather than a generalized upgrade.
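To make that mapping concrete, here is a minimal sketch of what a task-level friction inventory could look like in code. It is illustrative, not prescriptive: the field names, the 1-to-5 friction scale, and the friction-times-hours scoring rule are assumptions made for the example, not a model Delgado specifies. What it demonstrates is the ordering logic: AI enters where friction-weighted time cost is highest, and developmental work is deliberately kept out of scope.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float  # time the task actually consumes
    friction: int          # 1 (smooth) to 5 (painful), gathered from interviews
    ai_assistable: bool    # is AI a plausible fit for this task today?

def rank_targets(tasks: list[Task]) -> list[Task]:
    """Order AI-assistable tasks by friction-weighted time cost.
    The top of the list is where a targeted pilot starts; everything
    else, especially developmental work, stays human-led."""
    candidates = [t for t in tasks if t.ai_assistable]
    return sorted(candidates, key=lambda t: t.friction * t.hours_per_week, reverse=True)

# Hypothetical inventory for the junior-analyst role described earlier.
analyst_tasks = [
    Task("data preparation", 10.0, 4, True),
    Task("routine reporting", 6.0, 3, True),
    Task("stakeholder interpretation", 8.0, 2, False),  # developmental: keep human
    Task("iterative review and learning", 5.0, 1, False),
]

for task in rank_targets(analyst_tasks):
    print(f"{task.name}: priority score {task.friction * task.hours_per_week:.0f}")
```

Even this toy version makes the key property visible: the tool conversation cannot start until the inventory exists, because the ranking is a function of observed work, not of product features.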
Why Targeted Beats Generalized Every Time
This shift in approach changes the adoption dynamic in a fundamental way. When AI is introduced as a solution to a known problem — something that already consumes time or creates frustration — it is more readily accepted. The value is immediate and tangible, and the user does not need to be convinced of its relevance. In contrast, when AI is introduced as a broad capability without clear alignment to daily work, it remains abstract and optional. Optional tools do not survive in environments where time and attention are constrained.
The failure of large-scale deployments of generalized AI tools can often be traced back to this misalignment. Organizations invest heavily in enterprise licenses, expecting widespread usage, but without a clear connection between tool functionality and individual workflows, engagement plateaus. Even when companies provide extensive libraries of use cases or training materials, the majority of employees do not engage with them in a meaningful way. The issue is not access to information. It is the lack of translation between abstract capability and concrete application. People do not operate in terms of "use cases." They operate in terms of immediate problems that need to be solved within the constraints of their role.
Another layer of complexity emerges when considering data quality and confidence. AI systems are highly sensitive to the structure and integrity of the data they interact with. In environments where data is fragmented, inconsistent, or poorly governed, the outputs of AI systems become unreliable. Delgado emphasizes that even high levels of nominal accuracy — 96%, for example — may not be sufficient in contexts where precision is critical. Small errors can compound, particularly when outputs are used as inputs for further decisions. This creates a situation where users either over-rely on AI without sufficient validation or reject it entirely due to perceived inconsistency. Both responses undermine the potential value of the technology.
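The compounding point is worth making explicit with back-of-envelope arithmetic. Under the simplifying assumption that each step of a chained workflow consumes the previous step's output and that errors are independent (real pipelines can correlate errors and fare worse), end-to-end reliability is roughly the per-step accuracy raised to the number of steps:

```python
# End-to-end reliability of a chained workflow, assuming each step is
# 96% accurate and errors are independent across steps.
per_step_accuracy = 0.96

for steps in (1, 3, 5, 10):
    end_to_end = per_step_accuracy ** steps
    print(f"{steps:>2} chained steps -> ~{end_to_end:.1%} end-to-end")

# Output: 96.0%, ~88.5%, ~81.5%, ~66.5%. A component that is "96%
# accurate" in isolation delivers a fully correct result only about
# two-thirds of the time once it sits inside a ten-step pipeline.
```

This is why nominal accuracy is the wrong unit of trust: the boundary has to be drawn at the workflow level, not the model level.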
The more sustainable path involves building a clear understanding of where AI can be trusted and where human validation remains necessary. This is not a static boundary. It evolves as both the technology and the organization mature. However, it requires deliberate effort to define and communicate those boundaries. Without that clarity, AI becomes either a black box that is blindly trusted or a tool that is dismissed as unreliable — neither of which supports effective integration.
The Shadow Adoption Problem
At the individual level, a different dynamic is unfolding. High-performing operators are already integrating AI into their workflows in ways that significantly increase output and efficiency. Reports of individuals achieving two to three times their previous productivity are becoming more common, particularly in roles that involve information processing, analysis, and content generation. However, this adoption is often occurring outside formal organizational structures, using tools and methods that are not officially sanctioned.
The result is a visibility gap: organizations do not know how work is actually being done, even as they invest in internal solutions that see limited use. The divergence between individual and organizational adoption introduces both opportunity and risk. On one hand, it demonstrates the potential impact of AI when aligned with real work. On the other, it raises questions about governance, security, and consistency. Bridging this gap requires more than standardizing tools. It requires understanding why individuals are choosing certain tools and how those tools are being used to solve specific problems. Without that understanding, attempts to centralize or control usage are likely to fail.
There is also a longer-term consideration that sits beneath these operational challenges. As AI becomes more integrated into daily work, the nature of skill development begins to shift. Delgado references concerns about declining cognitive performance in newer generations entering the workforce, suggesting that reliance on AI without a strong foundational understanding may weaken critical thinking over time. Whether or not this trend continues, it highlights the importance of maintaining a balance between leveraging AI for efficiency and preserving the underlying skills required to evaluate and guide its outputs.
Cultural Buoyancy Over Resilience
This is where concepts like growth mindset, resilience, and what Delgado describes as "cultural buoyancy" become relevant. Traditional models of resilience focus on recovery after disruption, but in an environment where change is continuous, the more useful capability is the ability to remain stable while adapting incrementally. Cultural buoyancy reflects this shift — it is the capacity of individuals and organizations to stay operationally and psychologically afloat amid constant change, rather than cycling through periods of breakdown and recovery.
The distinction matters because the standard resilience framework assumes disruption is episodic. You absorb the shock, you recover, you stabilize. But AI-driven change is not episodic. It is continuous and compounding. The organizations that treat each new capability release as a discrete disruption to manage will exhaust themselves. The ones that build cultural buoyancy — where adaptation is normalized, where experimentation is expected, where uncertainty does not trigger paralysis — will absorb each wave without losing operational footing.
What becomes clear through this lens is that AI is not a discrete initiative that can be implemented and completed. It is an ongoing condition that interacts with every aspect of how work is structured and performed. The organizations that will benefit most are not those that move fastest in terms of tool adoption, but those that align technology with behavior, incentives, and long-term capability development. This requires a level of coordination that many organizations are not currently designed to achieve, which is why so many efforts stall before delivering meaningful results.
The BackTier Perspective: Operational Redesign, Not Feature Layering
From a BackTier perspective, the implication is straightforward. AI should not be treated as a feature layer or a tactical upgrade. It should be approached as an operational redesign problem, where the goal is not simply to increase output, but to create systems that can sustain higher levels of performance without increasing fragility. That means focusing on how work is decomposed, how decisions are made, how trust is established, and how capability is developed over time. Technology is a critical component of that system, but it is not the system itself.
The current moment is less about who has access to the best models and more about who can integrate those models into environments that are capable of absorbing and amplifying their value. That distinction will determine which organizations translate AI into durable advantage and which continue to cycle through pilots, initiatives, and incremental gains that never fully compound.
The organizations winning with AI right now are not the ones with the largest budgets or the most aggressive deployment timelines. They are the ones that did the slower, less visible work first — mapping their workflows, identifying their friction points, rebuilding their incentive structures, and earning back the trust of the people who have to use these systems every day. That work is not glamorous. It does not generate press releases. But it is the only work that actually produces the results that AI is supposed to deliver.
If your organization is sitting on a stack of underutilized AI licenses, the answer is not a better tool. The answer is a clearer understanding of the work those tools are supposed to support — and an honest assessment of whether the environment you have built is capable of absorbing the change you are asking it to make.
