The Optimistic Iconoclast - Issue #9
Theme of the week: Normal AI: the pragmatic counter-narrative.
1. The end of the AI honeymoon: why “normalization” is the true test of leadership
Forget the question "Is the bubble going to burst?" The question now is who survives the operational complexity.
Whenever a technology enters the hype cycle, corporate anxiety turns binary: will it boom or will it bust? With AI, it is no different. The headlines oscillate between the promise of omnipotence and the fear of collapse.
However, the data tells a different story. While the public debate questions the bubble, private capital injected US$33.9 billion into Generative AI in 2024 alone, and corporate use doubled to 71% of organizations. The technology is not going away; it is doing something much more complex: it is infiltrating.
AI has ceased to be a curiosity and become infrastructure. We dare to call this “Normal AI.” But make no mistake: “normal” does not mean easy, secure, or mature. It means the magic is over and the hard work of management has begun.
2. The reality check: investment vs. return
The dominant narrative sold AI as a quantum leap in immediate productivity. The operational reality, however, is a cold shower for the unsuspecting CFO.
Market data indicates that, although adoption is growing, 95% of AI programs fail to deliver tangible return on investment (ROI). We are seeing a brutal disconnect between strategic intent and execution capacity. Consider the rise of “Agentic AI”: while 82% of organizations intend to integrate autonomous agents within the next 1 to 3 years, only 7% have managed to scale these solutions so far.
Why the gulf? Because AI does not arrive as a clean break. It is layered onto legacy systems, inconsistent data, and fragmented processes. Instead of simplifying, “Normal AI” often adds a layer of complexity and recurring cost (OPEX) that must be managed, not merely “deployed.”
3. Normalization is asymmetric (and unfair)
The distribution of AI gains is not democratic; it aggressively favors organizational maturity.
We see a clear bifurcation in the market. On the one hand, “future-built” companies are reskilling more than 50% of their workforce, transforming employees into AI orchestrators. On the other hand, “laggard” companies invest in tools but train less than 20% of the team, expecting the software to solve cultural problems.
This asymmetry creates dangerous illusions. Software development teams may be experiencing speed gains of 35% to 45% with code assistants, creating islands of efficiency. But, without a systemic view, these local gains do not translate into business agility. On the contrary: they generate more code, more “technical debt,” and more complexity to be maintained.
4. The price of normality: new risks on the balance sheet
Treating AI as “normal” requires accepting that it brings systemic risks to the center of the operation. We are not just talking about model hallucinations, but about asset security.
As AI becomes critical infrastructure, the attack surface expands. AI-related incidents increased by 56.4% in 2024, and the average cost of a data breach reached a record US$4.88 million.
Furthermore, there is the silent risk of “Shadow IT” and “AI Slop” (mass-generated low-quality content). When leadership treats AI only as a tool to be bought, and not as a capacity to be governed, it loses traceability. Who is held responsible when an autonomous agent makes a wrong pricing or hiring decision?
5. The leadership mistake: buying tools instead of building capacity
Many executives continue to treat AI with the logic of software acquisition: choose vendor, approve, train, use. This model does not work here.
“Normal AI” requires a change in posture: from technology buyer to organizational architect. The competitive differential is not in access to the model—that is a commodity. The differential is in Process Engineering.
The companies that are winning (that select group with positive ROI) are not just automating old tasks. They are reimagining entire workflows. They understand that AI does not eliminate human judgment; it puts a premium on it. In a world where the cost of producing content and code tends toward zero, the value of curation, validation, and strategy soars.
Conclusion: the pragmatic counter-narrative
The era of grand announcements is over. We have entered the era of maintenance, fine-tuning, and difficult choices.
The relevant question for the Board is no longer “What can this technology do?” but rather “Who are we now that it is here?” Mature organizations accept that AI will bring permanent ambiguity and require active governance. They trade the promise of “magic disruption” for the reality of incremental capacity building.
The normalization of AI is not the end of the line. It’s just the moment when the adults enter the room.
Practical tips
The CEO’s radar: 3 invisible risks
Based on current Organizational Intelligence trends.
The hidden cost of inference: The focus has shifted from training to inference (usage). With complex reasoning models (“System 2”), the computational cost can explode. The lack of rigorous FinOps can quickly drain margins.
The danger of “Shadow AI”: With ease of access, entire departments may be using non-approved agents, creating vulnerable data silos and intellectual property risks.
The post-quantum threat: It seems distant, but current data indicates that standard encryption can be broken by 2029. Adversaries are already collecting encrypted data today to decipher tomorrow. Preparation starts now.
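The inference-cost risk above can be made concrete with back-of-the-envelope arithmetic. The sketch below is illustrative only: the function name, request volumes, per-token prices, and the `reasoning_multiplier` (standing in for the extra tokens that “System 2” reasoning models consume) are hypothetical placeholders, not real provider rates.

```python
# Illustrative inference-cost estimator. All prices and volumes below are
# hypothetical placeholders -- substitute your provider's actual rates.

def monthly_inference_cost(
    requests_per_day: int,
    input_tokens: int,                  # avg prompt tokens per request
    output_tokens: int,                 # avg completion tokens per request
    price_in_per_1k: float,             # USD per 1,000 input tokens
    price_out_per_1k: float,            # USD per 1,000 output tokens
    reasoning_multiplier: float = 1.0,  # extra output tokens from "System 2" models
) -> float:
    """Rough monthly cost in USD, assuming a 30-day month."""
    cost_per_request = (
        input_tokens / 1000 * price_in_per_1k
        + output_tokens * reasoning_multiplier / 1000 * price_out_per_1k
    )
    return requests_per_day * 30 * cost_per_request

# Same workload, standard model vs. a reasoning model that emits ~8x the tokens
standard = monthly_inference_cost(10_000, 800, 300, 0.002, 0.008)
reasoning = monthly_inference_cost(10_000, 800, 300, 0.002, 0.008,
                                   reasoning_multiplier=8.0)
print(f"standard:  ${standard:,.0f}/month")
print(f"reasoning: ${reasoning:,.0f}/month")
```

Even with made-up numbers, the structure of the calculation shows why the FinOps warning matters: the reasoning multiplier acts directly on the most expensive term, so a model swap can multiply the monthly bill without any change in request volume.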
Action plan: Monday morning
Where to start pragmatizing your AI strategy.
Stop isolated pilots: Move the budget from “innovation experiments” to central business problems that require scale.
Talent audit: If you are not reskilling at least 40-50% of your team to act as “orchestrators” and not just “operators,” your AI plan will fail.
Governance as an accelerator: Implement a layer of data governance (Data Intelligence) now. Without clean and traceable data, your AI is just a generator of legal liability.
Focus on the process, not the task: Don’t ask “how does AI do this task faster?” Ask “with AI available, does this process still need to exist this way?”
📗 Recent Publications
Seven realities about AI that executives cannot ignore (Nov 19, 2025)
To conclude the issue on Normal AI, I went back to the archives to find a publication that accurately outlined the transition we are experiencing now. The post “7 Realities about AI Adoption” already warned: when the hype dust settles, what remains is engineering, economics, and architecture.

If “Normal AI” is about integrating technology into the imperfect daily life of companies, this piece serves as the “instruction manual” for the new phase. It takes the focus off the brilliance of the models and shines a light on the complexity of execution.

Here is a summary of what really matters for those operating in “normal mode”:
Economics dictates the rule: Strategy does not start with the prompt, it starts with the economic model. Energy, TCO (Total Cost of Ownership), and operational capacity have ceased to be technical details to become indicators of organizational readiness.
Quality > size: In the normal phase, we stop chasing the “biggest model” to focus on the best orchestration. The competitive advantage has migrated from the raw LLM to the quality of your workflows and your semantic governance.
The new bottleneck is coherence: There is no lack of technology; there is a lack of meaning. Real maturity arises from the sum of technical coherence (systems that talk to each other) and semantic coherence (data that means the same thing to everyone).
Scaling in scarcity: “Normal AI” needs to work where resources are limited. If your solution only runs in ideal and unlimited environments, it is an experiment, not transformational infrastructure.
Readiness is efficiency: Mature companies are not just “using AI”; they are structuring internal platforms, observability standards, and integrated security.
The verdict: The normalization of AI requires us to stop trying to grow with “brute force” and start growing with architecture. It’s less about the miracle of the machine and more about the competence of the organization.
🌎 What the world is saying…
Each week, I share and comment on some of the many references I use to learn more and keep up with news and trends.
The AI bubble isn’t bursting – it’s diffusing
To conclude the reasoning on the normalization of AI, I recommend reading the article “The AI bubble isn’t bursting — it’s diffusing”, recently published on UX Collective by Ian Batterbee.

The text offers the perfect visual counterpoint to our theme of the week. While the financial market anxiously searches for the “burst” of the bubble (the collapse), Batterbee argues that we are seeing a diffusion (the spread).

Why read it? The author argues that the sudden silence around certain tools is not failure, but integration. Technology is following the path of electricity or the internet: ceasing to be a “destination” to become a “utility.”

The Connection with “Normal AI”: If my thesis is that AI has become infrastructure (“Normal AI”), Batterbee’s thesis is that AI is becoming invisible.
In the Hype: AI is the noun in the sentence (“Look at this AI”).
In the Diffusion: AI becomes the adjective or, better yet, disappears (“I improved this report”).
The Blind Spot: The article reminds us that financial bubbles burst loudly, but technological bubbles often “burst” inwards, dissolving into all the software we use without us even noticing. The danger for the executive is not the collapse of the technology, but the complacency of thinking that because the noise has stopped, the revolution is over. In reality, it has only become silent and omnipresent.
References
Accenture. (2022). The Art of AI Maturity: Advancing from Practice to Performance. Accenture Research.
Antelmi, J., et al. (2025). 2026 Planning Guide for IT Operations and Cloud Management. Gartner Insights.
Bean, A. M., et al. (2025). Measuring what Matters: Construct Validity in Large Language Model Benchmarks. NeurIPS 2025.
Capgemini Research Institute. (2024). Harnessing the value of generative AI: 2nd edition — Top use cases across sectors. Capgemini.
Castro, A. (2025). CFO Trends 2026. Evermonte Institute.
Challapally, A., et al. (2025). The GenAI Divide: State of AI in Business 2025. MIT NANDA.
Chui, M., et al. (2025). Seizing the agentic AI advantage. McKinsey & Company.
de Bellefonds, N., et al. (2024). Where’s the Value in AI? Boston Consulting Group.
Gandikota, V. S., & Prabhu, J. (2025). Driving Sustainable, Inclusive, and Scalable Innovation. Frugal AI Hub, Cambridge Judge Business School.
Gartner. (2024). 10 Questions to Ask About Your AI Build vs. Buy Decisions. Gartner Research.

