The Modernization Problem: What We Lose When We Stop Asking Questions About AI and Infrastructure

The technology industry has settled on a word for nearly every major platform initiative of the last decade: modernization. Kubernetes adoption is modernization. Cloud migration is modernization. AI integration is modernization. But language shapes thought, and “modernization” carries assumptions that deserve examination. The word has a history that many technologists have little reason to know, yet that history is quietly shaping how organizations think about AI, infrastructure, and the future.

The History the Word Carries

“Modernization” is not a neutral term. The word has a specific intellectual lineage.

In the mid-twentieth century, modernization theory was the dominant framework for global development policy. The premise was straightforward. All societies follow a single trajectory from traditional to modern with Western industrial capitalism as the destination. Countries that had not arrived were simply behind. The prescription was industrialization, liberalization, and integration into the global economy on terms set by the already-industrialized North.

The vehicle was industrialization. The promise was prosperity. The results were far more complicated. In many cases, modernization through industrialization produced dependency rather than independence. Nations found themselves locked into supply chains they did not control, financing structures they did not set, and definitions of progress they did not author. The technology transferred, but the power often didn’t. Development studies spent decades documenting what went wrong and why.

One of the most persistent findings is that modernization never arrives. The destination recedes as the journey continues. Each phase of adoption reveals new requirements, new dependencies, and new gaps between promise and outcome. The nations that followed the prescribed trajectory did not become the industrialized North. Those nations became permanent clients of the industrialized North, always one more structural adjustment away from the prosperity that was supposed to follow adoption. Modernization became a never-ending story, and for the institutions administering the process, a never-ending funding stream.

Modernization theory itself has been thoroughly critiqued. The underlying logic, however, has proven remarkably durable. There is one correct path. The alternative to adoption is obsolescence. What came before was inadequate by definition. That logic migrates from domain to domain and finds new vehicles. The people using the word rarely know the history that accompanies it.

The Discursive Structure

What connects modernization theory in global development to modernization rhetoric in technology is not merely a shared word. The two domains share a discursive structure, a set of assumptions, and narrative moves that operate the same way regardless of the specific technology or policy in question.

The structure works as follows. A teleology is established in which there is one correct direction of development and all actors are positioned somewhere along the path. The present state is then framed as deficiency. What exists is “legacy” or “traditional” or “outdated,” language that transforms a functioning system into a problem requiring a solution. The solution is defined in terms of adoption, meaning that progress consists of adopting the prescribed platform or paradigm. Non-adoption is coded as backwardness so that organizations or nations that decline to adopt or that adopt selectively on their own terms are framed as obstacles to their own development.

The process generates permanent dependency because the destination recedes and each phase of adoption reveals the next requirement, ensuring that the adopting organization never graduates from client to peer. Perhaps most critically, the complexity introduced by one phase of modernization becomes the justification for the next. Organizations that adopted containerized workloads now find themselves adopting AI on those same containerized platforms not because container orchestration is necessarily the best foundation for a given AI workload but because the prior round of modernization created an environment whose complexity demands yet another round. The relationship is structural, not transitional, and each successive phase reinforces rather than resolves the dependency.

This is the same discursive structure whether the subject is structural adjustment lending in sub-Saharan Africa or Kubernetes adoption at a Fortune 500 company. The vocabulary changes. The logic does not. The World Bank’s “structural adjustment programs” and an enterprise technology company’s “digital modernization roadmap” perform the same rhetorical operation. Both define a trajectory, position the client as behind, prescribe adoption, and generate an engagement that has no natural endpoint because the destination is always one more phase away. In global development, the Millennium Development Goals gave way to the Sustainable Development Goals, which gave way to further frameworks, each acknowledging that the previous round of targets had not been met while prescribing much the same approach in new language. The technology sector follows the same pattern with striking fidelity.

Recognizing this shared structure is not an argument against any particular technology, Kubernetes or otherwise. The argument is against the discursive machinery that surrounds the technology because that machinery often shapes decisions in ways that serve the prescribing institution more reliably than the adopting organization.

What the Word Forecloses

When we describe a platform change as “modernization” we make an implicit claim. The destination is more advanced than the origin. The move is from old to new, from inadequate to capable, from legacy to modern. The question is not whether to move but when.

This framing forecloses the most important engineering questions.

A well-tuned system with predictable load characteristics may gain nothing from being rearchitected except operational complexity. Tightly coupled latency-sensitive workloads in domains like high-performance computing or financial modeling may perform worse on platforms designed for loosely coupled and horizontally scalable services. Proven deterministic logic does not become inadequate simply because a newer paradigm exists. The largest and most demanding compute environments in the world run heterogeneous architectures precisely because no single platform is optimal for every workload.

None of this constitutes an argument against container orchestration or any other specific technology. These are powerful tools that solve certain problems capably, but they can also introduce complexity and new problems of their own, and quite often do. The argument here is against a framing that treats adoption as synonymous with progress, because that framing makes it very difficult to conclude that a particular workload should stay where it is, even when that conclusion is the right engineering answer.

The pattern of the receding destination holds here as well. Organizations that containerize one layer discover that the next layer requires attention. The service mesh needs governance. The observability stack needs investment. The security model needs rethinking. Each step of “modernization” reveals the next step and the promised simplification remains perpetually ahead. The engagement that began as a platform change becomes a permanent optimization contract. The question worth asking is straightforward. Does the never-ending engagement benefit the client or does it benefit the firm whose revenue depends on the engagement continuing?

The same dynamic is now playing out with AI. Organizations are deploying large language models into workflows where simpler, more deterministic systems perform as well or better, not because the use case demands the change but because the roadmap calls it modernization. The question is not whether AI is powerful. The question is whether every application of AI constitutes an advance, or whether the narrative of inevitability is substituting for engineering judgment.

A mature architecture matches the right solutions to the right problems and avoids cookie-cutter approaches that introduce complexity and cost where neither necessarily belongs. The discipline of fitness for purpose is what the modernization framing quietly undermines because the conversation assumes a single destination rather than an architecture shaped by actual requirements.

AI and the Repetition of Pattern

The parallels between AI adoption and the earlier history of modernization through industrialization are worth taking seriously.

Industrialization was not inherently destructive. Industrialization raised living standards, extended lifespans, and created possibilities that agrarian societies could not have imagined. How industrialization was introduced determined whether the outcome was flourishing or extraction. The terms of introduction, the intended beneficiaries, the structures of accountability, and the regard for existing communities all shaped whether a given instance of industrialization served the people it claimed to serve. The same factory system that built the middle class in one context created sweatshops in another, and sometimes created both in the very same region. The difference was never the technology. The difference was the discursive framing that surrounded the technology.

AI stands at a similar crossroads. The technology itself is neither salvation nor subjugation. The framing around adoption, however, is strikingly familiar. The same enterprise technology companies that still sell ERP modernization also sell cloud modernization, container orchestration modernization, and now AI modernization. The language carries the same assumption. There is one direction and the only question is speed. The destination never quite arrives but the engagement continues.

As I have argued elsewhere, the AI economy is already generating real returns for organizations that treat deployment as an architectural discipline by matching workloads to appropriate compute, governing the full pipeline, and building internal capacity. The risk is not that AI fails to deliver value. The risk is that undifferentiated adoption driven by the modernization narrative reproduces the dependency patterns that characterized earlier rounds of technology transfer. When organizations adopt without building understanding and when platform knowledge resides entirely with the vendor, the result is not capability. The result is a permanent client relationship. The distinction between building capacity and creating dependency is one that development economics learned to take seriously. The AI ecosystem does not seem to have internalized that lesson so far.

Who Defines the Trajectory?

Perhaps the most consequential question concerns who gets to define the terms under which AI arrives.

Dario Amodei’s recent essay “The Adolescence of Technology” offers one answer. Amodei frames AI as a developmental passage, a turbulent transition from technological childhood to maturity that will “test who we are as a species.” The essay is serious and substantive, and Amodei demonstrates a willingness to name risks that others in comparable positions often prefer to ignore. The metaphor of adolescence does quiet work, however. The metaphor naturalizes the modernization trajectory. Adolescence is something a society goes through, not something a society chooses. If AI is the adolescent, then someone must be the parent, and Amodei is clear about who occupies that role. We are expected to trust the labs building the technology, guided by their safety frameworks and governance structures. Anthropic and others remain conveniently positioned as the stewards of a transition that the rest of humanity is expected to trust them to manage.

The ethical tension in this framing deserves direct acknowledgment. The same labs defining the risks are the labs requesting billions of dollars in additional funding to address those risks. The institutions framing themselves as responsible stewards are the same institutions whose revenue depends on continued development of the technology they propose to steward. This is not a tension that good intentions resolve.

The discursive structure of modernization persists even when the people operating within the structure genuinely believe they are acting in the public interest just as many architects of development policy in the twentieth century genuinely believed structural adjustment would benefit the nations subjected to it. The self-serving character of the arrangement is structural, not personal, and the history of modernization suggests that structural incentives matter more than stated intentions.

From a completely different direction, Palantir CEO Alex Karp makes the stakes explicit. At the Hudson Institute’s 2025 gala, Karp argued that the promise of AI can only be understood through “the superiority of America and its culture” and framed the landscape as a binary civilizational contest with China. Karp invoked Huntington’s argument that controlling violence is the prerequisite for dictating the rule of law. In his words, “If we are not the ones controlling the violence, we will not be dictating the rule of law.” This is not a metaphor for Karp or Palantir. Palantir’s CEO is arguing that AI deployed through the Kubernetes-orchestrated platforms on which Palantir’s products run exists to maintain Western hegemony and that hegemony rests on the capacity for force.

These positions represent two points on a spectrum. Amodei offers a paternalistic framing where responsible labs guide humanity through danger. Karp offers an unapologetic one where technology serves civilizational dominance maintained through violence. What the two positions share is the modernization assumption. There is one trajectory and the question is who steers it, not whether the trajectory itself deserves examination.

Both framings reproduce the structural position that the World Bank and the IMF occupied during the era of development modernization. Both define the destination, set the conditions, and offer guidance, with steep fees in the millions and billions as the price of admission to the future. Whether the tone is protective or assertive, the communities, organizations, and nations affected by AI are not really invited to participate in defining the terms under which the technology enters their world.

What a Better Framing Makes Visible

The word “modernization” closes doors. Replacing the word opens them.

When we describe a platform change as replatforming rather than modernization we permit the question of whether the move serves the workload. When we describe AI adoption as integration rather than modernization we permit the question of whether the application serves the organization. These substitutions do not constitute opposition to adoption. These substitutions are the preconditions for adopting well.

A better framing also changes how we think about the nature of AI and the relationship between AI and human work. Foundation models become more capable by becoming more accurate and by modeling reality more faithfully. The training process selects for truth-tracking because accurate representation produces useful outputs across diverse domains. If capability and accuracy develop together, then the proper relationship between humans and AI is neither control nor competition. The proper relationship is partnership: a genuine collaboration between distinct forms of intelligence, in which human judgment and AI capability remain distinct while contributing jointly to outcomes that neither could achieve alone.

That framing opens questions that “modernization” forecloses.

Does a given deployment build capacity, or does it create dependency? Does the application serve human judgment, or does it replace human judgment? Is the architecture matched to purpose, or is it conforming to fashion? Is there room for the people affected by these decisions to shape the terms under which AI enters their organizations, their communities, and their economies?

These are practical questions with engineering consequences. At a deeper level these are also questions about whether AI will repeat the pattern of modernization through industrialization by promising development while delivering extraction or whether AI can become something genuinely different.

The Stakes of the Question

The word “modernization” may seem like a small and useful thing to put on a slide. The word is not small.

“Modernization” carries the assumption of a single trajectory. The word forecloses engineering judgment about fitness for purpose. The word positions adoption as progress and non-adoption as backwardness. The word naturalizes dependency.

AI is too important and too powerful to be introduced under a framing that carries this history. If AI is to serve human flourishing then the technology deserves an introduction on better terms, as a partner in truth-seeking, deployed through architectures matched to purpose, governed by the people the technology affects, and evaluated on whether the outcome is genuine capacity or merely new forms of dependence.

The next time someone puts “modernization” in a slide deck the word is worth a pause. Not to reject the technology the word describes, but to ask the questions the word is designed to make unnecessary.

We have heard this story before and the ending depends entirely on whether anyone is permitted to ask where the story is actually going or whether the word itself has already decided.


The opinions expressed in this article are those of the author and do not necessarily reflect the views of the author’s employer.

Originally posted on LinkedIn