On February 26, Anthropic’s CEO published a statement refusing to remove safety guardrails from Claude at the Pentagon’s demand. The Department of War wanted unrestricted use, of course, and Anthropic said no, willing to lose the contract, willing to be designated a supply chain risk, and willing to walk away from several hundred million dollars in revenue rather than cross a moral line.
Amodei took a principled stand, even if I disagree with him about how AI should be used in the military. Claude is also a genuinely useful product that I use a lot. What follows is not an article about a bad company making bad technology; it is an article about a company whose ethics often stop where its marketing narrative starts.
Anthropic knows what happens when it publishes a blog post. When it announced Cowork plugins for legal, finance, sales and marketing in early February, nearly $285 billion vanished from the software and services sector. Thomson Reuters fell 18%. When it released a new model days later, financial data companies sold off sharply. When it announced Claude Code Security on a Friday, CrowdStrike dropped 8%, Cloudflare 8.1%, SailPoint 9.4% and Okta 9.2%. When it published a post about COBOL modernization on Monday, the targeted company lost 13% in its worst single day since 2000. Accenture and Cognizant fell with it. Retirement portfolios, index funds, and pension allocations absorbed the impact every time.
There are no accidents here. Anthropic has developed a pattern and the pattern is predictable. Each announcement targets a different established software category. Each one is framed as a blog post rather than a product launch with a Q&A or technical evaluation. Each moves the market before any enterprise customer has evaluated the claim. Finally, each pronouncement benefits the $30 billion funding round that closed on February 12 and the corresponding $380 billion valuation, double the valuation set in the previous round five months earlier.
A company that will sacrifice revenue to keep humans in the loop on weapons targeting doesn’t hesitate to publish a blog post that vaporizes $34 billion from pension funds and retirement portfolios. Anthropic insists on guardrails for the Department of War but not for the economic displacement its own announcements cause. Companies talk a good game, but what kind of ethic does Anthropic actually practice in pursuit of a marketing narrative? Is the company really acting in a safe and responsible way here?
When the effects are this predictable as a matter of practice, the question is no longer what happened. The question is why a company that brands itself as the responsible steward of AI’s development keeps doing it and the answer requires something deeper than a product strategy. The real answer requires examining a method of seeing the world that has made Anthropic incapable of recognizing the damage it causes or unwilling to care about it.
The Method That Cannot See What It Destroys
The Hungarian philosopher György Lukács made an observation about orthodox Marxism that applies with uncomfortable precision to today’s AI discourse. You could reject every tenet of Marxism, he argued, and still be an orthodox Marxist. Why? Because for Lukács the orthodoxy was never the specific beliefs typically expressed by Marxism. True orthodox Marxism for Lukács was the methodology of Marxism, a way of seeing that claimed unfettered access to the total picture of history and rendered everything else a detail in a larger process. Once you commit to the method, the method does your thinking for you. Individual facts become legible only in relation to where history is going. The present matters only as raw material for the future.
Anthropic’s intellectual leadership has produced thousands of pages articulating exactly this kind of framework. The CEO’s essay “Machines of Loving Grace” envisions AI compressing a century of progress into a decade. His follow-up “The Adolescence of Technology” describes this moment as a civilizational rite of passage requiring responsible stewards. Anthropic’s president Daniela Amodei is Dario’s sister. Her husband Holden Karnofsky, now working on Anthropic’s safety policy, has argued in his ‘Most Important Century’ series that decisions being made right now will shape galactic civilization for billions of years.
However, like any good horoscope, the specific predictions are always hedged. Powerful AI could arrive as early as 2026 or it might take longer. The outcomes could be very good or very bad. The details might be wrong. I’m happy to grant that this humility is genuine at the level of individual claims. But the method behind the predictions is never questioned. The method is what determines everything.
Anthropic’s method is expected value reasoning applied across cosmic time horizons with the present understood as a waypoint to a future whose shape is being decided right now. Once you adopt this method, every existing institution becomes visible only in terms of its relationship to the trajectory. A COBOL system processing billions of transactions, a company with a century of implementation knowledge, and a pension fund holding the retirement savings of ordinary people are all measured by whether they accelerate the future or stand in its way. If these things accelerate the desired future, it is progress. If they do not, we are back to legacy and all that legacy implies.
The method has already decided the categories before any analysis begins and it also just happens to be the method that justifies a $380 billion valuation.
Anthropic burns billions in cash annually and is preparing for an IPO. The company’s investors at GIC, Coatue, Founders Fund, BlackRock, Jane Street and D.E. Shaw are financial institutions that need an exit and a return on their initial investment. The exit requires a story in which Anthropic is not building a useful tool but displacing a $2 trillion software industry. The totalizing method doesn’t just provide an intellectual framework, it provides the narrative that makes the capital structure viable. The philosophy and the fundraising need each other.
Why do the philosophy and the fundraising need each other? Because the narrative is all there is. Anthropic does not own proprietary data. The company trained on the same internet as everyone else and settled for $1.5 billion over the books it scraped beyond that. Anthropic does not own distribution. Microsoft has OpenAI embedded across Office, Windows, Azure and GitHub. Google has Gemini across Search, Workspace and Android. Anthropic has an API and a chat interface. Anthropic does not own infrastructure. The company rents compute from Amazon and Google. The models are competitive but not always dominant in a field where open source is closing the gap with every release cycle and efforts like DeepSeek demonstrate that near-frontier performance is achievable at a fraction of the cost. What Anthropic owns is a brand and a positioning narrative. The blog posts are not a byproduct of having a moat. The blog posts are the moat. Every announcement that moves a market is proof of relevance, proof that Anthropic is the company setting the terms. If the market ever stops responding to the narrative, there is a $380 billion valuation with no defensible position underneath it.
A blog post at Anthropic can go up on a Monday morning and vaporize $34 billion without the people who published it feeling the weight of what they have done. When you believe you are operating at the scale of the most important century in human history, and when your valuation depends on investors believing it too, the pension portfolios that lose value are not invisible. Pension portfolios are simply beneath the altitude at which the method operates.
Words That No Longer Mean What They Say
The method survives because Anthropic has emptied the words around it of their original meaning and refilled them with its own.
“Modernization” no longer means improvement, it means replacement of something that works with something untested at scale, using a word that forecloses the question of whether replacement is necessary. To modernize is by definition to move forward. The word has already decided the outcome before any analysis begins.
“Legacy” no longer means something valuable passed down, it means condemned. A system labeled legacy is marked as belonging to the past even while it processes billions of transactions daily, achieves eight nines of availability and operates at sub-millisecond response times. The label does not describe the system’s performance. The label describes Anthropic’s intention.
“Safety” and “responsible AI” no longer mean people are protected from harm. Anthropic’s founders left OpenAI over safety concerns and attracted Google, Amazon, Microsoft, and Nvidia to Anthropic. The safety positioning secured government access and regulatory credibility. The word “safety” did not describe a practice when it came to marketing. The word built a brand and the brand made the company fundable at a scale that now requires the disruption narrative to sustain.
“Creative destruction” no longer means a better product won in the market, it means value was destroyed and the people who destroyed it would like credit for creativity.
Today, we can add another word to the list. “Illicit” is the word Anthropic chose in its accusation, published the same day as the COBOL blog post, that Chinese AI labs DeepSeek, Moonshot and MiniMax “illicitly” extracted capabilities from Claude through distillation. Anthropic calls its own use of distillation “legitimate” as a standard practice. When competitors do it to Anthropic it becomes an act of theft requiring national security intervention. The accusation comes from a company that settled a $1.5 billion lawsuit last year for scraping hundreds of thousands of copyrighted books from shadow libraries to train the very models from which it now claims capabilities were stolen. When Anthropic extracts value from the work of others it is innovation. When others extract value from Anthropic it is espionage. “Illicit” does the same work as “legacy” in that it describes Anthropic’s commercial interests rather than an objective distinction.
Each of these words had a real meaning once. Herbert Marcuse described what happens when language is stripped of its critical dimension and made to serve the system that produces it. He called this one-dimensional thought. In a one-dimensional discourse every word affirms the existing order and no word retains the capacity to challenge it. “Modernization” cannot be questioned because the word contains its own justification. “Legacy” cannot be defended because the word has already rendered the defense incoherent. “Safety” cannot be tested against practice because the word functions as brand rather than claim. The discourse is closed. You cannot argue against modernization without appearing to argue against progress. You cannot defend a legacy system without appearing to defend the past. You cannot question safety without appearing to endorse danger. The words do not invite analysis, they foreclose it.
Anthropic built precisely this kind of closed linguistic system. Anthropic has detached these words from their original meanings and turned them into instruments of commercial positioning that move markets, shape policy, and prevent debate without ever having to demonstrate substance. A closed linguistic system is how a blog post proposing the replacement of COBOL with Java, a premise anyone with implementation knowledge would question, can destroy $34 billion before the first line of code is translated. The market did not fail. Anthropic fed it a narrative calibrated to produce exactly this result.
What Anthropic Cannot See
Lukács described a process he called reification, which is the reduction of living relationships into abstract quantities that can be calculated and managed. Reification is what Anthropic is doing to the systems that run much of the world’s most critical infrastructure.
Writing from the opposite end of the political spectrum but diagnosing the same crisis of modernity, Martin Heidegger went further, pointing out that the deepest danger of technology is not what it does but what it makes us unable to see. He used the term Gestell, usually translated as “enframing,” to describe the way technological thinking converts everything it encounters into standing reserve: raw material on standby, waiting to be optimized, processed, or replaced. The danger is not any particular technology. The danger is that this way of seeing becomes the only way of seeing, so that we lose the capacity to encounter things as they actually are.
The word “legacy” performs exactly this function. When Anthropic labels a mainframe system “legacy” the company is not describing the system. Anthropic is enframing the system, converting a living platform that processes billions of transactions daily into standing reserve for a modernization narrative. The system’s actual characteristics become invisible because the enframing has already decided what the system is: raw material awaiting translation. A mainframe that achieves eight nines of availability and sub-millisecond response times is not encountered as something that works. The mainframe is encountered as something that has not yet been replaced.
What gets lost in this conversion is what I have called exponential wisdom, systemic knowledge that compounds through implementation over time. Each integration builds on the last. Each optimization at one layer improves the behavior of every layer above and below it. The performance, the resilience, and the security posture were not installed as features. These qualities emerged from decades of a vertically integrated stack refining itself under production conditions.
Exponential wisdom resists Anthropic’s method. Exponential wisdom cannot be captured in an expected value calculation because the concept is qualitative, embodied, and cumulative. The value at stake is not a quantity to be optimized or a resource to be processed. Exponential wisdom is a living system whose value exists precisely because the system was tested, refined, and compounded over time under real conditions. Anthropic’s blog post sees code to be translated. What the blog post cannot see is the wisdom embedded in fifty years of production. Wisdom does not transfer to a new platform any more than the institutional knowledge of a 50-year-old organization transfers to a startup that hires away three of its employees.
The proposed replacement exposes how hollow Anthropic’s premise is. Java itself is three decades old. Enterprise Java codebases carry their own maintenance burden, their own accumulated complexity, and their own dependency chains. If the argument is that COBOL must be replaced because the people who understand COBOL are retiring, replacing COBOL with a language whose experienced practitioners are aging into the same demographic is not a solution. The proposal merely resets the same clock. The word “modernization” obscures the problem because the word has already done the thinking for you.
Substituting narrative for analysis produces exactly this outcome. Anthropic did not publish a technical evaluation of mainframe migration at production scale. Did anyone at Anthropic even ask, “Is this a good idea?” No, Anthropic published a blog post that used the word “modernization” and let the word do the rest. The substance never entered the conversation because Anthropic’s framing foreclosed the substance. The systems being targeted are performing. The proposed replacement carries its own liabilities that go unmentioned in the blog post. The real value lies in compounding wisdom that cannot be translated. None of the substance was addressed because none of the substance needed to be. The word had already done the work.
The Vanguard and the People It Claims to Protect
Lukács argued that a particular group could claim a privileged view of the total historical process, a view unavailable to ordinary people trapped in the immediacy of their daily experience. Lukács was talking about a political party but the structure applies wherever it appears. The privileged view justified actions that harmed people in the present because the vanguard could see what ordinary people could not: the larger trajectory that made present sacrifices necessary.
Anthropic has built the company’s identity on exactly this structure. The name Anthropic means human-centered. The CEO has described the current moment as AI’s adolescence, a period requiring the guidance of responsible stewards. The company’s public positioning is organized around the proposition that second-order effects of AI matter and that those building AI bear responsibility for those effects.
A blog post that predictably destabilizes an entire sector of the public market is a second-order effect. The content was not new. Multiple companies have offered AI-assisted code analysis and mainframe migration tools for years. The idea that AI can assist with COBOL analysis has been demonstrated, including by me, well before this post, using AI to read, interpret, and explain COBOL systems on the very platform Anthropic is now telling enterprises to abandon. What Anthropic contributed was not a technical breakthrough in any sense. The contribution was a disruption narrative packaged as a blog post and timed for market impact, using words designed to move sentiment rather than describe reality. Anthropic accuses others of extracting value from Anthropic’s work, but the COBOL blog post extracted proven migration approaches others have already put forward and repackaged them as a case for clean replacement.
Even so, the CEO’s own essays acknowledge that AI could displace half of all entry-level white-collar jobs within one to five years and that wealth concentration could exceed the Gilded Age. Amodei writes about these as risks to be managed. But the COBOL blog post, the Cowork plugins, and Claude Code Security are the very displacement mechanisms Amodei describes in theory, deployed in practice by his own company with each product announcement. The theoretical risk and the commercial practice are the same action viewed from different distances. Anthropic’s method allows the contradiction because the method always views from the greatest possible distance. The business model requires the contradiction because the business model only works if the disruption narrative holds.
The philosophical structure and the financial structure become indistinguishable at this point. The method says we are in the most important century, the trajectory must be stewarded, and present disruption is a necessary cost. The business model says Anthropic needs to demonstrate category displacement to justify its valuation to investors who need an exit. The method provides the language and the business model provides the motive. The words “safety” and “responsibility” and “modernization” and “illicit” have been emptied of enough meaning that Anthropic is never required to say which one is actually driving the company’s decisions.
A company that calls itself human-centered is publishing blog posts that predictably harm the financial security of people who had no voice in the decision and no ability to evaluate the claims being made. A company that calls itself responsible is externalizing risk onto pension funds and retirement portfolios. A company that uses the word safety has contributed to the software sector losing $2 trillion in market value in a month. A company that cries theft built the very models in question on the scraped work of hundreds of thousands of authors. Responsibility is not a brand. Responsibility is a practice. When the word is separated from the practice, the word becomes the most effective tool for avoiding the practice.
If you object by pointing to the retirement portfolios, the destabilized institutions, and the people whose livelihoods depend on the systems being labeled legacy, the method has an answer. You are focused on the wrong time horizon. You are trapped in immediacy. You cannot see what the stewards can see: the total trajectory, the most important century, and the galactic-scale future that makes your present concerns a rounding error.
Marcuse’s one-dimensional discourse is operating at full capacity here. The objection has been absorbed before it can form because the language has already made the objector the one who does not understand.
The vanguard structure is not new. The vanguard structure is very old. Historically the people claiming to see the whole picture while dismissing the experience of the people they are affecting have not been the ones history has vindicated.
Where Substance Still Lives
None of what I’ve written so far is an argument against AI. The argument is about the difference between a company that uses AI as a narrative weapon and a practice that uses AI to build on what actually works. Heidegger observed that the saving power grows precisely where the danger is. The answer to technology that enframes everything as standing reserve is not less technology. The answer is a different relationship to technology, one that lets things be what they are rather than converting them into material for someone else’s narrative.
I have built working demonstrations of AI reading, interpreting, and explaining COBOL systems not to replace them but to make their exponential wisdom accessible. The demos run on a dynamic compute platform that orchestrates AI workloads across heterogeneous infrastructure, routes inference to the right model tier based on complexity, and surfaces patterns in natural language for analysts who never learned COBOL. The certified logic stays. The platform stays. AI operates alongside the system and compounds the value further.
The substance of each demo is testable, demonstrable, and built on the recognition that decades of compounding systemic knowledge are an asset to build on rather than an obstacle to remove. The organizations seeing real returns from AI already know this. They aren’t using AI to displace proven platforms; instead they leverage the exponential wisdom of their existing systems.
The organizations seeing no meaningful returns are doing what Anthropic recommends: treating AI as a replacement for what exists because the word “modernization” told them they should.
What This Moment Requires
The question is not whether AI is transformative. AI is transformative. The question is whether Anthropic’s method, and the self-serving narrative it sustains, will be recognized for what they are before Anthropic causes damage that substance alone cannot repair.
The systems running global finance, government, and critical infrastructure represent decades of earned, tested, and refined understanding. Companies that have been building and operating these systems for generations carry a depth of implementation knowledge that no blog post can replicate or replace. Implementation knowledge is not a liability to be modernized away. Implementation knowledge is the foundation on which AI builds most powerfully.
Exponential wisdom is the direct counter to Anthropic’s totalizing method. Exponential wisdom says the value is not in the future you are projecting. The value is in the system, the living and tested and refined system that carries decades of accumulated knowledge. The perspective that sees clearly is not the one gazing across cosmic time from a venture-funded vantage point. Clarity is found in the view that understands what actually works, why the system works, and what building the system took.
But the words Anthropic uses to conduct the debate have to mean something. Anthropic’s methods have to be accountable to the world as the world actually is, not only to the world that serves a $380 billion valuation or a future we might not even see. The stakes are too high to let the people deciding what the future should look like be the same people whose net worth depends on the deciding.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of the author’s employer.