Lately, a number of companies have been citing AI productivity gains as the reason to let senior people go, replacing them with younger, cheaper hires, sometimes three for one. I’ve watched it happen to people I know across several industries, many of whom were exceptional at their jobs. World-class, in fact. But letting people go because of AI? I don’t buy it.
The move is deeply troubling. The work these companies say they want to do with AI requires domain expertise only the most senior carry in the first place. AI accelerates expertise, it doesn’t replace it. The notion of AI as agent rather than partner has real limitations, and they aren’t only technical. We should be partnering with AI, not building what Aristotle called a living tool, the slave that serves without question, the instrument that never pushes back.
Companies that prefer AI-as-replacement to AI-as-partner reveal what they actually want: a workforce, human and machine, that doesn’t talk back, doesn’t require engagement with expertise, and sits quietly in its cubicle. That fits the bottom line, and it fits an appetite for power over others made more respectable by the language of technological inevitability. The “AI is taking your jobs” story is, in many cases, less a description of what AI can actually do than a more palatable way to describe headcount cuts that lift the share price.
But what happens to the C-suite when boards of directors figure out that their domain expertise isn’t required either?
Hegel described what comes next. In the master-slave dialectic, the master who reduces the slave to a living tool becomes the dependent one over time. The slave does the actual work, develops actual skill, develops the self-consciousness that comes from engaging with the world. The master, freed from labor, atrophies into someone whose identity rests entirely on commanding what they can no longer do themselves. The reversal is structural, it is built into treating another being as an instrument.
What happens when the master becomes the slave?
The C-suite itself isn’t needed. Neither is the management chain of talking heads on conference calls all day long. The same logic applies all the way up to the top floor.
The link between that pattern and the rest of this essay is institutional misreading, decisions made on the basis of what’s visible on the surface rather than what’s actually in the work. Once in a while recruiters contact me out of the blue with roles less senior than the work I’m doing now. They’re using the same pattern-matching LinkedIn uses, and the pattern matches enough surface features of my profile to put me on some list. Many of these recruiters are themselves quite junior and lack the domain knowledge to assess what they’re seeing, a layer of human pattern-matching wrapped around the algorithmic kind, neither layer able to catch what the other misses.
Sometimes I get curious and take a look because it’s a free read on what the market is doing. Once I went further and took the call. Within two minutes it was clear the role was a mismatch by orders of magnitude. The recruiter had no way to know that, there was nothing in their workflow that would have surfaced it.
Several years ago, a recruiter asked what concerns I had about a role he was pitching. I told him my only concern was that I was overqualified for it. He seemed taken aback. I never heard from him again.
Everyone gets curious about this or that role LinkedIn throws in front of them, and I’m no exception. So I click on a job description and watch LinkedIn’s “matching tool” run its analysis to see whether I’m a fit. The verdict is typically that I’m missing most of the required qualifications. Recently, on one such role, the matching tool told me I matched two of six requirements. The four marked missing: a strong engineering background, experience with containers and orchestration, proficiency in standard programming languages, and familiarity with storage systems and front-end frameworks!
I mean, I have spent the past four months publishing thirty-two technical demonstrations that exercise, in aggregate, every one of those supposedly missing qualifications, often several times over. The demos run on Docker, on bare metal, on KVM, in the cloud, and on-prem. They are written in Python and JavaScript and Bash. They sit on top of one of the most sophisticated parallel storage systems in production anywhere, paired with a dynamic compute platform that spans seven or eight million cores across 80 percent of the world’s largest financial firms. They have dashboards built in Flask, a mobile app, and a recent integration into a production command-and-control platform. Every “missing” requirement is in the work. None of them are in the words on my profile.
The contradiction here is the thing worth spelling out. The same system that surfaced the role in the first place, pattern-matching some subset of my profile against the job, turns around and tells me I don’t qualify when I look at the role. The role isn’t a mismatch. The words for what I do simply haven’t caught up with what I actually do, and a matcher can only see words.
The more senior you are, the more dramatic the gap, especially if you’re doing any sort of groundbreaking work.
What the matcher saw
The matcher saw my profile. It saw “Field CTO,” “HPC Cloud,” “Spectrum Symphony,” “Storage Scale,” and a few dozen related terms drawn from IBM’s product taxonomy. It saw a list of skills built up over years of being endorsed for things people knew me from. It did not see the work itself, because the work isn’t on LinkedIn. The work is in PDFs of technical papers, video demos, source code in a hundred Python files, and my website at https://kevindjohnson.org.
The matcher’s failure isn’t surprising. Matchers are word-counting machines. The deeper observation is that the gap between what my profile said and what my actual practice contains was, by my count with a little help from Claude, around forty distinct technical skills wide. That isn’t a tuning problem. That’s a structural feature of how profiles work. A profile is a retrospective summary of work that has already been categorized, named, and credentialed. Work that hasn’t yet been categorized, work that crosses categories, integrates novel combinations, or invents the integration itself, has no place to live in a profile until someone gives it a name.
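To make the word-counting point concrete, here is a toy matcher, a few lines of Python. It is not LinkedIn’s algorithm, and every term in it is invented, but the failure mode is the same: if a word isn’t on the profile, the skill doesn’t exist as far as the matcher is concerned.

# Toy illustration, not LinkedIn's actual matcher. A word-overlap
# score can only see words that appear on the profile; skills that
# live only in the work score zero. All terms are hypothetical.
def match_score(profile_terms: set, job_terms: set) -> float:
    """Fraction of job requirements whose words appear in the profile."""
    return len(profile_terms & job_terms) / len(job_terms)

profile = {"field cto", "hpc cloud", "spectrum symphony", "storage scale"}
job = {"python", "docker", "kubernetes", "flask",
       "storage scale", "hpc cloud"}

print(f"match: {match_score(profile, job):.0%}")  # match: 33%

Two of six requirements match, which is more or less the verdict the tool handed me.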
What the demos contain
A single demo from a few weeks ago integrates BrainChip neuromorphic chips, IBM Spectrum Symphony, GPFS, NVIDIA GPUs running Nemotron, and Anduril Lattice. The integration was completed in three hours. To do that requires fluency in five platforms, the API of each, the data flow between them, geospatial coordinate systems, simulation orchestration, distributed consensus semantics, and a working command of AI-assisted development tooling at expert tempo. None of those things is a checkbox. They are a way of working.
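If you want the shape of that kind of integration without any vendor specifics, it looks roughly like the sketch below. Every class here is a stub I invented; the real adapters wrap real APIs. The structural point survives anyway: each platform gets a thin adapter with one job, and the seams live in the pipeline, not in any one platform.

from typing import Protocol

# All-hypothetical sketch of the integration's shape. None of these
# classes are real vendor APIs; each stands in for a thin adapter
# around one platform, and the pipeline owns the seams between them.
class SpikeSource(Protocol):
    def read_events(self) -> list: ...

class Scheduler(Protocol):
    def submit(self, task: dict) -> dict: ...

class Sink(Protocol):
    def publish(self, record: dict) -> None: ...

def run_pipeline(source: SpikeSource, scheduler: Scheduler, sink: Sink) -> None:
    """The seam work: translate each platform's vocabulary into the next one's."""
    for event in source.read_events():
        result = scheduler.submit({"payload": event})  # e.g. an inference job
        sink.publish({"event": event, "result": result})

# Trivial stand-ins so the sketch runs end to end.
class FakeSource:
    def read_events(self) -> list:
        return [{"spike_count": 7, "sensor": "cam-01"}]

class FakeScheduler:
    def submit(self, task: dict) -> dict:
        return {"status": "ok", "task": task}

class FakeSink:
    def publish(self, record: dict) -> None:
        print(record)

run_pipeline(FakeSource(), FakeScheduler(), FakeSink())

Three hours is possible precisely because the adapters are thin and the judgment about what each one should hide is already there.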
A second demo built a sub-200-millisecond missile defense kill chain on commodity hardware, with rules of engagement encoded as Foundry ontology objects, cryptographic provenance via CKKS homomorphic encryption, and N-of-M sensor confirmation through Shamir secret sharing. To build that you need to know how lattice cryptography works, how to wire OpenFHE into a service, how to design a real-time pipeline that fits its latency budget, and how to compose all of it under a workload manager that knows about heterogeneous resources. Again, no checkbox.
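The N-of-M piece, at least, is small enough to show in miniature. What follows is the textbook Shamir construction in a few lines of Python, not the demo’s production path, and the OpenFHE and latency pieces are out of scope here, but the principle is the same: no single sensor holds the confirmation on its own.

import random

# Textbook Shamir N-of-M secret sharing over a prime field. This is
# an illustrative sketch, not the demo's production code; the token
# value is made up.
PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the secret

def make_shares(secret: int, n: int, m: int) -> list:
    """Split secret into m shares such that any n of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(n - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, m + 1)]

def reconstruct(shares: list) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Three of five sensor shares must combine before the confirmation exists.
token = 123456789
shares = make_shares(token, n=3, m=5)
assert reconstruct(shares[:3]) == token   # any three of five suffice
assert reconstruct(shares[1:4]) == token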
A third demo wired ten public Railfan cameras to ten neuromorphic chips classifying nine kinds of railcar in real time, then routed the self-discovered spike records through GPFS into Palantir Foundry, with a Flask dashboard showing tank car counts at strategic junctions as a leading indicator of petroleum logistics. That one demo alone exercises Python, JavaScript, transfer learning, model quantization, distributed scheduling, parallel filesystems, ontology design, and front-end work. The matcher said I was missing four qualifications. The demo proves all four in a single artifact.
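The dashboard layer of that demo is the easiest piece to picture. Stripped of the pipeline behind it, the skeleton is a Flask app of a dozen lines; the counts below are stubs standing in for what GPFS and Foundry actually feed it, and the junction names are invented.

from flask import Flask, jsonify

app = Flask(__name__)

# Stub standing in for the real pipeline output: junction -> tank
# cars classified today. Names and numbers are illustrative only.
TANK_CAR_COUNTS = {
    "junction_alpha": 42,
    "junction_bravo": 17,
    "junction_charlie": 63,
}

@app.route("/api/tank-cars")
def tank_cars():
    """Leading-indicator view: tank car counts by strategic junction."""
    return jsonify(TANK_CAR_COUNTS)

if __name__ == "__main__":
    app.run(port=5000)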
I am not making the case that the matcher should have known. The matcher couldn’t have known. What I’m pointing at is that the surface of the profile and the surface of the work were, in this case, separated by a layer of words that hadn’t yet been written down.
How the work gets built
Someone might say AI is the reason I can do all this, and they’d be a little bit right but almost entirely wrong. Generative AI makes things faster, but without the years of experience and skill behind it, the work would be impossible. What I do with AI, what anyone does, is a partnership, and people without my skills can’t pull off what I’ve accomplished in the last several months. The model is excellent at typing what you ask it to type, it is not good at telling you what to ask. Without architectural judgment underneath it, the model produces plausible-looking code that fails in ways that take longer to discover than the speed gain is worth. Wiring five platforms together in three hours requires knowing before you start which of them can talk to which, which APIs are honest about their guarantees, which data formats survive the trip, which failure modes surface loudly and which hide silently. That knowledge doesn’t come from the model. It comes from years of having watched these systems behave under stress, having broken and fixed them, having read enough of their source to know where the surprises live. The model speeds up the typing. If the architecture is wrong, faster typing produces broken systems faster. AI-augmented development is force multiplication on existing competence, not a substitute for it.
This is most acute at the boundaries between systems. A self-contained iPhone app is something a model can hold whole, one language stack, one execution context, a documented framework, visible failure modes. Half an hour of conversation gets you a working app as long as you can bring up the infrastructure. Integration across distributed systems is entirely different. Semantic routing across heterogeneous compute substrates, KV cache coherence between vLLM workers on different nodes, ontology alignment between platforms that were never designed to meet, these don’t fit in any single frame. The model has seen each piece separately, it has not seen the seam. The seam is where the work is and the seam is what architectural judgment has to supply.
The compression ratio is real, work that would have taken weeks five years ago can take days or even hours now, but the ratio applies only to people who already know what they’re building. For everyone else, the model produces something that looks like the thing and isn’t.
What’s true at the individual level shows up at the company level too. Palantir, which has built its model around pairing experienced practitioners with AI rather than replacing them, posted 137 percent year-over-year growth in U.S. commercial revenue in Q4 2025 and projects 115 percent for 2026. The broader enterprise software market grew around 12 percent. Companies that hit only nine or ten percent call their work a great success, with accolades like “we can do even better next quarter,” all the while missing out on 100 percent gains due to their broken business model. The gap between what’s possible when you retain domain expertise and pair it with AI and what you get when you shed that expertise to hire compliant automatons is more than a hundred percentage points of growth. That’s what preferring workers who don’t talk back actually costs you.
The share-price logic driving the cuts has its own contradiction. Shedding senior workers cuts expense in the short term and lifts margins, effectively nudging the price a bit higher. The same company then settles into a much lower growth norm than it could have reached. Look at Palantir, a lean firm of a few thousand growing past a hundred percent because it pairs deep expertise with AI, while an incumbent of hundreds of thousands of employees trades in the two hundreds. Per employee, the smaller company carries roughly a hundred times the market value of the larger. Move the larger toward even a fraction of that per-head efficiency and the share price spikes well above where it sits today. No single factor explains all of that, but enough of it is explained by what we’ve been describing for the trade-off to be visible. Companies cut to lift the price a few percent and forfeit several hundred on the way.
Some of the larger enterprise companies are waking up and smelling the coffee. Part of Palantir’s growth comes from incumbents bringing them in to build the ontologies that pair expertise with AI at scale.
You’ve heard of EPS, earnings per share. The number worth watching now is EPE, earnings per employee. Companies that retain expertise and pair it with AI have spectacular EPE because each individual leverages AI to do the work of several. Companies that shed expertise to hire compliant headcount have low EPE because the people remaining can’t do that kind of work, and the AI partnership never gets built. The first kind of company sees the growth show up in the share price eventually. The second kind keeps lifting the share price by cutting expenses to keep the bonuses in play, a strategy that keeps the underperformance quiet.
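The arithmetic behind EPE is one line. The figures below are invented purely to show the shape of the comparison, not to describe any particular company.

# Earnings per employee (EPE) with invented, illustrative figures;
# the point is the ratio between the two models, not the numbers.
def epe(earnings: float, headcount: int) -> float:
    return earnings / headcount

lean_firm = epe(earnings=500e6, headcount=4_000)   # $125,000 per head
incumbent = epe(earnings=6e9, headcount=300_000)   # $20,000 per head
print(f"ratio: {lean_firm / incumbent:.1f}x")      # ratio: 6.2x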
There’s an ethical dimension worth naming here. The executives running this play have a fiduciary duty to maximize shareholder value. They’re short-circuiting that duty to preserve their own bonus structures and additional compensation at the top. The company underperforms by hundreds of percentage points of growth while the people who chose the lack of performance get paid as if they had delivered. That’s fraud, writ large.
After the master
Why is Palantir successful where others aren’t? One reason is that their organization is flat and doesn’t entertain large irrelevant management tiers. A four-thousand-person company pairing experienced practitioners directly with AI without intermediate layers of compliant headcount and executives commanding that headcount is the structural opposite of useless bureaucracy. Flat means everyone touches the work. Flat means the relationship between human and AI is partnership, not a chain of command running from AI up through middle management to the C-suite. Flat means there are fewer people whose primary role consists of treating other people as instruments.
The architecture Palantir replaces is the one Aristotle and Hegel were describing, two and a half millennia and two centuries ago respectively, in different vocabularies and for different reasons. A master who has stopped doing the work commands a slave who does the work, and the master atrophies because the work is what produces capacity. Many, if not most, large organizations have inherited this architecture by accident and reinforced it deliberately when AI offered a cheaper-looking version of the slave. The architecture has always been wrong about where capacity actually lives. Every senior worker I know has a manager only because that’s the way the company works, not because they need one. We shouldn’t keep structure in place simply because it’s in place, especially when it does systemic damage to productivity, deflating what the share price could be while preserving its own incentives to remain on top.
What we really need is partnership. Domain experts paired with AI sufficient to amplify them. The expert sets direction. AI executes within direction. Each side is irreducible to the other. The expert without AI is slow. AI without the expert is plausible-sounding and wrong. Together they produce work neither could produce alone and the production is exponential rather than additive because the partnership itself is generative, working as one.
That structure produces a hundred and thirty-seven percent year-over-year growth while incumbents call nine percent a success. The structure isn’t a Palantir secret, it’s available to any organization willing to depart from its useless architecture of command.
Partnership produces. Command administers. Once the producers are gone, there is nothing left to administer.
The point was never about administration. The point is to build and build anyway.
The opinions expressed in the present article are those of the author and do not necessarily reflect the views of the author’s employer.