Last week I posted about a targeting system built in five days on IBM Spectrum Symphony, BrainChip’s Akida, and Palantir’s Foundry. The response was significant enough to warrant a deeper argument, one that goes beyond the demo to a fundamental error in how the industry is currently framing the next generation of AI.
The system works like this. BrainChip AKD1000 neuromorphic processors sit at the sensor edge. Each runs a spiking neural network trained for its modality: visual, RF, acoustic, BLE. Inference runs at sub-millisecond latency on milliwatts. The output is a 128-byte observation record: a classification, a confidence score, a threat score, a timestamp, a sensor type, and a source identifier. That record travels over an encrypted tunnel. Raw sensor data never leaves the node. Imagery, audio, and signals are consumed at the point of origin. The network never carries feeds; it carries meaning.
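What a 128-byte record can hold is easy to make concrete. Here is a minimal sketch; the field widths are illustrative assumptions, not the production wire format:

```python
import struct
import time

# Illustrative layout for the 128-byte observation record. The fields are
# the ones described above; the widths are assumptions for this sketch:
#   32s classification, f confidence, f threat, d timestamp,
#   16s sensor type, 64s source identifier
#   32 + 4 + 4 + 8 + 16 + 64 = 128 bytes
RECORD_FORMAT = "=32sffd16s64s"
assert struct.calcsize(RECORD_FORMAT) == 128

def pack_observation(classification: str, confidence: float, threat: float,
                     sensor_type: str, source_id: str) -> bytes:
    """Serialize one observation: the claim travels, the raw data does not."""
    return struct.pack(
        RECORD_FORMAT,
        classification.encode("utf-8")[:32],  # e.g. "vehicle.truck"
        confidence,                           # classifier confidence, 0..1
        threat,                               # threat score, 0..1
        time.time(),                          # timestamp
        sensor_type.encode("utf-8")[:16],     # "visual", "rf", "acoustic", "ble"
        source_id.encode("utf-8")[:64],       # node identifier
    )
```

Notice what is absent: no pixels, no waveforms, no I/Q samples. Everything that would let a receiver reconstruct the sensor's view stays on the node.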
The emergence engine collects observations across modalities and time. A vehicle classification requires three consistent observations before promotion to a confirmed event. A camera sees a truck. An acoustic classifier hears a diesel signature. A BLE scanner picks up an electronic logging beacon from the same vehicle. Three independent modalities produce one conclusion. Confirmed events project to Palantir Foundry. The operator screen is calm. Every data point on it has already been filtered, confirmed, and validated before it arrives.
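A sketch of that promotion rule in Python, under assumed names and signatures (the three-modality threshold is from this paragraph; the five-second window appears later in the piece):

```python
from collections import defaultdict

WINDOW_SECONDS = 5.0      # sliding window, described later in the article
REQUIRED_MODALITIES = 3   # the three-observation rule from above

class EmergenceEngine:
    """Collects observation records and promotes them to confirmed events."""

    def __init__(self):
        # classification label -> list of (timestamp, sensor_type) observations
        self._tracks = defaultdict(list)

    def observe(self, classification: str, timestamp: float, sensor_type: str):
        """Ingest one observation; return a confirmed event dict, or None."""
        track = self._tracks[classification]
        track.append((timestamp, sensor_type))
        # Drop anything that has aged out of the sliding window.
        track[:] = [(t, m) for (t, m) in track if timestamp - t <= WINDOW_SECONDS]
        modalities = {m for (_, m) in track}
        if len(modalities) >= REQUIRED_MODALITIES:
            del self._tracks[classification]
            return {"event": classification,
                    "modalities": sorted(modalities),
                    "confirmed_at": timestamp}
        return None
```

The shape of the decision is the point of the sketch: confirmation is a set-membership test over independent modalities, never a second pass over raw data. Two cameras agreeing yields nothing, because agreement within a single modality is not independence.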
The architecture is non-extractive. Data stays where it belongs. Intelligence lives where the data is generated. Only meaning crosses the wire.
Stockpiles of sensor data and continuous camera feeds do not travel the network. Raw material does not wait to be understood somewhere else. What comes over the wire is a descriptive act: a judgment already made, a classification already rendered, a confidence already assigned. Each layer receives the output of another layer’s intelligence and acts upon it. The 128-byte observation record is not a data payload. It is a statement. Language in its deepest function is the medium through which one act of intelligence makes itself available to another without surrendering what generated it.
The Debate the Industry Is Having and Getting Half Right
A new wave of thinking in AI research is pushing back against large language models as the foundation of intelligence. The argument goes roughly like this: real-world sensor data is continuous, high-dimensional, and unpredictable. Generative architectures trained to predict the next token do not handle that well. Therefore, real intelligence does not start in language. It starts in the world. World models, systems that learn abstract representations of sensor data and make predictions in representation space, are the path forward.
This is half right. The half that is wrong is more consequential than the other.
The critique of language models as the universal substrate for intelligence is correct. Language is a compression of meaning that has already happened, and it is extraordinarily powerful for the vast domains it covers. A camera feed, a radar return, a seismic sensor, and an acoustic classifier at a tactical edge node are not language problems. Forcing everything through a language layer imposes a lossy, energy-intensive, latency-adding translation step between the world and the system designed to understand it. The neuromorphic approach avoids this entirely. Spiking neural networks process sensor modalities directly, in the native domain of the signal, at the point of origin on milliwatts.
So far, so good. Here, however, is where the argument goes wrong.
Moving the Starting Point Is Not Fixing the Architecture
“Real intelligence starts in the world, not in language” sounds like a correction, but it is the same mistake made one step earlier.
Both positions share the assumption that intelligence starts somewhere: that it originates at a point, flows from that point, and that data must be centralized somewhere capable of understanding it. The language-first approach puts understanding at the model. The world-model approach puts understanding at the representation layer that abstracts sensor data before reasoning over it. The center moves. The centripetal architecture stays.
What does a centripetal architecture look like? Data flows toward intelligence. The edge is a collection point. The center is where meaning is made. Whether the center is a large language model, a world model trained on sensor representations, or any other form of centralized cognition, the ontological commitment is identical: intelligence is located somewhere other than where the data was generated, and the job of the architecture is to get data there.
Consequences follow from this commitment and do not disappear when the model at the center is replaced.
A subtler consequence is rarely identified. A centripetal architecture encodes a particular theory of power. Michel Foucault defined power not as possession or force but as structured relationship, specifically as an action upon an action. Power in this sense is not what the center holds over the edge; it is the field of possible responses that each layer opens or forecloses for every other. In a centripetal architecture, that field is asymmetric by design. The edge can only respond to what the center permits it to see. The center acts and the edge reacts. The edge’s knowledge is taken to the center and returned, if at all, as instruction. In other words, the relationship is extractive instead of mutual.
By way of contrast, a non-extractive design establishes a mutual relationship between layers. Each layer acts. Each layer receives actions it did not originate and responds with actions the originating layer did not fully determine. The sensor classifies and the emergence engine responds, not by retrieving the raw data but by evaluating the claim the sensor made. The emergence engine confirms and Symphony responds, not by inspecting the underlying observations but by routing a confirmed event. Power flows in both directions across every boundary and no layer is sovereign over any other. The structure is enforced by the architecture itself, not by policy.
Herbert Marcuse identified a related but deeper problem with dominant technological systems. In his analysis of technological rationality, Marcuse argued that a technology presenting itself as neutral and universal is never actually either. Technology encodes a particular set of values and assumptions about what counts as rational, what counts as a problem, and what counts as a valid solution. When a technology becomes the dominant paradigm, it defines the frame and decides which problems get solved. Other problems are simply glossed over. Marcuse called this one-dimensional thought: the foreclosure of critical alternatives by a technological apparatus that has redefined rationality in its own image.
A world model that learns abstract representations of sensor data and reasons in representation space is vulnerable to Marcuse’s technological rationality at architectural scale. The representation space is not the world; it is a particular reduction of the world, one that reflects the training data, the schema designers, the optimization objectives, and the institutional priorities of whoever built it. Whatever fits the representation space remains intelligible; whatever does not is simply noise. If world models become the dominant AI paradigm, the intelligence they enable will be bounded by a representation space that may or may not fit the world where everyone actually lives.
In a centripetal architecture, the attack surface does not shrink; it concentrates. A 25th Infantry Division operator whose system extracts raw feeds to a central processor has exposed those feeds to every vulnerability between the sensor and the server. A 128-byte encrypted observation record exposes nothing about the underlying sensor: its resolution, its coverage area, its revisit timing, or the intelligence collection geometry behind it. The adversary learns only that a specific target was identified. The raw data was consumed and discarded by the NPU before it ever touched a network.
Bandwidth and latency do not improve; they scale with the wrong variable. As sensors are added, a centripetal architecture adds traffic proportionally. A non-extractive architecture adds almost none. Each new node classifies locally and emits meaning. The network carries conclusions regardless of fleet size.
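To put assumed numbers on that: a thousand edge nodes each streaming 1080p video at roughly 4 Mbps put about 4 Gbps of raw feed on the network, continuously, whether or not anything is happening. The same thousand nodes emitting one 128-byte observation record per second generate about 1 Mbps in total, and only when a classifier has something to say. The figures are illustrative, but the shape of the curve is not: the feed scales with sensor resolution and fleet size, while the conclusion scales only with the number of things worth concluding.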
Partition tolerance remains fragile at the center. If the world model is down or unreachable, what does the edge do? In a genuinely distributed architecture, each node continues to function with full local capability because intelligence was never remote in the first place.
What the Targeting System Actually Demonstrates
The targeting demo I built is only one application of a deeper design. The underlying architecture is a domain-agnostic coordination substrate that connects heterogeneous environments without extracting knowledge from them into a central hoard.
The same substrate can run financial trading, where market classification happens at the point of data generation and only confirmed signals cross into the decision layer. It can run neuromorphic hive mind inference, where a trained model running across several AKD1000 nodes produces a result that never exposes the model weights, the input data, or the raw computation to any other tenant or operator. Together with the targeting system, that makes three different domains, three different problem types, one architecture.
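In code, domain-agnostic means something specific. A hedged sketch of the two roles every domain implements; the names here are illustrative, not the substrate’s actual API:

```python
from typing import Optional, Protocol

class LocalClassifier(Protocol):
    """Runs where the data is generated. Raw input never leaves this object."""
    def classify(self, raw: bytes) -> Optional[dict]:
        """Consume raw data locally; emit a small structured claim, or nothing."""
        ...

class Coordinator(Protocol):
    """Receives claims, never raw data, and decides what counts as confirmed."""
    def submit(self, claim: dict) -> Optional[dict]:
        """Evaluate a claim against others; return a confirmed event, or nothing."""
        ...
```

A targeting node, a market-signal classifier, and a hive-mind inference worker all fit these two signatures; only the modality behind classify() changes.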
The architecture works across domains precisely because it does not assume that intelligence starts anywhere in particular. The edge node running a spiking neural network classifier is not a data collection point feeding a center that understands; it is itself a locus of intelligence, bounded, specific, local, one that participates in a larger coordination without surrendering its local knowledge.
What Happens When a Paradigm Becomes Dominant
Marcuse’s concern was not with any particular technology but with what happens when a technological paradigm achieves dominance, when it stops being one approach among many and becomes the definition of what a rational approach looks like.
The world-model paradigm is not wrong as one approach to AI; for many problems it is genuinely powerful. But a paradigm that becomes the infrastructure through which AI is delivered at scale imposes the representation space of its designers as the lens through which all sensor data, all environments, and all coordination problems are required to be seen.
Consider what that means concretely. A world model built primarily from European institutional data, optimized against European regulatory priorities, and designed by teams whose professional formation reflects a particular North Atlantic intellectual tradition will be extraordinarily good at problems that tradition knows how to see. The same model would be systematically blind to the social organization of a Colombian rural community, the informal economic logic of a West African market, or the land-relational knowledge that structures life across much of Latin America. Those realities are not absent from the world, but they will be absent from any representation that puts Europe at its center.
Marcuse would recognize the one-dimensional pattern at work here. The richness of the world is reduced to what a single representational framework can accommodate, and the reduction is invisible to everyone operating inside the framework because the framework has become the definition of what intelligence looks like.
The scaling argument compounds the problem. The more nodes added to a world-model architecture, the more data flows toward the representation layer. The representation layer becomes the bottleneck, computationally, epistemologically, and politically. Whoever controls the representation space controls what the system can perceive. At sufficient scale, the problem becomes governmental rather than technological because the architecture precludes any alternative.
A non-extractive architecture does not have a representation layer to become a bottleneck. Each node reasons in its native modality. The network carries conclusions. Adding nodes adds intelligence at the edge without adding load at any center, because there is no center accumulating representations. The scaling is genuinely horizontal. The epistemological diversity, the different modalities and classifiers and local environments contributing independent observations, grows with the fleet rather than being compressed away.
When Paradigm Becomes Policy
Marcuse’s concern about dominant technological paradigms is not merely theoretical. The concern becomes concrete when a paradigm achieves dominance not just technically but institutionally. The architecture and the regulatory framework that defines legitimate AI arrive together, built by the same actors, reflecting the same assumptions and mutually reinforcing each other’s authority.
The dynamic worth examining carefully in the current moment is the convergence of technical and institutional dominance. Substantial investment in centralized world-model AI infrastructure is flowing from major institutional and regional sources at the same time those same regions are constructing the regulatory frameworks that will define what compliant, trustworthy, and governable AI looks like globally. The European Union’s AI Act, the most comprehensive AI regulatory framework currently in force, establishes compliance requirements, transparency obligations, and risk classifications that apply not just to European companies but to any AI system deployed in European markets. In practice, global deployment requires European compliance. The policy objective is documented and publicly stated: to establish European standards as the global baseline for AI governance, in the same way that GDPR became the de facto global standard for data privacy.
The architectural consequence is not conspiratorial; it is structural. Regulatory frameworks that require transparency, auditability, and central oversight are easier to satisfy with centralized architectures than distributed ones. A world model with a defined representation space, a central training process, and auditable outputs maps cleanly onto compliance requirements built around those assumptions. A genuinely distributed non-extractive architecture, where intelligence lives at the edge, where the ontology grows from observed data rather than pre-defined schemas, where no central representation space exists to audit, is harder to fit into a compliance framework designed with centralized systems in mind. The regulation does not need to intend the asymmetry; its structure produces it.
The reason the asymmetry matters globally is one Arturo Escobar’s post-development critique makes precise. The history of universalizing institutional frameworks (development banks, structural adjustment programs, international health initiatives) consistently reproduces a specific pattern. Frameworks built in wealthy institutional centers define what rational, legitimate, and compliant practice looks like and are then applied globally as the condition of access to capital, markets, and institutional legitimacy. Local knowledge, local practice, and local order that do not fit the framework are either transformed into something the framework can accommodate or excluded from the system entirely. The framework does not present the pattern as extraction but as standards.
AI infrastructure is not exempt from the dynamic. A world-model paradigm built and governed in wealthy institutional centers, institutionalized through regulatory frameworks that favor its architectural assumptions, and deployed globally as the definition of trustworthy AI is not a neutral technical choice but a knowledge infrastructure that will determine, at architectural scale, whose ways of sensing, classifying, and coordinating are legible to the system and whose are not. Communities, environments, and problem domains that do not fit the representation space built at the center will not appear as underserved; they will simply not appear.
The justified claim here is not that any particular actor intends the outcome but that the outcome is structural. Centripetal architectures, when they achieve sufficient institutional dominance, reproduce centripetal knowledge relationships regardless of intent. The pattern has been demonstrated repeatedly in the history of development infrastructure and there is no architectural reason AI infrastructure would behave differently. The representation space is built somewhere, by someone, reflecting something; at sufficient scale the somewhere, the someone, and the something determine the boundaries of what the system can know.
A non-extractive architecture is not immune to institutional capture but resists the specific failure mode of centripetal dominance by design. If intelligence lives where data is generated, if local knowledge never has to enter a representation space to participate in coordination, if the ontology grows from what the world actually produces rather than what a schema anticipates, then the architecture cannot systematically render local knowledge invisible, because local knowledge is where the intelligence is. The edge is not a data source for a center that understands but a locus of understanding in its own right. The substrate coordinates. The difference, at global scale, is the difference between infrastructure that serves the world’s diversity and infrastructure that administers it.
Language Is Not Unimportant, It Is One Modality Among Many
Precision is required here because the argument is easy to misread.
Rejecting language as the foundation of intelligence does not mean language is unimportant. The LLM layer in my architecture is real and valuable. The same Symphony orchestration framework that manages neuromorphic inference workers can manage vLLM services for semantic validation, RAG-based enrichment, and natural language summarization of confirmed events. An LLM layer gives the operator a richer, more contextual picture: natural language summaries of what the emergence engine confirmed, semantic connections across events, and explanations that make the underlying intelligence accessible to people who were never trained on the sensor modalities producing it.
Notice what the LLM is doing in that architecture. It is not the substrate through which everything passes, not where intelligence originates or centralizes, but one service among many, managed by the same orchestration layer, receiving confirmed and structured inputs, producing enriched outputs that travel up the same hierarchy as everything else. Language is powerful precisely where it is the right tool; it is not required to be the universal foundation.
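A sketch of that service boundary, assuming an OpenAI-compatible endpoint of the kind vLLM serves; the endpoint URL, model name, and prompt are illustrative:

```python
import json
from openai import OpenAI

# A local vLLM service behind an OpenAI-compatible API. The address and
# model name are placeholders for this sketch.
client = OpenAI(base_url="http://vllm.internal:8000/v1", api_key="unused")

def summarize_event(event: dict) -> str:
    """Turn one confirmed event into an operator-readable summary."""
    response = client.chat.completions.create(
        model="local-summarizer",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarize this confirmed event for an operator "
                        "in two sentences."},
            {"role": "user", "content": json.dumps(event)},
        ],
    )
    return response.choices[0].message.content
```

Note the input: a confirmed event, already filtered and validated, never a raw feed. The LLM enriches what the hierarchy has concluded; it does not decide what the hierarchy concludes.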
The mistake is not using language. The mistake is treating any single modality, whether language or world model or anything else, as the privileged starting point from which intelligence flows outward.
Language as the Medium of Relationship
The most important and least examined question in the current debate is not what language is but what language does at the boundaries between layers.
Foucault’s formulation of power as an action upon an action is useful here because it describes exactly what happens at every boundary in a well-designed coordination architecture. The sensor’s classification is an action. The emergence engine’s confirmation threshold is an action structured in response to it. Symphony’s routing decision is an action structured in response to that. The operator’s situational picture is structured in response to all of them. Each layer acts upon the action of the layer below without needing to possess or replicate what generated it.
Language is the medium that makes coordination across boundaries possible, not language in the narrow sense of natural language text, though that is one instance of it, but language in the broader sense of structured meaning that can travel between parties without requiring either party to surrender what they know. The 128-byte observation record is language. The confirmed event schema is language. The ontology object in Foundry is language. The natural language summary the LLM produces for the operator is language. Each is a different register of the same function, an act of intelligence made available so that another act of intelligence can respond.
If language were truly unimportant in this architecture, the layers could not communicate at all, not because they lack data but because they lack the structured medium through which one layer’s act becomes available to another’s response. Remove language in this sense and the result is not a more intelligent system but a system that cannot coordinate, because no layer’s intelligence can act upon another’s without collapsing the boundary between them.
The world-model critique of language-as-substrate misses something critical precisely because it conflates the substrate question with the medium question. The problem with treating language models as the universal foundation is not that language is unimportant but that language is being asked to do the wrong job, to be the origin of intelligence rather than the medium of its relationships. Language does not generate intelligence but carries intelligence across boundaries in a form that preserves enough meaning to enable response while respecting the autonomy of the layer that generated it.
In the targeting system, the acoustic classifier does not need to know what the camera saw. The camera does not need to know what the RF sensor detected. The emergence engine, however, needs to know that all three acted independently and consistently across a five-second window and that their combined actions constitute a confirmation. Language bridges those independent acts without merging the layers that produced them. The boundary between sensor and emergence engine remains real. The coordination across that boundary is also real. Language is what holds both truths simultaneously.
The point is not merely technical. It is an architectural commitment with deep consequences for how intelligence scales, how power distributes across a system, and whether coordination requires extraction or can preserve the autonomy of every layer that participates in it.
The Architectural Principle
What I am describing is a system where local and global are genuinely unified without either collapsing into the other.
The edge is not a thin client reporting to a thick server. The center is not an omniscient model that the edge consults. Each layer does its job completely so that every other layer can do its job with the minimum necessary input. The sensor classifies. The emergence engine confirms. The orchestration layer routes. The ontology presents. The hierarchy is real. The boundaries are real. The autonomy at each layer is real. The coherent collective behavior that emerges from their interaction is real.
What makes this possible at every boundary is language, not as the origin of intelligence but as the medium of relationship between intelligent layers. Each layer speaks to the next in a register precise enough to enable response and bounded enough to protect what generated it. The power relationship Foucault describes, action upon action, runs in both directions across every boundary and no layer is sovereign. The sensor does not report to the emergence engine but makes a claim the emergence engine evaluates. The emergence engine does not report to Symphony but produces a confirmation Symphony acts upon. The difference is not semantic, it is the difference between a system that extracts and a system that coordinates.
The architecture is not a compromise between centralized and decentralized but a different ontological commitment entirely: that coordination does not require extraction, that intelligence does not require centralization, that language does not require universalization, and that the value of a system is not located in any single component but in the integrity of the relationships between components that have been tested, refined, and tightened under real conditions.
Foucault and Marcuse together identify what is actually at stake when an AI architecture becomes dominant infrastructure. Foucault tells us that the architecture encodes a power relationship, action upon action, asymmetric or mutual depending on whether the design extracts or coordinates. Marcuse tells us that a dominant technological paradigm does not merely solve problems within a frame; it redefines what problems are visible, what solutions are rational, and what kinds of intelligence are permitted to exist. A world model at sufficient scale does not merely process the world; it administers the world, deciding what fits the representation space and rendering everything else noise.
The non-extractive architecture is a direct answer to both critiques, refusing the asymmetric power relationship by making every layer a genuine actor rather than a data source and refusing the one-dimensional reduction by preserving epistemological diversity at the edge, multiple modalities, independent classifiers, and local knowledge that is never compressed into a universal representation space. The richness of the world stays where the world is. Only meaning crosses the wire.
The Prior Question
There is a question underneath the current debate that AI research almost never asks: what licenses the assumption that the world has a structure worth modeling in the first place?
Every world model, every spiking neural network classifier, every emergence engine, every coordination substrate is built on a prior conviction: that the universe is not arbitrary noise but intelligible pattern, that its order is real and discoverable, and that inquiry into it will produce knowledge rather than useful fiction. The conviction is not itself a scientific result but the precondition of scientific inquiry. Science does not establish that the world is rational; it proceeds on that assumption and keeps finding it confirmed.
The theologian T.F. Torrance spent his career tracing the genealogy of that assumption. His argument, developed across Theological Science and Space, Time and Incarnation, is that the intelligibility of the universe is not philosophically self-evident. Intelligibility required a specific set of convictions to become thinkable: that the universe is contingent (it did not have to be the way it is and therefore must be investigated rather than deduced) and that its order is real rather than imposed, discovered rather than constructed. The convictions, Torrance argued, have a theological origin. The universe is rational because it was made through the Logos, the Word, and its order participates in a rationality that was there before any creature began to inquire into it. Science works because it moves along the grain of an order it did not create.
The argument matters for the architectural question in a precise way. A world model that learns abstract representations of sensor data and reasons in representation space is not discovering order but constructing a substitute for order, a representation space that reflects training distributions, schema choices, and optimization objectives. When the model generalizes well, it is because the representation happens to track real structure in the world. When it fails, it is because the representation space has no room for something that is nevertheless real. The model cannot tell the difference from the inside. The representation space is self-confirming by design.
An architecture that lets intelligence live where data is generated makes a different epistemological commitment. The sensor classifies what is actually there. The emergence engine confirms what actually happened across independent observers. The ontology in Foundry represents what is genuinely real, not what fits a pre-existing schema but what the world produced and what multiple independent sensors agreed upon. The schema grows from the data rather than the data being filtered by the schema. The commitment is not a technical preference but an epistemological posture, the conviction that the world has its own order worth tracking and that the job of the architecture is to stay close to that order rather than substitute a representation for it.
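What it means for the schema to grow from the data, rather than filter it, can be made concrete with a hedged sketch; the names are illustrative:

```python
class GrowingOntology:
    """An ontology that admits events first and extends its schema after."""

    def __init__(self):
        # Every field the world has shown us so far, mapped to its observed type.
        self.fields: dict[str, type] = {}

    def ingest(self, event: dict) -> dict:
        """Record any field the schema has not seen; filter nothing out."""
        for key, value in event.items():
            self.fields.setdefault(key, type(value))
        return event  # the event passes through whole, schema or no schema
```

The inversion is the point: a pre-defined schema decides in advance what the data is allowed to mean, while a grown one records what the data turned out to mean.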
Consider what the emergence engine actually does when a camera, an acoustic classifier, and a BLE scanner independently agree on the same vehicle within a five-second window. The engine does not construct a vehicle event; it discovers one, recognizing that three independent observations have converged on something that was already there before any of them looked. The confirmation threshold is not a design choice about how much evidence is sufficient but an architectural expression of epistemic humility: the world’s order is real, and real order withstands independent witness. The conviction has to precede the engineering for the engineering to work.
Science and engineering built on that conviction operate with enormous freedom, not despite the prior commitment to order but because of it. The order is given. The inquiry is free to follow wherever the order leads. What is not free and what a dominant world-model paradigm threatens to foreclose is the diversity of ways that order can be approached, observed, and expressed. A representation space with a single center is not a map of the world’s rationality but one perspective on it, institutionalized as infrastructure and therefore presented as the only rational perspective available.
The Word precedes creation. Order is prior to inquiry. Every architecture that genuinely serves intelligence, rather than administering it, builds within that frame.
The principle scales. The architecture scales. The domains to which it applies are as broad as coordination itself.
Creative Destruction and What Opens Up
Schumpeter’s creative destruction is usually read as an economic observation; it is also an epistemological one. Systems that administer rather than coordinate do not simply become inefficient over time. They become brittle in a specific way, accumulating unsolved problems at the boundaries of their representation space, problems they cannot see because the schema has no room for them, until the weight of what they cannot address overwhelms the value of what they can. The destruction that follows is not arbitrary; it follows the grain of reality pressing against the limits of a framework that mistook itself for the world.
The brittleness is already visible in global development infrastructure. The institutions built to coordinate responses to community health crises, migration, and disaster recovery are not failing primarily because of insufficient funding or political will but because they are centripetal architectures applied to problems that are irreducibly local. A universal development schema, however well-intentioned, sees community health in rural Colombia, migration in the Sahel, and disaster response in coastal Bangladesh as instances of universal problems awaiting universal solutions. The schema produces interventions. The interventions produce dependency. The dependency produces more interventions. The local knowledge that could actually solve the problem, the knowledge that exists only where the problem exists, never enters the system because the system has no way to receive it without extracting and transforming it into something the representation space can accommodate. By the time that knowledge arrives at the center it is no longer the knowledge that mattered.
Post-development theorists have named the dynamic with precision. The critique is not that development is unnecessary but that development infrastructure built on centripetal extraction systematically destroys the local capacity it claims to support. The solution is not a better universal schema but an architectural shift, from extraction to coordination, from imposing a representation space to letting local knowledge remain local while participating in something larger.
The pluriverse, as Escobar defines it, is not relativism, and not the claim that all local knowledge is equally valid and nothing can be evaluated. The claim is more precise and more demanding: that multiple worlds can coexist and coordinate without any of them being required to dissolve into a universal, that coordination is possible across genuine difference without difference being the problem to be solved, and that the richness of the world is not an obstacle to intelligence but the condition of it.
The technical question, unanswered for decades, has been whether pluriversal coordination is actually feasible at scale, whether an architecture can hold genuine local autonomy and coherent collective action simultaneously without collapsing one into the other.
The coordination substrate that can run the targeting system, the neuromorphic hive mind, and the options trading platform is an empirical answer to that question. Three genuinely different worlds can run on the same substrate: defense edge inference, neuromorphic hive mind inference, and financial signal processing. None is required to look like the others. Each domain can retain its own modalities, its own classifiers, its own local knowledge, its own emergent order. The substrate can coordinate without extracting. The boundaries are real. The coherence is also real.
What opens up when the architecture is right is not just more efficient AI infrastructure but the technical feasibility of a different relationship between intelligence and the world in which it operates, one where local knowledge is not raw material to be processed at a center but genuine intelligence to be respected at the edge, where language carries meaning across boundaries without extracting what generated it, where power flows in both directions and no layer is sovereign, and where the order the system tracks is the world’s own order rather than a representation space someone constructed and then confused for reality.
Creative destruction clears the ground; pluriversal possibilities are what grow in it.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of the author’s employer.