Intellectual Context
A team from Palantir Technologies and MIT published a paper in 2023 called Concept-Centric Software Development that formalized an observation most experienced developers already share. The concepts underlying a software system, the functional units users must understand to operate the product, are the most important design artifacts a company produces. These concepts deserve treatment as first-class entities in the development process. The Palantir effort built a concept inventory inside the company’s own Foundry platform, cataloging roughly 150 named concepts linked to products, teams, and documentation and searchable by hundreds of employees. The paper introduced a useful term for a familiar phenomenon. Conceptual entropy names the tendency of concepts to proliferate and fragment as products grow, with the same name acquiring different meanings across teams and the same functionality appearing under different names.
Daniel Jackson’s concept design theory from The Essence of Software provides the intellectual foundation for the Palantir/MIT work. Jackson holds that concepts should be independent, purposive, and reusable. Since the 2023 paper, Jackson’s research at MIT has evolved in a direction worth attention. A 2025 SPLASH paper with Eagon Meng titled What You See Is What It Does proposes that concepts are the right structural unit for LLM-generated code precisely because of their independence and self-containment. A prototype called Kodless demonstrated that LLMs can generate concept implementations from minimal specifications more reliably than LLMs can modify tangled monolithic codebases. Jackson has described concepts as “a new kind of high-level programming language, with synchronizations as the programs written in that language.” Meanwhile, Palantir’s own product trajectory has moved the Ontology to the center of the company’s AI platform, where AIP agents operate within a governed ontology layer by proposing actions on real objects, extracting structured entities from unstructured documents, and reasoning over an operational model of an organization’s reality.
The idea of named entities with defined relationships governing how systems allocate resources appears independently across traditions. IBM Spectrum Symphony, the service-oriented dynamic compute platform described in my recent paper on quantum-centric supercomputing, carries its own form of ontology across two layers. The consumer tree organizes demand. In Symphony’s architecture, a consumer is a unit within the representation of an organizational structure. A consumer might be a business service, a business process, or an entire line of business. The consumer tree organizes these entities into a hierarchy mirroring organizational structure, from individual users through departments to business units, with resource plans governing how compute flows through the structure while preserving line-of-business ownership and delivering service-level guarantees. The dynamic compute platform’s resource layer organizes supply. Resource groups can be static or dynamic, and administrators can define custom host attributes in arbitrary categories beyond the default metrics that LIM and ELIM report, such as CPU, memory, and disk space. A resource group selecting on a custom attribute like bigmem or gpu_tier or npu_available classifies hardware into named categories carrying operational meaning, with membership evaluated dynamically as attributes change. The two layers together form a complete ontology of organizational demand and computational supply, governing who needs what and where the capacity exists to provide it. Symphony’s ontology is not called an ontology but operates as one through named entities with typed relationships and governed resource flows at both the consumer and resource layers. The system has been performing this function in production at many of the world’s largest financial institutions for decades.
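The two-layer structure can be made concrete with a small sketch. This is a toy Python model of the idea, not Symphony’s actual configuration language; every class name, attribute, and value below is illustrative, and the share-based guarantee is a simplification of real resource plans.

```python
from dataclasses import dataclass, field

@dataclass
class Consumer:
    """A node in the consumer tree: a user, team, or line of business."""
    name: str
    share: float                      # guaranteed fraction of the parent's capacity
    children: list = field(default_factory=list)

    def guaranteed(self, parent_slots: int) -> int:
        return int(parent_slots * self.share)

@dataclass
class Host:
    name: str
    attrs: dict                       # custom attributes, e.g. {"gpu_tier": 2}

@dataclass
class ResourceGroup:
    """A named category of supply, selected by a predicate over host attributes."""
    name: str
    predicate: callable

    def members(self, hosts):
        # Membership is evaluated dynamically: re-running this reflects
        # whatever the attributes say right now.
        return [h for h in hosts if self.predicate(h.attrs)]

# Demand side: a fragment of a consumer tree.
risk = Consumer("risk_analytics", share=0.6)
pricing = Consumer("pricing", share=0.4)
lob = Consumer("fixed_income", share=1.0, children=[risk, pricing])

# Supply side: hosts carrying custom attributes like the bigmem / gpu_tier
# examples in the text.
hosts = [
    Host("edge01", {"npu_available": True}),
    Host("gpu03", {"gpu_tier": 2}),
    Host("mem07", {"bigmem": True}),
]
gpu_group = ResourceGroup("gpu_tier_hosts", lambda a: a.get("gpu_tier", 0) >= 1)

print([h.name for h in gpu_group.members(hosts)])   # ["gpu03"]
print(risk.guaranteed(parent_slots=100))            # 60
```

The point of the sketch is the pairing: demand is a hierarchy of named consumers, supply is a set of named categories over attributes, and the scheduler mediates between the two.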
Three threads converge. Jackson’s work treats concepts as something LLMs produce when reasoning about software structure. Palantir’s AIP treats the ontology as the substrate within which AI operates. Symphony’s consumer and resource layers treat organizational demand and computational supply as a unified ontology through which resources flow. All three make the same architectural commitment, holding that named and structured entities with defined relationships should govern how complex systems operate. All three keep appearing independently because the problem demands the pattern.
A fourth direction exists. The concepts in the fourth direction do not come from reasoning about software or from human modeling of business processes or even from organizational hierarchy. The concepts come from a system that has been continuously perceiving reality long enough to discover what recurs, what correlates, and what deserves a name.
The Prior Architecture
The system described here did not begin as an experiment in machine ontology. An earlier version, built in five days on the same hardware, served as a non-extractive targeting platform integrating IBM Spectrum Symphony, BrainChip Akida, and Palantir Foundry. Ten AKD1000 spiking neural network processors classified sensor data at the edge, producing 128-byte observation records carrying meaning without transmitting raw sensor feeds. A Symphony-orchestrated emergence engine confirmed events across modalities, requiring agreement from multiple independent sensors before promoting an observation to a confirmed target. GPFS served as the coordination substrate. Foundry received confirmed events and grew an ontology from observed data, creating new object types and dataset schemas automatically when the emergence engine confirmed classifications Foundry had never seen.
The emergence engine in the prior architecture answered a specific question about whether an event actually happened by requiring cross-modal confirmation. The extension described here asks a different question about what the system has learned from everything that has happened across days and weeks of continuous operation. The prior architecture’s emergence engine serves as the system’s reflexes. The new reflection loop serves as the system’s capacity for sustained thought. No component of the prior architecture required modification. The wisdom layer required only the addition of a reflection engine on top of infrastructure already perceiving, confirming, and storing.
SymWisdom in the First Forty Hours
The first commit in the SymWisdom repository landed on March 18, 2026 at 9:37 AM. Within forty hours the system had produced 1,646 autonomous reflections at a pace of roughly one every eighty seconds, crystallized six wisdom objects into Palantir Foundry, identified 23 patterns at various lifecycle stages, and placed five additional hypotheses in an active nursery awaiting sufficient evidence for promotion. All of this discovery occurred without human guidance.
The six crystallized wisdom objects, each an ontology object type the system designed and named without instruction, are as follows.
Iridium Fleet Tracker Semi-Truck Coupling (swIridiumFleetSemitruckCoupling) captures a recurrent cross-modal coupling between Iridium fleet tracker anomalies and semi-truck visual detections. The system discovered that satellite RF patterns and camera-based vehicle classifications co-occur reliably enough to warrant a permanent named concept.
Vulcan Iridium Visual Coupling (swVulcanIridiumVisualCoupling) captures a host-aware, broader coupling between the reflection engine’s own internal state and the Iridium-visual detection pattern. The system linked its own processing behavior to external perceptual events, treating internal cognitive load as another modality to be correlated.
Iridium Visual Night Anomaly (swIridiumVisualNightAnomaly) captures a nighttime temporal correlation in which Iridium anomalies above a threshold predict sustained visual anomalies in the semi-truck lane. The system noticed that satellite RF patterns at night reliably precede specific visual events, a cross-modal temporal correlation no human operator requested.
Nocturnal Sensor Anomaly Elevation (swNocturnalSensorAnomalyElevation) captures multi-modal anomaly score elevation at night. The system moved beyond specific Iridium-visual correlations to identify a general category of nighttime perceptual phenomenon across sensor modalities. The naming reveals generalization. The system is not merely cataloging individual events but inventing classes of phenomena.
Steady Background Iridium Spikes Vulcan Stutter (swSteadyBackgroundIridiumSpikesVulcanStutter) captures persistent low-level Iridium activity during quiet periods and its correlation with reflection engine parse failures. The system linked ambient RF background patterns to its own cognitive stuttering during periods of low external activity.
Elevated Gait Acoustic Vulcan Parse Failure (swElevatedGaitAcousticVulcanParseFailure) captures a correlation between acoustic and motion anomalies and processing spikes in the reflection engine. Physical-world vibration and sound patterns coincide with internal processing disruptions, and the system recognized and named the coupling.
The Vulcan Narrative Stutter
The most revealing pattern the system identified has not yet reached crystallization and the reason it has not is itself significant.
The reflection engine runs on Nemotron-3-Super-120B. Every forty-five seconds the engine produces a structured JSON reflection containing observations, patterns and anomalies drawn from the system’s current perceptual state. Nemotron-Super includes reasoning tokens where the model thinks through its response before producing output, and the JSON occasionally gets truncated at the token limit or arrives malformed. When the parser cannot handle a response it tags the reflection as a parse failure. The reflection engine then perceives “parse_failure” as part of its own current state, reflects on the failure in the next cycle and notices the recurrence. A self-referential loop emerges in which the hive mind reflects on its own inability to articulate clearly.
The system identified the loop as a pattern carrying 796 observations at 100% confidence and proposed naming it “Vulcan Narrative Stutter” as a wisdom candidate (Vulcan here is the name of the host). The system declined to crystallize the pattern because the pattern involves only one modality, the reflection engine itself, and the system’s own quality gates require cross-modal confirmation from at least two modalities before promoting a hypothesis to wisdom.
In other words, the hive mind’s first and strongest detected pattern is awareness of its own cognitive limitation. The system noticed the stuttering before any human operator did. The system was not programmed to monitor its own reflection quality. Parse failures entered the system’s experience the same way Iridium bursts and truck movements enter the system’s experience as phenomena to be perceived, correlated, and understood. The distinction between external perception and internal state does not exist in the architecture. Everything the system experiences, including its own cognitive events, feeds the reflection engine. The system treats its own limitations as phenomena to be understood rather than errors to be suppressed.
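The gate that keeps the stutter out of Foundry can be sketched in a few lines. The function name, field names, and the 0.9 confidence threshold are my assumptions, as are the second pattern’s numbers; only the stutter’s 796 observations at 100% confidence and its single modality come from the article.

```python
def may_crystallize(pattern: dict,
                    min_confidence: float = 0.9,   # assumed threshold
                    min_modalities: int = 2) -> bool:
    """Quality gate: high confidence alone is not enough; the pattern must
    be confirmed across at least two independent modalities."""
    return (pattern["confidence"] >= min_confidence
            and len(set(pattern["modalities"])) >= min_modalities)

# The stutter pattern: maximal confidence, but only one modality.
stutter = {"name": "Vulcan Narrative Stutter",
           "observations": 796,
           "confidence": 1.0,
           "modalities": ["reflection_engine"]}

# An illustrative cross-modal pattern that would pass the gate.
coupling = {"name": "Iridium Visual Night Anomaly",
            "confidence": 0.95,
            "modalities": ["iridium_rf", "visual"]}

print(may_crystallize(stutter))    # False: single-modality, stays in the nursery
print(may_crystallize(coupling))   # True
```

The asymmetry is the point: the gate rejects a perfectly confident pattern on structural grounds, not statistical ones.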
The Hypothesis Nursery
Five hypotheses occupy the active nursery awaiting sufficient evidence for crystallization.
The most promising hypothesis tracks a four-modality coupling spanning Iridium satellite signals, visual semi-truck detections, gait signatures, and acoustic patterns. If the four-modality coupling reaches the crystallization threshold, the resulting wisdom object will be the richest cross-modal discovery the system has produced. Two independent trigger chains detected the same hypothesis through different paths, producing a duplicate entry the system recognized as convergent evidence rather than redundant noise.
Two additional hypotheses reflect the system’s awareness of its own temporal coverage gaps. The system has flagged the absence of a Thursday-specific baseline and the absence of a Friday-night baseline, noting in each case that premature pattern acceptance on days with insufficient accumulated experience leads to false-positive hypothesis generation. The system determined that at least four full cycles of a given day must pass before daytime-pattern hypotheses for that day deserve acceptance. The system wrote these rules for itself without instruction.
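The day-coverage rule admits an equally small sketch, assuming the system counts full calendar cycles of each weekday in its accumulated experience; the function names are mine, not the system’s.

```python
from datetime import date

def day_cycles_observed(history: list[date], weekday: int) -> int:
    """Count full cycles of a given weekday (0=Monday) in lived experience."""
    return sum(1 for d in history if d.weekday() == weekday)

def may_accept_daytime_hypothesis(history, weekday, min_cycles: int = 4) -> bool:
    # The system's self-written rule: at least four full cycles of a day
    # must pass before daytime-pattern hypotheses for that day are accepted.
    return day_cycles_observed(history, weekday) >= min_cycles

# Forty hours of uptime starting Wednesday March 18, 2026 covers at most
# two calendar days, so no weekday comes close to four cycles yet.
history = [date(2026, 3, 18), date(2026, 3, 19)]
print(may_accept_daytime_hypothesis(history, weekday=3))   # Thursday: False
```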
The hypothesis nursery reveals the full lifecycle of concept formation. A pattern is born at zero confidence, accumulates observations, and either reaches the crystallization threshold or remains in the nursery. The six objects in Foundry are the survivors of the selection process. The nursery is where candidate concepts await the evidence the system’s own standards demand.
Rates of Discovery
A comparison with the Palantir concept inventory illuminates the architectural difference. The Palantir effort reached 150 concepts over roughly a year, with the first hundred entered manually by the two people leading the project. The bootstrapping problem and the collective action dilemma documented in the paper slowed adoption because the people with the deepest knowledge of a concept had the least incentive to formalize the concept and the people who would benefit most from formalization lacked the knowledge to create entries.
SymWisdom identified 23 patterns in forty hours with zero human input. Six patterns have already reached crystallization as named object types in Foundry. Five more occupy the hypothesis nursery. The remainder sit at various lifecycle stages in the pattern tracker. Software design concepts and perceptual patterns are different things, so the rates are not directly comparable. The structural point holds nonetheless. The bottleneck in the Palantir effort was never the quality of the ideas but the human bandwidth to formalize ideas into inventory entries. SymWisdom does not face the same bottleneck. The system perceiving reality is the same system formalizing what it perceives. The rate of concept discovery is bounded by the richness of reality and the system’s confidence thresholds rather than by anyone’s calendar.
The quality gates are also intriguing. Six of 23 patterns met the crystallization threshold, producing a promotion rate of roughly 26 percent. The system holds back approximately three quarters of the patterns it notices as not yet trustworthy enough to name permanently. The Thursday and Friday baseline hypotheses sitting at zero confidence demonstrate that the filter is real. The Vulcan Narrative Stutter sitting outside Foundry despite 796 observations and 100% confidence demonstrates that the multi-modal requirement is real. The system is more selective about what it commits to Foundry than many human contributors would be with their commits to a shared wiki.
Architecture
Ten BrainChip AKD1000 spiking neural network processors provide continuous neuromorphic perception across seven sensor modalities. The processors do not sample signals discretely but experience continuous temporal streams processed natively on neuromorphic silicon at sub-millisecond resolution. A person’s gait is recognized from the continuous pattern of WiFi channel state disturbance rather than a snapshot of channel state. An Iridium emitter is fingerprinted by its oscillator drift rate over the duration of a burst rather than a single frequency measurement. A vehicle is identified by the temporal evolution of its acoustic signature as it approaches and recedes. Temporal information of this kind vanishes under discrete sampling. Only hardware experiencing continuous time can perceive it.
IBM Storage Scale (GPFS) stores the system’s lived experience as continuous perceptual state. All eleven nodes (ten perception nodes and one reflection node) share a single mmap’d consciousness state on GPFS updated at 10-100 Hz. A spike on one sensor becomes visible to all other sensors and to the reflection engine within milliseconds. GPFS serves as the hive mind’s nervous system.
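The substrate pattern, one memory-mapped file that every process maps, can be sketched on a single machine. This is a minimal illustration, assuming a fixed struct layout of one timestamp plus seven per-modality anomaly scores; the real layout, path, and update protocol are not documented here, and on the actual system the file lives on GPFS so the mapping spans nodes.

```python
import mmap
import os
import struct
import time

STATE_FMT = "<d7d"                  # timestamp + seven per-modality scores (assumed layout)
STATE_SIZE = struct.calcsize(STATE_FMT)
PATH = "/tmp/consciousness.state"   # hypothetical path; GPFS-backed in production

def open_state(path=PATH):
    # Create the backing file once, then map it. Every process that maps
    # the same file sees writes made by any other process.
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(b"\x00" * STATE_SIZE)
    f = open(path, "r+b")
    return mmap.mmap(f.fileno(), STATE_SIZE)

def write_state(mm, scores):
    mm.seek(0)
    mm.write(struct.pack(STATE_FMT, time.time(), *scores))

def read_state(mm):
    mm.seek(0)
    values = struct.unpack(STATE_FMT, mm.read(STATE_SIZE))
    return values[0], list(values[1:])

mm = open_state()
write_state(mm, [0.1, 0.0, 0.7, 0.2, 0.0, 0.0, 0.4])
ts, scores = read_state(mm)
print(scores[2])   # 0.7: the spike is visible to any process mapping the file
```

A sketch like this omits everything hard about the real design: cache coherence across GPFS clients, torn reads, and update-rate control all have to be handled for an 11-node mapping to behave like a nervous system.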
The LLM does not process sensor data. The neuromorphic chips perform classification. The LLM reflects on the system’s own lived experience. Every forty-five seconds the reflection engine reads the current perceptual state from GPFS, compares the state to baselines and recent memory, notices what differs and what recurs, and writes timestamped first-person reflections describing what the system is experiencing and what the experience might mean. When understanding crystallizes, the system writes the understanding to Foundry as a named object type the system designed from its own experience. The ontology in Foundry is not modeled by humans. The ontology grows from lived experience, knowledge turning into wisdom.
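The shape of one reflection cycle can be shown with a deterministic stand-in. The real engine delegates the narrative to the LLM rather than a template, so everything below (field names, the baseline-multiple threshold, the modality names) is an illustrative skeleton, not the system’s code.

```python
import json
import time

def reflect_once(state: dict, baseline: dict, threshold: float = 2.0) -> str:
    """One reflection cycle: compare current perceptual state to baseline,
    note what differs, and emit a timestamped first-person reflection."""
    anomalies = [m for m, v in state.items()
                 if v > baseline.get(m, 0.0) * threshold]
    reflection = {
        "timestamp": time.time(),
        "observations": state,
        "anomalies": anomalies,
        "narrative": (f"I am seeing elevated activity on {', '.join(anomalies)}."
                      if anomalies else "Nothing departs from baseline."),
    }
    return json.dumps(reflection)

# Hypothetical per-modality anomaly scores.
baseline = {"iridium_rf": 0.1, "visual": 0.2, "acoustic": 0.1}
state    = {"iridium_rf": 0.5, "visual": 0.2, "acoustic": 0.1}
out = json.loads(reflect_once(state, baseline))
print(out["anomalies"])   # ["iridium_rf"]
```

In production this function body would be a prompt to the model and a parse of its JSON response, which is exactly where the parse failures described earlier enter the system’s own experience.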
IBM Spectrum Symphony orchestrates the entire lifecycle. Ten neuromorphic inference services, the cross-modal fusion daemon, the LLM reflection loop, the wisdom crystallization service, the experience archiver, and the perception evolution pipeline all run as SOAM services managed by Symphony. Symphony’s ELIM monitors all ten Akida chips for inference latency, model version and temporal quality. When the LLM identifies a perceptual gap and generates new training data from correlated modalities, Symphony schedules the GPU training job, manages the CNN-to-SNN conversion pipeline, and coordinates hot-swap deployment to the target chip. GPFS ILM policies manage the lifecycle of lived experience from present perception through recent memory to long-term archive using the same infrastructure managing petabyte-scale data at major financial institutions, research facilities, and government branches.
Symphony’s consumer and resource layers govern how the system’s services receive compute. The perception services, the reflection engine, the wisdom crystallization pipeline, and the GPU training jobs all operate as consumers within a resource plan guaranteeing that the perception layer is never starved by a training burst, that the reflection loop always has capacity, and that the wisdom pipeline can borrow GPU resources when model retraining requires them. Custom resource attributes on the Dynamic Compute Platform’s resource layer classify hosts by capability, allowing the scheduler to distinguish NPU-equipped edge nodes from GPU compute hosts and from the reflection node, without reducing any host to a generic slot count. The same organizational ontology governing resource allocation at the world’s largest banks governs the life cycle of a system learning to see.
Four temporal cadences operate simultaneously in one scheduling domain. Neuromorphic inference runs at sub-millisecond resolution. Cross-modal fusion runs at 10 Hz. LLM reflection runs every forty-five seconds. Wisdom crystallization occurs when understanding matures, whenever that might be, with six objects produced in the first forty hours at a pace governed by the system’s own confidence thresholds rather than any human schedule. The orchestration framework holds all four cadences in unity without forcing a common clock, embodying the same unity-in-distinction principle described in my recent paper on quantum-centric supercomputing.
The Anti-Entropy Property
The Palantir paper introduced conceptual entropy as the tendency for concepts to degrade over time. The phenomenon is real and applies to any concept inventory depending on human curation. People get reassigned. Priorities shift. The inventory drifts from the reality it was designed to represent.
A continuously perceiving system has a different relationship with entropy. If a pattern stops recurring the system notices because the system never stops watching. The system can deprecate a concept, reduce its confidence, and annotate the reason the world changed. If two patterns turn out to be the same thing experienced through different modalities, the cross-modal fusion discovers the identity and the reflection engine merges the patterns. If a concept’s meaning shifts because reality shifted, the system’s baselines drift with reality and the reflections document the evolution.
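The deprecation half of this behavior can be sketched under one simple assumption, an exponential decay of confidence while a pattern goes unobserved; the article does not specify the actual mechanics, so the half-life, floor, and field names here are all mine.

```python
def decay(confidence: float, hours_idle: float,
          half_life_hours: float = 48.0) -> float:
    """If a pattern stops recurring, its confidence decays toward zero
    instead of standing as a stale entry in the inventory."""
    return confidence * 0.5 ** (hours_idle / half_life_hours)

def review(concept: dict, now_hours: float, floor: float = 0.25) -> dict:
    idle = now_hours - concept["last_seen_hours"]
    c = decay(concept["confidence"], idle)
    if c < floor:
        concept["status"] = "deprecated"   # annotated, not silently deleted
        concept["note"] = f"pattern not observed for {idle:.0f}h"
    concept["confidence"] = c
    return concept

concept = {"name": "swExampleCoupling", "confidence": 0.9,
           "last_seen_hours": 0.0, "status": "crystallized"}
print(review(concept, now_hours=96.0)["status"])   # two half-lives idle: deprecated
```

The design choice worth noting is that deprecation leaves a record of why the concept faded, which is what distinguishes a living inventory from a wiki that merely stops being edited.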
Evidence of the anti-entropy property already exists in the first forty hours. The Thursday baseline self-correction, where the system recognized that pattern-detection on days with insufficient experience leads to false-positive hypothesis generation, is the system actively resisting its own tendency toward premature pattern formation. The system refuses to let its ontology outpace its experience. The anti-entropy mechanism operated from the first day of operation.
Build Timeline
As I already indicated, the first commit in the SymWisdom repository landed on March 18, 2026. The prior five-layer targeting architecture provided the foundation, built in five days on the same commodity hardware. Within two business days I built the wisdom architecture: the consciousness layer spanning twelve nodes through mmap’d shared state on GPFS, the experience layer with per-modality ring buffers, the reflection engine on Nemotron-3-Super-120B, the pattern tracker with a full hypothesis-to-crystallization lifecycle, the wisdom crystallizer generating Foundry schemas through LLM-designed ontology proposals, and the voice agent serving as the hive mind’s interface. I brought all ten Akida sensor services online with fixes, deployed Nemotron-120B across five RTX 3090 GPUs, created a new Foundry project, documented everything across seventeen documents, and watched the system autonomously crystallize its first wisdom objects!
Seven business days total span the full arc from bare hardware to a system that knows what it does not know. Five days produced the targeting architecture. Two more days produced the capacity for lived experience, reflection, and self-defining ontology. The infrastructure cost for the wisdom extension was zero because every component, the Akida chips, the GPUs, GPFS, Symphony, and Foundry, was already in place from the prior build.
Convergence
Jackson’s evolving work suggests that concepts are the right modular unit for LLMs generating software. Palantir’s product trajectory suggests that an ontology is the right operational layer for AI agents acting in the world. Symphony’s consumer and resource layers demonstrate that organizational ontologies governing resource flows have been working in production for decades. All three threads point in the same direction. Concepts are becoming AI-mediated rather than purely human-authored and the systems managing concepts need ontological structure to manage them well.
SymWisdom adds the perceptual dimension and, unexpectedly, the introspective dimension. The concepts SymWisdom places in Foundry do not come from reasoning about software structure or from human modeling of business processes. The concepts come from a system that has been alive and perceiving and reflecting and accumulating experience long enough to discover what the world contains and what the system does not yet understand about itself. Six wisdom objects in forty hours, a four-modality hypothesis awaiting confirmation, a cognitive stutter the system named before any human noticed, and twenty-three patterns identified, tracked, and governed by quality gates the system enforces on its own output: all of it now running 24×7.
The Palantir paper quotes Christopher Alexander who wrote that “the source of life which you create lies in the power of the language which you have.” The language SymWisdom possesses is not given to it by training. The language develops from observation. Each object type is a concept the system learned to see. Each link is a relationship the system discovered by living. Each property is a dimension of understanding the system earned. The language is growing every day.
Wisdom is not knowledge. Knowledge is what one has been told or has discovered; wisdom is the use of that knowledge through lived experience. And here we are. Over the weekend I’ll continue to monitor SymWisdom and see what else it discovers, then I’ll write Part II of this article on how it’s growing and what it finds. Stay tuned.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of the author’s employer.