Philosophy
The Kernal Philosophy
RAG retrieves. Kernal maintains. The difference is not technical — it is architectural, and it compounds.
The Problem
Nobody Is Solving This
Kernal starts from a simple claim: an AI knowledge system should not re-derive the same insight every time someone asks a question.
Most systems treat organisational knowledge as retrievable text. They store documents, embed fragments, and synthesise an answer at query time. That works until the corpus grows, contradictions accumulate, and every answer becomes a fresh reconstruction of context the system should already understand.
Kernal takes the opposite bet. It synthesises at write time. Every source that enters the system updates a maintained knowledge base: durable pages, cross-references, confidence signals, contradictions, cluster summaries, and an Apex view of what the whole library currently believes.
The result is not better search. It is maintained understanding.
RAG retrieves.
Kernal maintains.
Foundational Shift
Write-Time Synthesis
In 2025, Andrej Karpathy articulated an insight that runs through everything Kernal does: treat your AI not as an oracle you query, but as a maintainer responsible for a knowledge artefact that compounds over time. Synthesise as you ingest. The wiki gets better with every source added, not just longer.
Every source that enters Kernal is synthesised immediately. Wiki pages are created or updated. Cross-references are built. The knowledge compounds at the moment of ingestion, not at the moment of retrieval. When you query, you are reading from a maintained, living knowledge base — not re-synthesising from raw documents on the fly.
This eliminates the Rediscovery Problem. In query-time RAG, the model re-derives the same insights on every query, which wastes compute, wastes time, and invites inconsistency. With write-time synthesis, each source has its knowledge extracted once, permanently. The insight lives in the library as a durable fact with its source, confidence level, and cross-references. The next query reads it; it never re-derives it.
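The write-time pattern can be sketched in a few lines. Everything here is illustrative, not Kernal's actual schema or API: `summarise` stands in for the LLM synthesis step, and the `pages` table stands in for the maintained knowledge base. The point is structural: synthesis happens once on ingest, and a query is a pure read.

```python
import sqlite3

def summarise(text: str) -> str:
    # Stand-in for the LLM synthesis step; a real system would call a model here.
    return text.split(".")[0].strip()

def ingest(db, source_id, text):
    """Write-time synthesis: the insight is extracted once and stored
    as a durable row with provenance and a confidence signal."""
    fact = summarise(text)
    db.execute("INSERT INTO pages (source_id, fact, confidence) VALUES (?, ?, ?)",
               (source_id, fact, 0.9))

def query(db, term):
    """Query time is a read of compiled understanding, not a re-derivation."""
    return db.execute("SELECT fact FROM pages WHERE fact LIKE ?",
                      (f"%{term}%",)).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pages (source_id TEXT, fact TEXT, confidence REAL)")
ingest(db, "meeting-42", "Acme renewal is blocked on legal review. Other notes follow.")
print(query(db, "Acme"))  # → [('Acme renewal is blocked on legal review',)]
```

However many times the query runs, the synthesis cost was paid exactly once, at ingestion.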
Architecture
The Five Altitudes
Knowledge lives at multiple levels of abstraction. Most systems have one: the document. Kernal has five.
This is not just better organisation. It is a deliberate epistemology. Different questions require different altitudes. The agent routes to the right altitude based on query type — navigational queries go to the relational layer, conceptual queries go to wiki pages, macro-synthesis queries go to the Apex Wiki.
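The routing described above amounts to a small dispatch table. A minimal sketch, with invented layer names that are only placeholders for Kernal's actual altitudes:

```python
def route(query_type: str) -> str:
    """Illustrative router: different question types read different altitudes."""
    altitude = {
        "navigational": "relational",   # "show me everything connected to X"
        "conceptual":   "wiki_page",    # "what do we know about Y"
        "macro":        "apex_wiki",    # "what does the library believe overall"
    }
    return altitude.get(query_type, "full_text")  # fall back to raw search

print(route("conceptual"))  # → wiki_page
```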
Knowledge Design
Operational vs Static Knowledge
Maintenance Layer
Big Library
A knowledge base without maintenance is a pile with memory.
Kernal runs a rationalisation layer called Big Library. After each ingestion batch, it looks across all pages in a cluster and asks: Are there contradictions? Are there gaps — major topics with no coverage? Are there redundancies — two pages that should merge? What is the current macro-view of this cluster?
Big Library runs in delta mode — it only re-analyses clusters where new pages were added since the last run. The cost stays flat as the knowledge base grows.
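A sketch of the delta-mode idea, under the assumption that each page records its cluster and ingestion timestamp (field names are hypothetical): only clusters touched since the last run are marked dirty, so the analysis cost tracks change, not total size.

```python
def clusters_to_reanalyse(pages, last_run_ts):
    """Delta mode: collect only the clusters that received new pages
    since the last Big Library run."""
    dirty = set()
    for page in pages:
        if page["added_at"] > last_run_ts:
            dirty.add(page["cluster"])
    return dirty

pages = [
    {"cluster": "finance", "added_at": 100},
    {"cluster": "finance", "added_at": 250},  # new since last run
    {"cluster": "product", "added_at": 90},
]
print(clusters_to_reanalyse(pages, last_run_ts=200))  # → {'finance'}
```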
Embedding-based redundancy detection flags pages with cosine similarity above 0.85 as merge candidates automatically. In the first live run on the blekkie knowledge base: 8 clusters analysed, 8 cluster meta-pages written, 3 cross-cluster SCOPE contradictions surfaced — including one CRITICAL tension between a $170–195B public market bet on AI adoption and the same company's own missed revenue targets. No human read across all 50 pages. The system did.
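The redundancy check itself is simple: compare every pair of page embeddings and flag pairs above the 0.85 cosine threshold. A self-contained sketch with toy two-dimensional vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def merge_candidates(embeddings, threshold=0.85):
    """Flag page pairs whose embedding similarity exceeds the threshold."""
    pairs = []
    names = list(embeddings)
    for i, p in enumerate(names):
        for q in names[i + 1:]:
            if cosine(embeddings[p], embeddings[q]) > threshold:
                pairs.append((p, q))
    return pairs

embs = {"pricing": [1.0, 0.1], "pricing-v2": [0.98, 0.12], "hiring": [0.0, 1.0]}
print(merge_candidates(embs))  # → [('pricing', 'pricing-v2')]
```

The pairwise scan is quadratic in page count; a production system would restrict comparisons to pages within a cluster, which the delta mode above already provides.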
Retrieval
How Search Works
Full-text search. Fast and exact. Unicode-aware full-text index. Good for lookup when you know the term. Fails when the concept exists under different vocabulary across sources.

Semantic search. Every wiki page is embedded at write time. Queries find the conceptually closest pages regardless of vocabulary. You are searching compiled understanding, not strings.

Graph navigation. "Show me everything connected to this person" returns their activities, deals, goals, related organisations — without any text query. Structural navigation.

Hierarchical browsing. Browse cluster → cluster meta-page → individual pages. The system's macro-thesis is one step away. Start from the top and zoom in.
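The first two modes can be sketched with stock SQLite: FTS5 for the exact case, and a brute-force embedding scan for the conceptual case. The schema, scoring, and toy vectors here are illustrative, not Kernal's.

```python
import math
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE pages USING fts5(title, body)")
db.execute("INSERT INTO pages VALUES ('Churn', 'Customers leaving after renewal')")
db.execute("INSERT INTO pages VALUES ('Hiring', 'Open engineering roles')")

# Mode 1: full-text — fast and exact, but bound to the source vocabulary.
hit = db.execute("SELECT title FROM pages WHERE pages MATCH 'renewal'").fetchone()
print(hit)  # → ('Churn',)

# Mode 2: semantic — compare a query embedding against page embeddings.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

page_vecs = {"Churn": [0.9, 0.1], "Hiring": [0.1, 0.9]}  # toy embeddings
query_vec = [0.8, 0.2]  # imagine this encodes "customer attrition"
best = max(page_vecs, key=lambda t: cosine(page_vecs[t], query_vec))
print(best)  # → Churn
```

Note what mode 2 buys: "customer attrition" never appears in any page, yet the conceptually closest page still surfaces.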
The Case
Why Kernal
The institutional knowledge problem. When a top performer leaves, what leaves with them? Not their files — those stay. What leaves is their understanding: how the client relationship actually works, what the real blockers are, which decisions were made and why. This re-accumulates nowhere.
Kernal retains it. Every meeting transcript, every strategic conversation, every decision with its reasoning — synthesised, stored, searchable, and immediately accessible to the next person in the role.
The agent as worker. An AI agent with no context is a tool. An AI agent with full context is a worker. The difference is grounding. A context-aware agent knows your goals, your open deals, your client relationships, your historical decisions.
No lock-in. Every major AI vendor wants to be the home for your knowledge. If your knowledge lives in ChatGPT's memory, you are locked into OpenAI. Kernal's design breaks this. Your knowledge lives in a portable SQLite file on hardware you control, served via MCP — an open protocol. When a better model ships, you upgrade the model. Your knowledge stays.
Capability is a ceiling.
Context is a compounding asset.
Full Stack
A Structured Intelligence OS
Each layer is independent and composable. Together they are something qualitatively different from any individual component.
Ready to build
Start with Kernal
Open source. Local-first. Your knowledge stays yours.