Philosophy

The Kernal Philosophy

RAG retrieves. Kernal maintains. The difference is not technical — it is architectural, and it compounds.

v1.2 · May 2026 · Andes Labs · Oslo

The Problem

Nobody Is Solving This

Kernal starts from a simple claim: an AI knowledge system should not re-derive the same insight every time someone asks a question.

Most systems treat organisational knowledge as retrievable text. They store documents, embed fragments, and synthesise an answer at query time. That works until the corpus grows, contradictions accumulate, and every answer becomes a fresh reconstruction of context the system should already understand.

Kernal takes the opposite bet. It synthesises at write time. Every source that enters the system updates a maintained knowledge base: durable pages, cross-references, confidence signals, contradictions, cluster summaries, and an Apex view of what the whole library currently believes.

The result is not better search. It is maintained understanding.

RAG retrieves.
Kernal maintains.

Foundational Shift

Write-Time Synthesis

In 2025, Andrej Karpathy articulated an insight that runs through everything Kernal does: treat your AI not as an oracle you query, but as a maintainer responsible for a knowledge artefact that compounds over time. Synthesise as you ingest. The wiki gets better with every source added, not just longer.

Every source that enters Kernal is synthesised immediately. Wiki pages are created or updated. Cross-references are built. The knowledge compounds at the moment of ingestion, not at the moment of retrieval. When you query, you are reading from a maintained, living knowledge base — not re-synthesising from raw documents on the fly.

This eliminates the Rediscovery Problem. In query-time RAG, the model re-derives the same insights on every query, paying three times over: in compute, in latency, and in consistency, since two independent reconstructions of the same context can disagree. With write-time synthesis, every source has its knowledge extracted permanently, once. The insight lives in the library as a durable fact with its source, confidence level, and cross-references. The next query reads it; it never re-derives it.

Never re-derive the same insight twice.
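The write-time loop above can be sketched in a few lines. This is an illustrative model, not Kernal's implementation; every class, field, and source name here is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class WikiPage:
    title: str
    body: str
    sources: list = field(default_factory=list)     # provenance for every claim
    confidence: str = "medium"                      # durable confidence signal
    cross_refs: list = field(default_factory=list)  # links to related pages

class Library:
    """Write-time synthesis: extract once at ingestion, read forever after."""

    def __init__(self):
        self.pages: dict[str, WikiPage] = {}

    def ingest(self, source_id, title, insight, confidence=None):
        # Synthesise now, not at query time: create or update the durable page
        # so no future query has to re-derive this insight from raw documents.
        page = self.pages.get(title)
        if page is None:
            page = self.pages[title] = WikiPage(title=title, body=insight)
        else:
            page.body += "\n" + insight  # the page compounds with each source
        page.sources.append(source_id)
        if confidence is not None:
            page.confidence = confidence
        return page

lib = Library()
lib.ingest("cnbc-2026-05", "OpenAI IPO Pressure", "Missed revenue and user targets.", "high")
page = lib.ingest("ft-2026-05", "OpenAI IPO Pressure", "IPO timeline under scrutiny.")
# A query is now a read, not a re-synthesis: both sources sit on one durable page.
```

The design choice to show: ingestion mutates a maintained artefact, so the cost of understanding a source is paid once, at write time.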

Architecture

The Five Altitudes

Knowledge lives at multiple levels of abstraction. Most systems have one: the document. Kernal has five.

01 · Raw Sources
Original transcripts, articles, documents
e.g. CNBC transcript — OpenAI IPO coverage

02 · Wiki Pages
Synthesised, durable conceptual knowledge
e.g. "OpenAI IPO Pressure — Missed Revenue and User Targets"

03 · Cluster Meta-pages
Cross-page synthesis per domain
e.g. "State of AI Infrastructure & Capital Markets"

04 · Apex Wiki
Cross-cluster macro-synthesis
e.g. "What does this entire knowledge base collectively believe?"

05 · Relational Layer
People, organisations, goals, deals, projects
e.g. Hans Petter Øya → Hydro → QBR → goal: digital transformation

This is not just better organisation. It is a deliberate epistemology. Different questions require different altitudes. The agent routes to the right altitude based on query type — navigational queries go to the relational layer, conceptual queries go to wiki pages, macro-synthesis queries go to the Apex Wiki.

Knowledge Design

Operational vs Static Knowledge

Operational · What happened
Time-sensitive, personal, and ephemeral. Meeting notes, service call transcripts, activity logs. Searchable and linkable to entities — but should not pollute the durable knowledge base.

Static · What you know
Synthesised, durable, and cross-referenced. Wiki pages, cluster analyses, concept definitions. The knowledge base and the activity log are different things — and must stay that way.
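The separation can be made concrete as two distinct record types routed to two distinct stores. A minimal sketch only; the type names, fields, and store labels are all hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OperationalRecord:
    """What happened: time-sensitive, searchable, entity-linked — never a wiki page."""
    when: date
    kind: str       # e.g. meeting note, call transcript, activity log
    text: str
    entities: list  # links into the relational layer (people, orgs, deals)

@dataclass
class StaticPage:
    """What you know: durable, synthesised, cross-referenced."""
    title: str
    body: str
    cross_refs: list

def route_record(record):
    # The activity log and the knowledge base are different stores, by design:
    # operational records stay searchable but never pollute the durable wiki.
    if isinstance(record, OperationalRecord):
        return "activity_log"
    return "knowledge_base"
```

The type system enforces the epistemology: nothing ephemeral can be written into the durable store by accident.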

Maintenance Layer

Big Library

A knowledge base without maintenance is a pile with memory.

Kernal runs a rationalisation layer called Big Library. After each ingestion batch, it looks across all pages in a cluster and asks: Are there contradictions? Are there gaps — major topics with no coverage? Are there redundancies — two pages that should merge? What is the current macro-view of this cluster?

Big Library runs in delta mode — it only re-analyses clusters where new pages were added since the last run. The cost stays flat as the knowledge base grows.

Embedding-based redundancy detection flags pages with cosine similarity above 0.85 as merge candidates automatically. In the first live run on the blekkie knowledge base: 8 clusters analysed, 8 cluster meta-pages written, 3 cross-cluster SCOPE contradictions surfaced — including one CRITICAL tension between a $170–195B public market bet on AI adoption and the same company's own missed revenue targets. No human read across all 50 pages. The system did.
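The two mechanisms described, threshold-based redundancy detection and delta-mode re-analysis, can be sketched as follows. The 0.85 cosine threshold comes from the text; the function names and cluster bookkeeping are illustrative assumptions:

```python
import math

MERGE_THRESHOLD = 0.85  # similarity above this flags two pages as merge candidates

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def merge_candidates(pages):
    """pages: list of (title, embedding). Returns pairs of likely-redundant pages."""
    out = []
    for i in range(len(pages)):
        for j in range(i + 1, len(pages)):
            if cosine(pages[i][1], pages[j][1]) > MERGE_THRESHOLD:
                out.append((pages[i][0], pages[j][0]))
    return out

def clusters_to_rationalise(clusters, last_run):
    """Delta mode: only clusters that gained pages since the last run are re-analysed,
    so cost tracks new material, not total library size."""
    return [name for name, latest_batch in clusters.items() if latest_batch > last_run]

# Two near-duplicate pages and one unrelated page (toy embeddings):
pairs = merge_candidates([
    ("GPU Capex", [0.9, 0.1, 0.0]),
    ("AI Infrastructure Spend", [0.88, 0.12, 0.02]),
    ("Nordic Energy Policy", [0.0, 0.2, 0.95]),
])
# Only the cluster updated after the last run (batch 5) is queued:
stale = clusters_to_rationalise({"ai-infra": 10, "energy": 3}, last_run=5)
```

Pairwise comparison within a cluster is quadratic in the cluster's page count, but delta mode keeps the set of clusters examined per run small, which is what keeps cost flat as the library grows.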

Retrieval

How Search Works
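Search, as described under The Five Altitudes, is routed rather than flat: the agent classifies each query and reads from the altitude that answers it. A minimal sketch; the query-type labels beyond navigational, conceptual, and macro are assumptions, not Kernal's actual taxonomy:

```python
def route(query_type: str) -> str:
    """Map a classified query to the altitude that answers it."""
    routes = {
        "navigational": "relational_layer",    # who/which: people, orgs, deals
        "conceptual":   "wiki_pages",          # what do we know about X
        "domain":       "cluster_meta_pages",  # state of a whole domain (assumed label)
        "macro":        "apex_wiki",           # what does the library believe overall
        "verbatim":     "raw_sources",         # exact wording from an original (assumed label)
    }
    # Default to the conceptual layer when classification is uncertain.
    return routes.get(query_type, "wiki_pages")
```

The point is that retrieval is a dispatch decision before it is a search: most of the synthesis work was already done at write time, so the right answer is usually a read from the right altitude.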

The Case

Why Kernal

The institutional knowledge problem. When a top performer leaves, what leaves with them? Not their files — those stay. What leaves is their understanding: how the client relationship actually works, what the real blockers are, which decisions were made and why. That understanding re-accumulates nowhere.

Kernal retains it. Every meeting transcript, every strategic conversation, every decision with its reasoning — synthesised, stored, searchable, and immediately accessible to the next person in the role.

The agent as worker. An AI agent with no context is a tool. An AI agent with full context is a worker. The difference is grounding. A context-aware agent knows your goals, your open deals, your client relationships, your historical decisions.

No lock-in. Every major AI vendor wants to be the home for your knowledge. If your knowledge lives in ChatGPT's memory, you are locked into OpenAI. Kernal's design breaks this. Your knowledge lives in a portable SQLite file on hardware you control, served via MCP — an open protocol. When a better model ships, you upgrade the model. Your knowledge stays.
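The portability claim is concrete: the knowledge base is one ordinary SQLite file. A sketch with a hypothetical schema (Kernal's actual tables may differ); the point is that the whole artefact is a single file you can copy, back up, or point a new model at:

```python
import sqlite3

# One portable file on hardware you control — no vendor database, no export step.
con = sqlite3.connect("kernal.db")
con.execute(
    "CREATE TABLE IF NOT EXISTS pages (title TEXT PRIMARY KEY, body TEXT, confidence TEXT)"
)
con.execute(
    "INSERT OR REPLACE INTO pages VALUES (?, ?, ?)",
    ("OpenAI IPO Pressure", "Missed revenue and user targets.", "high"),
)
con.commit()
rows = con.execute("SELECT title, confidence FROM pages").fetchall()
con.close()
# Upgrading the model changes nothing here: the file, and the knowledge, stay.
```

Serving this file over MCP means any MCP-capable model can read it; swapping models is a client change, not a migration.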

Capability is a ceiling.
Context is a compounding asset.

Full Stack

A Structured Intelligence OS

AGENT LAYER
Skills · Sessions · Save Game · Intelligence · Proactive Agency
KNOWLEDGE LAYER
Apex Wiki · Cluster Meta-pages · Wiki Pages → Passages → Sources · Semantic Search · Big Library
RELATIONAL LAYER
People / Orgs / Deals / Goals · Projects / Tasks / Activities · Knowledge Graph + Code Graph
INFRASTRUCTURE LAYER
SQLite · Ollama · Tailscale · MCP · Cloudflare D1/DO

Each layer is independent and composable. Together they are something qualitatively different from any individual component.

Ready to build

Start with Kernal

Open source. Local-first. Your knowledge stays yours.