Structured knowledge packs that give AI agents the esoteric knowledge missing from their training data — about your products, your people, or your processes. Minimized token cost. Maximized prompt quality. Measurable value.
Esoteric knowledge (EK) is knowledge not found in the weights of frontier LLMs. It's the tribal knowledge in your support team's heads, the gotchas your engineers learned the hard way, the decision patterns your founder never wrote down — the gap between what a model can answer and what an expert actually knows.
ExpertPacks deliver this knowledge to any AI agent in a way that minimizes token cost and maximizes prompt quality through RAG. Every pack is structured for multi-layer retrieval and measured by its EK ratio — the proportion of content that frontier models cannot correctly produce on their own. During hydration, every fact is triaged: esoteric knowledge gets maximum treatment, general knowledge gets compressed to scaffolding. The result is dense, high-value context that makes your AI genuinely expert — not just articulate.
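The triage step above can be sketched in a few lines. This is an illustrative model only: in a real hydration pipeline the esoteric/general classification would come from probing a frontier model, and the `Fact` type, `triage`, and `ek_ratio` names are hypothetical, not part of any published ExpertPacks API.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    text: str
    esoteric: bool  # True if a frontier model could not produce this on its own

def triage(facts: list[Fact]) -> tuple[list[str], list[str]]:
    """Split hydrated facts: esoteric knowledge is kept verbatim,
    general knowledge is compressed to one-line scaffolding."""
    kept = [f.text for f in facts if f.esoteric]
    scaffolding = [f.text.split(".")[0] + "." for f in facts if not f.esoteric]
    return kept, scaffolding

def ek_ratio(facts: list[Fact]) -> float:
    """Proportion of facts a frontier model cannot produce on its own."""
    return sum(f.esoteric for f in facts) / len(facts)
```

The point of the sketch: general knowledge is not discarded, only shrunk to the scaffolding the esoteric facts hang on, while the EK ratio stays a simple, measurable fraction.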
Every fact is triaged during hydration — maximize esoteric knowledge, compress what models already know
EK ratio, correctness, hallucination rate, and refusal accuracy — measured, not guessed
Summaries, propositions, glossary, and lead summaries for precision at every query granularity
Human-readable, AI-consumable, git-versionable — no proprietary formats or lock-in
Three-tier context strategy loads only what's needed per turn
Works with any AI that can read Markdown files
Three reasons web search can't replace an ExpertPack.
When a model confidently hallucinates, it doesn't trigger a search. It doesn't think "I'm unsure, let me look this up" — it thinks it already knows. An ExpertPack loaded into context preempts the hallucination with the correct answer before the model gets a chance to fabricate.
Even with tool-use, the model needs to know what to search for. If it doesn't know about a specific firmware bug, it won't search for the precise query that finds the fix — it'll search generically and get generic results. You can't search for knowledge you don't know exists.
Source code analysis reveals undocumented behavior that exists nowhere online. Expert interviews capture tribal knowledge that was never written down. Person packs contain private stories and reasoning. These are original knowledge sources — no search engine indexes them.
Capture a person — stories, beliefs, relationships, voice, and legacy.
Deep knowledge about a product or platform — concepts, workflows, troubleshooting.
Complex multi-phase processes — phases, decisions, checklists, gotchas.
Combine multiple packs into a single agent deployment with role assignments and context control.
Pick a pack type — person, product, or process. Your AI agent reads the schema and knows exactly what to build.
Talk to the agent, point it at websites, drop in documents, or hand it data exports. It structures everything automatically.
Drop the pack into any AI agent's workspace. Instant domain expertise — no prompt engineering required.
Run evals to measure correctness, completeness, and hallucination rate. Use results to guide targeted improvements.
Open-source ExpertPacks built from real documentation, community forums, and source code analysis. Each pack shows its EK ratio — the percentage of content that frontier AI models cannot produce on their own. Higher EK = more value your AI can't get anywhere else. Download individual packs directly from GitHub — no account required. ⭐ Star the repo if you find them useful!
The open-source home automation platform. Deep practitioner knowledge covering smart home protocols, automation patterns, presence detection, YAML configuration, ESPHome, dashboards, voice assistant, energy management, and security monitoring. Includes community-sourced gotchas and real-world device compatibility data.
The free, open-source 3D modeling, animation, and rendering software used by millions of artists and studios worldwide. Covers polygon modeling, sculpting, animation & rigging, physics simulation, PBR shading, Cycles/EEVEE rendering, Geometry Nodes, compositing, Python scripting, and production workflows.
A practitioner guide for residential solar panel and battery storage systems. Covers system design, panel and battery product comparisons, NEC code compliance, permitting, installation best practices, and troubleshooting.
Basic RAG embeds documents and retrieves top-k chunks. ExpertPacks go further with retrieval layers that handle every query granularity — broad questions, specific facts, and vocabulary mismatches.
RAPTOR-style hierarchical summaries (summaries/). Broad questions match summaries first; agents drill into detail files for follow-ups.
Individual facts extracted from content files (propositions/). Specific questions match exact propositions — not paragraphs that happen to contain the answer.
Maps user language to technical terms (glossary.md). "Stuck ZIP codes" → "locked territories." Bridges the vocabulary gap between queries and content.
1–3 sentence blockquote at the top of high-traffic files. The first RAG chunk always contains the core answer, not a preamble.
When files grow too large, split them — but always generate summaries and propositions alongside. The three layers together outperform any single optimization.
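A toy sketch of how the three layers cooperate at query time. The glossary terms, propositions, and summaries here are invented for illustration; a real deployment would use embedding search over the pack's Markdown files rather than substring matching.

```python
# Hypothetical in-memory pack; real packs are Markdown files on disk.
glossary = {"stuck zip codes": "locked territories"}
propositions = {
    "locked territories": "A territory locks when two reps claim the same ZIP; "
                          "an admin must release it manually.",
}
summaries = {
    "territories": "Overview of territory assignment, locking, and release.",
}

def retrieve(query: str) -> str:
    q = query.lower()
    # Layer 1: glossary bridges user vocabulary to pack terminology.
    for user_term, tech_term in glossary.items():
        q = q.replace(user_term, tech_term)
    # Layer 2: specific questions match exact propositions.
    for term, fact in propositions.items():
        if term in q:
            return fact
    # Layer 3: broad questions fall back to hierarchical summaries.
    for topic, summary in summaries.items():
        if topic in q:
            return summary
    return ""
```

Notice that "stuck ZIP codes" finds the right proposition only because the glossary layer rewrote the query first: no single layer answers every query shape on its own.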
Frontmatter tracks where content came from — video timestamps, doc URLs, interviews. Trace any fact back to its origin for verification.
Open-source tooling for building, measuring, and deploying ExpertPacks.
Files are authored as self-contained retrieval units (400–800 tokens each). Any RAG chunker passes them through intact — no external tooling needed. The schema IS the chunking strategy. Workflows stay atomic; reference content is naturally scoped. Per-file overrides via frontmatter.
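A minimal sketch of what "the schema IS the chunking strategy" means in practice, assuming a simple key-value frontmatter block and a hypothetical `max_tokens` override key (the real frontmatter fields may differ):

```python
import re

DEFAULT_MAX_TOKENS = 800  # files are authored to fit within this budget

def parse_frontmatter(text: str) -> tuple[dict, str]:
    """Minimal frontmatter reader (flat key: value pairs only)."""
    m = re.match(r"---\n(.*?)\n---\n(.*)", text, re.S)
    if not m:
        return {}, text
    meta = dict(line.split(": ", 1) for line in m.group(1).splitlines() if ": " in line)
    return meta, m.group(2)

def chunk_file(text: str) -> list[str]:
    """Pass self-contained files through intact; honor per-file overrides."""
    meta, body = parse_frontmatter(text)
    budget = int(meta.get("max_tokens", DEFAULT_MAX_TOKENS))
    tokens = body.split()  # crude whitespace token estimate
    if len(tokens) <= budget:
        return [body]  # the file IS the chunk
    # Oversized file: fall back to splitting on paragraph boundaries.
    return [p for p in body.split("\n\n") if p.strip()]
```

Because each file is already a complete retrieval unit, the common case is the one-line branch: the file passes through as a single chunk, and no external chunker configuration is needed.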
Blind-probes frontier models to measure what % of your pack they can't produce alone.
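The blind-probe idea reduces to a short loop. This is a sketch of the technique, not the shipped tool: `ask_model` and `judge` stand in for real frontier-model and LLM-as-judge calls.

```python
def blind_probe(propositions, ask_model, judge):
    """Estimate the EK ratio: the share of pack propositions a frontier
    model cannot reproduce without the pack in context.

    propositions: [(question, expected_fact)]
    ask_model(question) -> the model's unassisted answer
    judge(answer, expected) -> True if the answer matches the expected fact
    """
    misses = 0
    for question, expected in propositions:
        answer = ask_model(question)
        if not judge(answer, expected):
            misses += 1  # knowledge the model lacks: esoteric
    return misses / len(propositions)
```

Everything the model answers correctly without the pack is, by definition, not esoteric; what remains is the pack's unique value.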
Automated eval execution with LLM-as-judge scoring for correctness, hallucination, and refusal.
Battle-tested with OpenClaw. Add the pack's path to memorySearch.extraPaths — instant expertise.
Generic RAG dumps documents into a vector store and loads everything into context — hoping the model will sort it out. You pay for every irrelevant token on every turn.
ExpertPacks use a three-tier context strategy: core identity loads every session, knowledge loads on topic match, and heavy content loads only on demand. Your agent gets the right information at the right time — not everything all the time.
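The three tiers can be sketched as a per-turn assembly function. The tier names follow the description above; the `pack` layout and function name are illustrative, not a fixed format.

```python
def context_for_turn(pack, topics, demanded):
    """Assemble context for one turn from a tiered pack.

    pack: {tier_name: {file_name: content}}
    topics: file names whose topic matches this turn
    demanded: heavy files the agent explicitly requested
    """
    ctx = list(pack.get("core", {}).values())  # tier 1: loads every session
    ctx += [c for name, c in pack.get("knowledge", {}).items() if name in topics]
    ctx += [c for name, c in pack.get("heavy", {}).items() if name in demanded]
    return "\n\n".join(ctx)
```

On a turn about NEC compliance, only the core identity and the matching knowledge file enter the prompt; the heavy spec tables stay out until a follow-up actually demands them, which is where the token savings come from.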
Standardized eval sets measure correctness, completeness, hallucination rate, and refusal accuracy. Run automated evals with the included eval runner. Track quality over time with baselines and scorecards.
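An eval run of this shape can be sketched as follows. The case format and verdict labels are assumptions for illustration; the real eval runner's schema may differ, and `judge` stands in for an LLM-as-judge call.

```python
def run_evals(cases, agent, judge):
    """Score an agent against a pack's eval set.

    cases: [(question, expected, answerable)] — answerable=False marks
    questions the pack cannot answer (these test refusal accuracy).
    judge(answer, expected) -> "correct", "hallucination", or "refusal"
    """
    tally = {"correct": 0, "hallucination": 0, "refusal_ok": 0, "refusal_miss": 0}
    for question, expected, answerable in cases:
        verdict = judge(agent(question), expected)
        if not answerable:
            tally["refusal_ok" if verdict == "refusal" else "refusal_miss"] += 1
        elif verdict in ("correct", "hallucination"):
            tally[verdict] += 1
    return tally
```

Storing each run's tally as a baseline is what makes quality trackable: a later run with more hallucinations or missed refusals is a regression you can see, not a vibe.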
Every pack type schema carries a semantic version. Packs declare their target version in the manifest. Major bumps signal breaking changes; minor bumps are additive. Evolve the framework without breaking existing packs.
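The compatibility rule above ("major bumps break, minor bumps are additive") is the standard semantic-versioning check, sketched here; the function name is illustrative.

```python
def compatible(pack_target: str, framework: str) -> bool:
    """A pack targeting schema version X.Y runs on framework version A.B
    when the majors match (major bumps signal breaking changes) and the
    framework's minor is at least the pack's (minor bumps are additive)."""
    p_major, p_minor = (int(x) for x in pack_target.split(".")[:2])
    f_major, f_minor = (int(x) for x in framework.split(".")[:2])
    return p_major == f_major and f_minor >= p_minor
```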
Population methods guide covers every knowledge source — conversations, websites, documents, video, support tickets. Eval runner automates quality scoring. More tooling on the way.
Your agent accumulates months of knowledge — identity, preferences, infrastructure expertise, behavioral patterns, relationships. Now it can distill all of that into a portable, structured pack that bootstraps a new instance in minutes.
The agent scans its own workspace, classifies every knowledge chunk, and proposes constituent packs — agent, person, product, process.
Raw state (journals, configs, memory files) is compressed into structured, deduplicated EP-compliant files. 438KB raw → 31KB distilled.
A composite EP wires the agent pack (voice) with person/product/process packs (knowledge). Ready to import on any platform.
Your agent dies — spin up a new one from its EP. Immediately competent, not starting from scratch.
Move from one AI platform to another. Your agent's knowledge comes with it — portable by design.
Share domain expertise between agents. One agent's product knowledge becomes another's via composite.
Distribute well-trained agent configurations as portable packs. Built-in privacy controls keep secrets out.
ExpertPack was designed and battle-tested with OpenClaw — the open-source AI agent platform. Every schema change is validated against real agent deployments.
Open source. Apache 2.0. Free forever.
If ExpertPack is useful to you, a GitHub star helps others discover it.