Turn any AI agent into a
domain expert

Structured knowledge packs that give AI agents the esoteric knowledge missing from their training data — about your products, your people, or your processes. Minimized token cost. Maximized prompt quality. Measurable value.

Give your AI the knowledge it's missing

Esoteric knowledge (EK) is knowledge not found in the weights of frontier LLMs. It's the tribal knowledge in your support team's heads, the gotchas your engineers learned the hard way, the decision patterns your founder never wrote down — the gap between what a model can answer and what an expert actually knows.

ExpertPacks deliver this knowledge to any AI agent in a way that minimizes token cost and maximizes prompt quality through RAG. Every pack is structured for multi-layer retrieval and measured by its EK ratio — the proportion of content that frontier models cannot correctly produce on their own. During hydration, every fact is triaged: esoteric knowledge gets maximum treatment, general knowledge gets compressed to scaffolding. The result is dense, high-value context that makes your AI genuinely expert — not just articulate.
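As a rough sketch of that triage step (all names here are illustrative, not the actual ExpertPacks pipeline):

```python
# Hypothetical hydration-time triage: keep esoteric facts in full,
# compress general knowledge to short scaffolding stubs.
def triage(facts, model_knows):
    esoteric, scaffolding = [], []
    for fact in facts:
        if model_knows(fact):                 # frontier model already has it
            scaffolding.append(fact[:40])     # compress to a stub
        else:
            esoteric.append(fact)             # maximum treatment: keep whole
    ek_ratio = len(esoteric) / len(facts)     # share the model can't produce
    return esoteric, scaffolding, ek_ratio
```

Here `model_knows` stands in for a blind probe of a frontier model; the EK ratio falls out naturally as the fraction of facts that survive triage intact.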

🧠

EK-Optimized

Every fact is triaged during hydration — maximize esoteric knowledge, compress what models already know

📊

Measurable Quality

EK ratio, correctness, hallucination rate, and refusal accuracy — measured, not guessed

🎯

Multi-Layer Retrieval

Summaries, propositions, glossary, and lead summaries for precision at every query granularity

📝

Markdown-First

Human-readable, AI-consumable, git-versionable — no proprietary formats or lock-in

⚡

Token-Efficient

Three-tier context strategy loads only what's needed per turn

🔌

Agent-Agnostic

Works with any AI that can read Markdown files

"Why can't my AI just search for this?"

Three reasons web search can't replace an ExpertPack.

🤥

Models don't know what they don't know

When a model confidently hallucinates, it doesn't trigger a search. It doesn't think "I'm unsure, let me look this up" — it thinks it already knows. An ExpertPack loaded into context preempts the hallucination with the correct answer before the model gets a chance to fabricate.

🔍

Search requires the right question

Even with tool-use, the model needs to know what to search for. If it doesn't know about a specific firmware bug, it won't search for the precise query that finds the fix — it'll search generically and get generic results. You can't search for knowledge you don't know exists.

🔒

Not all knowledge is on the internet

Source code analysis reveals undocumented behavior that exists nowhere online. Expert interviews capture tribal knowledge that was never written down. Person packs contain private stories and reasoning. These are original knowledge sources — no search engine indexes them.

Three pack types, infinite use cases

🧑

Person Packs

Capture a person — stories, beliefs, relationships, voice, and legacy.

Use cases: Personal AI assistant, family archive, memorial AI, digital legacy, founder knowledge capture
📦

Product Packs

Deep knowledge about a product or platform — concepts, workflows, troubleshooting.

Use cases: AI support agent, sales assistant, training tool, onboarding guide, product documentation
🔄

Process Packs

Complex multi-phase processes — phases, decisions, checklists, gotchas.

Use cases: Home building guide, business formation, project management, certification processes
🔗

Composites

Combine multiple packs into a single agent deployment with role assignments and context control.

Use cases: CEO AI assistant, multi-product support bot, company knowledge base, personal legacy AI

How it works

1

Point your AI at the schema

Pick a pack type — person, product, or process. Your AI agent reads the schema and knows exactly what to build.

2

Feed it knowledge

Talk to the agent, point it at websites, drop in documents, or hand it data exports. It structures everything automatically.

3

Deploy the pack

Drop the pack into any AI agent's workspace. Instant domain expertise — no prompt engineering required.

4

Measure & improve

Run evals to measure correctness, completeness, and hallucination rate. Use results to guide targeted improvements.

Free community packs

Open-source ExpertPacks built from real documentation, community forums, and source code analysis. Each pack shows its EK ratio — the percentage of content that frontier AI models cannot produce on their own. Higher EK means more knowledge your AI can't get anywhere else. Download individual packs directly from GitHub — no account required. ⭐ Star the repo if you find them useful!

🏠

Home Assistant

Composite Pack EK 54%

The open-source home automation platform. Deep practitioner knowledge covering smart home protocols, automation patterns, presence detection, YAML configuration, ESPHome, dashboards, voice assistant, energy management, and security monitoring. Includes community-sourced gotchas and real-world device compatibility data.

📄 61 files 📏 684 KB 📝 10,400+ lines
Zigbee / Z-Wave / Matter · Automations · Presence Detection · ESPHome · Dashboards · Voice Assistant · Energy
🎨

Blender 3D

Product Pack EK 42%

The free, open-source 3D modeling, animation, and rendering software used by millions of artists and studios worldwide. Covers polygon modeling, sculpting, animation & rigging, physics simulation, PBR shading, Cycles/EEVEE rendering, Geometry Nodes, compositing, Python scripting, and production workflows.

📄 35 files 📏 520 KB 📝 7,200+ lines
Modeling & Topology · Animation & Rigging · Sculpting · Shading & PBR · Cycles / EEVEE · Geometry Nodes · Physics & Simulation · Compositing · Python Scripting · Game Export · Production Workflows
☀️

Solar & Battery DIY

Composite Pack EK 52%

A practitioner guide for residential solar panel and battery storage systems. Covers system design, panel and battery product comparisons, NEC code compliance, permitting, installation best practices, and troubleshooting.

📄 46 files 📏 428 KB 📝 3,800+ lines
System Design · Panel Selection · Battery Storage · NEC Code · Permitting · Troubleshooting

Multi-layer retrieval optimization

Basic RAG embeds documents and retrieves top-k chunks. ExpertPacks go further with retrieval layers that handle every query granularity — broad questions, specific facts, and vocabulary mismatches.

📋

Section Summaries

RAPTOR-style hierarchical summaries. Broad questions match summaries first; agents drill into detail files for follow-ups. summaries/

⚛️

Atomic Propositions

Individual facts extracted from content files. Specific questions match exact propositions — not paragraphs that happen to contain the answer. propositions/
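As a sketch, a propositions file is just a flat list of standalone facts, one per line (these example facts are invented):

```markdown
<!-- propositions/firmware.md — invented example facts, not real pack content -->
- Firmware 2.3.1 removes the local API; downgrade is not supported.
- The pairing button must be held for 5 seconds, not tapped.
```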

🔤

Glossary

Maps user language to technical terms. "Stuck ZIP codes" → "locked territories." Bridges the vocabulary gap between queries and content. glossary.md
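For example (the first mapping is from this page; the second row is invented):

```markdown
<!-- glossary.md — maps user phrasing to the pack's canonical terms -->
| User phrasing       | Canonical term        |
|---------------------|-----------------------|
| "stuck ZIP codes"   | locked territories    |
| "the box is dead"   | hub power-cycle fault |
```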

📌

Lead Summaries

1–3 sentence blockquote at the top of high-traffic files. The first RAG chunk always contains the core answer, not a preamble.
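A high-traffic file might open like this (topic and summary text are invented):

```markdown
<!-- invented example of a lead summary on a high-traffic file -->
# Hub Recovery

> If the hub LED blinks red, power-cycle twice within 10 seconds to enter
> recovery mode; a factory reset is almost never needed.

## Details
...
```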

📐

Three-Layer Splitting

When files grow too large, split them — but always generate summaries and propositions alongside. The three layers together outperform any single optimization.

🔍

Source Provenance

Frontmatter tracks where content came from — video timestamps, doc URLs, interviews. Trace any fact back to its origin for verification.
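A provenance block might look like this (the field names are illustrative; the published schema may differ):

```markdown
---
sources:
  - type: video
    url: https://example.com/teardown
    timestamp: "12:34"
  - type: interview
    subject: support-lead
    date: 2024-06-01
---
```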

Tools & Integrations

Open-source tooling for building, measuring, and deploying ExpertPacks.

📐

Retrieval-Ready by Design

Files are authored as self-contained retrieval units (400–800 tokens each). Any RAG chunker passes them through intact — no external tooling needed. The schema IS the chunking strategy. Workflows stay atomic; reference content is naturally scoped. Per-file overrides via frontmatter.

📊

EK Ratio Measurement

Blind-probes frontier models to measure what % of your pack they can't produce alone.
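A minimal sketch of the blind-probe idea (the question set and `ask_model` client are stand-ins, not the shipped tool):

```python
# Blind-probe sketch: ask a frontier model pack-derived questions with NO
# pack content in context; the EK ratio is the share it gets wrong.
def ek_ratio(probes, ask_model):
    misses = 0
    for question, expected in probes:
        answer = ask_model(question)
        if expected.lower() not in answer.lower():
            misses += 1                      # a fact the model can't produce
    return misses / len(probes)
```

Real scoring would use an LLM judge rather than substring matching; this only illustrates how the ratio is formed.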

🧪

Eval Runner

Automated eval execution with LLM-as-judge scoring for correctness, hallucination, and refusal.
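A toy version of LLM-as-judge scoring (the prompt wording and the `llm`/`agent` callables are assumptions, not the real runner):

```python
# LLM-as-judge sketch: a judge model labels each answer, and the runner
# aggregates labels into correctness / hallucination / refusal rates.
def judge(question, reference, answer, llm):
    prompt = (
        "Score the ANSWER against the REFERENCE.\n"
        f"QUESTION: {question}\nREFERENCE: {reference}\nANSWER: {answer}\n"
        "Reply with one word: correct, hallucination, or refusal."
    )
    return llm(prompt).strip().lower()

def run_evals(cases, agent, llm):
    tally = {"correct": 0, "hallucination": 0, "refusal": 0}
    for question, reference in cases:
        verdict = judge(question, reference, agent(question), llm)
        tally[verdict] = tally.get(verdict, 0) + 1
    return {k: v / len(cases) for k, v in tally.items()}
```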

OpenClaw

OpenClaw Integration

Battle-tested with OpenClaw. Add pack path to memorySearch.extraPaths — instant expertise.
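For example (only the `memorySearch.extraPaths` key is from this page; the surrounding file shape and path are illustrative):

```json
{
  "memorySearch": {
    "extraPaths": ["./packs/home-assistant"]
  }
}
```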

Stop burning tokens on context bloat

Generic RAG dumps documents into a vector store and loads everything into context — hoping the model will sort it out. You pay for every irrelevant token on every turn.

ExpertPacks use a three-tier context strategy: core identity loads every session, knowledge loads on topic match, and heavy content loads only on demand. Your agent gets the right information at the right time — not everything all the time.
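The tiered loading described above can be sketched as follows (tier contents and file names are invented; topic matching stands in for real retrieval):

```python
# Three-tier context sketch: core always loads, knowledge loads on topic
# match, heavy content loads only on explicit demand.
TIERS = {
    "core": ["identity.md"],                          # every session
    "knowledge": {"zigbee": ["zigbee-pairing.md"]},   # on topic match
    "heavy": {"zigbee": ["zigbee-device-db.md"]},     # on demand only
}

def build_context(topic, on_demand=False):
    files = list(TIERS["core"])
    files += TIERS["knowledge"].get(topic, [])
    if on_demand:
        files += TIERS["heavy"].get(topic, [])
    return files
```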

  • Token cost: tiered loading — only pay for what this turn actually needs
  • Retrieval: multi-layer — summaries, propositions, glossary, lead summaries
  • Structure: schemas model real expertise — not just document chunks
  • Quality: eval framework measures correctness and catches hallucinations
  • Provenance: every fact traceable to its source — videos, docs, interviews
  • Versioning: schema versions + pack versions — evolve without breaking
  • Composition: combine packs — person + product + process in one agent
  • Portability: plain Markdown — works anywhere, version-controlled

Built for serious knowledge engineering

📊

Evaluation Framework

Standardized eval sets measure correctness, completeness, hallucination rate, and refusal accuracy. Run automated evals with the included eval runner. Track quality over time with baselines and scorecards.

🏷️

Schema Versioning

Every pack type schema carries a semantic version. Packs declare their target version in the manifest. Major bumps signal breaking changes; minor bumps are additive. Evolve the framework without breaking existing packs.
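A manifest declaration might look like this (key names are illustrative, not the published schema):

```yaml
pack: solar-battery-diy
pack_version: 1.4.0        # the pack's own semantic version
schema: composite
schema_version: 2.1.0      # target schema; major bump = breaking change
```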

📖

Guides & Tooling

Population methods guide covers every knowledge source — conversations, websites, documents, video, support tickets. Eval runner automates quality scoring. More tooling on the way.

NEW

Export your AI agent as an ExpertPack

Your agent accumulates months of knowledge — identity, preferences, infrastructure expertise, behavioral patterns, relationships. Now it can distill all of that into a portable, structured pack that bootstraps a new instance in minutes.

🔍

Auto-Discover

The agent scans its own workspace, classifies every knowledge chunk, and proposes constituent packs — agent, person, product, process.

⚗️

Distill

Raw state (journals, configs, memory files) is compressed into structured, deduplicated EP-compliant files. 438KB raw → 31KB distilled.

📦

Package

A composite EP wires the agent pack (voice) with person/product/process packs (knowledge). Ready to import on any platform.

💾

Backup & Restore

Your agent dies — spin up a new one from its EP. Immediately competent, not starting from scratch.

🚚

Platform Migration

Move from one AI platform to another. Your agent's knowledge comes with it — portable by design.

🤝

Agent Collaboration

Share domain expertise between agents. One agent's product knowledge becomes another's via composite.

🏪

Marketplace Ready

Distribute well-trained agent configurations as portable packs. Built-in privacy controls keep secrets out.

OpenClaw
OpenClaw Tested

ExpertPack was designed and battle-tested with OpenClaw — the open-source AI agent platform. Every schema change is validated against real agent deployments.

Start building your ExpertPack

Open source. Apache 2.0. Free forever.

If ExpertPack is useful to you, a GitHub star helps others discover it.