## How It Works

Lattice adapts to any domain: define your schema, declare your entity contexts, and Lattice writes the file tree. The examples below show three different schemas producing the same pattern.
### Personal agent system

Tables: `voice_notes`, `contacts`, `tasks`

```typescript
const db = new Lattice('./personal.db');

db.define('contacts', {
  columns: {
    id: 'TEXT PRIMARY KEY',
    name: 'TEXT NOT NULL',
    email: 'TEXT',
    notes: 'TEXT',
  },
  render: 'default-table',
  outputFile: 'contacts/CONTACTS.md',
});

db.define('tasks', {
  columns: {
    id: 'TEXT PRIMARY KEY',
    title: 'TEXT NOT NULL',
    contact_id: 'TEXT',
    status: "TEXT DEFAULT 'open'",
  },
  render: 'default-list',
  outputFile: 'TASKS.md',
});

db.defineEntityContext('contacts', {
  slug: (r) => r.name.toLowerCase().replace(/\s+/g, '-'),
  index: {
    outputFile: 'contacts/CONTACTS.md',
    render: (rows) => rows.map((r) => `- ${r.name}`).join('\n'),
  },
  files: {
    'CONTACT.md': {
      source: { type: 'self' },
      render: ([r]) => `# ${r.name}\n\n${r.notes ?? ''}`,
    },
    'TASKS.md': {
      source: { type: 'hasMany', table: 'tasks', foreignKey: 'contact_id' },
      render: (rows) => rows.map((r) => `- ${r.title}`).join('\n'),
      omitIfEmpty: true,
    },
    'NOTES.md': {
      source: { type: 'hasMany', table: 'voice_notes', foreignKey: 'contact_id' },
      render: (rows) => rows.map((r) => r.transcript).join('\n\n'),
      omitIfEmpty: true,
    },
  },
  combined: { outputFile: 'CONTEXT.md', exclude: [] },
});
```

Resulting tree:

```
context/
├── contacts/
│   └── CONTACTS.md     ← index of all contacts
├── contacts/alice/
│   ├── CONTACT.md      ← Alice's record
│   ├── TASKS.md        ← tasks linked to Alice
│   ├── NOTES.md        ← voice note transcripts
│   └── CONTEXT.md      ← all three combined
├── contacts/bob/
│   ├── CONTACT.md
│   └── CONTEXT.md      ← TASKS.md omitted (empty)
└── TASKS.md            ← global task list
```

### Restaurant management
Tables: `restaurants`, `staff`, `menu_items`
```typescript
const db = new Lattice('./restaurant.db');

db.define('restaurants', {
  columns: {
    id: 'TEXT PRIMARY KEY',
    slug: 'TEXT NOT NULL UNIQUE',
    name: 'TEXT NOT NULL',
    address: 'TEXT',
    cuisine: 'TEXT',
  },
  render: 'default-table',
  outputFile: 'restaurants/RESTAURANTS.md',
});

db.define('staff', {
  columns: {
    id: 'TEXT PRIMARY KEY',
    restaurant_id: 'TEXT NOT NULL',
    name: 'TEXT NOT NULL',
    role: 'TEXT',
    shift: 'TEXT',
  },
  render: 'default-list',
  outputFile: 'staff/STAFF.md',
});

db.defineEntityContext('restaurants', {
  slug: (r) => r.slug as string,
  index: {
    outputFile: 'restaurants/RESTAURANTS.md',
    render: (rows) => rows.map((r) => `- ${r.name} (${r.cuisine})`).join('\n'),
  },
  files: {
    'RESTAURANT.md': {
      source: { type: 'self' },
      render: ([r]) => `# ${r.name}\n${r.address}\nCuisine: ${r.cuisine}`,
    },
    'STAFF.md': {
      source: { type: 'hasMany', table: 'staff', foreignKey: 'restaurant_id' },
      render: (rows) => rows.map((r) => `- ${r.name} (${r.role}, ${r.shift})`).join('\n'),
      omitIfEmpty: true,
    },
    'MENU.md': {
      source: { type: 'hasMany', table: 'menu_items', foreignKey: 'restaurant_id' },
      render: (rows) => rows.map((r) => `- ${r.name}: $${r.price}`).join('\n'),
      omitIfEmpty: true,
    },
  },
  combined: { outputFile: 'CONTEXT.md', exclude: [] },
});
```

Resulting tree:

```
context/
├── restaurants/
│   └── RESTAURANTS.md   ← global index
├── restaurants/downtown/
│   ├── RESTAURANT.md    ← location details
│   ├── STAFF.md         ← current staff roster
│   ├── MENU.md          ← menu items
│   └── CONTEXT.md       ← all files combined
└── restaurants/uptown/
    ├── RESTAURANT.md
    ├── MENU.md
    └── CONTEXT.md       ← STAFF.md omitted (empty)
```

### Multi-agent architecture
Tables: `agents`, `skills`, `projects`, `agent_skills`
```typescript
const db = new Lattice('./agents.db');

db.define('agents', {
  columns: {
    id: 'TEXT PRIMARY KEY',
    slug: 'TEXT NOT NULL UNIQUE',
    name: 'TEXT NOT NULL',
    soul: 'TEXT',
    active: 'INTEGER DEFAULT 1',
  },
  render: 'default-table',
  outputFile: 'agents/AGENTS.md',
});

db.define('skills', {
  columns: {
    id: 'TEXT PRIMARY KEY',
    name: 'TEXT NOT NULL',
    description: 'TEXT',
  },
  render: 'default-list',
  outputFile: 'skills/SKILLS.md',
});

db.define('agent_skills', {
  columns: {
    agent_id: 'TEXT NOT NULL',
    skill_id: 'TEXT NOT NULL',
  },
  tableConstraints: ['PRIMARY KEY (agent_id, skill_id)'],
  primaryKey: ['agent_id', 'skill_id'],
  render: 'default-table',
  outputFile: 'agent_skills.md',
});

db.defineEntityContext('agents', {
  slug: (r) => r.slug as string,
  files: {
    'AGENT.md': {
      source: { type: 'self' },
      render: ([r]) => `# ${r.name}\n\n${r.soul ?? ''}`,
    },
    'SKILLS.md': {
      source: {
        type: 'manyToMany',
        junctionTable: 'agent_skills',
        localKey: 'agent_id',
        remoteKey: 'skill_id',
        remoteTable: 'skills',
      },
      render: (rows) => rows.map((r) => `- ${r.name}: ${r.description}`).join('\n'),
      omitIfEmpty: true,
    },
    'PROJECTS.md': {
      source: { type: 'hasMany', table: 'projects', foreignKey: 'owner_agent_id' },
      render: (rows) => rows.map((r) => `- ${r.name}`).join('\n'),
      omitIfEmpty: true,
    },
  },
  combined: { outputFile: 'CONTEXT.md', exclude: [] },
  protectedFiles: ['SESSION.md', 'NOTES.md'],
});
```

Resulting tree:

```
context/
├── agents/
│   └── AGENTS.md        ← index of all agents
├── agents/forge/
│   ├── AGENT.md         ← persona and soul
│   ├── SKILLS.md        ← skills via junction table
│   ├── PROJECTS.md      ← owned projects
│   ├── CONTEXT.md       ← all files combined
│   └── SESSION.md       ← agent-written (protected)
├── agents/audit/
│   ├── AGENT.md
│   ├── SKILLS.md
│   └── CONTEXT.md
└── skills/
    └── SKILLS.md        ← global skills index
```

### The pattern
1. **Define your schema.** Call `db.define()` for each table. Provide column specs and a render function.
2. **Define entity contexts.** Call `db.defineEntityContext()` to declare per-entity file structures and relationships.
3. **Lattice writes the tree.** `db.render()` or `db.watch()` generates the complete file tree, one directory per entity.
4. **Agents load what they need.** Each agent reads its own CONTEXT.md or individual files. No giant global context dump.
5. **Writeback ingests output.** `db.defineWriteback()` watches agent-written files and persists structured data back to the database.
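Conceptually, steps 1–3 boil down to a pure transformation from rows to a map of file paths and contents. Here is a simplified, self-contained sketch of that transformation — not Lattice's actual internals; `FileSpec` and `renderEntityDir` are illustrative names:

```typescript
// Illustrative sketch only: FileSpec and renderEntityDir are hypothetical
// names, not part of Lattice's API. This models the rows -> file tree step.
type Row = Record<string, unknown>;

interface FileSpec {
  rows: (entity: Row) => Row[];     // resolve source rows (self / hasMany / manyToMany)
  render: (rows: Row[]) => string;  // turn rows into markdown
  omitIfEmpty?: boolean;            // skip the file when no rows match
}

function renderEntityDir(
  entity: Row,
  slug: (r: Row) => string,
  files: Record<string, FileSpec>,
): Map<string, string> {
  const out = new Map<string, string>();
  const dir = slug(entity);
  const sections: string[] = [];
  for (const [name, spec] of Object.entries(files)) {
    const rows = spec.rows(entity);
    if (rows.length === 0 && spec.omitIfEmpty) continue; // omitted entirely
    const body = spec.render(rows);
    out.set(`${dir}/${name}`, body);
    sections.push(body);
  }
  out.set(`${dir}/CONTEXT.md`, sections.join('\n\n')); // combined view
  return out;
}
```

Because the transformation is deterministic, rerunning it after any database change regenerates exactly the trees shown in the examples above.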
### Why not just write context manually?
Manual context files go stale. Agents make decisions based on old state, which leads to contradictions, repeated work, and incorrect outputs.
Lattice keeps files in sync with the database automatically. Every time something changes, every agent sees fresh context on the next session start.
Entity context directories mean each agent only loads what's relevant to it — not a monolithic dump of everything. This reduces token usage and improves focus.
The writeback pipeline closes the loop: agents don't just read, they write back structured output that becomes permanent state in the database.
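The file-to-database direction can be sketched the same way. Below is a minimal, hypothetical example of the parse step a writeback might perform; `parseNotes` is an illustrative name, and Lattice's actual `defineWriteback` options are not shown here:

```typescript
// Hypothetical sketch: parse an agent-written markdown bullet list
// (e.g. a protected NOTES.md) back into structured rows for the database.
// parseNotes is an illustrative name, not part of Lattice's API.
interface NoteRow {
  contact_id: string;
  text: string;
}

function parseNotes(contactId: string, markdown: string): NoteRow[] {
  return markdown
    .split('\n')
    .filter((line) => line.startsWith('- '))   // keep only bullet lines
    .map((line) => ({ contact_id: contactId, text: line.slice(2).trim() }));
}
```

Each parsed row can then be upserted into the corresponding table, so the next render reflects what the agent wrote.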