Installation
npm install latticesql better-sqlite3

Requires Node.js 18+. Uses better-sqlite3 under the hood — no external database process to install or manage.
As of latticesql@1.7.0, better-sqlite3 is a peer dependency (range >=11 <13), so install it explicitly alongside latticesql. This lets your app own the native driver build — pin a major that matches your Node version and the rest of your dependency tree.
Quick start
The fastest way to get started is with a YAML config file. You describe your tables, Lattice creates them and handles rendering automatically.
1. Create a config file
# lattice.config.yml
db: ./data/app.db
entities:
  agent:
    fields:
      id: { type: uuid, primaryKey: true }
      name: { type: text, required: true }
      role: { type: text }
      active: { type: boolean, default: true }
    render: default-table
    outputFile: context/AGENTS.md
  task:
    fields:
      id: { type: uuid, primaryKey: true }
      title: { type: text, required: true }
      status: { type: text, default: open }
      assigned_to: { type: uuid, ref: agent }
    render: default-list
    outputFile: context/TASKS.md

2. Write a few lines of code
import { Lattice } from 'latticesql';
// Point Lattice at your config — tables are created automatically
const db = new Lattice({ config: './lattice.config.yml' });
await db.init();
// Add some data
await db.insert('agent', { name: 'Alice', role: 'engineer' });
await db.insert('agent', { name: 'Bob', role: 'researcher' });
await db.insert('task', { title: 'Fix login bug', status: 'open' });
// Render database → context files
await db.render('./context');
// Writes: context/AGENTS.md, context/TASKS.md
// Or watch for changes and re-render automatically
const stop = await db.watch('./context', { interval: 5000 });

3. Check your output
Lattice writes Markdown files that your agents can read at session start. Every time data changes, the files update to reflect current state.
context/
├── AGENTS.md ← table of all agents (name, role, active)
└── TASKS.md ← list of all tasks (title, status, assigned_to)

Core concepts
Lattice does four things. Understanding these will help everything else make sense.
1. Sync loop — keep context fresh
Lattice reads your database and writes text files (Markdown, JSON, or any format you want). When your data changes, the files update. Call render() once, or watch() to keep files updated continuously. Your agents always start with current state.
2. Entity directories — scoped context per entity
Instead of one giant file with everything, Lattice can create a directory for each entity (each agent, each project, each customer). Each directory has only the files that entity needs — its own record, its relationships, and a combined context file. This means agents load less data and stay focused.
3. Writeback — persist agent output
Agents produce output — status updates, decisions, notes. The writeback pipeline watches files that agents write to, parses structured entries, and saves them to the database. Next time an agent starts, that data is already in its context files.
4. Reconciliation — clean up after deletions
When you delete an entity from the database, its directory is no longer needed. reconcile() removes those orphaned directories while preserving any files that agents wrote.
Walkthrough: Managing agents
This walkthrough builds an agent management system from scratch. Each agent gets its own context directory with a profile, assigned skills, and a combined context file. By the end, you'll have a working setup where adding an agent to the database automatically creates its context directory.
Schema
Three tables: agents, skills, and a junction table linking agents to skills.
# lattice.config.yml
db: ./data/agents.db
entities:
  agent:
    fields:
      id: { type: uuid, primaryKey: true }
      slug: { type: text, required: true }
      name: { type: text, required: true }
      persona: { type: text }
      active: { type: boolean, default: true }
    render: default-table
    outputFile: context/agents/AGENTS.md
  skill:
    fields:
      id: { type: uuid, primaryKey: true }
      name: { type: text, required: true }
      description: { type: text }
    render: default-list
    outputFile: context/skills/SKILLS.md
  agent_skill:
    fields:
      agent_id: { type: uuid }
      skill_id: { type: uuid }
    primaryKey: [agent_id, skill_id]

Set up entity directories
Now tell Lattice to create a directory per agent. Each agent gets a profile file and a skills file. Agents with no skills just won't have a SKILLS.md.
import { Lattice } from 'latticesql';
const db = new Lattice({ config: './lattice.config.yml' });
// Create a directory per agent with relevant files
db.defineEntityContext('agent', {
slug: (row) => row.slug as string,
// Index file listing all agents
index: {
outputFile: 'agents/AGENTS.md',
render: (rows) => '# Agents\n\n' + rows.map((r) => `- ${r.name}`).join('\n'),
},
// Files inside each agent's directory
files: {
'AGENT.md': {
source: { type: 'self' },
render: ([r]) => `# ${r.name}\n\n${r.persona ?? 'No persona defined.'}`,
},
'SKILLS.md': {
source: {
type: 'manyToMany',
junctionTable: 'agent_skill',
localKey: 'agent_id',
remoteKey: 'skill_id',
remoteTable: 'skill',
},
render: (rows) => '# Skills\n\n' + rows.map((r) => `- **${r.name}**: ${r.description}`).join('\n'),
omitIfEmpty: true, // don't create file if agent has no skills
},
},
// Combine all files into one CONTEXT.md per agent
combined: { outputFile: 'CONTEXT.md', exclude: [] },
// Never delete files the agent writes
protectedFiles: ['SESSION.md'],
});
await db.init();

Add data and render
// Add agents — insert() returns the generated id
const aliceId = await db.insert('agent', { slug: 'alice', name: 'Alice', persona: 'Senior engineer. Loves TypeScript.' });
const bobId = await db.insert('agent', { slug: 'bob', name: 'Bob', persona: 'Security researcher.' });
// Add skills and link them
const tsId = await db.insert('skill', { name: 'TypeScript', description: 'Modern JS/TS development' });
const secId = await db.insert('skill', { name: 'Security', description: 'Vulnerability analysis' });
await db.link('agent_skill', { agent_id: aliceId, skill_id: tsId });
await db.link('agent_skill', { agent_id: bobId, skill_id: secId });
// Generate all context files
await db.render('./context');

What Lattice creates
context/
├── agents/
│ └── AGENTS.md ← "# Agents" with Alice, Bob listed
├── agents/alice/
│ ├── AGENT.md ← Alice's persona
│ ├── SKILLS.md ← TypeScript skill
│ └── CONTEXT.md ← AGENT.md + SKILLS.md combined
├── agents/bob/
│ ├── AGENT.md ← Bob's persona
│ ├── SKILLS.md ← Security skill
│ └── CONTEXT.md ← combined
└── skills/
└── SKILLS.md ← all skills listed

Each agent loads its own CONTEXT.md at session start. Alice sees her profile and her skills — not Bob's. This keeps token usage low and context relevant.

Keep it up to date
Call watch() to re-render whenever data changes, or reconcile() to also clean up directories for deleted agents.
// Re-render every 5 seconds, clean up deleted agents
const stop = await db.watch('./context', {
interval: 5000,
cleanup: {
removeOrphanedDirectories: true,
protectedFiles: ['SESSION.md'],
},
});

Walkthrough: Project tracker
This example shows a project tracker where each project gets a directory with its details, open issues, and recent activity. It uses query options to limit and sort related data.
const db = new Lattice('./projects.db');
db.define('project', {
columns: {
id: 'TEXT PRIMARY KEY',
slug: 'TEXT NOT NULL UNIQUE',
name: 'TEXT NOT NULL',
owner: 'TEXT',
status: "TEXT DEFAULT 'active'",
},
render: 'default-table',
outputFile: 'projects/PROJECTS.md',
});
db.define('issue', {
columns: {
id: 'TEXT PRIMARY KEY',
project_id: 'TEXT NOT NULL',
title: 'TEXT NOT NULL',
priority: 'INTEGER DEFAULT 2',
status: "TEXT DEFAULT 'open'",
created_at: 'TEXT',
},
render: 'default-list',
outputFile: 'ISSUES.md',
});
db.defineEntityContext('project', {
slug: (r) => r.slug as string,
index: {
outputFile: 'projects/PROJECTS.md',
render: (rows) => rows.map((r) => `- **${r.name}** (${r.status})`).join('\n'),
},
files: {
'PROJECT.md': {
source: { type: 'self' },
render: ([r]) => [
`# ${r.name}`,
`**Owner:** ${r.owner ?? 'unassigned'}`,
`**Status:** ${r.status}`,
].join('\n'),
},
'ISSUES.md': {
source: {
type: 'hasMany',
table: 'issue',
foreignKey: 'project_id',
// Only show open issues, sorted by priority, max 20
filters: [{ col: 'status', op: 'eq', val: 'open' }],
orderBy: 'priority',
limit: 20,
},
render: (rows) => rows.map((r) => `- [P${r.priority}] ${r.title}`).join('\n'),
omitIfEmpty: true,
budget: 4000, // truncate if over 4000 characters
},
},
combined: { outputFile: 'CONTEXT.md', exclude: [] },
});
await db.init();

The filters, orderBy, and limit options on the source let you control exactly what data goes into each file. The budget option truncates the file if it exceeds a character limit — useful for keeping context within token budgets.
Schema & setup
YAML config
The recommended way to define your schema. Write a lattice.config.yml file and Lattice creates your tables, wires up rendering, and can generate TypeScript types for you. (since v0.4)
# lattice.config.yml
db: ./data/app.db
entities:
  user:
    fields:
      id: { type: uuid, primaryKey: true }
      name: { type: text, required: true }
      email: { type: text }
      role: { type: text, default: member }
    render: default-table
    outputFile: context/USERS.md
  task:
    fields:
      id: { type: uuid, primaryKey: true }
      title: { type: text, required: true }
      status: { type: text, default: open }
      priority: { type: integer, default: 1 }
      assignee_id: { type: uuid, ref: user }
    render:
      template: default-list
      formatRow: '{{title}} ({{status}}) — {{assignee.name}}'
    outputFile: context/TASKS.md

Type mappings
| YAML type | SQLite type | TypeScript type |
|---|---|---|
| uuid | TEXT | string |
| text | TEXT | string |
| integer / int | INTEGER | number |
| real / float | REAL | number |
| boolean / bool | INTEGER | boolean |
| datetime / date | TEXT | string |
| blob | BLOB | Buffer |
Composite primary keys
For junction tables or any table with a multi-column primary key, use primaryKey as an array. Lattice auto-generates the composite PRIMARY KEY constraint. (since v0.17)
agent_skill:
  fields:
    agent_id: { type: uuid }
    skill_id: { type: uuid }
  primaryKey: [agent_id, skill_id]

This creates the table with PRIMARY KEY (agent_id, skill_id) instead of requiring a single-column primary key field.
Run npx lattice generate to create TypeScript types and a SQL migration file from your YAML config. This gives you type-safe access to your tables without writing any boilerplate.
TypeScript define()
If you prefer defining tables in code instead of YAML, use define(). This gives you full control over column types and render logic. Call it before init().
db.define('agents', {
columns: {
id: 'TEXT PRIMARY KEY',
name: 'TEXT NOT NULL',
persona: 'TEXT',
active: 'INTEGER DEFAULT 1',
},
render(rows) {
return rows
.filter((r) => r.active)
.map((r) => `## ${r.name}\n\n${r.persona ?? ''}`)
.join('\n\n---\n\n');
},
outputFile: 'AGENTS.md',
});

The render field accepts a function (like above), a built-in template name ('default-list', 'default-table', 'default-detail', 'default-json'), or a template spec with hooks.
Schema-only tables
If you only need a table for data storage (no rendered context file), omit both render and outputFile. Lattice creates the table and gives you the full CRUD API without generating any files during render. (since v0.17)
// Schema-only — no rendered output
db.define('session', {
columns: {
id: 'TEXT PRIMARY KEY',
agent_id: 'TEXT NOT NULL',
started_at: 'TEXT',
token_count: 'INTEGER DEFAULT 0',
},
});
// Full CRUD works as normal
const id = await db.insert('session', { agent_id: 'a1', started_at: new Date().toISOString() });
const row = await db.get('session', id);

Constructor
Three ways to create a Lattice instance:
// From YAML config (recommended)
const db = new Lattice({ config: './lattice.config.yml' });
// From a database path (define tables in code)
const db = new Lattice('./app.db');
// In-memory for tests
const db = new Lattice(':memory:');
// With options
const db = new Lattice('./app.db', {
wal: true, // WAL journal mode (default: true)
busyTimeout: 10_000, // ms to wait on locked DB (default: 5000)
security: {
sanitize: true,
auditTables: ['users', 'credentials'],
fieldLimits: { notes: 50_000, bio: 2_000 },
},
});

init() / close()
init() opens the database, creates any tables you defined, and runs migrations. Call it once when your process starts. Call close() when you're done.
await db.init({
migrations: [
{ version: 1, sql: 'ALTER TABLE tasks ADD COLUMN due_date TEXT' },
{ version: 2, sql: 'ALTER TABLE tasks ADD COLUMN priority INTEGER DEFAULT 0' },
],
});
// Migrations run once each — safe to call init() on every process start.
db.close(); // call on shutdown

You can also apply migrations after init using migrate(). This is useful when migrations are defined separately from your init call, or when plugins add their own tables. Migration.version accepts either a number or a string (e.g. a semver tag or date-based identifier). (since v0.17)
await db.init();
// Apply migrations after init
await db.migrate([
{ version: 3, sql: 'ALTER TABLE tasks ADD COLUMN due_date TEXT' },
{ version: '2026-04-01', sql: 'ALTER TABLE tasks ADD COLUMN priority INTEGER DEFAULT 0' },
]);

CLI commands
The CLI is bundled with the package. Run commands with npx lattice.
lattice generate

Generate TypeScript interface types, a SQL migration file, and (optionally) scaffold render output files from a lattice.config.yml.
| Flag | Default | Description |
|---|---|---|
| --config, -c <path> | ./lattice.config.yml | Path to the YAML config file |
| --out, -o <dir> | ./generated | Output directory for generated files |
| --scaffold | off | Create empty scaffold render output files |
npx lattice generate --config ./lattice.config.yml --out ./generated --scaffold

lattice render

One-shot context generation. Reads the config, connects to the database, and writes all entity context files.
| Flag | Default | Description |
|---|---|---|
| --config, -c <path> | ./lattice.config.yml | Path to the YAML config file |
| --output <dir> | ./context | Output directory for rendered context files |
npx lattice render --config ./lattice.config.yml --output ./context

lattice reconcile

Render + orphan cleanup. Writes entity context directories, then removes any orphaned entity directories and files no longer in the database.
| Flag | Default | Description |
|---|---|---|
| --config, -c <path> | ./lattice.config.yml | Path to the YAML config file |
| --output <dir> | ./context | Output directory |
| --dry-run | off | Report orphans but do not delete anything |
| --protected <csv> | — | Comma-separated list of protected filenames |
npx lattice reconcile --output ./context --protected SESSION.md

lattice status

Dry-run reconcile — shows what would change without writing or deleting anything.
| Flag | Default | Description |
|---|---|---|
| --config, -c <path> | ./lattice.config.yml | Path to the YAML config file |
| --output <dir> | ./context | Output directory |
npx lattice status --output ./context

lattice watch

Starts a polling loop that re-renders entity context directories on each interval. Optionally runs orphan cleanup after each cycle.
| Flag | Default | Description |
|---|---|---|
| --config, -c <path> | ./lattice.config.yml | Path to the YAML config file |
| --output <dir> | ./context | Output directory |
| --interval <ms> | 5000 | Poll interval in milliseconds |
| --cleanup | off | Enable orphan cleanup after each render cycle |
| --protected <csv> | — | Comma-separated list of protected filenames (requires --cleanup) |
npx lattice watch --output ./context --interval 3000 --cleanup --protected SESSION.md

Data
Reading & writing records
Standard operations for creating, reading, updating, and deleting records. All methods are async and return Promises.
// Create a record — returns the generated id
const id = await db.insert('task', { title: 'Write docs', status: 'open' });
// Update or create by primary key
await db.upsert('task', { id: 'task-001', title: 'Updated title', status: 'done' });
// Update or create by any column
await db.upsertBy('user', 'email', 'alice@example.com', { name: 'Alice' });
// Update specific fields on an existing record
await db.update('task', 'task-001', { status: 'done' });
// Fetch a single record by primary key
const task = await db.get('task', 'task-001');
// Delete a record
await db.delete('task', 'task-001');
// Insert and return the full row
const newTask = await db.insertReturning('task', { title: 'Write docs', status: 'open' });
// newTask = { id: 'generated-uuid', title: 'Write docs', status: 'open', ... }
// Update and return the updated row
const updated = await db.updateReturning('task', 'task-001', { status: 'done' });
// updated = { id: 'task-001', title: 'Write docs', status: 'done', ... }

insertReturning() and updateReturning() combine a write with a read in a single call, returning the full row including any auto-generated fields (UUIDs, defaults, timestamps). (since v0.17)
Queries & filters
Query records with filters, sorting, and pagination. Filters support these operators: eq, ne, gt, gte, lt, lte, like, in, isNull, isNotNull.
// Find open tasks with priority >= 3, newest first
const urgent = await db.query('task', {
where: { status: 'open' },
filters: [{ col: 'priority', op: 'gte', val: 3 }],
orderBy: 'created_at',
orderDir: 'desc',
limit: 20,
});
// Count matching records
const openCount = await db.count('task', { where: { status: 'open' } });

Natural-key operations
When you identify records by a name or slug instead of a UUID, these methods handle the lookup automatically. (since v0.11) They work on any table, including tables not registered with define().
// Create or update a record by its name
await db.upsertByNaturalKey('agent', 'name', 'Alice', {
role: 'engineer', status: 'active',
});
// Update only the fields you pass (leaves others untouched)
await db.enrichByNaturalKey('agent', 'name', 'Alice', { title: 'Senior Engineer' });
// Look up a record by name
const alice = await db.getByNaturalKey('agent', 'name', 'Alice');
// Get all non-deleted records
const agents = await db.getActive('agent', 'name');
const count = await db.countActive('agent');
// Soft-delete records that are no longer in a source file
await db.softDeleteMissing('agent', 'name', 'agents.yaml', ['Alice', 'Bob']);
// Link / unlink records in a junction table
await db.link('agent_skill', { agent_id: 'a1', skill_id: 's1' });
await db.unlink('agent_skill', { agent_id: 'a1', skill_id: 's1' });

Seeding from files
Load records in bulk from YAML or JSON files. Lattice upserts by natural key, links junction table entries, and soft-deletes anything that's no longer in the source data. (since v0.13)
import { parse } from 'yaml';
import { readFileSync } from 'fs';
const rules = parse(readFileSync('rules.yaml', 'utf8'));
await db.seed({
data: rules,
table: 'rule',
naturalKey: 'title',
sourceFile: 'rules.yaml',
linkTo: {
targetAgents: {
junction: 'rule_agent',
foreignKey: 'agent_id',
resolveBy: 'name',
resolveTable: 'agent',
},
},
softDeleteMissing: true,
});

Context files
Rendering basics
Rendering is how Lattice turns database rows into text files. There are four methods depending on what you need:
render(): One-shot. Writes all files once. Use before launching an agent or in a CI pipeline.
sync(): Render + writeback. Same as render(), but also processes any agent-written output files.
watch(): Continuous. Polls the database and re-renders every N seconds. Use for long-running processes.
reconcile(): Render + cleanup. Same as render(), but also removes directories for deleted entities.
// One-shot render
const result = await db.render('./context');
// { filesWritten: ['context/AGENTS.md'], filesSkipped: 2, durationMs: 12 }
// Render + process agent output
await db.sync('./context');
// Watch with auto-cleanup
const stop = await db.watch('./context', {
interval: 5_000,
cleanup: { removeOrphanedDirectories: true, protectedFiles: ['SESSION.md'] },
});
// Render + clean up orphaned directories
await db.reconcile('./context', {
removeOrphanedDirectories: true,
protectedFiles: ['SESSION.md'],
dryRun: false, // set to true to preview without deleting
});

Files are written atomically (write to temp, rename). If a file's content hasn't changed, Lattice skips it — so re-rendering is cheap.
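The write-temp-then-rename pattern is easy to sketch with Node's fs module. This is an illustrative sketch, not Lattice's internals; `atomicWriteIfChanged` is a hypothetical helper name:

```typescript
import { writeFileSync, renameSync, readFileSync, existsSync } from 'node:fs';

// Minimal sketch of atomic, change-aware file writing.
// Returns false when the file already has identical content.
function atomicWriteIfChanged(path: string, content: string): boolean {
  if (existsSync(path) && readFileSync(path, 'utf8') === content) {
    return false; // unchanged: skip the write entirely
  }
  const tmp = `${path}.tmp`;
  writeFileSync(tmp, content); // write to a temp file first
  renameSync(tmp, path); // rename is atomic on POSIX filesystems
  return true;
}
```

Readers never observe a half-written file: they see either the old content or the new content, which matters when agents poll these files.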
Entity directories
defineEntityContext() creates a directory for each row in a table. (since v0.5) You declare which files each directory should contain and where the data comes from. Lattice handles querying, directory creation, and cleanup.
db.defineEntityContext('agent', {
// How to name each directory (e.g. agents/alice/)
slug: (row) => row.slug as string,
// Apply to all relationship sources
sourceDefaults: { softDelete: true },
// Global index file
index: {
outputFile: 'agents/AGENTS.md',
render: (rows) => rows.map((r) => `- ${r.name}`).join('\n'),
},
// Files inside each agent's directory
files: {
'AGENT.md': {
source: { type: 'self' },
render: ([r]) => `# ${r.name}\n\n${r.persona ?? ''}`,
},
'TASKS.md': {
source: { type: 'hasMany', table: 'task', foreignKey: 'agent_id',
orderBy: 'created_at', orderDir: 'desc', limit: 20 },
render: (rows) => rows.map((r) => `- ${r.title}`).join('\n'),
omitIfEmpty: true,
budget: 4000,
},
'SKILLS.md': {
source: {
type: 'manyToMany',
junctionTable: 'agent_skill',
localKey: 'agent_id',
remoteKey: 'skill_id',
remoteTable: 'skill',
},
render: (rows) => rows.map((r) => `- ${r.name}`).join('\n'),
omitIfEmpty: true,
},
},
// Merge all files into one combined file
combined: { outputFile: 'CONTEXT.md', exclude: [] },
// These files (written by agents) are never deleted during cleanup
protectedFiles: ['SESSION.md'],
});

Source types
Each file in an entity directory gets its data from a "source". There are six types:
self: The entity row itself.
hasMany: Related rows that point back to this entity.
manyToMany: Related rows through a junction table.
belongsTo: A single parent row via a foreign key on this entity.
enriched: The entity row with related data attached as JSON fields.
custom: A fully custom query you write yourself.
The hasMany, manyToMany, and belongsTo sources accept optional query options: softDelete, filters, orderBy, orderDir, and limit. (since v0.6) Set sourceDefaults on the entity context to apply the same options to every source.
Render templates
Instead of writing a custom render function for every file, you can use a built-in template. (since v0.9)
entity-table
Renders rows as a Markdown table.
render: {
template: 'entity-table',
heading: 'Skills',
columns: [
{ key: 'name', header: 'Name' },
{ key: 'level', header: 'Level', format: (v) => String(v || '—') },
],
}

entity-profile
Renders a single entity as a field-value profile with optional sections.
render: {
template: 'entity-profile',
heading: (r) => r.name as string,
fields: [
{ key: 'status', label: 'Status' },
{ key: 'role', label: 'Role' },
],
sections: [
{ key: 'skills', heading: 'Skills', render: 'list',
formatItem: (s) => s.name },
],
}

entity-sections
Renders multiple rows as headed sections (good for rules, events, notes).
render: {
template: 'entity-sections',
heading: 'Rules',
perRow: {
heading: (r) => r.title as string,
metadata: [{ key: 'scope', label: 'Scope' }],
body: (r) => r.rule_text as string,
},
}

Cleanup & reconciliation
When you delete an entity from the database, its directory becomes an orphan. Use reconcile() to remove it. Lattice tracks which directories it created using a .lattice/manifest.json file, so it only touches directories it owns.
Files listed in protectedFiles are never deleted. If an entity is removed but its directory still has protected files, Lattice removes only its own files and leaves the directory intact with a warning.
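The cleanup decision reduces to a set comparison. The sketch below is illustrative only; `findOrphans` and its inputs are hypothetical names, not Lattice's API:

```typescript
// Sketch: decide which manifest-tracked directories are orphans,
// and which must be kept because they still hold protected files.
function findOrphans(
  manifestDirs: string[], // directories Lattice created (from the manifest)
  liveSlugs: Set<string>, // slugs of entities still in the database
  dirsWithProtectedFiles: Set<string>
): { removable: string[]; keepWithWarning: string[] } {
  const removable: string[] = [];
  const keepWithWarning: string[] = [];
  for (const dir of manifestDirs) {
    if (liveSlugs.has(dir)) continue; // entity still exists, keep everything
    if (dirsWithProtectedFiles.has(dir)) keepWithWarning.push(dir);
    else removable.push(dir);
  }
  return { removable, keepWithWarning };
}
```

Because only manifest-tracked directories are considered, directories you created by hand are never candidates for deletion.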
Reverse-sync (v0.16+)
AI agents frequently edit rendered context files directly. Without reverse-sync, those edits are destroyed on the next render. Add a reverseSync function to any EntityFileSpec to parse external edits back into the database before re-rendering.
// In your entity context definition:
'AGENT.md': {
source: { type: 'self' },
render: ([r]) => `# ${r.name}\n**Role:** ${r.role}\n`,
reverseSync: (content, entityRow) => {
const match = content.match(/^# (.+)$/m);
if (match && match[1] !== entityRow.name) {
return [{ table: 'agent', pk: { id: entityRow.id }, set: { name: match[1] } }];
}
return [];
},
}
// Control via reconcile options:
await db.reconcile(dir); // reverse-sync enabled (default)
await db.reconcile(dir, { reverseSync: 'dry-run' }); // detect but don't modify DB
await db.reconcile(dir, { reverseSync: false }); // skip entirely

Lattice stores SHA-256 hashes of rendered content in the manifest. On the next reconcile, it compares hashes to detect modifications. Only files with a reverseSync function are checked — files without it are overwritten as before.
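Hash-based change detection boils down to comparing a stored digest with a fresh one. A minimal sketch using Node's crypto module; the helper names are illustrative, not Lattice's exact manifest schema:

```typescript
import { createHash } from 'node:crypto';

// Hash the content Lattice rendered; a digest like this is what
// gets stored in the manifest.
function contentHash(content: string): string {
  return createHash('sha256').update(content).digest('hex');
}

// On reconcile: if the file on disk no longer matches the stored hash,
// someone (likely an agent) edited it, so reverse-sync should run.
function wasExternallyModified(storedHash: string, currentContent: string): boolean {
  return contentHash(currentContent) !== storedHash;
}
```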
Agent I/O
Writeback pipeline
The writeback pipeline watches files that agents write to and parses their output back into the database. You define which files to watch, how to parse them, and where to store the results.
db.defineWriteback({
// Watch all SESSION.md files across agent directories
file: './context/agents/*/SESSION.md',
// Parse new content since last offset
parse(content, fromOffset) {
const newContent = content.slice(fromOffset);
const entries = parseMarkdownItems(newContent);
return { entries, nextOffset: content.length };
},
// Save each entry to the database
async persist(entry, filePath) {
await db.insert('event', {
source_file: filePath,
...(entry as Record<string, unknown>),
});
},
// Prevent duplicate processing
dedupeKey: (entry) => (entry as { id: string }).id,
});
// db.sync() renders context AND processes writeback files
await db.sync('./context');

By default, writeback offsets are held in memory and lost on restart. For persistence across restarts, plug in a SQLite state store (since v0.12):
import { createSQLiteStateStore } from 'latticesql';
// Offsets and dedup keys survive process restarts
const stateStore = createSQLiteStateStore('./state.db');
db.defineWriteback({
file: './context/agents/*/SESSION.md',
stateStore,
parse(content, fromOffset) { /* ... */ },
async persist(entry) { /* ... */ },
});

SESSION.md pattern
SESSION.md is a convention for agent-written output. Context files generated by Lattice are read-only. SESSION.md is the one file per entity directory where agents can write structured entries that get ingested back into the database.
Agents write structured entries in this format:
---
type: write
timestamp: 2026-03-25T10:30:00Z
op: update
table: agent
target: agent-id-here
reason: Completed deployment task.
---
status: idle
last_task: deploy-api
===

Lattice ships parsers for this format:
import { parseSessionWrites, parseSessionMD, applyWriteEntry } from 'latticesql';
// Parse write entries only
const result = parseSessionWrites(sessionContent);
for (const entry of result.entries) {
// entry.op, entry.table, entry.target, entry.fields, entry.reason
const applied = applyWriteEntry(db.db, entry); // pass raw SQLite connection
}
// Parse all entry types (events, learnings, writes, etc.)
const all = parseSessionMD(content, startOffset);
// all.entries: SessionEntry[], all.lastOffset: number

Write hooks
Run code after a record is inserted, updated, or deleted. Useful for keeping denormalized fields in sync, fan-out updates, or computed columns. (since v0.10)
db.defineWriteHook({
table: 'agent',
on: ['insert', 'update'],
watchColumns: ['team_id', 'division'], // only fire when these change
handler: (ctx) => {
// ctx.table, ctx.op, ctx.row, ctx.pk, ctx.changedColumns
updateTeamCounts(ctx.row.team_id);
},
});

Hook errors are caught and routed to error handlers — they never crash the caller. Multiple hooks per table are supported.
Intelligence
Token budgets
Limit the rendered output of a table to a token budget. When content exceeds the budget, rows are pruned by priority and a truncation footer is appended. (since v1.3)
db.define('tickets', {
columns: { id: 'TEXT PRIMARY KEY', title: 'TEXT', updated_at: 'TEXT' },
render: (rows) => rows.map((r) => `- ${r.title}`).join('\n'),
outputFile: 'TICKETS.md',
tokenBudget: 4000, // max estimated tokens (~4 chars/token)
prioritizeBy: 'updated_at', // keep most recent rows when pruning
});
// Output: "- Fix auth bug\n- Deploy v2\n\n[truncated: 47 of 123 rows rendered, ~3800 tokens]"

prioritizeBy accepts a column name (sorted descending) or a (a, b) => number comparator. When omitted, rows at the end of the query result are dropped first.
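The budgeting logic can be approximated in a few lines. This sketches the ~4 chars/token estimate described above; the function names are illustrative, not Lattice's API:

```typescript
// Rough token estimate: ~4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep rendering rows until the budget is exhausted, then stop and
// report truncation so a footer can be appended.
function renderWithBudget(
  lines: string[],
  tokenBudget: number
): { kept: string[]; truncated: boolean } {
  const kept: string[] = [];
  let used = 0;
  for (const line of lines) {
    const cost = estimateTokens(line + '\n');
    if (used + cost > tokenBudget) return { kept, truncated: true };
    kept.push(line);
    used += cost;
  }
  return { kept, truncated: false };
}
```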
Relevance filtering
Dynamically filter rows based on the current task context. Only relevant rows are rendered. (since v1.3)
db.define('knowledge', {
columns: { id: 'TEXT PRIMARY KEY', topic: 'TEXT', body: 'TEXT' },
render: (rows) => rows.map((r) => `## ${r.topic}\n${r.body}`).join('\n\n'),
outputFile: 'KNOWLEDGE.md',
relevanceFilter: (row, ctx) =>
ctx ? String(row.body).toLowerCase().includes(ctx.toLowerCase()) : true,
});
// Set the current task — only matching rows are rendered
db.setTaskContext('deployment');
await db.render('./context');
// Clear context — all rows rendered again
db.setTaskContext('');

Enrichment pipeline
Transform rows between filtering and rendering. Add computed fields, cluster by category, or summarize large datasets — without modifying the underlying data. (since v1.3)
db.define('incidents', {
columns: { id: 'TEXT PRIMARY KEY', severity: 'TEXT', title: 'TEXT', created_at: 'TEXT' },
render: (rows) => JSON.stringify(rows, null, 2),
outputFile: 'incidents.json',
enrich: [
// Add computed age field
(rows) => rows.map((r) => ({
...r,
_age_hours: Math.round((Date.now() - new Date(r.created_at).getTime()) / 3600000),
})),
// Summarize if too many rows
(rows) => rows.length > 100
? [{ _summary: `${rows.length} incidents, ${rows.filter(r => r.severity === 'P0').length} critical` }]
: rows,
],
});

Reward memory
Track which data is useful to your agents. High-reward rows are prioritized in rendering; low-scoring rows can be auto-pruned via soft-delete. (since v1.3)
db.define('tips', {
columns: { id: 'TEXT PRIMARY KEY', tip: 'TEXT', deleted_at: 'TEXT' },
render: (rows) => rows.map((r) => `- ${r.tip}`).join('\n'),
outputFile: 'TIPS.md',
rewardTracking: true, // auto-adds _reward_total, _reward_count columns
pruneBelow: 0.3, // soft-delete rows with reward < 0.3
});
await db.init();
const id = await db.insert('tips', { tip: 'Use batch inserts for bulk data' });
// After the agent confirms this tip was useful
await db.reward('tips', id, { relevance: 0.9, accuracy: 1.0 });
// _reward_total = 0.95, _reward_count = 1
// Second signal — running average
await db.reward('tips', id, { relevance: 0.5 });
// _reward_total = 0.7, _reward_count = 2

Semantic search
Enable embedding-based semantic search on any table. Bring your own embedding function — Lattice stores vectors in a companion SQLite table and computes cosine similarity in JS. No external vector database required. (since v1.3)
import { Lattice } from 'latticesql';
db.define('docs', {
columns: { id: 'TEXT PRIMARY KEY', title: 'TEXT', body: 'TEXT' },
render: (rows) => rows.map((r) => `## ${r.title}\n${r.body}`).join('\n\n---\n\n'),
outputFile: 'DOCS.md',
embeddings: {
fields: ['title', 'body'],
embed: async (text) => {
const res = await openai.embeddings.create({
input: text, model: 'text-embedding-3-small',
});
return res.data[0].embedding;
},
},
});
await db.init();
await db.insert('docs', { title: 'Deploy guide', body: 'How to deploy to production...' });
// Search by meaning, not keywords
const results = await db.search('docs', 'ship to prod', { topK: 5, minScore: 0.7 });
for (const { row, score } of results) {
console.log(`${score.toFixed(2)} — ${row.title}`);
}

Writeback validation
Validate agent-written data before persisting. Reject low-quality or inconsistent entries with scoring and threshold-based gating. (since v1.3)
db.defineWriteback({
file: './agent-output/*.md',
parse: (content, offset) => ({
entries: [content.slice(offset)],
nextOffset: content.length,
}),
persist: async (entry) => { /* save to DB */ },
validate: async (entry) => {
const text = entry as string;
const hasFields = text.includes('## Title') && text.includes('## Body');
return {
pass: hasFields,
score: hasFields ? 0.9 : 0.1,
reason: hasFields ? undefined : 'Missing required sections',
};
},
rejectBelow: 0.5,
onReject: (entry, result) => {
console.warn(`Rejected: ${result.reason} (score: ${result.score})`);
},
});
Performance
Prepared statement cache
since v1.4
Lattice automatically caches compiled SQLite prepared statements. Repeated calls with the same SQL reuse the compiled statement instead of recompiling on every invocation. DDL statements (CREATE, ALTER, DROP, PRAGMA) bypass the cache. The cache clears automatically on close() and after schema or migration changes. No API changes required.
Batch entity query resolution
since v1.4
Entity context rendering pre-fetches related rows for all entities in a single WHERE IN (...) query per source, replacing the previous per-entity query pattern. hasMany, manyToMany, and belongsTo sources are batched automatically. custom and enriched sources fall back to per-entity resolution. IN clauses are chunked at 500 parameters to stay under SQLite's limit. No API changes required.
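The chunking strategy is simple to sketch. The helper below is illustrative (not the library's internals): it splits an id list into 500-element chunks and builds one parameterized `IN` query per chunk:

```typescript
const CHUNK_SIZE = 500; // stay under SQLite's bound-parameter limit

// Split a list into fixed-size chunks; the last chunk may be shorter
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// One parameterized SELECT per chunk instead of one query per entity
function batchedInQueries(
  table: string,
  column: string,
  ids: string[],
): Array<{ sql: string; params: string[] }> {
  return chunk(ids, CHUNK_SIZE).map((part) => ({
    sql: `SELECT * FROM ${table} WHERE ${column} IN (${part.map(() => '?').join(', ')})`,
    params: part,
  }));
}
```

For 1,200 entities this produces 3 queries instead of 1,200, which is where the rendering speedup comes from.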
Render change detection
since v1.4
Lattice tracks per-table write version counters. Use isDirty() to check if any table has been written to since the last render, and markDirty(table?) after escape-hatch writes.
// Custom polling loop that skips redundant renders
setInterval(async () => {
if (db.isDirty()) {
await db.render(outputDir);
}
}, 5000);
// After direct DB writes, mark dirty
db.db.prepare('UPDATE tasks SET status = ?').run('done');
db.markDirty('tasks');
Migration validation
since v1.4
Pass a validateMigrationSQL function in InitOptions to validate all pending migration SQL before any migrations execute. If validation fails, no migrations run and an error is thrown. Multi-statement migration SQL is fully supported.
await db.init({
migrations: [
{ version: 1, sql: 'ALTER TABLE tasks ADD COLUMN due_date TEXT' },
],
validateMigrationSQL: (sql) => {
if (sql.trim().length === 0) return { valid: false, errors: ['Empty SQL'] };
return { valid: true };
},
});
Utilities
Reports
since v0.14
Build time-windowed reports from your data. Useful for daily summaries, activity digests, or status dashboards.
const report = await db.buildReport({
since: '24h', // or '8h', '7d', or an ISO timestamp
sections: [
{ name: 'tasks', query: { table: 'task' }, format: 'count_and_list' },
{ name: 'events', query: { table: 'event', groupBy: 'type' }, format: 'counts' },
{ name: 'alerts', query: { table: 'event',
filters: [{ col: 'severity', op: 'lte', val: 2 }] }, format: 'list' },
],
});
report.sections.forEach(s => console.log(`${s.name}: ${s.count}`));
Markdown helpers
since v0.6
Helper functions you can use inside render functions to reduce boilerplate.
import { frontmatter, markdownTable, slugify, truncate } from 'latticesql';
// YAML frontmatter with auto timestamp
frontmatter({ agent: 'Alice', skill_count: 5 });
// Markdown table from rows
markdownTable(rows, [
{ key: 'name', header: 'Name' },
{ key: 'status', header: 'Status', format: (v) => String(v || '—') },
]);
// URL-safe slug
slugify('Alice Smith'); // 'alice-smith'
// Truncate to character budget
truncate(longContent, 4000);
Auto-update
since v1.1
For applications that manage their own updates at runtime, autoUpdate() checks npm for a newer version and installs it automatically. Call it once at startup, before initializing Lattice.
import { autoUpdate } from 'latticesql';
// Call at app startup — checks npm, installs if outdated
const result = await autoUpdate();
if (result.restartRequired) {
process.exit(0); // Let process manager restart
}
Safe to call on every startup — skips if already on the latest version. Pass { quiet: true } to suppress console output.
interface AutoUpdateResult {
updated: boolean;
packages: Array<{ name: string; from: string; to: string }>;
restartRequired: boolean;
}
Events
Subscribe to lifecycle events for monitoring and audit logging.
db.on('audit', ({ table, operation, id, timestamp }) => { /* ... */ });
db.on('render', ({ filesWritten, filesSkipped, durationMs }) => { /* ... */ });
db.on('writeback', ({ filePath, entriesProcessed }) => { /* ... */ });
db.on('error', (err: Error) => { /* ... */ });
audit events fire on every insert/update/delete for tables listed in security.auditTables.
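As a usage sketch, audit events can be persisted as JSON lines for later review. The helper and wiring below are illustrative assumptions, not part of latticesql:

```typescript
import { appendFileSync } from 'node:fs';

// Shape of the payload the audit handler above receives
interface AuditEvent {
  table: string;
  operation: string;
  id: string;
  timestamp: string;
}

// One JSON object per line: easy to grep, tail, and replay
function auditLine(event: AuditEvent): string {
  return JSON.stringify(event) + '\n';
}

// Wiring (assumes `db` is an initialized Lattice instance):
// db.on('audit', (event) => appendFileSync('./audit.jsonl', auditLine(event)));
```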