Use the agent
The desktop app has a built-in chat interface that runs against your archive. Continue any captured conversation, run code over the vault, bring local files into context.
BYOK setup
Kept doesn't ship its own model. You point it at a provider you have a key for.
Settings → Providers lets you configure:
- OpenAI — API key, model selection
- Anthropic — API key, model selection
- OpenRouter — API key, any of OpenRouter's catalog
- Ollama — local URL (default `http://localhost:11434`), model name
You can keep multiple providers configured and switch per conversation. There's no Kept-managed billing layer — your key, your bill.
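The settings live in the app's UI; as a rough mental model, what you configure per provider amounts to something like the sketch below. The dict shape and model names are illustrative, not Kept's actual schema:

```python
# Illustrative only: a rough mental model of BYOK provider settings,
# not Kept's actual configuration schema or storage format.
providers = {
    "openai":     {"api_key": "sk-...",     "model": "gpt-4o"},
    "anthropic":  {"api_key": "sk-ant-...", "model": "claude-3-5-sonnet-latest"},
    "openrouter": {"api_key": "sk-or-...",  "model": "meta-llama/llama-3.1-70b-instruct"},
    # Ollama needs no key, only the local server URL and a model name.
    "ollama":     {"base_url": "http://localhost:11434", "model": "llama3.1"},
}

# "Switch per conversation" just means picking a different entry at send time.
active = providers["ollama"]
```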
Continue a captured chat
Open any conversation in the reader. Click Continue at the bottom of the message list. A new message box opens; the full conversation is sent as context to your selected provider, and the agent's reply gets appended.
Works regardless of where the original chat came from. A Claude conversation can be continued against GPT-4. A Gemini chat can be continued against a local Ollama model. The original conversation file gets a continued_in/ sidecar with the new turns.
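Under the hood this is a plain chat-completion call with the stored turns as history. A minimal sketch of the equivalent request, assuming the captured conversation is available as a list of role/content messages and OpenAI is the selected provider; the file path and sidecar naming here are illustrative, not Kept's internal format:

```python
import json
from pathlib import Path

from openai import OpenAI  # the selected provider's SDK; any chat API works the same way

conv_path = Path("~/.kept/vault/claude/2024-03-12-some-chat.json").expanduser()  # illustrative path
conversation = json.loads(conv_path.read_text())  # assumed shape: [{"role": ..., "content": ...}, ...]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=conversation + [{"role": "user", "content": "Continue: summarize where we left off."}],
)

# New turns are appended as a sidecar next to the original file rather than
# rewriting it, mirroring the continued_in/ layout described above.
sidecar_dir = conv_path.parent / "continued_in"
sidecar_dir.mkdir(exist_ok=True)
(sidecar_dir / f"{conv_path.stem}.gpt-4o.json").write_text(
    json.dumps([{"role": "assistant", "content": reply.choices[0].message.content}], indent=2)
)
```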
Code execution
The agent can run code against your vault — for things like "summarize all conversations from March", "extract every URL I've ever mentioned", "rank platforms by how often I use each".
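The scripts themselves are ordinary Python over the vault's files. A hypothetical version of the "extract every URL" request, standard library only (the sandbox has no network access, as noted below); the vault layout assumed here is illustrative:

```python
import re
from collections import Counter
from pathlib import Path

VAULT = Path("~/.kept/vault").expanduser()
URL_RE = re.compile(r"https?://[^\s)\"'>]+")

# Walk every captured conversation file and count the URLs mentioned in each.
hits = Counter()
for path in VAULT.rglob("*"):
    if path.is_file() and path.suffix.lower() in {".md", ".txt", ".json"}:
        hits.update(URL_RE.findall(path.read_text(errors="ignore")))

for url, count in hits.most_common(20):
    print(f"{count:4d}  {url}")
```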
Each execution prompts for explicit consent, showing:
- The exact code that will run
- What directories and files it has access to
- An estimated duration
Click Allow and the code runs in a sandboxed Python environment. Click Deny and nothing executes. There's no "always allow" toggle by design — every script run is its own decision.
The sandbox has read access to the vault and a scratch directory. It does not have:
- Internet access (no `urllib`, no `requests`)
- Write access outside the scratch directory
- Access to environment variables, your shell, or any other process
Filesystem context
For when you want the agent to reference local files outside the vault — your project source, your notes app, a directory of PDFs.
Settings → Knowledge base → Add directory — pick a folder. Kept indexes its contents (text files only by default; configurable). Your next agent conversation can search those files via tool calls.
The agent uses MCP-style tool calls to read specific files on demand, not to dump everything into context. Reads are streamed, not bulk-loaded — opening the agent doesn't pre-read your indexed dirs.
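The exact tool surface isn't documented, but the shape is the familiar function-calling pattern: the model asks for one file, the app returns that file's content, and nothing else is loaded. A hypothetical sketch of such a read tool and its handler (the names and schema are assumptions, not Kept's actual tools):

```python
from pathlib import Path

# Illustrative tool definition in the generic function-calling / MCP shape.
READ_FILE_TOOL = {
    "name": "read_file",
    "description": "Read one file from an indexed knowledge-base directory.",
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {"type": "string"},
            "max_bytes": {"type": "integer", "default": 65536},
        },
        "required": ["path"],
    },
}

INDEXED_DIRS = [Path("~/projects/myapp").expanduser()]  # whatever you added in Settings

def handle_read_file(path: str, max_bytes: int = 65536) -> str:
    """Return at most max_bytes of one file, only if it lives under an indexed directory."""
    target = Path(path).expanduser().resolve()
    if not any(target.is_relative_to(d.resolve()) for d in INDEXED_DIRS):
        return "error: path is outside the indexed directories"
    # Only the requested file is read, on demand; nothing is bulk-loaded into context.
    with target.open("r", errors="ignore") as f:
        return f.read(max_bytes)
```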
PDF and image reading
Drop a PDF or image into the agent's chat box. Kept extracts text and passes it to the model as part of the context. PDFs are parsed locally (no third-party OCR service); images go through the model provider's vision API if you've selected a vision-capable model.
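Kept's parser isn't specified; as a sketch of what local extraction looks like, here is the same idea using the pypdf library (the library choice and function names are assumptions, not what Kept ships):

```python
from pypdf import PdfReader  # assumption: any local parser works; Kept's actual choice isn't documented

def pdf_to_context(path: str) -> str:
    """Extract text from every page locally, with no OCR service involved."""
    reader = PdfReader(path)
    pages = [page.extract_text() or "" for page in reader.pages]
    return "\n\n".join(pages)

# The resulting text is what gets sent to the model alongside your prompt.
print(pdf_to_context("quarterly-report.pdf")[:500])
```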
Images already saved with a captured conversation (downloaded by the extension) live in ~/.kept/vault/<platform>/assets/, and the agent can reference them by path.
What leaves your machine
- Your prompt + the conversation context → your selected provider's API
- Files the agent decides to read (via tool calls) → same provider
- Tool-call results → back to the same provider
Everything else — the vault index, the embedding store, the knowledge graph, your providers list — stays local. With Ollama selected as your provider, nothing leaves your machine end to end.
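With a local provider, the same request shape simply targets localhost. A sketch of the equivalent call against Ollama's /api/chat endpoint, standard library only; the model name is whatever you have pulled locally:

```python
import json
import urllib.request

payload = {
    "model": "llama3.1",  # any model available on your local Ollama install
    "messages": [{"role": "user", "content": "Summarize my March conversations."}],
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",  # the default Ollama URL from Settings
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)["message"]["content"]

print(reply)  # the whole round trip stays on this machine
```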
For a fuller breakdown, see How it works.