# SilverBullet AI
> SilverBullet AI is a plug for SilverBullet v2 that integrates LLMs for
> AI-powered note-taking, chat, semantic search, and content generation.
SilverBullet AI provides multi-turn chat, customizable AI agents with tools,
RAG-powered context enrichment, templated prompts, and support for multiple
providers (OpenAI, Ollama, Gemini, Mistral, OpenRouter, Perplexity).
## Core
### Quick Start
This is a short introduction to installing and using the SilverBullet AI plug with **SilverBullet v2**.
## Installation
Run `Library: Install` and enter `ghr:justyns/silverbullet-ai/PLUG.md` (see [[Installation]] for more options).
## Configuration
Add your configuration to a Space Lua block. This example uses OpenAI, but see [[Providers]] for other options including self-hosted models with Ollama.
```lua
config.set {
  ai = {
    providers = {
      openai = {
        apiKey = "your-openai-key-here",
        preferredModels = {"gpt-4o", "gpt-4o-mini"}
      }
    },
    defaultTextModel = "openai:gpt-4o",
    -- Optional: enable embeddings for semantic search
    defaultEmbeddingModel = "openai:text-embedding-3-small",
    indexEmbeddings = true,
    -- Optional: configure DALL-E for image generation
    defaultImageModel = "openai:dall-e-3",
    -- Optional: customize chat behavior
    chat = {
      userInformation = "I'm a software developer who likes taking notes.",
      userInstructions = "Please give short and concise responses."
    }
  }
}
```
With this configuration:
- Models are fetched automatically from the provider's API
- `preferredModels` appear first in the model picker (marked with ★)
- `defaultTextModel`, `defaultEmbeddingModel`, and `defaultImageModel` use the same API key
- Run **"AI: Refresh Model List"** to update cached models after config changes
See [[Configuration]] for all options, including [[Configuration/Chat Instructions]], [[Configuration/Embedding Models]], and [[Configuration/Image Models]].
## Usage
Open a new note, then run [[Commands/AI: Chat on current page]] or press ++ctrl+shift+enter++ (++cmd+shift+enter++ on Mac) to start a chat session.
Or use [[Commands/AI: Toggle Assistant Panel]] (++ctrl+shift+a++) for a side-panel chat that persists across pages.
And that's it! Take a look at the other [[Commands]] available, and check out [[Templated Prompts]] to go further.
### Troubleshooting
If something didn't work right, try the `AI: Connectivity Test` command and check your browser's JavaScript console.
### Installation
## Library Manager (Recommended)
Requires SilverBullet v2.3.0+
1. Run `Library: Install` command
2. Enter one of the following:
**Latest release:**
```
ghr:justyns/silverbullet-ai/PLUG.md
```
**Specific release:**
```
ghr:justyns/silverbullet-ai@0.6.4/PLUG.md
```
See [GitHub Releases](https://github.com/justyns/silverbullet-ai/releases) for available versions.
**Upgrading?** If you have an old version in `_plug/`, delete it before reinstalling via Library Manager.
## Configuration
After installing, configure your API keys and models via Space Lua. See [[Configuration]] for full details.
Minimal example:
```lua
config.set {
  ai = {
    keys = {
      OPENAI_API_KEY = "your-key-here"
    },
    textModels = {
      {
        name = "GPT-4o",
        description = "OpenAI GPT-4o",
        modelName = "gpt-4o",
        provider = "openai",
        secretName = "OPENAI_API_KEY",
        requireAuth = true
      }
    }
  }
}
```
Run `AI: Connectivity Test` to verify your configuration.
### Configuration
Configuration is done using SilverBullet v2's Space Lua configuration system.
## Provider Configuration (Recommended)
The recommended way to configure silverbullet-ai (as of version 0.6.0) is the `providers` config. It fetches models dynamically from each provider's API instead of requiring a config entry for every model.
```lua
-- API keys can be defined however you want; the previous config.ai.keys
-- convention still works, but you will need to set the keys there and then
-- reference them here.
local openai_key = "sk-your-openai-key-here"

config.set {
  ai = {
    -- Provider-level configuration
    providers = {
      openai = {
        apiKey = openai_key,
        -- Could also be something like: apiKey = config.get("ai.keys.OPENAI_API_KEY"),
        useProxy = false,
        preferredModels = {"gpt-4o", "gpt-4o-mini"}
      },
      ollama = {
        baseUrl = "http://localhost:11434/v1",
        preferredModels = {"llama3.2", "qwen2.5-coder"},
        timeout = 180000
      },
      gemini = {
        apiKey = "your-gemini-key",
        preferredModels = {"gemini-2.0-flash"}
      }
    },

    -- Default model to use (format: "provider:modelName").
    -- This is auto-selected on startup if no model is already selected on this client.
    defaultTextModel = "ollama:llama3.2",

    -- Chat settings
    chat = {
      bakeMessages = true,
      searchEmbeddings = true,
      userInformation = "I'm a software developer who likes taking notes.",
      userInstructions = "Please give short and concise responses."
    }
  }
}
```
With this configuration:
- **"AI: Select Text Model"** will show all available models from each configured provider
- `preferredModels` will show up first, but you can type to filter through all available models
- Use **"AI: Refresh Model List"** to update the cached model lists
- **`defaultTextModel`** will automatically select the specified model on startup if no model is already selected
### Global Options
| Option | Description |
|--------|-------------|
| `defaultTextModel` | Default model to use on startup (format: `"provider:modelName"`, e.g., `"ollama:llama3.2"` or `"openai:gpt-4o"`) |
### Provider Options
| Option | Description |
|--------|-------------|
| `apiKey` | API key for the provider (inline or via Lua variable) |
| `baseUrl` | Custom API endpoint (required for Ollama, optional for OpenAI-compatible APIs) |
| `useProxy` | Whether to use SilverBullet's proxy (default: true, set false for local services) |
| `preferredModels` | Array of model names to show first in the picker |
| `fetchModels` | Whether to fetch models from API (default: true). Set to `false` for APIs that don't support listing models - only `preferredModels` will be shown |
| `timeout` | Request timeout in milliseconds. Defaults: OpenAI/Gemini: 60000 (60s), Ollama: 120000 (120s), Image generation: 180000 (180s). For streaming requests, timeout only applies to the initial connection - once data starts flowing, the request can take as long as needed. |
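For example, here is a sketch of a hypothetical OpenAI-compatible endpoint that cannot list its models; the provider key, URL, and model name are placeholders:

```lua
config.set {
  ai = {
    providers = {
      -- "myapi" is a made-up key; provider = "openai" selects the
      -- OpenAI-compatible client (see the Mistral example under [[Providers]])
      myapi = {
        provider = "openai",
        apiKey = "your-key-here",
        baseUrl = "https://llm.example.com/v1",
        fetchModels = false, -- the endpoint has no model-listing API...
        preferredModels = {"my-model"} -- ...so only these appear in the picker
      }
    }
  }
}
```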
## Legacy Configuration (Deprecated)
!!! warning "Deprecated"
    The `textModels` array configuration is deprecated. Please migrate to the `providers` config above.
```lua
config.set {
  ai = {
    -- API keys
    keys = {
      OPENAI_API_KEY = "your-openai-key-here"
    },

    -- Disabled by default. indexEmbeddings and indexSummary can be set
    -- to true to enable the AI: Search command.
    -- Be sure to read the relevant documentation and warnings first.
    indexEmbeddings = false,
    indexSummaryModelName = "ollama-gemma2",
    indexSummary = false,

    -- Configure one or more image models. Only OpenAI's API is currently supported.
    imageModels = {
      {name = "dall-e-3", modelName = "dall-e-3", provider = "dalle"},
      {name = "dall-e-2", modelName = "dall-e-2", provider = "dalle"}
    },

    -- Configure one or more text models (DEPRECATED - use providers instead)
    textModels = {
      {
        name = "ollama-phi-2",
        modelName = "phi-2",
        provider = "ollama",
        baseUrl = "http://localhost:11434/v1",
        requireAuth = false,
        useProxy = false
      },
      {name = "gpt-4o", provider = "openai", modelName = "gpt-4o"},
      {name = "gpt-4o-mini", provider = "openai", modelName = "gpt-4o-mini"}
    },

    -- The chat section is optional, but may help provide better results
    -- when using the Chat On Page command.
    chat = {
      -- If bakeMessages is true, SilverBullet query and template blocks
      -- are rendered before sending.
      bakeMessages = true,
      -- If searchEmbeddings is true, the Chat command will search indexed
      -- embeddings and provide relevant pages as context.
      searchEmbeddings = true,
      -- When using chat, userInformation and userInstructions
      -- are included in the system prompt.
      userInformation = "I'm a software developer who likes taking notes.",
      userInstructions = "Please give short and concise responses."
    },

    -- Prompt instructions are optional, but can help steer the LLM
    -- toward more personalized results for built-in commands.
    promptInstructions = {
      pageRenameRules = "Include a random animal name in every note title.",
      tagRules = "Tag every note with the current year."
    }
  }
}
```
## Configuration Options
- **[[Configuration/Text Models]]** - Configure LLM providers for text generation
- **[[Configuration/Image Models]]** - Configure image generation (DALL-E)
- **[[Configuration/Embedding Models]]** - Configure embeddings for semantic search
- **[[Configuration/Chat Instructions]]** - Customize chat behavior
- **[[Configuration/Prompt Instructions]]** - Customize built-in command prompts
- **[[Configuration/Custom Enrichment Functions]]** - Add custom context enrichment
### Templated Prompts
Template notes can make use of all of the template language features available in SilverBullet.
## Creating Markdown Templates
To be a templated prompt, the note must have the following frontmatter:
- `tags` must include `meta/template/aiPrompt`
- `aiprompt` object must exist and have a `description` key
- Optionally, `aiprompt.slashCommand` to register as a slash command
- Optionally, `aiprompt.systemPrompt` can be specified to override the system prompt
- Optionally, `aiprompt.chat` can be specified to treat the template as a multi-turn chat instead of single message
- Optionally, `aiprompt.enrichMessages` can be set to true to enrich each chat message
- Optionally, `aiprompt.postProcessors` can be set to a list of Space Lua function names to manipulate the text returned by the LLM
For example, here is a templated prompt to summarize the current note and insert the summary at the cursor:
``` markdown
---
tags: meta/template/aiPrompt
aiprompt:
description: "Generate a summary of the current page."
---
Generate a short and concise summary of the note below.
title: ${@page.name}
Everything below is the content of the note:
${readPage(@page.ref)}
```
With the above note saved as `AI: Generate note summary`, you can run the `AI: Execute AI Prompt from Custom Template` command from the command palette, select the `AI: Generate note summary` template, and the summary will be streamed to the current cursor position.
Another example prompt pulls in remote pages and asks the LLM to generate Space Lua code for you:
``` markdown
---
tags: meta/template/aiPrompt
aiprompt:
description: "Describe the Space Lua functionality you want and generate it"
systemPrompt: "You are an expert Lua developer. Help the user develop new functionality for their personal note taking tool using SilverBullet's Space Lua."
slashCommand: aiSpaceLua
---
SilverBullet Space Lua documentation:
${readPage([[!silverbullet.md/Space%20Lua]])}
Using the above documentation, please create Space Lua code following the user's description in the note below. Output only valid markdown with a code block using space-lua. No explanations, code in a markdown space-lua block only.
title: ${@page.name}
Everything below is the content of the note:
${readPage(@page.ref)}
```
## Creating Space Lua Prompts
You can also define AI prompts entirely in Space Lua using `ai.prompt.define`:
```lua
ai.prompt.define {
  name = "Quick Summary",
  description = "Summarize selected text",
  slashCommand = "aiQuickSummary",
  systemPrompt = "You are a helpful assistant.",
  template = "Summarize this:\n\n${@selectedText or @currentPageText}",
  insertAt = "replace-selection",
  extraContext = {
    customVar = "my custom value",
  }
}
```
### ai.prompt.define(spec)
Supported keys in the spec:
* `name`: Display name for the prompt (required)
* `template`: The prompt template string, supports `${...}` interpolation (required)
* `description`: Description shown in pickers
* `slashCommand`: Register as a slash command with this name (optional)
* `systemPrompt`: System prompt for the LLM (optional)
* `insertAt`: Where to insert the result (optional; default: `cursor`)
* `chat`: Set to `true` for multi-turn chat mode (optional)
* `enrichMessages`: Set to `true` to enable message enrichment (optional)
* `postProcessors`: Array of function names to post-process output (optional)
* `extraContext`: Additional variables to merge into the template context (optional)
## Template Metadata
The following global metadata is available for use inside of an aiPrompt template:
* **`page`**: Metadata about the current page.
* **`currentItemBounds`**: Start and end positions of the current item. An item may be a bullet point or task.
* **`currentItemText`**: Full text of the current item.
* **`currentLineNumber`**: Line number of the current cursor position.
* **`lineStartPos`**: Starting character position of the current line.
* **`lineEndPos`**: Ending character position of the current line.
* **`currentPageText`**: Entire text of the current page.
* **`parentItemBounds`**: Start and end positions of the parent item.
* **`parentItemText`**: Full text of the parent item. A parent item may contain child items.
* **`selectedText`**: Text the user has currently selected.
* **`currentParagraph`**: Text of the current paragraph where the cursor is located.
* **`smartReplaceType`**: Indicates the type of content being replaced when using the 'replace-smart' option. Can be 'selected-text', 'current-item', or 'current-paragraph'.
* **`smartReplaceText`**: The text that will be replaced when using the 'replace-smart' option.
These variables can be accessed inside `${...}` interpolation by prefixing the variable name with `@`, like `${@lineEndPos}` or `${@selectedText}`.
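For instance, a minimal sketch of a template using a couple of these variables (the prompt wording is illustrative):

``` markdown
---
tags: meta/template/aiPrompt
aiprompt:
  description: "Continue writing from the current paragraph."
  insertAt: cursor
---
Continue the following paragraph from the page "${@page.name}" in the same style.
Return only the continuation text.

${@currentParagraph}
```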
## Insert At Options
The `insertAt` option in the `aiprompt` frontmatter determines where the generated content will be inserted. The valid options are:
* **`cursor`**: Inserts at the current cursor position
* **`page-start`**: Inserts at the beginning of the page
* **`page-end`**: Inserts at the end of the page
* **`start-of-line`**: Inserts at the start of the current line
* **`end-of-line`**: Inserts at the end of the current line
* **`start-of-item`**: Inserts at the start of the current item (list item or task)
* **`end-of-item`**: Inserts at the end of the current item
* **`new-line-above`**: Inserts on a new line above the current line
* **`new-line-below`**: Inserts on a new line below the current line
* **`replace-line`**: Replaces the current line with the generated content
* **`replace-paragraph`**: Replaces the entire paragraph (or item) where the cursor is located with the generated content
* **`replace-selection`**: Replaces the currently selected text with the generated content. If no text is selected, it behaves like the 'cursor' option
* **`replace-smart`**: Intelligently replaces content based on context:
- If text is selected, it replaces the selection.
- If no text is selected but the cursor is within a list item or task, it replaces the entire item.
- If neither of the above applies, it replaces the current paragraph.
### Replacing content
If the objective is to replace all or a portion of the note's content, the `replace-smart` option is the best choice. It intelligently selects the most appropriate text to replace based on the cursor's context. If more control is needed, any of the other options can be used.
**Note** that the replace options will remove existing content before inserting the new content. Make sure there is a backup of any important content before using these options.
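For instance, a concise-rewrite prompt built on `replace-smart` and its companion `smartReplaceText` variable might look like this sketch (the prompt wording is illustrative):

``` markdown
---
tags: meta/template/aiPrompt
aiprompt:
  description: "Rewrite the selection, item, or paragraph more concisely."
  insertAt: replace-smart
---
Rewrite the following text to be shorter and clearer, keeping any markdown formatting.
Return only the rewritten text.

${@smartReplaceText}
```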
## Chat-style prompts
`aiprompt.chat` can be set to true in the template frontmatter to treat the template similar to a page using [[Commands/AI: Chat on current page]].
For example, a summarize prompt could look like this:
```markdown
---
tags: meta/template/aiPrompt
aiprompt:
description: "Generate a summary of the current page."
systemPrompt: You are an AI Note Summary bot. Help the user create useful and accurate summaries.
slashCommand: aisummarychat
chat: true
---
**user**: [enrich:false] I'll provide the note contents, and instructions.
**assistant**: What are the note contents?
**user**: [enrich:true] title: ${@page.name}
Everything below is the content of the note:
${readPage(@page.ref)}
**assistant**: What are the instructions?
**user**: [enrich:false] Generate a short and concise summary of the note.
```
These messages will be parsed into multiple chat messages when calling the LLM's API. Only the response from the LLM will be included in the note the template is triggered from.
The `enrich` attribute can be toggled on or off per message. By default it follows the `aiprompt.enrichMessages` frontmatter attribute (off if that is unset). Assistant and system messages are never enriched.
## Post Processors
`aiprompt.postProcessors` can be set to a list of Space Lua function names, as in the example below. Once the LLM finishes streaming its response, the entire response is sent to each post processor function in order.
Each function must accept a single data parameter containing these fields:
- `response`: The full response text
- `lineBefore`: The line before where the response was inserted
- `lineAfter`: The line after where the response was inserted
- `lineCurrent`: The line where the cursor was before the response was inserted
A simple post processing function looks like this:
```lua
function aiFooBar(data)
return "FOO " .. data.response .. " BAR"
end
```
This function could be used in a template prompt like this:
```yaml
---
tags: meta/template/aiPrompt
aiprompt:
description: "Generate a random pet name."
slashCommand: aiGeneratePetName
insertAt: cursor
postProcessors:
- aiFooBar
---
Generate a random name for a pet. Only generate a single name. Return nothing but that name.
```
Running this prompt, the LLM may return `Henry` as the name, and then `aiFooBar` will transform it into `FOO Henry BAR`, which is what will ultimately be placed in the note the template was executed from.
### Bundled Prompts
The plug ships with several prompt templates defined in Space Lua. These appear as commands in the command palette but are implemented as AI prompts that can be customized through configuration.
## Available Bundled Prompts
### AI: Enhance Note
A convenience command that runs three prompts in sequence:
1. AI: Generate tags for note
2. AI: Generate Note FrontMatter
3. AI: Suggest Page Name
### AI: Generate tags for note
Analyzes the current note and suggests tags based on its content. Generated tags are merged with any existing tags in the note's frontmatter.
**Customization:** Set `ai.promptInstructions.tagRules` in your config:
```lua
config.set("ai", {
promptInstructions = {
tagRules = [[
ONLY use existing tags. Don't create new tags.
ONLY add relevant tags. Favor a small number of tags instead of many.
Tag notes that contain confirmations or receipts with #receipt.
]]
}
})
```
See [[Configuration/Prompt Instructions]] for more examples.
### AI: Generate Note FrontMatter
**Experimental.** Extracts useful information from the note content and generates frontmatter attributes.
**Customization:** Set `ai.promptInstructions.enhanceFrontMatterPrompt` in your config.
Without specific rules, the LLM may over-generate attributes. Consider providing guidance on which attributes are valuable for your use case.
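As a sketch, such guidance might look like this (the attribute names are examples, not a required set):

```lua
config.set("ai", {
  promptInstructions = {
    enhanceFrontMatterPrompt = [[
      Only generate the attributes "author", "status", and "due".
      Never invent values that are not clearly stated in the note.
    ]]
  }
})
```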
### AI: Suggest Page Name
Sends the note to the LLM and asks for suggested titles. Presents a list of suggestions and renames the page if one is selected.
**Customization:** Set `ai.promptInstructions.pageRenameRules` or `ai.promptInstructions.pageRenameSystem` in your config:
```lua
config.set("ai", {
promptInstructions = {
pageRenameRules = [[
Retain ALL date and time information from the original note title.
If there is a date at the beginning, ensure a hyphen separates the timestamp from the title.
If tags include #receipt, move it to "Receipts/YYYY/MM-MMMM/" using the date from the note metadata.
]]
}
})
```
### Agents
Agents are customizable personas that configure how the Assistant behaves. Each agent has its own system prompt and can optionally restrict which tools are available. You can also provide additional context to an agent, such as links to other notes in your space or HTTP URLs for documentation.
Currently, Agents are only used in the Assistant chat panel.
## Configuration
### Setting a Default Agent
Set a default agent in your Space Lua config using the agent's key (for Lua agents) or `name` field (for page agents):
```lua
config.set("ai", {
chat = {
defaultAgent = "myCustomAgent"
}
})
```
## Creating Custom Agents
### Method 1: Lua Definition
Add agents directly in a Space Lua block:
```lua
ai.agents.myagent = {
  name = "My Custom Agent",
  description = "Does something specific",
  systemPrompt = "You are a specialized assistant that...",
  toolsExclude = {"eval_lua"} -- optional: exclude dangerous tools
}
```
### Method 2: Page-Based Agents
Create a page with the `meta/template/aiAgent` tag. The `aiagent.name` field is used for both display and as a short lookup key in config:
```yaml
---
tags: meta/template/aiAgent
aiagent:
name: "tasks"
description: "Helps manage tasks and todos"
systemPrompt: |
You are a task management assistant for SilverBullet.
Help users organize, prioritize, and track their tasks.
Use wiki-links [[like this]] when referencing pages.
Be concise and action-oriented.
tools:
- read_note
- update_note
- list_pages
---
```
With the above, you can set `defaultAgent = "tasks"` in your config.
## Example Agents
### Task Manager Agent
A focused agent for managing tasks and todos. This example shows using a custom `list_tasks` tool alongside built-in tools:
```yaml
---
tags: meta/template/aiAgent
aiagent:
name: "tasks"
description: "Helps manage tasks and todos"
systemPrompt: |
You are a task management assistant for SilverBullet.
Help users organize, prioritize, and track their tasks.
Use wiki-links [[like this]] when referencing pages.
Be concise and action-oriented.
tools:
- list_tasks
- read_note
- update_note
---
```
`list_tasks` is not a built-in tool, but you can create custom tools in Space Lua. See [[Tools#Defining Custom Tools]] for how to create your own tools.
### Research Assistant (Read-Only)
A safe agent that can only read and search, not modify:
```yaml
---
tags: meta/template/aiAgent
aiagent:
name: "Research Mode"
description: "Read-only research assistant"
systemPrompt: |
You help research and find information in the user's notes.
You cannot modify any pages - only read and search.
Provide detailed answers with references to relevant pages.
tools:
- read_note
- list_pages
- get_page_info
- navigate
---
```
This agent restricts tools to only the built-in read-only tools, preventing any page modifications.
### Sandboxed Agent with Path Restrictions
An agent that can only operate on pages within a specific folder:
```yaml
---
tags: meta/template/aiAgent
aiagent:
name: "Sandbox Agent"
description: "Can only access pages under Sandbox/"
systemPrompt: |
You help the user with notes in the Sandbox folder.
You cannot access or modify pages outside this area.
allowedReadPaths: ["Sandbox/"]
allowedWritePaths: ["Sandbox/"]
---
```
This agent can read and write pages under `Sandbox/` but will get an error if it tries to access other pages via tools that support path permissions.
> **Note:** Path permissions only apply to tools that declare `readPathParam` or `writePathParam`. Tools like `eval_lua` can bypass these restrictions. For a true sandbox, combine path permissions with a tool whitelist.
### Writing Assistant with Context
An agent with additional context embedded from wiki-links:
```yaml
---
tags: meta/template/aiAgent
aiagent:
name: "Writing Helper"
description: "Helps improve and edit writing"
systemPrompt: |
You are a writing assistant. Help users improve their prose,
fix grammar, and structure their notes effectively.
---
Use the following style guide when editing:
[[Style Guide]]
And follow these formatting conventions:
[[Formatting Rules]]
```
**Note**: Wiki-links in the page body will be resolved and their content included as context.
### Personalized Agent with Profile
An agent that knows about you by referencing your profile page:
```yaml
---
tags: meta/template/aiAgent
aiagent:
name: "Personal Assistant"
description: "Knows my preferences and location"
systemPrompt: |
You are my personal assistant. Use my profile below to personalize responses.
For example, use my location for weather queries and timezone for scheduling.
---
[[Profile]]
```
Create a `Profile` page with your info:
```markdown
---
tags: meta
---
# Profile
Location: San Francisco, CA
Timezone: America/Los_Angeles
Preferred language: English
```
### Minimal Agent
The simplest valid agent:
```yaml
---
tags: meta/template/aiAgent
aiagent:
description: "A helpful assistant"
systemPrompt: "You are a helpful assistant. Be concise."
---
```
## Agent Properties
| Property | Type | Description |
|----------|------|-------------|
| `name` | string | Display name and lookup key for the agent. Use this in `defaultAgent` config. |
| `description` | string | Brief description shown in picker |
| `systemPrompt` | string | The system prompt for the LLM |
| `tools` | string[] | Whitelist - only these tools are available |
| `toolsExclude` | string[] | Blacklist - these tools are removed |
| `inheritBasePrompt` | boolean | Prepend base system prompt (default: true) |
| `allowedReadPaths` | string[] | Path prefixes tools can read from (e.g., `["Journal/", "Notes/"]`) |
| `allowedWritePaths` | string[] | Path prefixes tools can write to (e.g., `["Journal/"]`) |
### Base Prompt Inheritance
By default, agents inherit the base system prompt which includes SilverBullet markdown syntax, tool usage guidance, and the llms.txt documentation link. Set `inheritBasePrompt: false` to completely replace the base prompt with your own.
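For example, a minimal sketch of an agent that fully replaces the base prompt (the agent name is illustrative):

```lua
ai.agents.rawagent = {
  name = "Raw Agent",
  description = "Uses only its own system prompt",
  systemPrompt = "You are a terse assistant. Reply in plain text only.",
  inheritBasePrompt = false -- skip the base SilverBullet prompt entirely
}
```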
### Tool Filtering Precedence
- If `tools` is set, **only** those tools are available (whitelist mode)
- If `toolsExclude` is set (and `tools` is not), those tools are removed from all available tools (blacklist mode)
- If both are set, `tools` takes precedence and `toolsExclude` is ignored
**Tip:** Use `tools` (whitelist) for restrictive agents that should only have specific capabilities. Use `toolsExclude` (blacklist) when you want most tools but need to block a few dangerous ones like `eval_lua`.
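A sketch of both modes side by side (the agent names are illustrative):

```lua
-- Whitelist mode: this agent can only read and list pages
ai.agents.reader = {
  name = "Reader",
  description = "Read-only question answering",
  systemPrompt = "You answer questions using the user's notes.",
  tools = {"read_note", "list_pages"}
}

-- Blacklist mode: all tools except eval_lua
ai.agents.safe = {
  name = "Safe Assistant",
  description = "General assistant without Lua evaluation",
  systemPrompt = "You are a helpful assistant.",
  toolsExclude = {"eval_lua"}
}
```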
### Path Permissions
Restrict which pages an agent can read from or write to using path prefixes:
```yaml
---
tags: meta/template/aiAgent
aiagent:
name: "Journal Assistant"
description: "Helps with journal entries only"
allowedReadPaths: ["Journal/", "Daily/"]
allowedWritePaths: ["Journal/"]
---
```
Or in Lua:
```lua
ai.agents.journal = {
  name = "Journal Assistant",
  allowedReadPaths = {"Journal/", "Daily/"},
  allowedWritePaths = {"Journal/"}
}
```
**How it works:**
- If `allowedReadPaths` is set:
  - Tools with `readPathParam` can only read pages starting with those prefixes
  - Wiki-links in the agent's page body are filtered (only allowed pages included as context)
  - Current page content/selection only shown if the page is within allowed paths
- If `allowedWritePaths` is set, tools with `writePathParam` can only write to pages starting with those prefixes
- If not set, no path restrictions apply
This is useful for creating restricted agents that can only operate on specific areas of your space.
## Usage
1. **Select Agent**: Run `AI: Select Agent` command
2. **Clear Agent**: Run `AI: Clear Agent` command
### Tools
Tools allow the Assistant to interact with your space - reading, writing, and updating notes, or searching notes and the web.
This enables a much more natural chat interface where you can ask the LLM to do something like "Update my grocery todo list and mark the milk and eggs as complete".
Custom tools offer even more extensibility by letting you interact with external systems instead of being limited to built-in tools that interact with SilverBullet.
## Enabling Tools
Set `enableTools: true` in your chat configuration:
```lua
config.set("ai", {
chat = {
enableTools = true
}
})
```
## Built-in Tools
The plug includes built-in tools, mostly defined using Space Lua.
### Reading & Navigation
These tools are generally considered safe and shouldn't cause changes to
existing notes.
| Tool | Description | Approval |
|------|-------------|----------|
| `read_note` | Read page content, optionally a specific section | No |
| `list_pages` | List pages with path filtering and recursion options | No |
| `get_page_info` | Get page metadata (tags, size, modified date, subpages) | No |
| `navigate` | Navigate to a page or position | No |
| `ask_user` | Ask the user a question and get immediate feedback | No |
### Creating & Editing
These tools should all require approval since they may change the content of
one or more pages.
| Tool | Description | Approval |
|------|-------------|----------|
| `create_note` | Create a new page (fails if exists) | Yes |
| `update_note` | Update page content (replace, append, prepend; supports sections) | Yes |
| `search_replace` | Find and replace text (first, all, or Nth occurrence) | Yes |
| `update_frontmatter` | Update YAML frontmatter keys without affecting page content | Yes |
| `rename_note` | Rename/move a page and update all backlinks | Yes |
### Advanced
| Tool | Description | Approval |
|------|-------------|----------|
| `eval_lua` | Execute a Lua expression and return the result | Yes |
## Defining Custom Tools
Ideally the built-in tools will remain slim and only provide core functionality. Other libraries or users can add additional tools as needed.
Do keep in mind that most LLMs start to do worse if they are given too many tools or tools that are too similar to each other.
Add tools to the `ai.tools` table in any Space Lua block:
```lua
ai.tools.my_tool = {
  description = "What this tool does (shown to the LLM)",
  parameters = {
    type = "object",
    properties = {
      name = {type = "string", description = "A required parameter"},
      count = {type = "number", description = "An optional number"}
    },
    required = {"name"}
  },
  handler = function(args)
    return "Result: " .. args.name
  end
}
```
Each tool needs:
- `description` - Explains what the tool does (the LLM uses this to decide when to call it)
- `parameters` - JSON Schema defining input parameters
- `handler` - Function that receives `args` and returns a string result
- `requiresApproval` - (optional) If `true`, user must confirm before the tool executes
- `readPathParam` - (optional) Parameter name(s) containing page paths for read operations. Can be a string or array of strings. (used with agent path permissions)
- `writePathParam` - (optional) Parameter name(s) containing page paths for write operations. Can be a string or array of strings. (used with agent path permissions)
## Requiring Approval
For tools that modify data, you can require user confirmation before execution:
```lua
ai.tools.update_note = {
  description = "Update the content of a note",
  requiresApproval = true,
  parameters = {
    type = "object",
    properties = {
      page = {type = "string", description = "The page name"},
      content = {type = "string", description = "New content"}
    },
    required = {"page", "content"}
  },
  handler = function(args)
    space.writePage(args.page, args.content)
    return "Updated: " .. args.page
  end
}
```
When the LLM calls a tool with `requiresApproval = true`, a confirmation dialog appears showing the tool name and arguments. Users can reject instead of approving, and can provide a message telling the LLM what to do differently.
### Diff Preview with `ai.writePage`
For tools that modify page content, use `ai.writePage()` instead of `space.writePage()` to show a visual diff of the proposed changes before writing:
```lua
ai.tools.my_editor = {
  description = "Edit a page",
  requiresApproval = true,
  parameters = { ... },
  handler = function(args)
    local newContent = transform(space.readPage(args.page))
    ai.writePage(args.page, newContent)
    return "Updated: " .. args.page
  end
}
```
The `ai.writePage` function:
1. Reads the current page content
2. Computes a diff against the new content
3. Shows the approval modal with the diff preview
4. Only writes if the user approves
All built-in editing tools (`update_note`, `update_frontmatter`, `create_note`, etc.) use `ai.writePage` internally to provide diff previews.
There's nothing stopping you from bypassing this, so please be careful when making custom tools.
## Path Permissions
Tools can declare which parameter contains a page path for permission validation. When an agent has `allowedReadPaths` or `allowedWritePaths` configured, tools will be blocked from accessing pages outside those paths.
### Declaring Path Parameters
```lua
ai.tools.my_reader = {
  description = "Read data from a page",
  readPathParam = "page", -- This param will be validated against allowedReadPaths
  parameters = {
    type = "object",
    properties = {
      page = {type = "string", description = "The page to read"}
    },
    required = {"page"}
  },
  handler = function(args)
    return space.readPage(args.page)
  end
}

ai.tools.my_writer = {
  description = "Write data to a page",
  writePathParam = "page", -- This param will be validated against allowedWritePaths
  requiresApproval = true,
  parameters = {
    type = "object",
    properties = {
      page = {type = "string", description = "The page to write"}
    },
    required = {"page"}
  },
  handler = function(args)
    ai.writePage(args.page, "content")
    return "Written"
  end
}
```
### How It Works
1. Agent defines `allowedReadPaths` and/or `allowedWritePaths` (see [[Agents]])
2. When a tool is called, the validation checks if the path parameter starts with any allowed prefix
3. If the path is not allowed, the tool returns an error instead of executing
4. Write operations require **both** read and write access (since tools typically read content before modifying it)
All built-in tools declare their path parameters, so they work with agent path permissions automatically.
### Context Enrichment
The AI plug automatically enriches chat messages with relevant context before sending them to the LLM. This helps the LLM understand your notes and provide more relevant responses.
## How It Works
When you send a message in chat, the plug can:
1. **Parse wiki-links** - Extract content from linked pages
2. **Search embeddings** - Find semantically related notes
3. **Expand templates** - Render SilverBullet queries and templates
4. **Run custom functions** - Execute your own enrichment logic
Each piece of context is wrapped as an "attachment" and inserted into the conversation in a cache-friendly order.
## Attachment Types
| Type | Description |
|------|-------------|
| `note` | Content from a wiki-linked page like `[[PageName]]` |
| `embedding` | Content from semantically similar pages found via embedding search |
| `custom` | Content added by custom enrichment functions |
## Configuration
Enable context enrichment features in your config:
```lua
config.set("ai", {
chat = {
parseWikiLinks = true, -- Extract content from [[wiki-links]]
searchEmbeddings = true, -- Search indexed embeddings for context
bakeMessages = true -- Render templates/queries before sending
}
})
```
## Wiki-Link Context
When `parseWikiLinks` is enabled, any `[[PageName]]` references in your message will have their content fetched and included as context.
**Example:**
```
Can you summarize the key points from [[Project Notes]]?
```
The content of the "Project Notes" page will be attached to your message, giving the LLM access to that information.
**Agent context:** Page-based agents (see [[Agents]]) can also include wiki-links in their page body. These become attachments that persist across the entire chat session.
## Embedding Search
When `searchEmbeddings` is enabled and you have [[Configuration/Embedding Models|embeddings configured]], the plug searches your indexed notes for content semantically related to your message.
Relevant excerpts are automatically included as context, enabling RAG (Retrieval Augmented Generation).
## Template Expansion
When `bakeMessages` is enabled, SilverBullet templates and queries in your message are rendered before sending:
```
What should I work on today?
Current tasks:
${query[[from index.tag("task") where _.done == false]]}
```
The query results will be included in the message sent to the LLM.
## Custom Enrichment Functions
You can add your own enrichment logic using Space Lua. Define a function and register it:
```lua
-- In a Space Lua block
function myCustomEnricher(content)
  -- Add custom context based on the message
  local extra = "Current date: " .. os.date("%Y-%m-%d")
  return content .. "\n\n" .. extra
end
```
Then add it to the configuration:
```lua
config.set("ai", {
chat = {
customEnrichFunctions = {"myCustomEnricher"}
}
})
```
### Event-Based Enrichment
You can also listen to the `ai:enrichMessage` event to dynamically add enrichment functions:
```lua
event.listen {
  name = "ai:enrichMessage",
  run = function(data)
    -- Return function names to run based on message content
    if string.find(data.enrichedContent, "weather") then
      return {"addWeatherContext"}
    end
    return {}
  end
}
```
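The listener above assumes an `addWeatherContext` Space Lua function exists. A minimal sketch of one, appending static text that points at a hypothetical [[Weather]] page (a real version might call a weather API instead):

```lua
function addWeatherContext(message)
  -- Append a hint so the LLM knows where the user's weather notes live
  return message .. "\n\nContext: the user's local weather notes are on [[Weather]]."
end
```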
## Context Format
Attachments are formatted as XML-like context blocks when sent to the LLM:
```xml
<!-- Illustrative only: the exact tag name and attributes may differ -->
<attachment type="note" name="PageName">
Content of the page goes here...
</attachment>
```
This format helps the LLM distinguish between your message and the attached context. It's also inserted as its own chat message to help preserve caching with providers that support it.
## Disabling Enrichment
To skip enrichment for a specific message, add `[enrich:false]` anywhere in your message:
```
[enrich:false] What is 2+2?
```
The attribute is removed before sending, and no enrichment is performed for that message.
## Configuration
### Chat Instructions
The Chat section configures behavior for the [[Commands/AI: Chat on current page]] command and the Assistant Panel.
## All Chat Options
```lua
config.set {
  ai = {
    chat = {
      -- Enable AI tools (read/write notes, search, etc.)
      enableTools = true,
      -- Parse [[wiki-links]] and include their content as context
      parseWikiLinks = true,
      -- Search embeddings for relevant context (requires indexEmbeddings)
      searchEmbeddings = false,
      -- Render SilverBullet templates/queries before sending
      bakeMessages = true,
      -- Default agent to use (by name)
      defaultAgent = nil,
      -- User info included in system prompt
      userInformation = "I'm a software developer who likes taking notes.",
      -- Instructions included in system prompt
      userInstructions = "Please give short and concise responses.",
      -- Dynamic context (Lua expression evaluated at chat time)
      customContext = [["Today is " .. os.date("%Y-%m-%d")]],
      -- Skip tool approval prompts (useful for benchmarks/automation)
      skipToolApproval = false,
      -- Show reasoning/thinking blocks from models that support it
      showReasoning = true
    }
  }
}
```
## Chat Custom Instructions
OpenAI introduced [custom instructions for ChatGPT](https://openai.com/blog/custom-instructions-for-chatgpt) a while back to help improve the responses from ChatGPT. We are emulating that feature by allowing a system prompt to be injected into each new chat session.
The system prompt is rendered similar to the one below, see the example config above for where to configure these settings:
Always added:
> This is an interactive chat session with a user in a note-taking tool called SilverBullet.
If **enableTools** is true (default), this is added:
> You have access to tools that can help you assist the user. Use them proactively when they would be helpful - for example, reading notes, searching, or performing actions the user requests.
If **userInformation** is set, this is added:
> The user has provided the following information about themselves: **${ai.chat.userInformation}**
If **userInstructions** is set, this is added:
> The user has provided the following instructions for the chat, follow them as closely as possible: **${ai.chat.userInstructions}**
## Custom Context
The **customContext** option allows you to add dynamic context to each chat message. It accepts a Lua expression that is evaluated at chat time, so you can include things like the current date.
The result is prepended to your message in the Chat Panel, wrapped in context tags along with the current page content and selection.
**Note:** This context is sent with the latest message only, it is not persisted.
**Example - Add current date:**
```lua
config.set {
  ai = {
    chat = {
      customContext = [["Today is " .. os.date("%Y-%m-%d") .. " (" .. os.date("%A") .. ")"]]
    }
  }
}
```
**Example - Add multiple pieces of context:**
```lua
config.set {
  ai = {
    chat = {
      customContext = [[table.concat({
        "Date: " .. os.date("%Y-%m-%d"),
        "Time: " .. os.date("%H:%M"),
        "Day: " .. os.date("%A"),
      }, "\n")]]
    }
  }
}
```
This is useful for time-sensitive queries where you want the LLM to know the current date without having to type it manually.
**Example - Include a profile page:**
Create a `Profile` page with your personal information:
```markdown
---
tags: meta
---
# Profile
Location: San Francisco, CA
Timezone: America/Los_Angeles
Preferred language: English
```
Then configure customContext to read it:
```lua
config.set {
  ai = {
    chat = {
      customContext = [[space.readPage("Profile") or ""]]
    }
  }
}
```
Now the assistant will know your location for weather queries, timezone for scheduling, etc.
## Reasoning/Thinking Display
Some models (like DeepSeek, Ollama models with `--think`, and OpenAI o1/o3) support "reasoning" or "thinking" output - the model's internal thought process before generating a response.
When **showReasoning** is `true` (the default), these reasoning blocks are displayed as collapsible sections in both the Chat Panel and "Chat on Page" responses.
```lua
config.set {
  ai = {
    chat = {
      showReasoning = true -- Show reasoning/thinking blocks (default)
    }
  }
}
```
The reasoning appears as a collapsible `reasoning` code block that can be expanded to see the model's thought process.
**Example - Combine profile with date:**
```lua
config.set {
  ai = {
    chat = {
      customContext = [[table.concat({
        "Date: " .. os.date("%Y-%m-%d %H:%M"),
        space.readPage("Profile") or ""
      }, "\n\n")]]
    }
  }
}
```
### Custom Enrichment Functions
When using [[Commands/AI: Chat on current page]], wiki links are automatically expanded and queries/templates are rendered, but it's also possible to include your own custom message enrichment.
**Example use cases**:
- Use a regex to detect issue keys like JIRA-1234, fetch each issue's description from the Jira API, and return it as context for the LLM.
- Detect http/https URLs, fetch them, parse them into readable chunks, and include them as context for the LLM.
- Use the same URL fetching to pull in remote documentation when asking an LLM for help.
A custom enrichment function is a [Space Lua](https://silverbullet.md/Space%20Lua) function. Please see the related upstream documentation.
Once you have a Space Lua function defined, there are two ways to tell the AI plug to use it.
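For example, here is a minimal sketch of the Jira use case above using only Lua pattern matching. The function name matches the config example below, but a real version would also call the Jira API to fetch each issue's description:

```lua
-- Collect JIRA-style issue keys so the LLM at least knows they were mentioned
function extractJiraDescriptions(message)
  local issues = {}
  for key in string.gmatch(message, "%u%u+%-%d+") do
    table.insert(issues, key)
  end
  if #issues == 0 then
    return message
  end
  return message .. "\n\nIssue keys mentioned: " .. table.concat(issues, ", ")
end
```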
## Method 1 - Configuration
In your Space Lua config, define a list of functions to call. These will be executed for each **user** message on a chat page.
```lua
config.set {
  ai = {
    chat = {
      customEnrichFunctions = {
        "enrichWithURL",
        "extractJiraDescriptions",
        "addBlarp"
      }
    }
  }
}
```
## Method 2 - Event Listeners
If you'd rather keep logic grouped together, an `ai:enrichMessage` event will be fired and expects a string or array of strings with function names to call. These must be valid Space Lua functions.
Example defining a simple function and listener that registers that function:
```lua
function addBlarp(message)
  return message .. " BLARP"
end

event.listen {
  name = "ai:enrichMessage",
  run = function(e)
    return "addBlarp"
  end
}
```
### Embedding Models
## Simple Configuration (Recommended)
If you've already configured a provider for text models, you can use the same provider for embeddings. Simply add `defaultEmbeddingModel` to your config:
```lua
config.set {
  ai = {
    providers = {
      openai = { apiKey = "sk-xxx" }
    },
    defaultTextModel = "openai:gpt-4o-mini",
    defaultEmbeddingModel = "openai:text-embedding-3-small",
    indexEmbeddings = true,
  }
}
```
The embedding model will use the same API key and settings from the provider config. No need to configure the provider twice.
**Format:** `provider:modelName` (e.g., `openai:text-embedding-3-small`, `ollama:all-minilm`)
## Model Discovery
When using the "AI: Select Embedding Model from Config" command, the plug can automatically discover embedding models from your configured providers. This uses litellm's model database to identify which models support embeddings, unless the provider api returns this informataion.
## Legacy Configuration
For more control or custom setups, you can use the `embeddingModels` array:
```lua
config.set {
  ai = {
    embeddingModels = {
      {
        name = "",
        provider = "",
        modelName = "",
        baseUrl = "",
        requireAuth = true,
        secretName = "",
        useProxy = true
      }
    }
  }
}
```
**Options:**
- **name**: Display name for this model in the selector.
- **provider**: Currently supported: openai, gemini, or ollama.
- **modelName**: The actual model identifier sent to the API.
- **baseUrl**: Base URL of the provider API.
- **requireAuth**: If false, Authorization headers won't be sent. Useful for local Ollama.
- **secretName**: Name of the API key in `ai.keys` (legacy) or use the provider config.
- **useProxy**: If false, bypasses SilverBullet's proxy. Useful for local services.
## Enabling and Using Embeddings
Generating vector embeddings is **disabled by default** for privacy (and cost) reasons. It is recommended to only enable it when using a locally hosted model via Ollama or an OpenAI-compatible API.
When turned on, **every page** in your space will be sent to the embeddings provider, so we recommend a locally hosted model.
> **warning** If you are not comfortable sending all of your notes to a 3rd party, do not use a 3rd party api for embeddings.
To enable generation and indexing of embeddings with the simple provider config:
```lua
config.set {
  ai = {
    providers = {
      ollama = { baseUrl = "http://localhost:11434", useProxy = false }
    },
    defaultEmbeddingModel = "ollama:all-minilm",
    indexEmbeddings = true,
    indexEmbeddingsExcludePages = {"my_passwords"},
    indexEmbeddingsExcludeStrings = {"**user**:", "Daily Quote:"},
  }
}
```
Or using the legacy embeddingModels array:
```lua
config.set {
  ai = {
    indexEmbeddings = true,
    indexEmbeddingsExcludePages = {"my_passwords"},
    indexEmbeddingsExcludeStrings = {"**user**:", "Daily Quote:"},
    embeddingModels = {
      -- Only the first model is currently used
      {
        name = "ollama-all-minilm",
        modelName = "all-minilm",
        provider = "ollama",
        baseUrl = "http://localhost:11434",
        requireAuth = false,
        useProxy = false
      }
    }
  }
}
```
**Options:**
- **indexEmbeddings**: true to enable this feature.
- **indexEmbeddingsExcludePages**: List of exact page names to exclude from indexing. By default, pages starting with _ are never indexed.
- **indexEmbeddingsExcludeStrings**: List of exact strings to exclude from indexing. If a paragraph or line contains only one of these strings, it won't be indexed. This helps keep noise out of search results in some cases.
- **embeddingModels**: Explained above. Only the first model in the list is used for indexing.
After setting **indexEmbeddings** to **true** OR changing the **first embeddingModels model**, you must run the `Space: Reindex` command.
## Generating and indexing note summaries
> **warning** This is an experimental feature, mostly due to the amount of extra time and resources it takes during the indexing process. If you try it out, please report your experience!
In addition to generating embeddings for each paragraph of a note, we can also use the llm model to generate a summary of the entire note and then index that.
This can be helpful for larger notes or notes where each paragraph may not contain enough context by itself.
To enable this feature:
```lua
config.set {
  ai = {
    indexSummaryModelName = "ollama-gemma2",
    indexSummary = true,
    textModels = {
      {
        name = "ollama-gemma2",
        modelName = "gemma2",
        provider = "ollama",
        baseUrl = "http://localhost:11434/v1",
        requireAuth = false,
        useProxy = false
      }
    }
  }
}
```
**Options:**
- **indexSummary**: Off by default. Set to true to start generating page summaries and indexing their embeddings.
- **indexSummaryModelName**: Which [[Configuration/Text Models|text model]] to use for generating summaries. It's recommended to use a locally hosted model since every note in your space will be sent to it.
> **warning** If you are not comfortable sending all of your notes to a 3rd party, do not use a 3rd party api for embeddings or summary generation.
### Suggested models for summary generation
> **info** Please report your experiences with using different models!
These models have been tested with Ollama for generating note summaries, along with their quality. Please report any other models you test with and your success (or not) with them.
- **phi3**: Can generate summaries relatively quickly, but often includes hallucinations and weird changes that don't match the source material.
- **gemma2**: This model is a bit bigger, but generates much better summaries than phi3.
### Image Models
## Simple Configuration (Recommended)
If you've already configured a provider for text models, you can use the same provider for image generation. Simply add `defaultImageModel` to your config:
```lua
config.set {
  ai = {
    providers = {
      openai = { apiKey = "sk-xxx" }
    },
    defaultTextModel = "openai:gpt-4o-mini",
    defaultImageModel = "openai:dall-e-3",
  }
}
```
The image model will use the same API key and settings from the provider config. No need to configure the provider twice.
**Format:** `provider:modelName` (e.g., `openai:dall-e-3`, `openai:dall-e-2`)
## Model Discovery
When using the "AI: Select Image Model from Config" command, the plug can automatically discover image generation models from your configured providers. This uses litellm's model database to identify which models support image generation.
## Legacy Configuration (Advanced)
For more control or custom setups, you can use the `imageModels` array:
```lua
config.set {
  ai = {
    imageModels = {
      {
        name = "",
        provider = "",
        modelName = "",
        baseUrl = "",
        requireAuth = true,
        secretName = "",
        useProxy = true
      }
    }
  }
}
```
**Options:**
- **name**: Display name for this model in the selector.
- **provider**: Currently only **dalle** is supported.
- **modelName**: The actual model identifier sent to the API (e.g., `dall-e-3`).
- **baseUrl**: Base URL of the provider API.
- **requireAuth**: If false, Authorization headers won't be sent.
- **secretName**: Name of the API key in `ai.keys`
- **useProxy**: If false, bypasses SilverBullet's proxy. Defaults to true.
### Prompt Instructions
When using the built-in commands like [[Commands/AI: Suggest Page Name]], it can be useful to provide user-specific instructions or rules.
Additional prompt instructions can be configured like this:
```lua
config.set {
  ai = {
    promptInstructions = {
      pageRenameRules = "",
      pageRenameSystem = "",
      tagRules = "",
      indexSummaryPrompt = "",
      enhanceFrontMatterPrompt = ""
    }
  }
}
```
`pageRenameSystem` can be used to completely override the system prompt used when requesting note title suggestions. If not specified or left blank, the default prompt will be used.
`pageRenameRules` is appended to the system prompt and can be used to extend the default system prompt without completely overriding it.
`indexSummaryPrompt` is appended to the system prompt when generating a summary of a page that will be indexed.
`enhanceFrontMatterPrompt` is appended to the system prompt when generating new frontmatter for a note.
For example, the following example does a few things:
- Quick notes will keep their timestamp prefix in the note title
- If a note is tagged with #receipt, it will automatically be moved to a receipts folder. E.g. `Receipts/2024/06-June/2024-06-30 - lawncare-co payment.md`
- If a note looks like a receipt, automatically add the #receipt tag
```lua
config.set {
  ai = {
    promptInstructions = {
      pageRenameRules = [[
        Retain ALL date and time information from the original note title.
        If there is a date or time at the beginning, ensure a hyphen separates the timestamp from the actual note title. For example, try to name quick notes like this: "YYYY-MM-DD HH:MM:SS - A short title about the note"
        If tags include #receipt or otherwise looks like a receipt, move it to "Receipts/YYYY/MM-MMMM/" using the date from the note metadata.
      ]],
      tagRules = "Tag notes that contain confirmations or receipts with #receipt."
    }
  }
}
```
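To replace the rename system prompt entirely instead of extending it, a sketch using `pageRenameSystem` (the wording is illustrative):

```lua
config.set {
  ai = {
    promptInstructions = {
      pageRenameSystem = [[
        You rename notes in a note-taking tool. Respond with one suggested
        title per line and no other text.
      ]]
    }
  }
}
```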
### Text Models
All text model providers can be configured using the following configuration options. Not all options are required for every model.
At least one text model must be configured for the plug to work, but multiple can be configured at once and swapped between on the fly.
A text model is configured in Space Lua like this:
```lua
config.set {
  ai = {
    textModels = {
      {
        name = "",
        provider = "",
        modelName = "",
        baseUrl = "",
        requireAuth = true, -- or false
        secretName = "",
        useProxy = true -- or false
      }
    }
  }
}
```
**Options:**
- **name**: Name to use inside SilverBullet for this model. This is used to identify different versions of the same model in one config, or just to give them your own custom names.
- **provider**: Currently **openai**, **gemini**, or **ollama** are supported.
- **modelName**: Name of the model to send to the provider API. This should be the actual model name.
- **baseUrl**: Base URL and path of the provider API.
- **requireAuth**: If false, the Authorization headers will not be sent. Needed as a workaround for some CORS issues with Ollama.
- **secretName**: Name of the API key in `ai.keys`.
- **useProxy**: If false, bypasses SilverBullet's proxy and makes requests directly. Useful for local services like Ollama. Defaults to true for cloud providers.
## Providers
### DallE
DALL-E can be configured for generating images.
> **Note**: Image models use the `imageModels` array configuration. The new `providers` config is only for text models.
## Configuration
```lua
config.set {
  ai = {
    keys = {
      OPENAI_API_KEY = "your-openai-key-here"
    },
    imageModels = {
      {name = "dall-e-3", modelName = "dall-e-3", provider = "dalle"},
      {name = "dall-e-2", modelName = "dall-e-2", provider = "dalle"}
    }
  }
}
```
## Options
| Option | Description |
|--------|-------------|
| `name` | Display name for this model in SilverBullet |
| `modelName` | The DALL-E model version (`dall-e-2` or `dall-e-3`) |
| `provider` | Must be `"dalle"` |
| `baseUrl` | Custom API endpoint (optional, defaults to OpenAI's API) |
`baseUrl` can be set to use another API compatible with OpenAI/DALL-E.
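For instance, a sketch pointing the image models at a hypothetical self-hosted endpoint (the URL is a placeholder):

```lua
config.set {
  ai = {
    imageModels = {
      {
        name = "local-images",
        modelName = "dall-e-3",
        provider = "dalle",
        baseUrl = "https://images.example.com/v1"
      }
    }
  }
}
```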
## Cost
DALL-E API usage is charged by OpenAI. See their [pricing page](https://openai.com/pricing) for details.
### Google Gemini
Google Gemini is supported as a text provider and for embeddings. Note that Gemini uses a different API format than OpenAI, so some features may behave slightly differently.
## Provider Configuration (Recommended)
```lua
local gemini_key = "your-google-ai-studio-key-here"

config.set {
  ai = {
    providers = {
      gemini = {
        apiKey = gemini_key,
        preferredModels = {"gemini-2.0-flash", "gemini-1.5-pro"}
      }
    },
    -- Optional: auto-select a default model on startup
    defaultTextModel = "gemini:gemini-2.0-flash"
  }
}
```
With this configuration:
- Run **"AI: Select Text Model"** to see all available Gemini models
- **"AI: Refresh Model List"** updates the cached model list
- `preferredModels` appear first in the picker (marked with ★)
## Legacy Configuration
!!! warning "Deprecated"
    The `textModels` array configuration is deprecated. Please migrate to the `providers` config above.
```lua
config.set {
  ai = {
    keys = {
      GOOGLE_AI_STUDIO_KEY = "your-google-ai-studio-key-here"
    },
    textModels = {
      {
        name = "gemini-2.0-flash",
        modelName = "gemini-2.0-flash",
        provider = "gemini",
        secretName = "GOOGLE_AI_STUDIO_KEY"
      },
      {
        name = "gemini-1.5-pro",
        modelName = "gemini-1.5-pro",
        provider = "gemini",
        secretName = "GOOGLE_AI_STUDIO_KEY"
      }
    }
  }
}
```
## Embedding Models
Embedding models still use the legacy `embeddingModels` array:
```lua
local gemini_key = "your-google-ai-studio-key-here"

config.set {
  ai = {
    keys = {
      GOOGLE_AI_STUDIO_KEY = gemini_key
    },
    providers = {
      gemini = { apiKey = gemini_key }
    },
    embeddingModels = {
      {
        name = "text-embedding-004",
        modelName = "text-embedding-004",
        provider = "gemini",
        secretName = "GOOGLE_AI_STUDIO_KEY"
      }
    }
  }
}
```
## Provider Options
| Option | Description |
|--------|-------------|
| `apiKey` | Your Google AI Studio API key |
| `useProxy` | Use SilverBullet's proxy (default: `true`) |
| `preferredModels` | Array of model names to show first in the picker |
See [Google AI models](https://ai.google.dev/gemini-api/docs/models) for available model names.
**Note**: Get your API key from [Google AI Studio](https://aistudio.google.com/app/apikey).
**Note 2**: AI Studio is not the same as the Gemini App (previously Bard). You may have access to https://gemini.google.com/app but it does not offer an API key needed for integrating 3rd party tools. You need access to https://aistudio.google.com/app specifically.
### Mistral AI
[Mistral AI](https://mistral.ai/) is a hosted service that offers an OpenAI-compatible API.
## Provider Configuration (Recommended)
```lua
local mistral_key = "your-mistral-api-key-here"
config.set {
ai = {
providers = {
mistral = {
provider = "openai", -- Mistral uses OpenAI-compatible API
apiKey = mistral_key,
baseUrl = "https://api.mistral.ai/v1",
preferredModels = {"mistral-large-latest", "mistral-medium"}
}
},
-- Optional: auto-select a default model on startup
defaultTextModel = "mistral:mistral-large-latest"
}
}
```
With this configuration:
- Run **"AI: Select Text Model"** to see all available Mistral models
- **"AI: Refresh Model List"** updates the cached model list
- `preferredModels` appear first in the picker (marked with ★)
## Legacy Configuration
!!! warning "Deprecated"
The `textModels` array configuration is deprecated. Please migrate to the `providers` config above.
```lua
config.set {
ai = {
keys = {
MISTRAL_API_KEY = "your-mistral-api-key-here"
},
textModels = {
{
name = "mistral-medium",
modelName = "mistral-medium",
provider = "openai",
baseUrl = "https://api.mistral.ai/v1",
secretName = "MISTRAL_API_KEY"
}
}
}
}
```
## Provider Options
| Option | Description |
|--------|-------------|
| `provider` | Must be `"openai"` (Mistral uses OpenAI-compatible API) |
| `apiKey` | Your Mistral API key |
| `baseUrl` | Must be `"https://api.mistral.ai/v1"` |
| `preferredModels` | Array of model names to show first in the picker |
See [Mistral AI Documentation](https://docs.mistral.ai/) for available models.
### Ollama
Ollama is supported both as a text/LLM provider and for generating embeddings.
To use Ollama locally, make sure you have it running first and the desired models downloaded.
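For example, to download a model with the Ollama CLI before configuring it below:
```shell
ollama pull llama3.2
```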
## Provider Configuration (Recommended)
```lua
config.set {
ai = {
providers = {
ollama = {
baseUrl = "http://localhost:11434/v1",
useProxy = false, -- Bypass SilverBullet's proxy for local requests
preferredModels = {"llama3.2", "qwen2.5-coder"},
timeout = 180000
}
},
-- Optional: auto-select a default model on startup
defaultTextModel = "ollama:llama3.2"
}
}
```
With this configuration:
- Run **"AI: Select Text Model"** to see all models from your Ollama instance
- **"AI: Refresh Model List"** updates the cached model list
- `preferredModels` appear first in the picker (marked with ★)
## Legacy Configuration
!!! warning "Deprecated"
The `textModels` array configuration is deprecated. Please migrate to the `providers` config above.
```lua
config.set {
ai = {
textModels = {
{
name = "ollama-phi-2",
modelName = "phi",
provider = "ollama",
baseUrl = "http://localhost:11434/v1",
requireAuth = false,
useProxy = false
}
},
embeddingModels = {
{
name = "ollama-all-minilm",
modelName = "all-minilm",
provider = "ollama",
baseUrl = "http://localhost:11434",
requireAuth = false,
useProxy = false
}
}
}
}
```
## Embedding Models
Embedding models still use the legacy `embeddingModels` array:
```lua
config.set {
ai = {
providers = {
ollama = { baseUrl = "http://localhost:11434/v1", useProxy = false }
},
embeddingModels = {
{
name = "ollama-all-minilm",
modelName = "all-minilm",
provider = "ollama",
baseUrl = "http://localhost:11434",
requireAuth = false,
useProxy = false
}
}
}
}
```
## Configuration Options
- **useProxy**: Set to `false` to bypass SilverBullet's proxy and make requests directly from the client browser. Useful if Ollama is running somewhere reachable by the client, but not by the SilverBullet server.
- **requireAuth**: Ollama defaults to `false`. Set to `true` if you have a reverse proxy providing authentication.
- **timeout**: Request timeout in milliseconds. Default: 120000 (2 minutes). Increase this for large models that take a while to generate responses. For streaming requests, the timeout only applies to the initial connection - once the model starts responding, it can take as long as needed.
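As a sketch, a remote Ollama instance behind an authenticating reverse proxy might look like the following; the hostname and token are placeholders, and it assumes `requireAuth` and `apiKey` are honored at the provider level the same way they are in the per-model config:
```lua
config.set {
  ai = {
    providers = {
      ollamaRemote = {
        provider = "ollama",
        baseUrl = "https://ollama.example.com/v1", -- placeholder hostname
        apiKey = "your-reverse-proxy-token-here",  -- assumption: sent as the Authorization header
        requireAuth = true,                        -- assumption: same semantics as the model config
        timeout = 300000                           -- allow 5 minutes for large, slow models
      }
    }
  }
}
```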
## Docker Configuration
If running both SilverBullet and Ollama in Docker on the same machine, use `host.docker.internal` instead of `localhost`:
```lua
config.set {
ai = {
providers = {
ollama = {
baseUrl = "http://host.docker.internal:11434/v1",
useProxy = true
}
}
}
}
```
> **Note**: `host.docker.internal` is available on Docker Desktop (Mac/Windows) and recent versions of Docker on Linux. On older Linux Docker installations, you may need to add `--add-host=host.docker.internal:host-gateway` to your `docker run` command.
## Multiple Ollama Instances
You can configure multiple Ollama instances by using different key names with the explicit `provider` field:
```lua
config.set {
ai = {
providers = {
ollamaLocal = {
provider = "ollama", -- Explicit provider type
baseUrl = "http://localhost:11434/v1",
useProxy = false
},
ollamaRemote = {
provider = "ollama",
baseUrl = "http://my-server:11434/v1",
useProxy = true
}
}
}
}
```
## Ollama Server Configuration
When running Ollama, these are some useful environment variables/options:
- `OLLAMA_ORIGINS` - Allow SilverBullet's hostname _if not using `useProxy = true`_.
- `OLLAMA_HOST` - By default, Ollama only listens on 127.0.0.1. If you access Ollama from a different machine, this may need to be changed.
- `OLLAMA_CONTEXT_LENGTH` - By default, Ollama only uses a 4k context window. You'll most likely want to increase this.
- `OLLAMA_FLASH_ATTENTION=1` - Can reduce memory usage as context size grows.
- `OLLAMA_KV_CACHE_TYPE=q8_0` - Quantizes the K/V context cache to reduce its memory usage.
Please see [docs.ollama.com/faq](https://docs.ollama.com/faq) for more information.
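For example, a typical invocation combining several of these variables (the origin URL is a placeholder for your SilverBullet hostname):
```shell
OLLAMA_HOST=0.0.0.0 \
OLLAMA_ORIGINS="https://silverbullet.example.com" \
OLLAMA_CONTEXT_LENGTH=32768 \
OLLAMA_FLASH_ATTENTION=1 \
OLLAMA_KV_CACHE_TYPE=q8_0 \
ollama serve
```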
### OpenAI
[OpenAI](https://platform.openai.com/) is the default provider for text models.
For other OpenAI-compatible services, see:
- [[Providers/Ollama]] - Local models
- [[Providers/OpenRouter]] - Access many models via one API
- [[Providers/Mistral Ai]] - Mistral AI
- [[Providers/Perplexity Ai]] - Perplexity AI
## Provider Configuration (Recommended)
```lua
local openai_key = "sk-your-openai-key-here"
config.set {
ai = {
providers = {
openai = {
apiKey = openai_key,
preferredModels = {"gpt-4o", "gpt-4o-mini"}
}
},
-- Optional: auto-select a default model on startup
defaultTextModel = "openai:gpt-4o"
}
}
```
With this configuration:
- Run **"AI: Select Text Model"** to see all available OpenAI models
- **"AI: Refresh Model List"** updates the cached model list
- `preferredModels` appear first in the picker (marked with ★)
## Legacy Configuration
!!! warning "Deprecated"
The `textModels` array configuration is deprecated. Please migrate to the `providers` config above.
```lua
config.set {
ai = {
keys = {
OPENAI_API_KEY = "your-openai-key-here"
},
textModels = {
{name = "gpt-4o", provider = "openai", modelName = "gpt-4o"},
{name = "gpt-4o-mini", provider = "openai", modelName = "gpt-4o-mini"}
}
}
}
```
## Provider Options
| Option | Description |
|--------|-------------|
| `apiKey` | Your OpenAI API key |
| `baseUrl` | Custom API endpoint (default: `https://api.openai.com/v1`) |
| `useProxy` | Use SilverBullet's proxy (default: `true`) |
| `preferredModels` | Array of model names to show first in the picker |
## Cost
While this plug is free to use, OpenAI does charge for API usage. Please see their [pricing page](https://openai.com/pricing) for the cost of the various APIs.
See [OpenAI's list of models](https://platform.openai.com/docs/models/overview) for available model names.
### OpenRouter
[OpenRouter](https://openrouter.ai/) provides access to many different models, some of which are even free. Since it exposes all LLMs through an OpenAI-compatible API, we use the `openai` provider type.
## Provider Configuration (Recommended)
```lua
local openrouter_key = "your-openrouter-api-key-here"
config.set {
ai = {
providers = {
openrouter = {
provider = "openai", -- OpenRouter uses OpenAI-compatible API
apiKey = openrouter_key,
baseUrl = "https://openrouter.ai/api/v1",
preferredModels = {"anthropic/claude-3.5-sonnet", "openai/gpt-4o"}
}
},
-- Optional: auto-select a default model on startup
defaultTextModel = "openrouter:anthropic/claude-3.5-sonnet"
}
}
```
With this configuration:
- Run **"AI: Select Text Model"** to see all available OpenRouter models
- **"AI: Refresh Model List"** updates the cached model list
- `preferredModels` appear first in the picker (marked with ★)
## Legacy Configuration
!!! warning "Deprecated"
The `textModels` array configuration is deprecated. Please migrate to the `providers` config above.
```lua
config.set {
ai = {
keys = {
OPENROUTER_API_KEY = "your-openrouter-api-key-here"
},
textModels = {
{
name = "openrouter-auto",
modelName = "openrouter/auto",
provider = "openai",
baseUrl = "https://openrouter.ai/api/v1",
secretName = "OPENROUTER_API_KEY"
},
{
name = "openrouter-mistral-7b-instruct",
modelName = "mistralai/mistral-7b-instruct:free",
provider = "openai",
baseUrl = "https://openrouter.ai/api/v1",
secretName = "OPENROUTER_API_KEY"
}
}
}
}
```
## Provider Options
| Option | Description |
|--------|-------------|
| `provider` | Must be `"openai"` (OpenRouter uses OpenAI-compatible API) |
| `apiKey` | Your OpenRouter API key |
| `baseUrl` | Must be `"https://openrouter.ai/api/v1"` |
| `preferredModels` | Array of model names to show first in the picker |
Get your API key from [OpenRouter Keys](https://openrouter.ai/keys).
See [OpenRouter Models](https://openrouter.ai/docs#models) for a list of available models.
### Perplexity Ai
[Perplexity AI](https://www.perplexity.ai/) is a hosted service that offers an OpenAI-compatible API with [various models](https://docs.perplexity.ai/docs/model-cards).
## Provider Configuration (Recommended)
```lua
local perplexity_key = "your-perplexity-api-key-here"
config.set {
ai = {
providers = {
perplexity = {
provider = "openai", -- Perplexity uses OpenAI-compatible API
apiKey = perplexity_key,
baseUrl = "https://api.perplexity.ai",
preferredModels = {"sonar-pro", "sonar"}
}
},
-- Optional: auto-select a default model on startup
defaultTextModel = "perplexity:sonar-pro"
}
}
```
With this configuration:
- Run **"AI: Select Text Model"** to see all available Perplexity models
- **"AI: Refresh Model List"** updates the cached model list
- `preferredModels` appear first in the picker (marked with ★)
## Legacy Configuration
!!! warning "Deprecated"
The `textModels` array configuration is deprecated. Please migrate to the `providers` config above.
```lua
config.set {
ai = {
keys = {
PERPLEXITY_API_KEY = "your-perplexity-api-key-here"
},
textModels = {
{
name = "sonar-medium-online",
modelName = "sonar-medium-online",
provider = "openai",
baseUrl = "https://api.perplexity.ai",
secretName = "PERPLEXITY_API_KEY"
}
}
}
}
```
## Provider Options
| Option | Description |
|--------|-------------|
| `provider` | Must be `"openai"` (Perplexity uses OpenAI-compatible API) |
| `apiKey` | Your Perplexity API key |
| `baseUrl` | Must be `"https://api.perplexity.ai"` |
| `preferredModels` | Array of model names to show first in the picker |
Get your API key from [the Perplexity web console](https://www.perplexity.ai/settings/api).
See [Perplexity Model Cards](https://docs.perplexity.ai/docs/model-cards) for available models.
## Optional
### Changelog
For the full changelog, please refer to the individual release notes on https://github.com/justyns/silverbullet-ai/releases or the commits themselves.
This page is a brief overview of each version.
## 0.6.5 (unreleased)
- Correctly infer provider type in more cases, e.g. ollamaLocal -> ollama
- Add configurable `timeout` option per provider for slow models
- Add model name to assistant chat header, click to change the model for the session
- Click RAG icon to enable/disable embeddings search temporarily
- Display agent name and click it to change to a different agent
- Remember the state of the assistant chat panel and re-open it on page reload
- Add support for displaying thinking/reasoning blocks with Ollama
## 0.6.4 (2026-01-13)
- Image generation supports gpt-image models now
- Support provider-based config for image and embedding models; you can now define a single provider and use it for text, images, and embeddings
- Add new `defaultEmbeddingModel` and `defaultImageModel` config options
- Support batch embedding generation for openai and ollama
- Show pricing info for remote APIs in the model picker
- Add new `AI: Reset Selected Models` command
## 0.6.3 (2026-01-10)
- Embeddings now include page title and section headers. Requires re-indexing to take effect.
- Benchmark command now shows progress
- Reuse SB's theming where possible so that the UI is more consistent
- Add path-based permissions for agents (`allowedReadPaths`, `allowedWritePaths`) to restrict tool access, wiki-link context, and current page context to specific folders
- Add an explicit "AI: Refresh Config" command
- Improve potential performance issue where we still do unnecessary inits and datastore reads
## 0.6.2 (2026-01-05)
- Improvements to the default system prompt to use fewer tokens
- Generate [/llms.txt](https://ai.silverbullet.md/llms.txt) and [/llms-full.txt](https://ai.silverbullet.md/llms-full.txt)
- Agents now inherit the base system prompt by default, but can be toggled off with `inheritBasePrompt`
- Fix potential performance issue where `page:index` events caused unnecessary async work when embeddings are disabled
- Fix potential performance issue where the config was re-read even when there were no changes
- Parallelize model discovery and cache Ollama model info to avoid redundant API calls
- Add new `Reindex All Embeddings` command
- Fix Embeddings Search virtualpage
- Fix error on chat panel when no text model selected
- Add RAG status indicator in chat panel header, show embeddings context like tool calls, move them to their own messages
## 0.6.1 (2026-01-02)
- Fix error caused by tool messages being enriched
- ctrl+shift+a actually toggles the assistant panel now instead of only opening it
- Add JSON schema definitions for `ai.providers` and `ai.defaultTextModel` config
- Clicking wiki-links in the assistant panel now navigates without page refresh
- Auto-complete of page links should filter properly now
## 0.6.0 (2026-01-02)
- Added a side-panel for AI chat (`AI: Open Assistant` command)
- Tool calls rendered as expandable blocks
- Strip tool calls from chat history to reduce context size (but they are stored in local storage temporarily)
- Default context including current page name and content
- Customizable chat context via Space Lua (e.g. current date or other dynamic values)
- Track token usage against model's token limit (caches LiteLLM's public json)
- Add a modal version of the same assistant chat
- New agent system for customizable personas with specific context and tools (e.g. "silverbullet lua expert")
- Create custom agents via Space Lua (`ai.agents.myagent = {...}`)
- Create page-based agents with `meta/template/aiAgent` tag
- Tool filtering with whitelist (`tools`) or blacklist (`toolsExclude`)
- `AI: Select Agent` and `AI: Clear Agent` commands
- New tool system allowing interactions with your space
- Tools defined via Space Lua in `ai.tools` table
- Approval system for tools that modify data (shows diff preview)
- Built-in tools:
- `read_note` - Read page content or specific sections
- `list_pages` - List pages with filtering options
- `get_page_info` - Get page metadata
- `create_note` - Create new pages
- `update_note` - Update page content (replace, append, prepend)
- `search_replace` - Find and replace text
- `update_frontmatter` - Update YAML frontmatter keys
- `rename_note` - Rename pages with backlink updates
- `navigate` - Navigate to pages or positions
- `eval_lua` - Execute Lua expressions
- `ask_user` - Get immediate feedback from the user
- Updated default system prompt to include instructions for tools when enabled
- Update system prompt to include basic SB formatting hints and docs links
- Add support for structured output
- Connectivity test now includes structured output and tool usage tests
- Migrated commands to Space Lua
- AI: Suggest Page Name
- AI: Generate tags for note
- AI: Generate Note FrontMatter
- AI: Enhance Note
- Created an initial version of a benchmark system to verify whether specific models can correctly use tools for sbai
- Add a new provider-based configuration to configure a provider like Ollama once and load models dynamically, instead of configuring each model separately
- Added a `defaultTextModel` option
---
## 0.5.0 (2025-12-15)
### SilverBullet v2 Support
- **BREAKING**: Now requires SilverBullet v2.3.0 or later
- Migrated from SETTINGS/SECRETS pages to Space Lua configuration (`config.set {}`)
- API keys now configured via `ai.keys` in config (e.g., `ai.keys.OPENAI_API_KEY`)
- Uses `system.getConfig()` instead of deprecated `system.getSpaceConfig()`
- Removed all server vs client logic - everything runs in the browser now
- Moved embedding search and connectivity test pages to new virtual page API
- See [[SilverBullet v2 Migration Guide]] for upgrade instructions
### Proxy Configuration
- Added `useProxy` option to all provider types (text, embedding, image)
- When `useProxy: false`, requests bypass SilverBullet's server proxy and go directly from the browser
- Useful for local services like Ollama running on the same machine as the browser
- SSE streaming now properly transforms URLs and headers for the proxy
- **Note**: `useProxy: true` requires Silverbullet >= 2.3.1 or Edge as of 2025-12-11 for [PR #1721](https://github.com/silverbulletmd/silverbullet/pull/1721)
### Removed deprecated stuff
- Removed deprecated commands (use [[Templated Prompts]] instead):
- `AI: Summarize Note and open summary`
- `AI: Insert Summary`
- `AI: Call OpenAI with Note as context`
- `AI: Stream response with selection or note as prompt`
- Removed deprecated config settings:
- `ai.openAIBaseUrl` - use `baseUrl` in model config instead
- `ai.dallEBaseUrl` - use `baseUrl` in model config instead
- `ai.requireAuth` - use `requireAuth` in model config instead
- `ai.secretName` - use `ai.keys.*` instead
- `ai.provider` - use `provider` in model config instead
### Library Changes
- **BREAKING**: The AICore Library is now merged into the main plug - no separate install needed
- Converted library scripts from Space Script to Space Lua
- Convert AIPrompts to examples and add support for defining them using Lua
### Other Changes
- Better logging when SSE events have errors
- Add support for retrieving list of models from OpenAI and Ollama providers
- Add a Connectivity Test command and page to test whether an API is working
- Docs site now uses mkdocs only (removed deprecated silverbullet-pub :( )
- Plug is now distributed via GitHub Releases only (`ghr:` prefix in Library Manager); the compiled .js file and the compiled Lua library will no longer be checked into git.
---
## 0.4.1 (2024-11-15)
- Upgrade to deno 2
- Upgrade to Silverbullet 0.10.1
- Upgrade to deno std@0.224.0
---
## 0.4.0 (2024-09-16)
- Use a separate queue for indexing embeddings and summaries, to prevent blocking the main SB indexing thread
- Refactor to use JSR for most Silverbullet imports, and lots of related changes
- Reduced bundle size
- Add support for [space-config](https://silverbullet.md/Space%20Config)
- Add support for [[Templated Prompts|Post Processor]] functions in [[Templated Prompts]].
- AICore Library: Updated all library files to have the meta tag.
- AICore Library: Add space-script functions to be used as post processors:
- **indentOneLevel** - Indent entire response one level deeper than the previous line.
- **removeDuplicateStart** - Remove the first line from the response if it matches the line before the response started.
- **convertToBulletList** - Convert response to a markdown list.
- **convertToTaskList** - Convert response to a markdown list of tasks.
- Add new insertAt options for [[Templated Prompts]]:
- **replace-selection**: Replaces the currently selected text with the generated content. If no text is selected, it behaves like the 'cursor' option.
- **replace-paragraph**: Replaces the entire paragraph (or item) where the cursor is located with the generated content.
- **start-of-line**: Inserts at the start of the current line.
- **end-of-line**: Inserts at the end of the current line.
- **start-of-item**: Inserts at the start of the current item (list item or task).
- **end-of-item**: Inserts at the end of the current item.
- **new-line-above**: Inserts on a new line above the current line.
- **new-line-below**: Inserts on a new line below the current line.
- **replace-line**: Replaces the current line with the generated content.
- **replace-smart**: Intelligently replaces content based on context (selected text, current item, or current paragraph).
- AICore Library: Add `aiSplitTodo` slash command and `AI Split Task` templated prompt to split a task into smaller subtasks.
- AICore Library: Add template prompts for rewriting text, mostly as a demo for the `replace-smart` insertAt option.
- Remove need for duplicate `description` frontmatter field for templated prompts.
- Revamp [docs website](https://ai.silverbullet.md) to use mkdocs and mkdocs-material.
---
## 0.3.2
- Expose searchCombinedEmbeddings function to AICore library for templates to use
- Add [[Providers/Ollama]] text/llm provider as a wrapper around openai provider
---
## 0.3.1
- Set searchEmbeddings to false by default
- Fix templated prompts not rendering as a template when not using chat-style prompts
- Add new searchEmbeddings function to AICore library for templates to use
---
## 0.3.0
- Don't index and generate embeddings for pages in Library/
- Add new [[Commands/AI: Enhance Note]] command to call existing `AI: Tag Note` and `AI: Suggest Page Name` commands on a note, and the new frontmatter command
- Add new [[Commands/AI: Generate Note FrontMatter]] command to extract useful facts from a note and add them to the frontmatter
- Always include the note’s current name in [[Commands/AI: Suggest Page Name]] as an option
- Log how long it takes to index each page when generating embeddings
- Improve the layout and UX of the [[Commands/AI: Search]] page
- Fix the `AI: Search` page so it works in sync/online mode, requires Silverbullet >= 0.8.3
- Fix bug preventing changing models in sync mode
- Add [[Templated Prompts#Chat-style prompts|Chat-style prompts]] support in Templated Prompts
- Fix bug when embeddingModels is undefined
---
## 0.2.0
* Vector search and embeddings generation by [@justyns](https://github.com/justyns) in [#37](https://github.com/justyns/silverbullet-ai/pull/37)
* Enrich chat messages with RAG by searching our local embeddings by [@justyns](https://github.com/justyns) in [#38](https://github.com/justyns/silverbullet-ai/pull/38)
* Refactor: Re-organize providers, interfaces, and types by [@justyns](https://github.com/justyns) in [#39](https://github.com/justyns/silverbullet-ai/pull/39)
* Add try/catch to tests by [@justyns](https://github.com/justyns) in [#40](https://github.com/justyns/silverbullet-ai/pull/40)
* Fix bug causing silverbullet to break when aiSettings isn't configured at all by [@justyns](https://github.com/justyns) in [#42](https://github.com/justyns/silverbullet-ai/pull/42)
* Add option to generate summaries of each note and index them. by [@justyns](https://github.com/justyns) in [#43](https://github.com/justyns/silverbullet-ai/pull/43)
* Disable indexing on clients, index only on server by [@justyns](https://github.com/justyns) in [#44](https://github.com/justyns/silverbullet-ai/pull/44)
* Set index and search events to server only by [@justyns](https://github.com/justyns) in [#45](https://github.com/justyns/silverbullet-ai/pull/45)
---
## 0.1.0
- **BREAKING**: Removed deprecated parameters: summarizePrompt, tagPrompt, imagePrompt, temperature, maxTokens, defaultTextModel, backwardsCompat. Except for defaultTextModel, these were no longer used.
- New [[Commands/AI: Suggest Page Name]] command
- Bake queries and templates in chat by [@justyns](https://github.com/justyns) in [#30](https://github.com/justyns/silverbullet-ai/pull/30)
- Allow completely overriding page rename system prompt, improve ux by [@justyns](https://github.com/justyns) in [#31](https://github.com/justyns/silverbullet-ai/pull/31)
- Always select a model if it's the only one in the list by [@justyns](https://github.com/justyns) in [#33](https://github.com/justyns/silverbullet-ai/pull/33)
- Pass all existing tags to generate tag command, allow user to add their own instructions too
---
## 0.0.11
- Support for custom chat message enrichment functions, see [[Configuration/Custom Enrichment Functions]]
---
## 0.0.10
- Add WIP docs and docs workflow by [@justyns](https://github.com/justyns) in [#20](https://github.com/justyns/silverbullet-ai/pull/20)
- Enable slash completion for ai prompts
- Don't die if clientStore.get doesn't work, like in cli mode
---
## 0.0.9
- Add github action for deno build-release by [@justyns](https://github.com/justyns) in [#18](https://github.com/justyns/silverbullet-ai/pull/18)
- Add ability to configure multiple text and image models, and switch between them by [@justyns](https://github.com/justyns) in [#17](https://github.com/justyns/silverbullet-ai/pull/17)
- Fix error when imageModels is undefined in SETTINGS by [@justyns](https://github.com/justyns) in [#22](https://github.com/justyns/silverbullet-ai/pull/22)
- Re-add summarizeNote and insertSummary commands, fixes [#19](https://github.com/justyns/silverbullet-ai/issues/19). Also add non-streaming support to gemini by [@justyns](https://github.com/justyns) in [#24](https://github.com/justyns/silverbullet-ai/pull/24)
---
## 0.0.8
- Add wikilink enrichment to chat messages for [#9](https://github.com/justyns/silverbullet-ai/issues/9) by [@justyns](https://github.com/justyns) in [#12](https://github.com/justyns/silverbullet-ai/pull/12)
- Add a newline when the first message from the LLM is either a code fence or markdown block by [@justyns](https://github.com/justyns) in [#13](https://github.com/justyns/silverbullet-ai/pull/13)
---
## 0.0.7
- Added Perplexity AI API info by [@zefhemel](https://github.com/zefhemel) in [#6](https://github.com/justyns/silverbullet-ai/pull/6)
- Add Custom Instructions for chat by [@justyns](https://github.com/justyns) in [#8](https://github.com/justyns/silverbullet-ai/pull/8)
- Interfaces refactor by [@justyns](https://github.com/justyns) in [#10](https://github.com/justyns/silverbullet-ai/pull/10)
- Add experimental Google Gemini support for [#3](https://github.com/justyns/silverbullet-ai/issues/3) by [@justyns](https://github.com/justyns) in [#11](https://github.com/justyns/silverbullet-ai/pull/11)
---
## 0.0.6
- Add a new command to prompt for a template to execute and render as a prompt
- Add insertAt option for prompt templates (page-start, page-end, cursor)
- Make the cursor behave nicer in interactive chats, fixes [#1](https://github.com/justyns/silverbullet-ai/issues/1)
- Remove 'Contacting LLM' notification and replace it with a loading placeholder for now [#1](https://github.com/justyns/silverbullet-ai/issues/1)
- Move some of the flashNotifications to console.log instead
- Dall-e: Use finalFileName instead of the prompt to prevent long prompts from breaking the markdown
- Add queryOpenAI function to use in templates later
- Update Readme for templated prompts, build potential release version
---
## 0.0.5
- Rename test stream command
- Add better error handling and notifications
- Misc refactoring to make the codebase easier to work on
- Automatically reload config from SETTINGS and SECRETS page
- Update readme for ollama/mistral.ai examples
- Use editor.insertAtPos instead of insertAtCursor to make streaming text more sane
- Add requireAuth variable to fix cors issue on chrome w/ ollama
- Remove redundant commands, use streaming for others
- Let chat on page work on any page. Add keyboard shortcut for it
- Move cursor to follow streaming response
---
## 0.0.4
- Add command for 'Chat on current page' to have an interactive chat on a note page
- Use relative image path name for dall-e generated images
- First attempt at supporting streaming responses from openai directly into the editor
---
## 0.0.3
- Add a new command to call openai using a user note or selection as the prompt, ignoring built-in prompts
- Add support for changing the openai-compatible api url and using a local LLM like Ollama
- Update jsdoc descriptions for each command and add to readme
- Save dall-e generated image locally
- Add script to update readme automatically
- Save and display the revised prompt from dall-e-3
---
### Development
## Build
To build this plug, make sure you have [SilverBullet installed](https://silverbullet.md/Install). Then, build the plug with:
```shell
deno task build
```
Or, to watch for changes and rebuild automatically:
```shell
deno task watch
```
Then, copy the resulting `.plug.js` file into your space's `_plug` folder. Or build and copy in one command:
```shell
deno task build && cp *.plug.js /my/space/_plug/
```
SilverBullet will automatically sync and load the new version of the plug (or speed up this process by running the {[Sync: Now]} command).
## Testing
### Unit Tests
Run unit tests with Deno's built-in test runner:
```shell
deno task test
```
This runs all `*.test.ts` files and generates coverage reports in `cov_profile/`.
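To inspect that coverage data in a readable form, Deno's built-in coverage tool can summarize it (assuming a recent Deno version):
```shell
deno coverage cov_profile
```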
### E2E Tests (Browser Testing)
The project includes end-to-end tests using [Playwright](https://playwright.dev/) to test the plugin in a real browser.
#### Setup
First-time setup requires installing Playwright browsers:
```shell
deno task playwright:install
```
#### Running E2E Tests
Run all E2E tests:
```shell
deno task test:e2e
```
Run tests with interactive UI:
```shell
deno task test:e2e:ui
```
Run tests in headed mode (see the browser):
```shell
deno task test:e2e:headed
```
Run specific browser:
```shell
deno task test:e2e --project=chromium
deno task test:e2e --project=firefox
deno task test:e2e --project="Mobile Chrome"
```
#### Test Structure
E2E tests are located in `e2e-tests/tests/`. See `e2e-tests/README.md` for detailed documentation.
The tests automatically:
- Start a SilverBullet instance on port 3000
- Load the test space from `test-space/`
- Run tests across multiple browsers and viewports
- Capture screenshots and videos on failure
For more information, see the [E2E Testing README](../e2e-tests/README.md).
## Docs
Documentation is located in the `docs/` directory and rendered using [mkdocs](https://github.com/mkdocs/mkdocs).
To make changes, run SilverBullet locally against the docs directory: `silverbullet docs/`
If you want to see changes in real-time, open up two terminals and run these two commands:
- `mkdocs serve -a localhost:9000`
- `find docs -name \*.md -type f | egrep -v 'public' | entr bash ./render-docs.sh`
The first starts a local mkdocs development server. The second uses [entr](https://github.com/eradman/entr) to re-run `render-docs.sh` every time a markdown file changes inside the `docs/` directory.
Markdown files inside of docs/ can also be manually edited using any editor.
### Recommended Models
This page lists AI models and their compatibility with silverbullet-ai features, particularly tool/function calling, streaming, and structured responses. If a model does not support tool calling, related features will not work, but you can still use the basic chat support with those models.
**Last updated**: 2026-01-06
**Note**: This is not a very thorough benchmark, and was mostly meant as a quick sanity check to see if certain models will work at all. Please consider other benchmarks like [Aider's Polygot leaderboard](https://aider.chat/docs/leaderboards/).
## Model Compatibility
| Model | Provider | Stream | JSON | Schema | Tools | Read | Section | List | Update | Replace | No Tool | Score | Notes |
|-------|----------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|-------|-------|
| gpt-4o | OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 10/10 | |
| gpt-4o-mini | OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 10/10 | |
| o3-mini | OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 10/10 | |
| gpt-5-mini | OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 10/10 | |
| gpt-5-nano | OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 10/10 | |
| gpt-5.1 | OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 10/10 | |
| gpt-5.2 | OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 10/10 | |
| qwen2.5:32b | Ollama | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 10/10 | |
| qwen3:8b | Ollama | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 10/10 | |
| qwen3:14b | Ollama | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 10/10 | |
| qwen2.5:14b | Ollama | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 9/10 | |
| qwen2.5:7b | Ollama | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 9/10 | |
| gpt-5 | OpenAI | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 8/10 | API calls timed out repeatedly |
| hermes3:8b | Ollama | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 7/10 | |
| mistral-nemo:12b | Ollama | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | 7/10 | |
| llama3.2:3b | Ollama | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 6/10 | |
| llama3.2:latest | Ollama | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 6/10 | |
| qwen2.5-coder:7b | Ollama | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 4/10 | No native tool support |
| qwen2.5-coder:32b | Ollama | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 4/10 | No native tool support |
| granite3.2:8b | Ollama | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 4/10 | No native tool support |
| phi4:14b | Ollama | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 3/10 | No native tool support |
| gemma2:9b | Ollama | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 3/10 | No native tool support |
| deepseek-coder:6.7b | Ollama | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 3/10 | No native tool support |
| deepseek-r1:8b | Ollama | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1/10 | No native tool support |
### Legend
- ✅ Pass - Test completed successfully
- ⚠️ Warning - Completed with issues
- ❌ Error - Failed or not supported
### Test Descriptions
| Test | Description |
|------|-------------|
| Stream | Streaming response support |
| JSON | JSON output mode |
| Schema | Structured output with schema validation |
| Tools | Basic tool/function calling support |
| Read | Read a page by name |
| Section | Read a specific section from a page |
| List | List pages in a folder |
| Update | Append content to a section |
| Replace | Search and replace text in a page |
| No Tool | Correctly answers without using tools when not needed |
## Notes
### Running Your Own Benchmark
To test a model's compatibility:
1. Select the model using **AI: Select Text Model from Config**
2. Run **AI: Run Benchmark**
3. View results on the **🧪 AI Benchmark** page
### Contributing Results
If you've tested a model not listed here, please contribute your results via a GitHub issue or pull request.
### SilverBullet v2 Migration Guide
This guide covers migrating silverbullet-ai from SilverBullet v1 to v2. The main change is moving from SETTINGS/SECRETS pages to Space Lua configuration.
For general SilverBullet v2 migration steps, see the official [Migrate from v1](https://silverbullet.md/Migrate%20from%20v1) guide.
## Quick Steps
1. **Update SilverBullet**: Upgrade to v2.3.0+
2. **Remove old plugin**: Delete `_plug/silverbullet-ai.plug.js` if present
3. **Install plugin**: Use Library Manager
4. **Move Configuration**: Migrate from SETTINGS/SECRETS to Space Lua
## Plugin Installation
1. Run `Library: Install` command
2. Enter one of the following:
**Latest release:**
```
ghr:justyns/silverbullet-ai/PLUG.md
```
**Specific release:**
```
ghr:justyns/silverbullet-ai@0.5.0/PLUG.md
```
See [GitHub Releases](https://github.com/justyns/silverbullet-ai/releases) for available versions.
## Configuration Migration
### Move API Keys
**Old (SECRETS page):**
```yaml
OPENAI_API_KEY: "sk-..."
GEMINI_API_KEY: "ai-..."
```
**New (Space Lua config):**
```lua
config.set {
ai = {
keys = {
OPENAI_API_KEY = "sk-...",
GEMINI_API_KEY = "ai-..."
}
}
}
```
### Move AI Settings
**Old (SETTINGS page):**
```yaml
ai:
textModels:
- name: gpt-4o
provider: openai
modelName: gpt-4o
```
**New (Space Lua config):**
```lua
config.set {
ai = {
textModels = {
{name = "gpt-4o", provider = "openai", modelName = "gpt-4o"}
}
}
}
```
## Complete Example
```lua
config.set {
ai = {
keys = {
OPENAI_API_KEY = "sk-..."
},
textModels = {
{name = "gpt-4o", provider = "openai", modelName = "gpt-4o"},
{
name = "ollama-llama",
provider = "ollama",
modelName = "llama3",
baseUrl = "http://localhost:11434/v1",
requireAuth = false,
useProxy = false -- Bypass SilverBullet's proxy for local services
}
},
imageModels = {
{name = "dall-e-3", provider = "dalle", modelName = "dall-e-3"}
},
embeddingModels = {
{name = "text-embedding-3-small", provider = "openai", modelName = "text-embedding-3-small"}
},
indexEmbeddings = false,
chat = {
userInformation = "I'm a software developer who likes taking notes.",
userInstructions = "Give short, concise responses."
}
}
}
```
## Testing
After migration:
1. Run `AI: Connectivity Test`
2. Run `AI: Select Text Model from Config`
3. Try `AI: Chat on current page`
## Troubleshooting
- **Plugin won't load**: Check SilverBullet version is 2.3.0+
- **API errors**: Verify API keys are set correctly under `ai.keys`
- **Config errors**: Check Space Lua syntax (use `=` not `:`)
- **Local models not working**: Add `useProxy = false` to bypass SilverBullet's proxy and connect directly from the browser
### Space Lua
Call the LLM from your own Space Lua code using the `silverbullet-ai.chat` function.
## Basic Usage
```lua
local result = system.invokeFunction("silverbullet-ai.chat", {
messages = {
{role = "user", content = "What is the capital of France?"}
}
})
print(result.response) -- "The capital of France is Paris."
```
## Options
| Option | Type | Description |
|--------|------|-------------|
| `messages` | array | Chat messages with `role` and `content` |
| `systemPrompt` | string | Optional system prompt |
| `useTools` | boolean | Enable AI tools (default: false) |
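For instance, the optional `systemPrompt` lets you steer tone without adding an extra message to the conversation; a minimal sketch:
```lua
local result = system.invokeFunction("silverbullet-ai.chat", {
  systemPrompt = "You are a terse assistant. Answer in one sentence.",
  messages = {
    {role = "user", content = "Explain what SilverBullet is."}
  }
})
print(result.response)
```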
## With Tools
When `useTools` is enabled, the LLM can use any tools defined in `ai.tools`:
```lua
local result = system.invokeFunction("silverbullet-ai.chat", {
messages = {
{role = "user", content = "Read my Daily Notes page and summarize it"}
},
useTools = true
})
print(result.response) -- The AI's summary
print(result.toolCalls) -- Tools called (e.g., "> 🔧 read_note(...) → ✓")
```
## Example: Custom Command
```lua
command.define {
name = "AI: Summarize Page",
run = function()
local content = editor.getText()
local result = system.invokeFunction("silverbullet-ai.chat", {
messages = {
{role = "user", content = "Summarize in 3 bullets:\n\n" .. content}
}
})
editor.flashNotification(result.response)
end
}
```
## Multi-turn Conversations
```lua
local result = system.invokeFunction("silverbullet-ai.chat", {
messages = {
{role = "user", content = "My name is Alice"},
{role = "assistant", content = "Nice to meet you, Alice!"},
{role = "user", content = "What's my name?"}
}
})
-- result.response: "Your name is Alice."
```
### Space Style
Some of this plug's UI can be customized by overriding CSS variables using SilverBullet's [Space Style](https://silverbullet.md/Space%20Style) feature.
## CSS Variables
The following CSS variables control panel appearance. If not set, they fall back to SilverBullet's default theme colors.
### Approval Buttons
| Variable | Description |
|----------|-------------|
| `--ai-approve-bg` | Background color for approve buttons (defaults to accent color) |
| `--ai-approve-text` | Text color for approve buttons |
| `--ai-reject-bg` | Background color for reject buttons |
| `--ai-reject-text` | Text color for reject buttons |
| `--ai-reject-border` | Border color for reject buttons |
### Diff Preview
| Variable | Description |
|----------|-------------|
| `--ai-diff-add-bg` | Background color for added lines |
| `--ai-diff-add-text` | Text color for added lines |
| `--ai-diff-remove-bg` | Background color for removed lines |
| `--ai-diff-remove-text` | Text color for removed lines |
## Example
Create a page in your space with a `space-style` code block:
~~~markdown
```space-style
:root {
--ai-approve-bg: #22c55e;
--ai-approve-text: #ffffff;
--ai-reject-bg: transparent;
--ai-reject-text: #ef4444;
--ai-reject-border: #ef4444;
--ai-diff-add-bg: #dcfce7;
--ai-diff-add-text: #166534;
--ai-diff-remove-bg: #fee2e2;
--ai-diff-remove-text: #991b1b;
}
```
~~~
For dark mode overrides, use the `[data-theme="dark"]` selector:
~~~markdown
```space-style
[data-theme="dark"] {
--ai-diff-add-bg: #14532d;
--ai-diff-add-text: #bbf7d0;
--ai-diff-remove-bg: #7f1d1d;
--ai-diff-remove-text: #fecaca;
}
```
~~~
## SilverBullet References
See https://silverbullet.md for full SilverBullet documentation.
- [Space Lua](https://silverbullet.md/Space%20Lua/): Lua scripting system for SilverBullet
- [Lua Integrated Query](https://silverbullet.md/Space%20Lua/Lua%20Integrated%20Query/): Query language for data
- [Widget](https://silverbullet.md/Space%20Lua/Widget/): Custom UI widgets
- [Template](https://silverbullet.md/Template/): Template system reference
- [Library](https://silverbullet.md/Library/): Library system and plugs
- [Event](https://silverbullet.md/Event/): SilverBullet events reference
- [Frontmatter](https://silverbullet.md/Frontmatter/): Page metadata format
- [Object](https://silverbullet.md/Object/): Object/attribute system