Changelog

For the full changelog, please refer to the individual release notes on https://github.com/justyns/silverbullet-ai/releases or the commits themselves.

This page is a brief overview of each version.

Unreleased

  • Nothing yet.

0.4.0 (2024-09-16)

  • Use a separate queue for indexing embeddings and summaries, so it doesn't block the main Silverbullet indexing thread
  • Refactor to use JSR for most Silverbullet imports, along with many related changes
  • Reduced bundle size
  • Add support for space-config
  • Add support for Post Processor functions in Templated Prompts.
  • AICore Library: Updated all library files to have the meta tag.
  • AICore Library: Add space-script functions to be used as post processors:
    • indentOneLevel - Indent entire response one level deeper than the previous line.
    • removeDuplicateStart - Remove the first line from the response if it matches the line before the response started.
    • convertToBulletList - Convert response to a markdown list.
    • convertToTaskList - Convert response to a markdown list of tasks.
  • Add new insertAt options for Templated Prompts (see the frontmatter sketch after this version's notes):
    • replace-selection: Replaces the currently selected text with the generated content. If no text is selected, it behaves like the 'cursor' option.
    • replace-paragraph: Replaces the entire paragraph (or item) where the cursor is located with the generated content.
    • start-of-line: Inserts at the start of the current line.
    • end-of-line: Inserts at the end of the current line.
    • start-of-item: Inserts at the start of the current item (list item or task).
    • end-of-item: Inserts at the end of the current item.
    • new-line-above: Inserts on a new line above the current line.
    • new-line-below: Inserts on a new line below the current line.
    • replace-line: Replaces the current line with the generated content.
    • replace-smart: Intelligently replaces content based on context (selected text, current item, or current paragraph).
  • AICore Library: Add aiSplitTodo slash command and ^Library/AICore/AIPrompt/AI Split Task templated prompt to split a task into smaller subtasks.
  • AICore Library: Add template prompts for rewriting text, mostly as a demo for the replace-smart insertAt option.
  • Remove need for duplicate description frontmatter field for templated prompts.
  • Revamp the docs website to use mkdocs (with mkdocs-material) alongside silverbullet-pub, which still handles Silverbullet-specific features like templates and queries.
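
For reference, the new options above come together in a templated prompt's frontmatter roughly as in the sketch below. This is only a sketch: the aiprompt block, slashCommand, and postProcessors key names are assumptions about the plugin's template conventions rather than something stated in this changelog, while the insertAt values and post processor function names are the ones listed above.

```yaml
# Sketch of a templated prompt's frontmatter (key names are assumptions)
tags: template, aiPrompt
aiprompt:
  description: "Rewrite the selected text"   # duplicate description field no longer required (0.4.0)
  slashCommand: airewrite                    # assumed field for the prompt's slash command name
  insertAt: replace-smart                    # any of the insertAt options listed above
  postProcessors:                            # assumed key; AICore space-script post processors
    - removeDuplicateStart
    - convertToBulletList
```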

0.3.2

  • Expose searchCombinedEmbeddings function to AICore library for templates to use
  • Add Ollama text/llm provider as a wrapper around openai provider
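
Assuming the new provider wraps the existing openai settings, a model entry for a local Ollama instance might look something like the sketch below. The textModels and baseUrl keys and the exact entry structure are assumptions; requireAuth is the variable mentioned in the 0.0.5 notes.

```yaml
ai:
  textModels:
    - name: ollama-llama3                # display name shown when switching models (assumed)
      provider: ollama                   # the new provider that wraps the openai provider
      modelName: llama3                  # model served by the local Ollama instance
      baseUrl: http://localhost:11434/v1 # assumed local Ollama endpoint
      requireAuth: false                 # see the requireAuth CORS workaround in 0.0.5
```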

0.3.1

  • Set searchEmbeddings to false by default
  • Fix templated prompts not rendering as a template when not using chat-style prompts
  • Add new searchEmbeddings function to AICore library for templates to use

0.3.0

  • Don't index or generate embeddings for pages under Library/
  • Add new AI: Enhance Note command that runs the existing AI: Tag Note and AI: Suggest Page Name commands on a note, along with the new frontmatter command
  • Add new AI: Generate Note FrontMatter command to extract useful facts from a note and add them to the frontmatter
  • Always include the note’s current name in AI: Suggest Page Name as an option
  • Log how long it takes to index each page when generating embeddings
  • Improve the layout and UX of the AI: Search page
  • Fix the AI: Search page so it works in sync/online mode, requires Silverbullet >= 0.8.3
  • Fix bug preventing changing models in sync mode
  • Add support for chat-style prompts in Templated Prompts
  • Fix bug when embeddingModels is undefined

0.2.0

  • Vector search and embeddings generation by @justyns in #37
  • Enrich chat messages with RAG by searching our local embeddings by @justyns in #38
  • Refactor: Re-organize providers, interfaces, and types by @justyns in #39
  • Add try/catch to tests by @justyns in #40
  • Fix bug causing silverbullet to break when aiSettings isn't configured at all by @justyns in #42
  • Add option to generate summaries of each note and index them (see the configuration sketch below) by @justyns in #43
  • Disable indexing on clients, index only on server by @justyns in #44
  • Set index and search events to server only by @justyns in #45
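
Taken together, these changes mean embeddings and note summaries are indexed on the server only, driven by the space's AI settings. A minimal configuration sketch follows; embeddingModels is the setting referenced in the 0.3.0 notes above, while the indexEmbeddings and indexSummary option names are assumptions used here for illustration.

```yaml
ai:
  indexEmbeddings: true          # assumed option name for generating embeddings per note
  indexSummary: true             # assumed option name for the note-summary indexing from #43
  embeddingModels:               # setting referenced in the 0.3.0 notes above
    - name: text-embedding-3-small
      provider: openai
      modelName: text-embedding-3-small
```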

0.1.0

  • BREAKING: Removed deprecated parameters: summarizePrompt, tagPrompt, imagePrompt, temperature, maxTokens, defaultTextModel, backwardsCompat. Except for defaultTextModel, these were no longer used.
  • New AI: Suggest Page Name command
  • Bake queries and templates in chat by @justyns in #30
  • Allow completely overriding page rename system prompt, improve ux by @justyns in #31
  • Always select a model if it's the only one in the list by @justyns in #33
  • Pass all existing tags to generate tag command, allow user to add their own instructions too

0.0.11


0.0.10

  • Add WIP docs and docs workflow by @justyns in #20
  • Enable slash completion for AI prompts
  • Don't fail if clientStore.get doesn't work, such as in CLI mode

0.0.9

  • Add github action for deno build-release by @justyns in #18
  • Add ability to configure multiple text and image models and switch between them (sketched below) by @justyns in #17
  • Fix error when imageModels is undefined in SETTINGS by @justyns in #22
  • Re-add summarizeNote and insertSummary commands, fixes #19. Also add non-streaming support to gemini by @justyns in #24
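
The multi-model support from #17 is configured in SETTINGS. imageModels is the key referenced in the #22 fix above; textModels and the provider identifiers shown here are assumptions meant only to illustrate the shape of the config.

```yaml
ai:
  textModels:                    # assumed key; switch between entries with the plugin's model-switching command
    - name: gpt-4
      provider: openai
      modelName: gpt-4
    - name: gemini-pro
      provider: gemini           # gemini support is mentioned in #24 above
      modelName: gemini-pro
  imageModels:                   # key referenced in the #22 fix above
    - name: dall-e-3
      provider: dalle            # assumed provider identifier for dall-e
      modelName: dall-e-3
```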

0.0.8

  • Add wikilink enrichment to chat messages for #9 by @justyns in #12
  • Add a newline when the first message from the LLM is either a code fence or markdown block by @justyns in #13

0.0.7


0.0.6

  • Add a new command to prompt for a template to execute and render as a prompt
  • Add insertAt option for prompt templates (page-start, page-end, cursor)
  • Make the cursor behave nicer in interactive chats, fixes #1
  • Remove the 'Contacting LLM' notification and replace it with a loading placeholder for now (#1)
  • Move some of the flashNotifications to console.log instead
  • Dall-e: Use finalFileName instead of the prompt to prevent long prompts from breaking the markdown
  • Add queryOpenAI function to use in templates later
  • Update Readme for templated prompts, build potential release version

0.0.5

  • Rename test stream command
  • Add better error handling and notifications
  • Misc refactoring to make the codebase easier to work on
  • Automatically reload config from the SETTINGS and SECRETS pages
  • Update readme for ollama/mistral.ai examples
  • Use editor.insertAtPos instead of insertAtCursor to make streaming text behave more predictably
  • Add requireAuth variable to fix a CORS issue on Chrome with Ollama
  • Remove redundant commands, use streaming for others
  • Let chat on page work on any page. Add keyboard shortcut for it
  • Move cursor to follow streaming response

0.0.4

  • Add command for 'Chat on current page' to have an interactive chat on a note page
  • Use relative image path name for dall-e generated images
  • First attempt at supporting streaming responses from openai directly into the editor

0.0.3

  • Add a new command to call OpenAI using a user note or selection as the prompt, ignoring built-in prompts
  • Add support for changing the OpenAI-compatible API URL and using a local LLM like Ollama
  • Update jsdoc descriptions for each command and add to readme
  • Save dall-e generated image locally
  • Add script to update readme automatically
  • Save and display the revised prompt from dall-e-3