LLM BOLT

Note on Workbench: in the Modelfile that ollama create uses, PARAMETER num_ctx should be set to a 32k or 128k context length (32768 or 131072 tokens) as required when opening or using the Workbench.
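For example, the relevant Modelfile line for a 32k window would be (see the Ollama Tips section below for the full workflow):

PARAMETER num_ctx 32768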




LLM web development in a browser

What Makes Bolt Different

Claude, v0, etc. are incredible, but you can't install packages, run backends, or edit code. That's where Bolt.new stands out:

  • Full-Stack in the Browser: Bolt.new integrates cutting-edge AI models with an in-browser development environment powered by StackBlitz’s WebContainers. This allows you to:

    • Install and run npm tools and libraries (like Vite, Next.js, and more)
    • Run Node.js servers
    • Interact with third-party APIs
    • Deploy to production from chat
    • Share your work via a URL
  • AI with Environment Control: Unlike traditional dev environments where the AI can only assist in code generation, Bolt.new gives AI models complete control over the entire environment including the filesystem, node server, package manager, terminal, and browser console. This empowers AI agents to handle the whole app lifecycle—from creation to deployment.

Whether you’re an experienced developer, a PM, or a designer, Bolt.new allows you to easily build production-grade full-stack applications.

For developers interested in building their own AI-powered development tools with WebContainers, check out the open-source Bolt codebase in this repo!


Setup

Many of you may be new to installing software from GitHub. If you run into installation trouble, reach out and submit an "issue" using the links above, or feel free to improve this documentation by forking the repo, editing the instructions, and opening a pull request.

  1. Install Git from https://git-scm.com/downloads

  2. Install Node.js from https://nodejs.org/en/download/

Pay attention to the installer notes after completion.

On all operating systems, the path to Node.js should be added to your system PATH automatically, but you can check to be sure. On Windows, search for "edit the system environment variables", select "Environment Variables..." in System Properties, and check for a path to Node in your "Path" system variable. On Mac or Linux, check whether /usr/local/bin is in your $PATH by opening a Terminal and running:

echo $PATH

If you see /usr/local/bin in the output, then you're good to go.
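You can also verify the installation directly from any terminal:

node --version   # prints the installed Node.js version, e.g. v20.x.x
npm --version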

  3. Clone the repository (if you haven't already) by opening a Terminal window (or CMD with admin permissions) and typing:
git clone https://github.com/coleam00/bolt.new-any-llm.git
  4. Rename .env.example to .env.local and add your LLM API keys. On a Mac, you will find this file at "[your name]/bolt.new-any-llm/.env.example"; the path will be similar on Windows and Linux. A rename command is sketched below.
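For example, on Mac or Linux you can do the rename from the repository root:

cd bolt.new-any-llm
mv .env.example .env.local   # or use cp to keep the example file around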


If you can't see the file indicated above, it's likely that hidden files are not being shown. On a Mac, open a Terminal window and enter the command below. On Windows, enable hidden files in the File Explorer settings. A quick Google search will help if you get stuck here.

defaults write com.apple.finder AppleShowAllFiles YES
killall Finder   # relaunch Finder so the change takes effect

NOTE: you only have to set the keys for the providers you want to use, and Ollama doesn't need an API key because it runs locally on your computer:

Get your GROQ API Key here: https://console.groq.com/keys

Get your OpenAI API Key by following these instructions: https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key

Get your Anthropic API Key in your account settings: https://console.anthropic.com/settings/keys

GROQ_API_KEY=XXX
OPENAI_API_KEY=XXX
ANTHROPIC_API_KEY=XXX

Optionally, you can set the debug level:

VITE_LOG_LEVEL=debug

Important: Never commit your .env.local file to version control. It's already included in .gitignore.


Run with Docker

Prerequisites:

Git and Node.js as mentioned above, as well as Docker: https://www.docker.com/
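You can confirm Docker is installed and the daemon is running with:

docker --version
docker info   # errors out if the Docker daemon isn't running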

1a. Using Helper Scripts

NPM scripts are provided for convenient building:

# Development build
npm run dockerbuild

# Production build
npm run dockerbuild:prod

1b. Direct Docker Build Commands (alternative to using NPM scripts)

You can use Docker's target feature to specify the build environment instead of using NPM scripts if you wish:

# Development build
docker build . --target bolt-ai-development

# Production build
docker build . --target bolt-ai-production

2. Docker Compose with Profiles to Run the Container

Use Docker Compose profiles to manage different environments:

# Development environment
docker-compose --profile development up

# Production environment
docker-compose --profile production up

When you run the Docker Compose command with the development profile, any changes you make on your machine to the code will automatically be reflected in the site running on the container (i.e. hot reloading still applies!).
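For reference, a profile-based service in docker-compose.yaml looks roughly like the sketch below. This is a hypothetical outline (the service name, port, and mount path are assumptions); the repo's actual docker-compose.yaml is authoritative:

services:
  bolt-ai-dev:
    build:
      context: .
      target: bolt-ai-development   # matches the build target shown above
    profiles: ["development"]       # selected via --profile development
    ports:
      - "5173:5173"                 # assumed dev-server port
    volumes:
      - .:/app                      # bind mount so hot reloading sees your edits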


Run Without Docker

  1. Install dependencies using Terminal (or CMD in Windows with admin permissions):
pnpm install

If you get an error saying "command not found: pnpm" or similar, pnpm isn't installed yet. You can install it with:

sudo npm install -g pnpm
  2. Start the application with the command:
pnpm run dev

Ollama Tips

By default, Ollama models have a context window of only 2048 tokens, even for large models that can easily handle far more. That is not a large enough window for the Bolt.new/oTToDev prompt! You have to create a version of any model you want to use that specifies a larger context window. Luckily, it's super easy to do that.

All you have to do is:

  • Create a file called "Modelfile" (no file extension) anywhere on your computer
  • Put in the two lines:
FROM [Ollama model ID such as qwen2.5-coder:7b]
PARAMETER num_ctx 32768
  • Run the command (your new model ID can be whatever you want, for example qwen2.5-coder-extra-ctx:7b):
ollama create [your new model ID] -f Modelfile

Now you have a new Ollama model that isn't limited to Ollama's unusually small default context length. You'll see this new model in the list of Ollama models along with all the others you've pulled!
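Putting the steps together with the example model above:

# Modelfile contents:
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768

# then, from the directory containing the Modelfile:
ollama create qwen2.5-coder-extra-ctx:7b -f Modelfile
ollama list   # the new model should appear alongside the ones you've pulled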


Adding LLMs

To make new LLMs available in this version of Bolt.new, head over to app/utils/constants.ts and find the constant MODEL_LIST. Each element in this array is an object with the model ID as the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider (see the sketch at the end of this section).

By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!

When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it. For Ollama models, make sure you have the model installed already before trying to use it here!
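As a sketch, a new entry might look like the following. The field names here just follow the description above (an object with a name, label, and provider), so match them against the existing entries in app/utils/constants.ts; the model shown is only an example:

// inside MODEL_LIST in app/utils/constants.ts
{ name: 'gpt-4o-mini', label: 'GPT-4o Mini', provider: 'OpenAI' },  // example entry; copy the shape of its neighbors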


Available Scripts

  • pnpm run dev: Starts the development server.
  • pnpm run build: Builds the project.
  • pnpm run start: Runs the built application locally using Wrangler Pages. This script uses bindings.sh to set up necessary bindings so you don't have to duplicate environment variables.
  • pnpm run preview: Builds the project and then starts it locally, useful for testing the production build. Note, HTTP streaming currently doesn't work as expected with wrangler pages dev.
  • pnpm test: Runs the test suite using Vitest.
  • pnpm run typecheck: Runs TypeScript type checking.
  • pnpm run typegen: Generates TypeScript types using Wrangler.
  • pnpm run deploy: Builds the project and deploys it to Cloudflare Pages.



Dev

Start Server

To start the development server:

pnpm run dev

This will start the Remix Vite development server. You will need Google Chrome Canary to run this locally if you use Chrome! It's an easy install and a good browser for web development anyway.


App Config
  • Config (pending)

  • Constants: see DEFAULT_MODEL = 'claude-3-5-sonnet-latest' in app/utils/constants.ts (a sketch of changing it follows)
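To change the default model, edit that constant. For example, to point it at the custom Ollama model created in the Ollama Tips section (assuming you created a model with that ID):

// app/utils/constants.ts
export const DEFAULT_MODEL = 'qwen2.5-coder-extra-ctx:7b';  // hypothetical; use any model ID from MODEL_LIST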


Todo List
  • ✅ OpenRouter Integration (@coleam00)
  • ✅ Gemini Integration (@jonathands)
  • ✅ Autogenerate Ollama models from what is downloaded (@yunatamos)
  • ✅ Filter models by provider (@jasonm23)
  • ✅ Download project as ZIP (@fabwaseem)
  • ✅ Improvements to the main Bolt.new prompt in app/lib/.server/llm/prompts.ts (@kofi-bhr)
  • ✅ DeepSeek API Integration (@zenith110)
  • ✅ Mistral API Integration (@ArulGandhi)
  • ✅ "Open AI Like" API Integration (@ZerxZ)
  • ✅ Ability to sync files (one way sync) to local folder (@muzafferkadir)
  • ✅ Containerize the application with Docker for easy installation (@aaronbolton)
  • ✅ Publish projects directly to GitHub (@goncaloalves)
  • ✅ Ability to enter API keys in the UI (@ali00209)
  • ✅ xAI Grok Beta Integration (@milutinke)
  • ⬜ HIGH PRIORITY - Prevent Bolt from rewriting files as often (file locking and diffs)
  • ⬜ HIGH PRIORITY - Better prompting for smaller LLMs (code window sometimes doesn't start)
  • ⬜ HIGH PRIORITY - Load local projects into the app
  • ⬜ HIGH PRIORITY - Attach images to prompts
  • ⬜ HIGH PRIORITY - Run agents in the backend as opposed to a single model call
  • ⬜ LM Studio Integration
  • ⬜ Together Integration
  • ⬜ Azure Open AI API Integration
  • ⬜ HuggingFace Integration
  • ⬜ Perplexity Integration
  • ⬜ Vertex AI Integration
  • ⬜ Cohere Integration
  • ⬜ Deploy directly to Vercel/Netlify/other similar platforms
  • ⬜ Ability to revert code to earlier version
  • ⬜ Prompt caching
  • ⬜ Better prompt enhancing
  • ⬜ Have LLM plan the project in a MD file for better results/transparency
  • ⬜ VSCode Integration with git-like confirmations
  • ⬜ Upload documents for knowledge - UI design templates, a code base to reference coding style, etc.
  • ⬜ Voice prompting

Tips & Tricks

  • Be Patient!: If you try to save the page while Bolt is generating, it will stop the model with an error notification.


  • Be specific about your stack: If you want to use specific frameworks or libraries (like Astro, Tailwind, or a popular JS framework), mention them in your initial prompt to ensure Bolt scaffolds the project accordingly.


  • Use the enhance prompt icon: Before sending your prompt, try clicking the 'enhance' icon to have the AI model help you refine your prompt, then edit the results before submitting.


  • Scaffold the basics first, then add features: Make sure the basic structure of your application is in place before diving into more advanced functionality. This helps Bolt understand the foundation of your project and ensures everything is wired up correctly before you build out more complex features.


  • Batch simple instructions: Save time by combining simple instructions into one message. For example, you can ask Bolt to change the color scheme, add mobile responsiveness, and restart the dev server all in one go, saving you time and significantly reducing API credit consumption.



Fork

This fork allows you to choose a local LLM via Ollama. See Bolt's console command list for further details on building the server app.