Setting Up OpenCode with LocalLLM (llama-swap)

This tutorial guides you through setting up OpenCode to use a local LLM via llama-swap, using GLM-4.7-Flash-32B as the example model.

Prerequisites

  • OpenCode installed and configured
  • llama-swap server running locally

Step 1: Install and Run llama-swap

  1. Install llama-swap:
    Go to https://github.com/mostlygeek/llama-swap and follow the installation method that suits your system.

  2. See my guide on setting up llama-swap.

Start the llama-swap server (adjust port as needed):

llama-swap-launch.sh

See my guide on creating the launchers.

The server starts on http://localhost:8080/ by default. Client programs should connect to the OpenAI-compatible endpoint at http://localhost:8080/v1.
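As a quick sanity check (assuming the default port 8080), you can query the server's OpenAI-compatible model list with curl; if the server is down, a hint is printed instead of an error:

```shell
# Query llama-swap's OpenAI-compatible /v1/models endpoint.
# Falls back to a hint instead of failing if the server is not running.
BASE_URL="http://localhost:8080/v1"
curl -s --max-time 5 "$BASE_URL/models" || echo "llama-swap is not reachable at $BASE_URL"
```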

Step 2: Configure OpenCode

Create or Update auth.json

Edit ~/.local/share/opencode/auth.json to add your llama-swap API key.
You may set the key to any value you like, as long as you use the same value in the other config files.

{
  "llamaswap": {
    "type": "api",
    "key": "llama"
  }
}
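To make sure the file is valid JSON before launching OpenCode, you can pipe it through python3's built-in json.tool. The snippet below validates an inline copy of the example above; point it at your real ~/.local/share/opencode/auth.json instead:

```shell
# Validate auth.json contents: python3 -m json.tool exits non-zero on
# malformed JSON and pretty-prints it otherwise.
cat <<'EOF' | python3 -m json.tool
{
  "llamaswap": {
    "type": "api",
    "key": "llama"
  }
}
EOF
```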

Create or Update opencode.json

Edit ~/.config/opencode/opencode.json with the following configuration:

  • Make sure to use the same apiKey you chose in the previous step.
  • The model name "GLM-4.7-Flash-32B" below is the name/title I gave this model in the llama-swap JSON config.
    llama-swap uses this name to locate the matching entry in its config, which in turn points to the model file
    and settings on the host machine for OpenCode to use.

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "llamaswap": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "llama-swap (GLM-4.7-Flash-32B)",
      "options": {
        "baseURL": "http://192.168.0.69:8080/v1",
        "apiKey": "llama"
      },
      "models": {
        "GLM-4.7-Flash-32B": {
          "name": "GLM-4.7-Flash-32B"
        }
      }
    }
  },
  "model": "GLM-4.7-Flash-32B",
  "small_model": "GLM-4.7-Flash-32B"
}

Configuration Details

  • provider.llamaswap.npm: Uses the OpenAI-compatible adapter for llama-swap
  • provider.llamaswap.options.baseURL: Points to your local llama-swap server
  • provider.llamaswap.options.apiKey: Your llama-swap API key (must match auth.json)
  • model: The default model to use for OpenCode
  • small_model: The model used for smaller tasks
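Because an apiKey mismatch between the two files is easy to introduce, here is a small sketch that compares the two values. It writes sample copies to a temp directory so it runs as-is; substitute the paths of your real auth.json and opencode.json to check your actual setup:

```shell
# Compare the key in auth.json with the apiKey in opencode.json.
# Sample files are created in a temp dir for illustration; swap in
# ~/.local/share/opencode/auth.json and ~/.config/opencode/opencode.json.
TMP=$(mktemp -d)
cat > "$TMP/auth.json" <<'EOF'
{ "llamaswap": { "type": "api", "key": "llama" } }
EOF
cat > "$TMP/opencode.json" <<'EOF'
{ "provider": { "llamaswap": { "options": { "apiKey": "llama" } } } }
EOF
python3 - "$TMP/auth.json" "$TMP/opencode.json" <<'EOF'
import json, sys
auth = json.load(open(sys.argv[1]))
conf = json.load(open(sys.argv[2]))
a = auth["llamaswap"]["key"]
b = conf["provider"]["llamaswap"]["options"]["apiKey"]
print("keys match" if a == b else "MISMATCH: %r vs %r" % (a, b))
EOF
```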

Step 3: Verify Configuration

Test your configuration by running OpenCode and ensuring it connects to llama-swap:

opencode

You should see OpenCode successfully connecting to your local llama-swap instance.
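You can also exercise the model directly, bypassing OpenCode, with a single chat-completion request. This assumes the localhost setup from Step 1; the model field must match the name in your llama-swap config, and the Bearer token is the key you chose in auth.json:

```shell
# Send one chat request straight to llama-swap's OpenAI-compatible API.
# Prints the JSON response, or a hint if the server is unreachable.
curl -s --max-time 60 http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer llama" \
  -d '{"model": "GLM-4.7-Flash-32B", "messages": [{"role": "user", "content": "Say hello"}]}' \
  || echo "request failed: is llama-swap running?"
```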

Troubleshooting

  • Connection refused: Ensure llama-swap server is running
  • API key mismatch: Verify the apiKey in both config files matches
  • Wrong base URL: Check that baseURL points to your llama-swap server
  • Model not found: Ensure the model name matches what's available in llama-swap