Docker Reference

This page covers advanced Docker topics: the container architecture, Docker Compose file structure, GPU support, data persistence, customization, and common operations. For getting started with Docker, see the installation guides.

Container architecture

A full Magec deployment includes several containers working together:

| Container | Purpose | Always present |
|-----------|---------|----------------|
| `magec` | The Magec server: API, Admin UI, Voice UI, agent runtime | Yes |
| `redis` | Session memory storage | Yes |
| `postgres` | Long-term memory storage (pgvector) | Yes |
| `ollama` | Local LLM and embeddings (Qwen 3, nomic-embed-text) | Local mode only |
| `ollama-setup` | Downloads Ollama models on first start, then exits | Local mode only |
| `parakeet` | Local speech-to-text (NVIDIA Parakeet) | Local, Anthropic, Gemini |
| `tts` | Local text-to-speech (OpenAI Edge TTS) | Local, Anthropic, Gemini |

In cloud modes (OpenAI, Anthropic, Gemini), some containers are replaced by cloud API calls. Redis and PostgreSQL always run locally because they store your data.
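
The core services in the table above can be sketched as a trimmed Compose file. The image tags for Redis and Postgres, the `POSTGRES_PASSWORD` variable, and the host-side port numbers are illustrative assumptions; the container-side ports 8080/8081 and the `/app/data` path match the rest of this page:

```yaml
services:
  magec:
    image: ghcr.io/achetronic/magec:latest
    ports:
      - "8080:8080"   # Voice UI + User API
      - "8081:8081"   # Admin UI + Admin API
    depends_on:
      - redis
      - postgres
    volumes:
      - magec_data:/app/data   # store.json, conversations.json

  redis:
    image: redis:7-alpine      # session memory
    volumes:
      - redis_data:/data

  postgres:
    image: pgvector/pgvector:pg17   # long-term memory (pgvector)
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  magec_data:
  redis_data:
  postgres_data:
```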

Docker Compose files

Docker Compose file

The install script downloads a single docker-compose.yaml with all services pre-configured for a fully local deployment. GPU acceleration for Ollama is included but commented out — the installer enables it when you pass --gpu.

Docker image

The Magec Docker image (ghcr.io/achetronic/magec) is built with a multi-stage Dockerfile:

| Stage | Base | Purpose |
|-------|------|---------|
| `frontend` | node:22-slim | Builds Admin UI and Voice UI |
| `models` | golang:1.25-alpine | Downloads auxiliary ONNX models from HuggingFace |
| `ffmpeg` | mwader/static-ffmpeg:7.1 | Provides static ffmpeg binary (~135 MB) |
| `onnx` | debian:bookworm-slim | Downloads ONNX Runtime shared library (arch-aware) |
| `builder` | golang:1.25 | Compiles the Go binary with everything embedded |
| `runtime` | gcr.io/distroless/cc-debian12 | Final image: binary + ffmpeg + ONNX Runtime |

The runtime image uses distroless — no shell, no package manager, minimal attack surface. The image is available for linux/amd64 and linux/arm64.
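
The stage layout can be sketched as a skeleton Dockerfile. Stage names and base images come from the table above; the build commands and `COPY` paths are illustrative assumptions, not the actual Dockerfile:

```dockerfile
FROM node:22-slim AS frontend
# build Admin UI and Voice UI (exact build commands assumed)

FROM golang:1.25-alpine AS models
# download auxiliary ONNX models from HuggingFace

FROM mwader/static-ffmpeg:7.1 AS ffmpeg
# provides a static ffmpeg binary; nothing to build

FROM debian:bookworm-slim AS onnx
# fetch the ONNX Runtime shared library for the target architecture

FROM golang:1.25 AS builder
# compile the Go binary with frontend assets and models embedded

FROM gcr.io/distroless/cc-debian12
COPY --from=builder /src/magec /magec                   # binary path assumed
COPY --from=ffmpeg /ffmpeg /usr/local/bin/ffmpeg
COPY --from=onnx /usr/lib/libonnxruntime.so /usr/lib/   # library path assumed
ENTRYPOINT ["/magec"]
```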

GPU support

For local deployments, you can enable NVIDIA GPU acceleration for Ollama:

```shell
curl -fsSL .../install.sh | bash -s -- --gpu
```

This uncomments the GPU configuration in the Ollama service. You need an NVIDIA GPU with the driver installed on the host, plus the NVIDIA Container Toolkit so Docker can pass the GPU through to containers.

To enable GPU support manually, edit `docker-compose.override.yaml`:

```yaml
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Data persistence

All persistent data is stored in Docker volumes:

| Volume | Contains |
|--------|----------|
| `magec_data` | store.json (configuration), conversations.json |
| `redis_data` | Session memory |
| `postgres_data` | Long-term memory (pgvector) |
| `ollama_data` | Downloaded AI models |

Data survives container restarts, image updates, and docker compose down/up cycles.

To back up, copy store.json from the magec_data volume. To start fresh:

```shell
docker compose down -v    # removes all volumes
docker compose up -d      # fresh start
```

Customizing your deployment

Adding MCP servers as containers

You can extend the Docker Compose configuration to add MCP servers:

```yaml
services:
  hass-mcp:
    image: ghcr.io/achetronic/hass-mcp:latest
    environment:
      - HASS_URL=http://homeassistant:8123
      - HASS_TOKEN=${HASS_TOKEN}
    ports:
      - "8888:8080"
```

Then add the MCP server in the Admin UI pointing at http://hass-mcp:8080/sse.

Changing ports

Host ports are the left-hand side of each mapping; change them if the defaults clash with other services on your host:

```yaml
services:
  magec:
    ports:
      - "3000:8080"   # Voice UI + User API on port 3000
      - "3001:8081"   # Admin UI + Admin API on port 3001
```

Environment variables

All Magec configuration supports `${VAR}` substitution:

```yaml
services:
  magec:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - LOG_LEVEL=debug
```
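
Docker Compose also reads a `.env` file sitting next to the Compose file, so secrets like `OPENAI_API_KEY` can stay out of your shell history. The values below are placeholders:

```shell
# .env (loaded automatically by docker compose)
OPENAI_API_KEY=your-key-here
LOG_LEVEL=debug
```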

Accessing host services

If your LLM or other services run on the Docker host (not in containers), use host.docker.internal:

```yaml
services:
  magec:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

On macOS and Windows, `host.docker.internal` works automatically. On Linux, you need the `extra_hosts` mapping above (or `--add-host=host.docker.internal:host-gateway` with `docker run`).

Common operations

```shell
cd magec                                    # your deployment directory

# Logs
docker compose logs -f                      # all services
docker compose logs -f magec                # Magec server only
docker compose logs -f ollama               # Ollama only

# Lifecycle
docker compose down                         # stop everything
docker compose up -d                        # start everything
docker compose restart magec                # restart Magec only

# Updates
docker compose pull                         # pull latest images
docker compose up -d                        # restart with new versions

# Backup
docker compose cp magec:/app/data/store.json ./store-backup.json

# Reset
docker compose down -v                      # stop and remove volumes
docker compose up -d                        # fresh start
```

When updating, `docker compose pull && docker compose up -d` is all you need. Your configuration in `data/` persists across image updates.