This article was last updated on November 7, 2025.
Compose configuration
services:
  ollama:
    image: docker.1ms.run/ollama/ollama:0.12.10  # official ollama image (via the docker.1ms.run mirror), pinned to 0.12.10
    container_name: ollama
    restart: unless-stopped  # auto-restart policy
    ports:
      - "11434:11434"  # map the API port
    volumes:
      - /share/dockerdata/ollama:/root/.ollama  # persist model data on the host
    environment:
      - OLLAMA_HOST=0.0.0.0:11434  # allow external access
      - OLLAMA_KEEP_ALIVE=30m  # how long an idle model stays loaded
      - OLLAMA_MAX_LOADED_MODELS=3  # maximum concurrently loaded models
      - OLLAMA_MMAP=1  # enable memory mapping
    networks:
      - wsl-network

networks:
  wsl-network:
    external: true
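After starting the stack (for example with docker compose up -d; the external wsl-network network must already exist), the API on the mapped port can be checked from any host that can reach the container. A minimal Python sketch, assuming the server is reachable at localhost:11434 — adjust the base URL for your network:

```python
import json
from urllib.request import urlopen

# Assumed base URL, matching the "11434:11434" port mapping above.
OLLAMA_URL = "http://localhost:11434"

def version_url(base: str) -> str:
    """Build the URL for Ollama's /api/version endpoint."""
    return base.rstrip("/") + "/api/version"

if __name__ == "__main__":
    # Requires the container to be running; prints the server version JSON.
    with urlopen(version_url(OLLAMA_URL)) as resp:
        print(json.load(resp))
```

If this returns a version object, the port mapping and OLLAMA_HOST setting are working.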
Running the image
Attach to the container's console (for example with docker exec -it ollama bash) and run:
ollama run qwen3-vl:2b
Logs
root@d769e6ff9bdb:/# ollama
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  signin      Sign in to ollama.com
  signout     Sign out from ollama.com
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
root@d769e6ff9bdb:/# ollama run qwen3-vl:2b
pulling manifest
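The pull continues from here; once it finishes, ollama run drops into an interactive prompt. The same model can also be queried non-interactively over the REST API. A sketch assuming the default /api/generate endpoint on localhost:11434 and that the model has finished pulling:

```python
import json
from urllib.request import Request, urlopen

# Assumed endpoint, matching the compose port mapping above.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Assemble a non-streaming request body for /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

if __name__ == "__main__":
    # Requires the container to be running and qwen3-vl:2b already pulled.
    body = json.dumps(build_request("qwen3-vl:2b", "Describe this model in one sentence.")).encode()
    req = Request(OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        print(json.load(resp)["response"])
```

With stream set to False the server returns a single JSON object whose response field holds the full completion, which is easier to script against than the default streaming output.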