feat: add Docker support for offline deployment with qwen3:14b

Major additions:
- All-in-One Docker image with Ollama + models bundled
- Separate deployment option for existing Ollama installations
- Changed default model from qwen3:8b to qwen3:14b
- Comprehensive deployment documentation

Files added:
- Dockerfile: Basic app-only image
- Dockerfile.allinone: Complete image with Ollama + models
- docker-compose.yml: Easy deployment configuration
- docker-entrypoint.sh: Startup script for all-in-one image
- requirements.txt: Python dependencies
- .dockerignore: Exclude unnecessary files from image

Scripts:
- export-ollama-models.sh: Export models from local Ollama
- build-allinone.sh: Build complete offline-deployable image
- build-and-export.sh: Build and export basic image

Documentation:
- DEPLOYMENT.md: Comprehensive deployment guide
- QUICK_START.md: Quick reference for common tasks

Configuration:
- Updated config.py: DEFAULT_CHAT_MODEL = qwen3:14b
- Updated frontend/opro.html: page title changed to 系统提示词优化 ("System Prompt Optimization")
2025-12-08 10:10:38 +08:00
parent 65cdcf29dc
commit 26f8e0c648
13 changed files with 897 additions and 3 deletions

QUICK_START.md (new file)

@@ -0,0 +1,117 @@
# Quick Start Guide
## Offline Deployment (All-in-One)
### On the development machine (with internet access)
```bash
# 1. Download the models
ollama pull qwen3:14b
ollama pull qwen3-embedding:4b
# 2. Export the models
./export-ollama-models.sh
# 3. Build and export the Docker image
./build-allinone.sh
# 4. Transfer to the target server
# File: system-prompt-optimizer-allinone.tar (roughly 10-20 GB)
scp system-prompt-optimizer-allinone.tar user@server:/path/
```
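If transfer bandwidth is limited, the exported archive can optionally be gzip-compressed before copying; `docker load` reads gzip-compressed archives directly, so no extra step is needed on the server. A minimal sketch, using the file name from step 4 above:
```bash
# Optional: compress the archive to shorten the transfer
gzip system-prompt-optimizer-allinone.tar
scp system-prompt-optimizer-allinone.tar.gz user@server:/path/
# On the server, load the compressed archive directly
docker load -i system-prompt-optimizer-allinone.tar.gz
```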
### On the target server (no internet access)
```bash
# 1. Load the image
docker load -i system-prompt-optimizer-allinone.tar
# 2. Start the service
docker run -d \
  --name system-prompt-optimizer \
  -p 8010:8010 \
  -p 11434:11434 \
  -v $(pwd)/outputs:/app/outputs \
  --restart unless-stopped \
  system-prompt-optimizer:allinone
# 3. Wait for startup (about 60 seconds)
sleep 60
# 4. Verify
curl http://localhost:8010/health
curl http://localhost:11434/api/tags
# 5. Open the web UI
# http://<server-ip>:8010/ui/opro.html
```
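The fixed 60-second wait above is a rough estimate; a more robust option is to poll the health endpoint until it responds. A minimal sketch, assuming `/health` returns a non-error status once the app and the bundled Ollama are ready (as step 4 above suggests):
```bash
# Poll the health endpoint for up to ~5 minutes instead of a fixed sleep
for i in $(seq 1 60); do
  if curl -sf http://localhost:8010/health > /dev/null; then
    echo "Service is up"
    break
  fi
  sleep 5
done
```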
## Common Commands
```bash
# Follow the logs
docker logs -f system-prompt-optimizer
# Restart the service
docker restart system-prompt-optimizer
# Stop the service
docker stop system-prompt-optimizer
# Remove the container
docker rm -f system-prompt-optimizer
# Open a shell inside the container
docker exec -it system-prompt-optimizer bash
# List the installed models
docker exec -it system-prompt-optimizer ollama list
```
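Beyond the project-specific commands above, two generic Docker commands are often handy here for checking resource usage and confirming the configured restart policy and port mappings:
```bash
# One-shot snapshot of CPU/RAM usage for the container
docker stats --no-stream system-prompt-optimizer
# Confirm the restart policy and the published ports
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' system-prompt-optimizer
docker port system-prompt-optimizer
```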
## Ports
- **8010**: web UI and API (if this port is already taken on the host, see the remapping sketch below)
- **11434**: Ollama service (only needs to be exposed for the All-in-One setup)
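If host port 8010 (or 11434) is already in use, only the host side of the `-p` mapping needs to change; the image itself stays the same. A sketch, assuming the same image tag and container name as the run command above:
```bash
# Map host port 18010 to the app's port 8010 inside the container
docker run -d \
  --name system-prompt-optimizer \
  -p 18010:8010 \
  -p 11434:11434 \
  -v $(pwd)/outputs:/app/outputs \
  --restart unless-stopped \
  system-prompt-optimizer:allinone
# The UI is then reachable at http://<server-ip>:18010/ui/opro.html
```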
## Files
- `system-prompt-optimizer-allinone.tar`: complete image (roughly 10-20 GB)
- `outputs/`: directory for user feedback logs
## Troubleshooting
### Service fails to start
```bash
# Check the logs
docker logs system-prompt-optimizer
# Check whether the ports are already in use
netstat -tulpn | grep 8010
netstat -tulpn | grep 11434
```
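On hosts where `netstat` is not installed (it is missing from many minimal server images), `ss` provides the same information:
```bash
# Modern replacement for netstat
ss -tulpn | grep -E '8010|11434'
```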
### Models unavailable
```bash
# Check the models inside the container
docker exec -it system-prompt-optimizer ollama list
# You should see:
# qwen3:14b
# qwen3-embedding:4b
```
### Slow performance
- Make sure the server has enough RAM (16 GB+ recommended)
- If a GPU is available, use a GPU-enabled Docker runtime (see the sketch below)
- Tune `GENERATION_POOL_SIZE` in `config.py`
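As a sketch of the GPU option above, the container can be started with GPU access via Docker's `--gpus` flag; this assumes the NVIDIA Container Toolkit is installed on the host and that the bundled Ollama can use the GPU:
```bash
# Hypothetical GPU-enabled run; requires the NVIDIA Container Toolkit on the host
docker run -d \
  --name system-prompt-optimizer \
  --gpus all \
  -p 8010:8010 \
  -p 11434:11434 \
  -v $(pwd)/outputs:/app/outputs \
  --restart unless-stopped \
  system-prompt-optimizer:allinone
```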
## More Information
For detailed documentation, see:
- `DEPLOYMENT.md`: full deployment guide
- `README.md`: project overview
- http://localhost:8010/docs: API documentation