Compare commits


18 Commits

Author SHA1 Message Date
AdrianoDev 7fa269de14 feat(deploy): auto-include docker-compose.local.yml override
The deploy-noclone.sh script now automatically loads
$DEPLOY_DIR/docker-compose.local.yml as the last -f, if it exists. Useful
for machine-specific fixes (e.g. watchtower DOCKER_API_VERSION on old
daemons). Gitignored by design; not versioned in the repo.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 22:44:01 +02:00
AdrianoDev c9ab211c38 chore(build-push): reuse persistent docker login
Skip the login if ~/.docker/config.json already contains auth for the
registry. Lets you run 'docker login' once and then launch the script
without exporting GITEA_PAT on every run.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 21:40:56 +02:00
AdrianoDev 287c4b5372 chore: remove deploy.sh and the buildx registry cache
- scripts/deploy.sh removed (replaced by deploy-noclone.sh)
- build-push.sh: removed registry cache-from/cache-to (the laptop's
  local buildx cache is enough; no more buildcache:* images on the
  Gitea registry)
- doc cleanup of orphaned references

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 21:25:38 +02:00
AdrianoDev ba29572e93 chore(deploy): local build + no-clone deploy, remove Gitea CI
- scripts/build-push.sh: replicates the CI job locally (8 images, buildx cache, tags :latest + :sha-X)
- scripts/deploy-noclone.sh: VPS deploy without cloning (curl raw config + image pull)
- removed .gitea/workflows/ci.yml
- README + DEPLOYMENT updated: laptop -> registry -> VPS, paths /docker/cerbero_mcp
- ruff fixes on 3 tests (I001, SIM117, UP037, F821)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:37:06 +02:00
AdrianoDev 4f3e959805 feat(deploy): docker-compose.traefik.yml overlay for behind-Traefik
ci / ruff lint (push) Failing after 13s
ci / mypy mcp_common (push) Successful in 22s
ci / pytest (push) Successful in 32s
ci / validate compose + Caddyfile (push) Failing after 2m23s
ci / build & push to registry (push) Has been skipped
For a shared VPS (e.g. with Gitea) where Traefik already manages 80/443.

- gateway/Caddyfile: env-aware listen + auto_https + trusted_proxies
  (defaults unchanged for standalone mode).
- docker-compose.traefik.yml: overlay that removes the host ports
  binding, attaches the gateway to Traefik's external network, and sets
  labels for Host(cerbero-mcp.tielogic.xyz) routing + TLS via the
  Traefik certresolver. Caddy listens on plain HTTP :80 internally.
- scripts/deploy.sh: detects BEHIND_TRAEFIK=true and adds -f
  docker-compose.traefik.yml to every docker compose call.
- DEPLOYMENT.md: new section 2a (standalone vs behind-Traefik topology)
  + a behind-Traefik sub-section with the required env vars.

Usage:
  docker compose -f docker-compose.prod.yml -f docker-compose.traefik.yml \
                 --env-file .env up -d

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 09:56:07 +02:00
AdrianoDev a1110c8ecb feat(safety+audit+deploy): consistency_check + audit log file sink + deploy script
ci / ruff lint (push) Failing after 12s
ci / mypy mcp_common (push) Successful in 25s
ci / pytest (push) Successful in 35s
ci / validate compose + Caddyfile (push) Successful in 2m3s
ci / build & push to registry (push) Has been skipped
#2 Env switch safety:
- mcp_common/environment.py: new consistency_check() that prevents
  accidental switches to mainnet. Raises EnvironmentMismatchError if
  resolved=mainnet without an explicit creds["environment"]="mainnet",
  or on a declared/resolved mismatch. Override via STRICT_MAINNET=false.
- Wired into app_factory.run_exchange_main at boot.
- 6 new consistency tests.

#3 Audit log persistence:
- mcp_common/audit.py: additional TimedRotatingFileHandler when the
  AUDIT_LOG_FILE env var is set. Rotation at midnight UTC, 30-day
  retention by default (AUDIT_LOG_BACKUP_DAYS). JSONL format with
  SecretsFilter.
- docker-compose.prod.yml: bind mount /var/log/cerbero-mcp + the
  AUDIT_LOG_FILE env var for the 4 exchange services (write endpoints).
- 2 new file sink tests.

#1 Deploy script:
- scripts/deploy.sh: idempotent; does docker login + repo clone/pull +
  secrets copy with chmod 600 + .env creation + audit dir setup + image
  pull + up + public HTTPS smoke test.
- DEPLOYMENT.md updated: sections 2 (script), 3 (mainnet safety),
  4 (audit log queries); the following sections renumbered.

Tests: 488/488 green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 09:29:04 +02:00
AdrianoDev 019b7e3298 docs: README + DEPLOYMENT with working CI/CD status
ci / ruff lint (push) Successful in 14s
ci / mypy mcp_common (push) Successful in 23s
ci / pytest (push) Successful in 30s
ci / validate compose + Caddyfile (push) Successful in 2m2s
ci / build & push to registry (push) Successful in 1m32s
README adds a 'CI/CD pipeline' section describing the 5 jobs and the
image tags. DEPLOYMENT expands section 1 with Gitea runner details
(network gitea_gitea-internal, runner-images image, ubuntu-latest label)
and the user-level REGISTRY_TOKEN secret configuration with write:package scope.
2026-04-29 09:18:30 +02:00
AdrianoDev 2fb7043790 ci: push base image to the registry + parameterize BASE_IMAGE in the service Dockerfiles
ci / ruff lint (push) Successful in 13s
ci / mypy mcp_common (push) Successful in 23s
ci / pytest (push) Successful in 29s
ci / validate compose + Caddyfile (push) Successful in 2m0s
ci / build & push to registry (push) Successful in 1m43s
Buildx with the docker-container driver cannot see images loaded into
the local daemon. Solution: push the base as git.tielogic.xyz/adriano/cerbero-mcp/
base:latest, and the 6 service Dockerfiles use ${BASE_IMAGE}:${BASE_TAG}
with default "cerbero-base" for local dev, overridden in CI to the registry path.
2026-04-29 09:09:47 +02:00
AdrianoDev 38fd7db259 ci: use secrets.REGISTRY_TOKEN for docker login (scope write:package)
ci / ruff lint (push) Successful in 14s
ci / mypy mcp_common (push) Successful in 26s
ci / pytest (push) Successful in 32s
ci / validate compose + Caddyfile (push) Successful in 2m24s
ci / build & push to registry (push) Failing after 4m13s
The auto-injected GITEA_TOKEN does not include the write:package scope needed
to push to the registry. A manual PAT created in User Settings → Applications
with the write:package scope is required, saved as the repo secret REGISTRY_TOKEN.
2026-04-29 08:53:31 +02:00
AdrianoDev 9da2e12473 lint: ruff clean services/ (autofix + manual + ignore E741)
ci / ruff lint (push) Successful in 15s
ci / validate compose + Caddyfile (push) Successful in 2m6s
ci / mypy mcp_common (push) Successful in 30s
ci / pytest (push) Successful in 34s
ci / build & push to registry (push) Failing after 47s
- 24 safe autofixes (SIM105 contextlib.suppress, F401 unused imports,
  I001 import order, B007 unused loop var, F811 redef, F841 unused).
- 15 unsafe fixes (UP038 X|Y in isinstance, SIM108 ternary, etc.).
- Manual fixes: SIM102 nested if in deribit term_structure, E402 imports
  in test_cot.py + sentiment server.py.
- Ignore E741 ('l' variables in list comprehensions in deribit/client.py;
  stylistic, not a bug).

Tests: 478/478 green.
2026-04-29 08:44:12 +02:00
AdrianoDev 910f80c99b ci: setup-python@v5 with 3.13 + curl uv install (setup-uv@v5 did not apply python-version)
ci / mypy mcp_common (push) Successful in 25s
ci / pytest (push) Successful in 33s
ci / validate compose + Caddyfile (push) Successful in 3m35s
ci / build & push to registry (push) Has been skipped
ci / ruff lint (push) Failing after 52s
2026-04-29 08:29:24 +02:00
AdrianoDev fe7a9dd9c0 ci: use astral-sh/setup-uv@v5 with python-version 3.13 (handles uv + Python + cache)
ci / ruff lint (push) Failing after 55s
ci / mypy mcp_common (push) Successful in 24s
ci / pytest (push) Successful in 30s
ci / build & push to registry (push) Has been cancelled
ci / validate compose + Caddyfile (push) Has been cancelled
2026-04-29 08:23:50 +02:00
AdrianoDev 503f7a4b17 ci: install Python 3.13 via uv (runner image only ships 3.10)
ci / ruff lint (push) Failing after 21s
ci / mypy mcp_common (push) Successful in 32s
ci / pytest (push) Successful in 39s
ci / build & push to registry (push) Has been cancelled
ci / validate compose + Caddyfile (push) Has been cancelled
2026-04-29 08:22:29 +02:00
AdrianoDev 0956283463 ci: runs-on ubuntu-latest (more stable label)
ci / ruff lint (push) Failing after 37s
ci / mypy mcp_common (push) Successful in 20s
ci / pytest (push) Successful in 30s
ci / validate compose + Caddyfile (push) Failing after 3m40s
ci / build & push to registry (push) Has been cancelled
2026-04-29 08:21:07 +02:00
AdrianoDev 7cc28cd6de ci: install uv via astral script + add to GITHUB_PATH
ci / ruff lint (push) Failing after 6s
ci / mypy mcp_common (push) Failing after 7s
ci / pytest (push) Failing after 6s
ci / validate compose + Caddyfile (push) Failing after 2m27s
ci / build & push to registry (push) Has been cancelled
2026-04-29 08:18:07 +02:00
AdrianoDev b91f843d89 ci: remove probe workflow (runner network issue resolved)
ci / ruff lint (push) Failing after 1m4s
ci / mypy mcp_common (push) Failing after 13s
ci / pytest (push) Failing after 13s
ci / build & push to registry (push) Has been cancelled
ci / validate compose + Caddyfile (push) Has been cancelled
2026-04-29 08:13:50 +02:00
AdrianoDev fd811d0692 ci(probe): minimal workflow to diagnose runner shell/tools
ci / ruff lint (push) Failing after 31s
ci / mypy mcp_common (push) Failing after 37s
ci / pytest (push) Failing after 31s
ci / validate compose + Caddyfile (push) Failing after 31s
probe / probe shell + tools (push) Successful in 1s
ci / build & push to registry (push) Has been skipped
2026-04-29 07:58:50 +02:00
AdrianoDev 1fea7d4ea1 ci: install uv via pipx (setup-uv@v3 was skipped by the Gitea runner)
ci / ruff lint (push) Failing after 42s
ci / mypy mcp_common (push) Failing after 41s
ci / pytest (push) Failing after 35s
ci / validate compose + Caddyfile (push) Failing after 44s
ci / build & push to registry (push) Has been skipped
2026-04-29 07:54:17 +02:00
36 changed files with 1005 additions and 326 deletions
-215
@@ -1,215 +0,0 @@
name: ci
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
env:
  REGISTRY: git.tielogic.xyz
  IMAGE_PREFIX: git.tielogic.xyz/adriano/cerbero-mcp
jobs:
  lint:
    name: ruff lint
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v3
        with:
          version: "latest"
          enable-cache: true
      - name: Install deps
        run: uv sync --frozen --group dev
      - name: Ruff check
        run: uv run ruff check services/
  typecheck:
    name: mypy mcp_common
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v3
        with:
          version: "latest"
          enable-cache: true
      - name: Install deps
        run: uv sync --frozen --group dev
      - name: Mypy on mcp_common (gating)
        run: uv run mypy services/common/src/mcp_common
      - name: Mypy on services (warn-only)
        run: uv run mypy services/ || true
  test:
    name: pytest
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v3
        with:
          version: "latest"
          enable-cache: true
      - name: Install deps
        run: uv sync --frozen --group dev
      - name: Pytest full suite
        run: uv run pytest services/ --tb=short
  validate-config:
    name: validate compose + Caddyfile
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - name: Validate dev compose
        run: docker compose -f docker-compose.yml config -q
      - name: Validate prod compose
        run: docker compose -f docker-compose.prod.yml config -q
        env:
          ACME_EMAIL: test@example.com
          WRITE_ALLOWLIST: "127.0.0.1/32"
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build gateway image (local, no push)
        uses: docker/build-push-action@v6
        with:
          context: ./gateway
          file: gateway/Dockerfile
          tags: cerbero-gateway:validate
          load: true
      - name: Validate Caddyfile syntax
        run: |
          docker run --rm \
            -v "$PWD/gateway/Caddyfile:/etc/caddy/Caddyfile:ro" \
            -e ACME_EMAIL=test@example.com \
            -e WRITE_ALLOWLIST="127.0.0.1/32" \
            cerbero-gateway:validate \
            caddy validate --config /etc/caddy/Caddyfile
  build-and-push:
    name: build & push to registry
    runs-on: ubuntu-22.04
    needs: [lint, test, validate-config]
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    permissions:
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Gitea registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ gitea.actor }}
          password: ${{ secrets.GITEA_TOKEN }}
      - name: Compute short SHA
        id: meta
        run: echo "sha=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
      - name: Build base image (load to local daemon)
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/base.Dockerfile
          tags: cerbero-base:latest
          load: true
          cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:base
          cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:base,mode=max
      - name: Build & push gateway
        uses: docker/build-push-action@v6
        with:
          context: ./gateway
          file: gateway/Dockerfile
          push: true
          tags: |
            ${{ env.IMAGE_PREFIX }}/gateway:latest
            ${{ env.IMAGE_PREFIX }}/gateway:sha-${{ steps.meta.outputs.sha }}
          cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:gateway
          cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:gateway,mode=max
      - name: Build & push mcp-deribit
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/mcp-deribit.Dockerfile
          build-args: BASE_TAG=latest
          push: true
          tags: |
            ${{ env.IMAGE_PREFIX }}/mcp-deribit:latest
            ${{ env.IMAGE_PREFIX }}/mcp-deribit:sha-${{ steps.meta.outputs.sha }}
          cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-deribit
          cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-deribit,mode=max
      - name: Build & push mcp-bybit
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/mcp-bybit.Dockerfile
          build-args: BASE_TAG=latest
          push: true
          tags: |
            ${{ env.IMAGE_PREFIX }}/mcp-bybit:latest
            ${{ env.IMAGE_PREFIX }}/mcp-bybit:sha-${{ steps.meta.outputs.sha }}
          cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-bybit
          cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-bybit,mode=max
      - name: Build & push mcp-hyperliquid
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/mcp-hyperliquid.Dockerfile
          build-args: BASE_TAG=latest
          push: true
          tags: |
            ${{ env.IMAGE_PREFIX }}/mcp-hyperliquid:latest
            ${{ env.IMAGE_PREFIX }}/mcp-hyperliquid:sha-${{ steps.meta.outputs.sha }}
          cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-hyperliquid
          cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-hyperliquid,mode=max
      - name: Build & push mcp-alpaca
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/mcp-alpaca.Dockerfile
          build-args: BASE_TAG=latest
          push: true
          tags: |
            ${{ env.IMAGE_PREFIX }}/mcp-alpaca:latest
            ${{ env.IMAGE_PREFIX }}/mcp-alpaca:sha-${{ steps.meta.outputs.sha }}
          cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-alpaca
          cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-alpaca,mode=max
      - name: Build & push mcp-macro
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/mcp-macro.Dockerfile
          build-args: BASE_TAG=latest
          push: true
          tags: |
            ${{ env.IMAGE_PREFIX }}/mcp-macro:latest
            ${{ env.IMAGE_PREFIX }}/mcp-macro:sha-${{ steps.meta.outputs.sha }}
          cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-macro
          cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-macro,mode=max
      - name: Build & push mcp-sentiment
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/mcp-sentiment.Dockerfile
          build-args: BASE_TAG=latest
          push: true
          tags: |
            ${{ env.IMAGE_PREFIX }}/mcp-sentiment:latest
            ${{ env.IMAGE_PREFIX }}/mcp-sentiment:sha-${{ steps.meta.outputs.sha }}
          cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-sentiment
          cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/buildcache:mcp-sentiment,mode=max
+3
@@ -36,3 +36,6 @@ config/*.env
# MCP config with tokens (only the .example is tracked)
.mcp.json
# Local compose override (machine-specific, fixes for old daemons, etc.)
docker-compose.local.yml
+271 -47
@@ -1,43 +1,256 @@
# Deployment Cerbero_mcp
Operational guide for deploying the MCP suite on a public VPS.
The architecture: Gitea hosts the code + container registry; the production
VPS builds nothing, it pulls the ready-made containers from the registry and
uses Watchtower for automatic version rollover.
The architecture: Gitea hosts the code + container registry; images are
built and pushed from the **dev machine** (laptop) to the registry; the
production VPS builds nothing, it only pulls the ready-made containers
and uses Watchtower for automatic rollover.
```
┌─────────────────────────┐          ┌──────────────────────────────────┐
│ Gitea git.tielogic.xyz  │          │ Production VPS                   │
│                         │          │ cerbero-mcp.tielogic.xyz         │
│ ┌──────────────────┐    │  push    │                                  │
│ │ Cerbero-mcp repo │────┼─CI/CD───▶│ ┌────────────────────────────┐   │
│ └──────────────────┘    │  image   │ │ docker compose             │   │
│ ┌──────────────────┐    │          │ │ (docker-compose.prod.yml)  │   │
│ │ Container reg. ◀─┼────┼─ pull ───┼─┤ gateway, mcp-*             │   │
│ └──────────────────┘    │          │ │ watchtower (poll 5min)     │   │
│ ┌──────────────────┐    │          │ └────────────────────────────┘   │
│ │ Actions runner   │    │          │                                  │
│ └──────────────────┘    │          │                                  │
└─────────────────────────┘          └──────────────────────────────────┘
┌──────────────────────┐      ┌─────────────────────────┐      ┌──────────────────────────────┐
│ Dev laptop           │      │ Gitea git.tielogic.xyz  │      │ Production VPS               │
│                      │      │                         │      │ cerbero-mcp.tielogic.xyz     │
│ build-push.sh ───────┼─────▶│ ┌─────────────────────┐ │      │ ┌──────────────────────────┐ │
│   (8 images)         │ push │ │ Container registry  │◀┼─pull─┼─┤ docker compose           │ │
│ git push ────────────┼─────▶│ └─────────────────────┘ │      │ │ (docker-compose.prod.yml)│ │
│                      │      │ ┌─────────────────────┐ │      │ │ gateway, mcp-*           │ │
│                      │      │ │ Cerbero-mcp repo    │ │      │ │ watchtower (poll 5min)   │ │
│                      │      │ └─────────────────────┘ │      │ └──────────────────────────┘ │
└──────────────────────┘      └─────────────────────────┘      └──────────────────────────────┘
```
## 1. CI/CD pipeline (Gitea Actions)
No CI/CD on Gitea: quality and builds are the laptop's responsibility
before the push (lint/test locally, then `scripts/build-push.sh`).
`.gitea/workflows/ci.yml` runs the following in sequence on every push to `main`:
## 1. Build & push images (from the laptop)
1. **lint** (`ruff check`): gating
2. **typecheck** (`mypy mcp_common`): gating on mcp_common, warn-only on the services
3. **test** (`pytest services/`): gating, 455 tests
4. **build-and-push**: only on pushes to `main`:
   - Logs in to the `git.tielogic.xyz` registry with `secrets.GITEA_TOKEN`
   - Builds `docker/base.Dockerfile` (cached)
   - Builds and pushes `gateway` + the 6 MCP services with tags:
     - `:latest` (moving; Watchtower polls this)
     - `:sha-XXXXXXX` (immutable, for pinpoint rollbacks)
The `scripts/build-push.sh` script builds and pushes the 8 images to the
Gitea registry, replicating the old CI job locally. Prerequisites:
PRs run only lint+typecheck+test, no build/push.
- `docker` + `buildx` on the laptop.
- A Gitea Personal Access Token with `write:package` scope (User Settings
  → Applications → Generate Token).
## 2. Initial VPS setup
```bash
export GITEA_PAT='<PAT_write:package>'
export GITEA_USER=adriano
# All 8 images (base + gateway + 6 mcp-*)
./scripts/build-push.sh
# Only specific ones (e.g. after changing a single service)
./scripts/build-push.sh base mcp-bybit
```
The script:
- Runs `docker login git.tielogic.xyz`.
- Builds with `docker buildx build --push` (local buildx cache on the
  laptop, no registry cache: subsequent builds stay fast without
  weighing on the registry).
- Tags `:latest` + `:sha-<short_HEAD>`.
- For the mcp-* images, passes `BASE_IMAGE`/`BASE_TAG` as build-args so
  they inherit from the freshly pushed `base` image.
Recommended order: build `base` before the `mcp-*` images (the script
does this by default when called without arguments).
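A condensed sketch of the per-image logic described above (illustrative only: the variable names and the dry-run `echo` are assumptions, not the actual `build-push.sh`):

```shell
#!/bin/sh
# Illustrative sketch of build-push.sh's per-image logic (not the real script).
REGISTRY=git.tielogic.xyz
PREFIX="$REGISTRY/adriano/cerbero-mcp"
SHORT_SHA=abc1234          # the real script would use: $(git rev-parse --short HEAD)
SVC=mcp-bybit

# Reuse a persisted docker login if config.json already has auth for the registry.
CONFIG="${DOCKER_CONFIG:-$HOME/.docker}/config.json"
if [ -f "$CONFIG" ] && grep -q "\"$REGISTRY\"" "$CONFIG"; then
  echo "reusing persisted login for $REGISTRY"
else
  echo "would run: docker login $REGISTRY (requires GITEA_PAT)"
fi

# Assemble (dry-run) the buildx call: push both tags, inherit from the pushed base.
CMD="docker buildx build --push \
  -f docker/$SVC.Dockerfile \
  --build-arg BASE_IMAGE=$PREFIX/base --build-arg BASE_TAG=latest \
  -t $PREFIX/$SVC:latest -t $PREFIX/$SVC:sha-$SHORT_SHA ."
echo "$CMD"
```

The dry-run `echo` stands in for actually executing the command; the real script runs buildx directly.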
## 1b. Local quality gate (recommended before pushing)
Before `build-push.sh`, run locally the checks that CI used to run:
```bash
uv run ruff check services/
uv run mypy services/common/src/mcp_common
uv run pytest services/ --tb=short
docker compose -f docker-compose.prod.yml config -q
```
All of them must be green before pushing images to the registry.
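These checks can be wrapped in a tiny gate script; this is an illustrative sketch where the `run` stub only prints and counts the commands (swap it for `"$@"` to execute them for real):

```shell
#!/bin/sh
# Illustrative gate wrapper: set -e stops at the first red check.
set -e
GATES=0
run() { echo "+ $*"; GATES=$((GATES+1)); }   # dry-run stub; use "$@" to actually execute

run uv run ruff check services/
run uv run mypy services/common/src/mcp_common
run uv run pytest services/ --tb=short
run docker compose -f docker-compose.prod.yml config -q
echo "all $GATES gates green"
```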
## 2a. Topology: standalone vs behind-Traefik
Cerbero_mcp supports two deploy topologies:
### Standalone (Caddy terminates TLS directly)
```
Internet ──[443]──► Caddy gateway ──► mcp-* services
(ACME Let's Encrypt)
```
Setup: `docker-compose.prod.yml` alone. Caddy binds host ports 80/443
and obtains certs automatically via ACME. Suitable for a dedicated VPS
with no other services on 80/443.
### Behind-Traefik (Traefik termina TLS)
```
Internet ──[443]──► Traefik ──[traefik network]──► Caddy gateway ──► mcp-* services
(TLS+ACME) (rate-limit, IP allowlist)
```
Setup: `docker-compose.prod.yml` + the `docker-compose.traefik.yml` overlay.
Caddy does not bind on the host; it listens on plain HTTP `:80` inside the
`traefik` network. Traefik handles routing for `Host(cerbero-mcp.tielogic.xyz)`,
TLS, and ACME. Suitable for a VPS shared with other services (Gitea, etc.).
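The overlay's rough shape, assuming standard Traefik v2 label conventions (a sketch, not the actual `docker-compose.traefik.yml`; router and label names here are illustrative):

```yaml
# Sketch only: router/label names are illustrative assumptions.
services:
  gateway:
    ports: []                      # drop the host 80/443 binding
    networks: [internal, traefik]
    labels:
      traefik.enable: "true"
      traefik.http.routers.cerbero.rule: Host(`cerbero-mcp.tielogic.xyz`)
      traefik.http.routers.cerbero.entrypoints: ${TRAEFIK_ENTRYPOINT:-websecure}
      traefik.http.routers.cerbero.tls.certresolver: ${TRAEFIK_CERTRESOLVER:-letsencrypt}
networks:
  traefik:
    external: true
    name: ${TRAEFIK_NETWORK:-gitea_traefik-public}
```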
## 2. Automated deploy (no-clone script)
The fastest path is `scripts/deploy-noclone.sh`, which is idempotent. The
repo is **not** cloned on the VPS: the script downloads via raw HTTP only
the files strictly needed at runtime (compose files, Caddyfile, public
assets). Run on the VPS:
```bash
# Prerequisites
export GITEA_PAT="<PAT with read:package scope>"
export GITEA_USER=adriano
# Create the deploy dir and put the secrets there via scp from a safe place
sudo mkdir -p /docker/cerbero_mcp/secrets
sudo chown -R "$USER" /docker/cerbero_mcp
# scp deribit.json bybit.json hyperliquid.json alpaca.json \
# macro.json sentiment.json core.token observer.token \
# vps:/docker/cerbero_mcp/secrets/
# Behind Traefik (optional, only if the VPS is shared with Gitea or others)
# export BEHIND_TRAEFIK=true
# export TRAEFIK_NETWORK=gitea_traefik-public
curl -sL -o /tmp/deploy-noclone.sh \
https://git.tielogic.xyz/Adriano/Cerbero-mcp/raw/branch/main/scripts/deploy-noclone.sh
chmod +x /tmp/deploy-noclone.sh
/tmp/deploy-noclone.sh
```
The script runs: docker login to the registry → downloads
`docker-compose.prod.yml`, `docker-compose.traefik.yml`,
`gateway/Caddyfile`, `gateway/public/*` into `/docker/cerbero_mcp/` →
chmod 600 on the secrets → generates an initial `.env` (testnet) →
creates `/var/log/cerbero-mcp` with `1000:1000` permissions → pulls the
images from the registry → `docker compose up -d` → public smoke test.
To update later: re-run the same script (it preserves `.env` and the
secrets, and refreshes the config from the updated `main` branch).
**Path overrides**: `DEPLOY_DIR` (default `/docker/cerbero_mcp`),
`SECRETS_SRC` (default `$DEPLOY_DIR/secrets`), `AUDIT_LOG_DIR` (default
`/var/log/cerbero-mcp`).
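The idempotency around `.env` can be pictured like this (illustrative sketch using a temp dir; the real script works in `DEPLOY_DIR` and seeds more variables):

```shell
#!/bin/sh
# Illustrative: generate .env only on the first run, preserve it on re-runs.
DEPLOY_DIR="$(mktemp -d)"   # the real script defaults to /docker/cerbero_mcp

if [ ! -f "$DEPLOY_DIR/.env" ]; then
  # First run: seed safe testnet defaults.
  printf 'BYBIT_TESTNET=true\nDERIBIT_TESTNET=true\n' > "$DEPLOY_DIR/.env"
  echo "seeded new .env (testnet defaults)"
else
  echo "preserving existing .env"
fi
chmod 600 "$DEPLOY_DIR/.env"
```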
**Local compose override (`docker-compose.local.yml`)**: the script
automatically includes `$DEPLOY_DIR/docker-compose.local.yml`, if
present, as the last `-f`. Useful for machine-specific fixes (e.g.
forcing `DOCKER_API_VERSION` on watchtower when the VPS daemon is older
than the expected API). Gitignored by design: it is not downloaded from
the repo, you create it by hand on the VPS. Example:
```yaml
# /docker/cerbero_mcp/docker-compose.local.yml
services:
  watchtower:
    environment:
      DOCKER_API_VERSION: "1.44"
```
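The `-f` assembly described above might look like this (a sketch; only the compose file names come from this guide, the rest is illustrative):

```shell
#!/bin/sh
# Illustrative: build the compose file list; the local override goes last so it wins.
DEPLOY_DIR="${DEPLOY_DIR:-/docker/cerbero_mcp}"
COMPOSE_ARGS="-f docker-compose.prod.yml"

if [ "${BEHIND_TRAEFIK:-false}" = "true" ]; then
  COMPOSE_ARGS="$COMPOSE_ARGS -f docker-compose.traefik.yml"
fi
if [ -f "$DEPLOY_DIR/docker-compose.local.yml" ]; then
  COMPOSE_ARGS="$COMPOSE_ARGS -f $DEPLOY_DIR/docker-compose.local.yml"
fi
echo "docker compose $COMPOSE_ARGS --env-file .env up -d"
```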
### Behind-Traefik mode
If a Traefik is already running on the VPS (e.g. the same VPS as Gitea),
add the following to your `.env` before launching the script:
```bash
BEHIND_TRAEFIK=true
TRAEFIK_NETWORK=gitea_traefik-public  # name of Traefik's external network
TRAEFIK_CERTRESOLVER=letsencrypt      # resolver name in Traefik
TRAEFIK_ENTRYPOINT=websecure          # Traefik HTTPS entrypoint
# Gateway ports are no longer needed (Traefik binds 80/443):
# GATEWAY_HTTP_PORT, GATEWAY_HTTPS_PORT are not used.
```
The script detects `BEHIND_TRAEFIK=true` and uses
`docker compose -f docker-compose.prod.yml -f docker-compose.traefik.yml`.
The Caddy gateway does NOT bind host 80/443; it is exposed via Traefik
with labels for `Host(cerbero-mcp.tielogic.xyz)`.
Verify the Traefik network:
```bash
docker network ls | grep -i traefik
# Typically you will see: gitea_traefik-public, traefik_default, etc.
# Use the EXACT name as TRAEFIK_NETWORK in .env.
```
## 3. Safety: testnet → mainnet switch
`mcp_common.environment.consistency_check` (called at boot by
`run_exchange_main`) PREVENTS accidental switches:
- If the resolved environment is **mainnet** but the corresponding secret
  JSON does not contain an explicit `"environment": "mainnet"`, the boot
  aborts with `EnvironmentMismatchError`.
- If the secret declares an environment different from the resolved one
  (e.g. `creds["environment"]="mainnet"` but the env var sets testnet),
  the boot aborts.
**To switch a specific exchange to mainnet** (e.g. bybit):
1. Edit `secrets/bybit.json`: add `"environment": "mainnet"`.
2. Change `.env`: `BYBIT_TESTNET=false`.
3. `docker compose -f docker-compose.prod.yml --env-file .env restart mcp-bybit`.
Without the explicit flag in the secret, the mcp-bybit container fails at
boot, and Watchtower will NOT update onto versions with broken mainnet creds.
Setting `STRICT_MAINNET=false` in `.env` allows mainnet without the
explicit confirmation (a safety downgrade, discouraged in production).
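A manual pre-flight for step 1, mirroring what `consistency_check` enforces at boot (a sketch: the demo secret is created inline here; in practice you would check `secrets/bybit.json` on the VPS):

```shell
#!/bin/sh
# Demo secret created inline; on the VPS this would be secrets/bybit.json.
SECRET="$(mktemp)"
printf '{"api_key": "xxx", "environment": "mainnet"}\n' > "$SECRET"

# The boot-time check refuses mainnet unless the secret opts in explicitly.
if grep -q '"environment": *"mainnet"' "$SECRET"; then
  VERDICT=ok
  echo "ok: secret explicitly opts in to mainnet"
else
  VERDICT=refuse
  echo 'refusing: add "environment": "mainnet" to the secret first' >&2
fi
```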
## 4. Persistent audit log
All write endpoints (`place_order`, `place_combo_order`, `cancel_*`,
`set_*`, `close_*`, `transfer_*`, `amend_*`, `switch_*`) emit a
structured JSON record on the `mcp.audit` logger.
**Sinks**:
- Container stdout/stderr (always, visible via `docker logs`).
- Persistent JSONL file on a host volume:
  `${AUDIT_LOG_DIR:-/var/log/cerbero-mcp}/<service>.audit.jsonl`.
  Rotation at midnight UTC with `AUDIT_LOG_BACKUP_DAYS` retention
  (default 30 days).
**Example record**:
```json
{
  "audit_event": "write_op",
  "action": "place_order",
  "exchange": "bybit",
  "principal": "core",
  "target": "BTCUSDT",
  "payload": {"side": "Buy", "qty": 0.01, "price": 60000, "leverage": 3},
  "result": {"order_id": "abc123", "status": "submitted"}
}
```
**Operational queries**:
```bash
# Today's entire audit log, live
tail -f /var/log/cerbero-mcp/*.audit.jsonl
# Only place_order on bybit
jq -c 'select(.action=="place_order" and .exchange=="bybit")' \
/var/log/cerbero-mcp/bybit.audit.jsonl
# Errors
jq -c 'select(.error)' /var/log/cerbero-mcp/*.audit.jsonl
# Operations by a given principal
jq -c 'select(.principal=="core")' /var/log/cerbero-mcp/*.audit.jsonl
```
Secrets (api_key, password) are filtered automatically by
`SecretsFilter` before reaching the sink.
## 5. Initial VPS setup (manual, alternative to the script)
**Prerequisites**: Docker Engine ≥ 24, the `docker compose` plugin, SSH access
with sudo, a DNS A record `cerbero-mcp.tielogic.xyz` → VPS IP, ports 80
@@ -55,12 +268,23 @@ echo "$GITEA_PAT" | docker login git.tielogic.xyz -u <gitea-username> --password
Credentials are saved in `~/.docker/config.json`. Watchtower bind-mounts
it read-only to perform authenticated pulls.
### b) Clone the repository (only for the compose files, secrets and Caddyfile)
### b) Create the deploy dir and download the config files
No repo clone is needed on the VPS. The compose files, the
`Caddyfile` and the gateway public assets are enough:
```bash
sudo mkdir -p /opt/cerbero-mcp && sudo chown $USER /opt/cerbero-mcp
cd /opt/cerbero-mcp
git clone ssh://git@git.tielogic.xyz:222/Adriano/Cerbero-mcp.git .
sudo mkdir -p /docker/cerbero_mcp/{secrets,gateway/public}
sudo chown -R "$USER" /docker/cerbero_mcp
cd /docker/cerbero_mcp
BASE=https://git.tielogic.xyz/Adriano/Cerbero-mcp/raw/branch/main
curl -fsSL -o docker-compose.prod.yml $BASE/docker-compose.prod.yml
curl -fsSL -o docker-compose.traefik.yml $BASE/docker-compose.traefik.yml
curl -fsSL -o gateway/Caddyfile $BASE/gateway/Caddyfile
curl -fsSL -o gateway/public/index.html $BASE/gateway/public/index.html
curl -fsSL -o gateway/public/status.js $BASE/gateway/public/status.js
curl -fsSL -o gateway/public/style.css $BASE/gateway/public/style.css
```
The VPS does NOT need to build anything; it uses `docker-compose.prod.yml`, which only does
@@ -79,7 +303,7 @@ chmod 600 secrets/*
### d) `.env` with the runtime configuration
Create `/opt/cerbero-mcp/.env`:
Create `/docker/cerbero_mcp/.env`:
```bash
# Gateway
@@ -113,7 +337,7 @@ docker compose -f docker-compose.prod.yml logs -f gateway
Caddy automatically requests the Let's Encrypt certificate on first
contact at `https://cerbero-mcp.tielogic.xyz`.
## 3. Auto-update via Watchtower
## 6. Auto-update via Watchtower
Watchtower (the `watchtower` service in the compose) polls the registry every
`WATCHTOWER_POLL_INTERVAL` seconds. If it finds a new digest behind the tag
@@ -147,7 +371,7 @@ Remove the `com.centurylinklabs.watchtower.enable=true` label for that
service in the compose (or set it `=false`). Watchtower will ignore it
but keeps updating the others.
## 4. Rollback
## 7. Rollback
```bash
# Find the SHA of the previous version
@@ -162,7 +386,7 @@ docker compose -f docker-compose.prod.yml --env-file .env up -d
Watchtower will NOT downgrade, because the digest of the pinned tag matches
the local one.
## 5. Smoke test post-deploy
## 8. Smoke test post-deploy
```bash
# From outside the VPS (laptop)
@@ -179,7 +403,7 @@ curl -X POST https://cerbero-mcp.tielogic.xyz/mcp-deribit/tools/place_order \
GATEWAY=http://localhost bash tests/smoke/run.sh
```
## 6. VPS security
## 9. VPS security
- `ufw` firewall: `allow 22, 80, 443`. Deny everything else inbound.
- `fail2ban` on SSH and (optionally) on Caddy 401 logs.
@@ -189,7 +413,7 @@ GATEWAY=http://localhost bash tests/smoke/run.sh
- Audit log via `docker compose logs <service> | grep audit_event`; for
  production, better to redirect to syslog or a dedicated service.
## 7. Notes on Traefik / reverse proxy in front of Gitea
## 10. Notes on Traefik / reverse proxy in front of Gitea
Gitea is exposed via Traefik (ROOT_URL `https://git.tielogic.xyz`). For Docker
image pushes the reverse proxy must allow large request bodies (a
@@ -209,16 +433,16 @@ http:
Apply it as a middleware on the Gitea router.
## 8. Updating the compose file itself (YAML)
## 11. Updating the compose file itself (YAML)
Watchtower updates the **images**, not `docker-compose.prod.yml`. If you
change the structure (new services, new env vars) you must:
Watchtower updates the **images**, not `docker-compose.prod.yml` or the
`Caddyfile`. If you change the structure (new services, new env vars,
gateway changes), re-run the no-clone script on the VPS: it re-downloads
the config files from Gitea's `main` branch and applies them:
```bash
cd /opt/cerbero-mcp
git pull
docker compose -f docker-compose.prod.yml --env-file .env up -d
/tmp/deploy-noclone.sh
```
Automating this too would require a cron job or a push-based CD step
(see backlog).
The script is idempotent: it preserves `.env` and `secrets/`, only
updates the config files, then does `pull` + `up -d`.
+26 -6
@@ -53,7 +53,31 @@ OI history. **New**: `get_funding_arb_spread` (compact arb opportunities),
`get_liquidation_heatmap` (heuristic from OI delta + funding extremes),
`get_cointegration_pairs` (Engle-Granger on crypto pairs).
## Local startup
## Build & deploy pipeline
No CI/CD on Gitea: building the 8 images is the dev machine's
responsibility, done by `scripts/build-push.sh`. The flow is:
1. **Local quality gate** (on the laptop, before pushing):
- `uv run ruff check services/`
- `uv run mypy services/common/src/mcp_common`
- `uv run pytest services/`
- `docker compose -f docker-compose.prod.yml config -q`
2. **Build & push** (on the laptop):
```bash
export GITEA_PAT='<PAT_write:package>'
./scripts/build-push.sh                 # all 8 images
./scripts/build-push.sh base mcp-bybit  # only specific ones
```
   Tags `:latest` + `:sha-<short_HEAD>` for pinpoint rollbacks. Buildx
   cache via the registry itself (subsequent runs 5-10× faster).
3. **Auto-rollover on the VPS**: Watchtower polls the registry every 5 min
   and updates the containers when the digest behind the `:latest` tag changes.
See [`DEPLOYMENT.md`](DEPLOYMENT.md) for build & push, no-clone VPS
deploy (`scripts/deploy-noclone.sh`), smoke tests, rollback.
## Avvio locale (dev)
```bash
docker compose up -d
@@ -67,11 +91,7 @@ See `secrets/*.json` and the `*_TESTNET` / `ALPACA_PAPER` variables in

### Deploy to the public VPS (`cerbero-mcp.tielogic.xyz`)

-See [`DEPLOYMENT.md`](DEPLOYMENT.md) for the complete guide: CI/CD pipeline
-(Gitea Actions → registry → Watchtower auto-update), step-by-step VPS setup,
-rollback, post-deploy smoke tests.
+See [`DEPLOYMENT.md`](DEPLOYMENT.md) for the complete end-to-end runbook.

The Caddy gateway is configured for:

- Automatic TLS via Let's Encrypt (requires a DNS A/AAAA record pointing to the
+6
@@ -53,6 +53,8 @@ x-common-security: &common-security
networks: [internal]
labels:
com.centurylinklabs.watchtower.enable: "true"
volumes:
- ${AUDIT_LOG_DIR:-/var/log/cerbero-mcp}:/var/log/cerbero-mcp:rw
x-image-prefix: &image_prefix git.tielogic.xyz/adriano/cerbero-mcp
@@ -103,6 +105,7 @@ services:
OBSERVER_TOKEN_FILE: /run/secrets/observer_token
DERIBIT_TESTNET: "${DERIBIT_TESTNET:-true}"
ROOT_PATH: /mcp-deribit
AUDIT_LOG_FILE: /var/log/cerbero-mcp/deribit.audit.jsonl
mcp-hyperliquid:
image: ${IMAGE_PREFIX:-git.tielogic.xyz/adriano/cerbero-mcp}/mcp-hyperliquid:${IMAGE_TAG:-latest}
@@ -118,6 +121,7 @@ services:
OBSERVER_TOKEN_FILE: /run/secrets/observer_token
HYPERLIQUID_TESTNET: "${HYPERLIQUID_TESTNET:-true}"
ROOT_PATH: /mcp-hyperliquid
AUDIT_LOG_FILE: /var/log/cerbero-mcp/hyperliquid.audit.jsonl
mcp-bybit:
image: ${IMAGE_PREFIX:-git.tielogic.xyz/adriano/cerbero-mcp}/mcp-bybit:${IMAGE_TAG:-latest}
@@ -133,6 +137,7 @@ services:
OBSERVER_TOKEN_FILE: /run/secrets/observer_token
BYBIT_TESTNET: "${BYBIT_TESTNET:-true}"
ROOT_PATH: /mcp-bybit
AUDIT_LOG_FILE: /var/log/cerbero-mcp/bybit.audit.jsonl
PORT: "9019"
mcp-alpaca:
@@ -149,6 +154,7 @@ services:
OBSERVER_TOKEN_FILE: /run/secrets/observer_token
ALPACA_PAPER: "${ALPACA_PAPER:-true}"
ROOT_PATH: /mcp-alpaca
AUDIT_LOG_FILE: /var/log/cerbero-mcp/alpaca.audit.jsonl
PORT: "9020"
mcp-macro:
+60
@@ -0,0 +1,60 @@
# docker-compose.traefik.yml: overlay to integrate Cerbero_mcp with a Traefik
# instance already running on the host (e.g. the same VPS that hosts Gitea).
#
# USAGE:
#   docker compose -f docker-compose.prod.yml -f docker-compose.traefik.yml \
#     --env-file .env up -d
#
# Differences vs standalone docker-compose.prod.yml:
# - The Caddy gateway has NO host port bindings (Traefik is the public entry
#   point on 80/443).
# - The gateway is also attached to the external `traefik` network (override
#   env TRAEFIK_NETWORK if it differs, e.g. `gitea_traefik-public`).
# - Caddy does NOT do auto-TLS; Traefik terminates TLS and handles ACME
#   Let's Encrypt. Caddy listens in plaintext on :80 inside the Docker network.
# - Trusted proxies: Caddy honours the X-Forwarded-For received from Traefik
#   for the `remote_ip` matcher (rate limit + WRITE_ALLOWLIST).
# - Traefik labels on the gateway: Host(`cerbero-mcp.tielogic.xyz`) routing +
#   automatic TLS.
#
# Additional .env variables required:
#   TRAEFIK_NETWORK=gitea_traefik-public  # name of the Traefik network
#   TRAEFIK_CERTRESOLVER=letsencrypt      # resolver name in your Traefik config
#   TRAEFIK_ENTRYPOINT=websecure          # Traefik HTTPS entrypoint
networks:
traefik:
external: true
name: ${TRAEFIK_NETWORK:-gitea_traefik-public}
services:
gateway:
# Override: no host port binding, traffic flows only through Traefik
ports: !reset []
networks:
- internal
- traefik
environment:
ACME_EMAIL: ${ACME_EMAIL:-adrianodalpastro@tielogic.com}
WRITE_ALLOWLIST: ${WRITE_ALLOWLIST:-127.0.0.1/32 ::1/128 172.16.0.0/12}
# Behind-proxy mode: Caddy listens on plain HTTP :80, no auto_https
LISTEN: ":80"
AUTO_HTTPS: "off"
# Traefik is the forwarding proxy; trust private ranges + optionally Traefik's CIDR
TRUSTED_PROXIES: ${TRUSTED_PROXIES:-private_ranges}
labels:
com.centurylinklabs.watchtower.enable: "true"
traefik.enable: "true"
traefik.docker.network: ${TRAEFIK_NETWORK:-gitea_traefik-public}
traefik.http.routers.cerbero-mcp.rule: "Host(`cerbero-mcp.tielogic.xyz`)"
traefik.http.routers.cerbero-mcp.entrypoints: ${TRAEFIK_ENTRYPOINT:-websecure}
traefik.http.routers.cerbero-mcp.tls: "true"
traefik.http.routers.cerbero-mcp.tls.certresolver: ${TRAEFIK_CERTRESOLVER:-letsencrypt}
traefik.http.services.cerbero-mcp.loadbalancer.server.port: "80"
# Security headers at the Traefik level (redundant with Caddy but useful if
# Caddy is ever removed). Comment out if you don't want the duplication.
traefik.http.routers.cerbero-mcp.middlewares: cerbero-mcp-secheaders@docker
traefik.http.middlewares.cerbero-mcp-secheaders.headers.stsSeconds: "31536000"
traefik.http.middlewares.cerbero-mcp-secheaders.headers.stsIncludeSubdomains: "true"
traefik.http.middlewares.cerbero-mcp-secheaders.headers.contentTypeNosniff: "true"
traefik.http.middlewares.cerbero-mcp-secheaders.headers.referrerPolicy: "no-referrer"
+2 -1
@@ -1,6 +1,7 @@
ARG BASE_IMAGE=cerbero-base
ARG BASE_TAG=latest
-FROM cerbero-base:${BASE_TAG} AS builder
+FROM ${BASE_IMAGE}:${BASE_TAG} AS builder
COPY services/mcp-alpaca ./services/mcp-alpaca
RUN uv sync --frozen --no-dev --package mcp-alpaca
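The new `BASE_IMAGE` build-arg is what lets `scripts/build-push.sh` (shown later on this page) point these Dockerfiles at the registry copy of the base image instead of a local `cerbero-base`. A dry sketch of the resulting command, which only assembles and prints the string without building anything:

```shell
# Dry sketch: assemble the buildx command build-push.sh would run for an
# mcp-* image (nothing is built or pushed here).
IMAGE_PREFIX="git.tielogic.xyz/adriano/cerbero-mcp"
name="mcp-alpaca"
args=(docker buildx build --push
  -f "docker/${name}.Dockerfile"
  --build-arg "BASE_IMAGE=$IMAGE_PREFIX/base"
  --build-arg "BASE_TAG=latest"
  -t "$IMAGE_PREFIX/$name:latest"
  .)
echo "${args[*]}"
```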
+2 -1
@@ -1,6 +1,7 @@
ARG BASE_IMAGE=cerbero-base
ARG BASE_TAG=latest
-FROM cerbero-base:${BASE_TAG} AS builder
+FROM ${BASE_IMAGE}:${BASE_TAG} AS builder
COPY services/mcp-bybit ./services/mcp-bybit
RUN uv sync --frozen --no-dev --package mcp-bybit
+2 -1
@@ -1,8 +1,9 @@
# CER-P5-012 multi-stage slim: builder from cerbero-base (with uv + toolchain),
# runtime from python:3.11-slim (only venv + source).
ARG BASE_IMAGE=cerbero-base
ARG BASE_TAG=latest
-FROM cerbero-base:${BASE_TAG} AS builder
+FROM ${BASE_IMAGE}:${BASE_TAG} AS builder
COPY services/mcp-deribit ./services/mcp-deribit
RUN uv sync --frozen --no-dev --package mcp-deribit
+2 -1
@@ -1,6 +1,7 @@
ARG BASE_IMAGE=cerbero-base
ARG BASE_TAG=latest
-FROM cerbero-base:${BASE_TAG} AS builder
+FROM ${BASE_IMAGE}:${BASE_TAG} AS builder
COPY services/mcp-hyperliquid ./services/mcp-hyperliquid
RUN uv sync --frozen --no-dev --package mcp-hyperliquid
+2 -1
@@ -1,6 +1,7 @@
ARG BASE_IMAGE=cerbero-base
ARG BASE_TAG=latest
-FROM cerbero-base:${BASE_TAG} AS builder
+FROM ${BASE_IMAGE}:${BASE_TAG} AS builder
COPY services/mcp-macro ./services/mcp-macro
RUN uv sync --frozen --no-dev --package mcp-macro
+2 -1
@@ -1,6 +1,7 @@
ARG BASE_IMAGE=cerbero-base
ARG BASE_TAG=latest
-FROM cerbero-base:${BASE_TAG} AS builder
+FROM ${BASE_IMAGE}:${BASE_TAG} AS builder
COPY services/mcp-sentiment ./services/mcp-sentiment
RUN uv sync --frozen --no-dev --package mcp-sentiment
+8 -1
@@ -1,12 +1,19 @@
{
admin off
email {$ACME_EMAIL:adrianodalpastro@tielogic.com}
auto_https {$AUTO_HTTPS:on}
# Plugin mholt/caddy-ratelimit
order rate_limit before basicauth
# Trusted proxies: honour X-Forwarded-For when behind a reverse proxy
# (e.g. Traefik). Default = private ranges only.
servers {
trusted_proxies static {$TRUSTED_PROXIES:private_ranges}
}
}
-cerbero-mcp.tielogic.xyz {
+{$LISTEN:cerbero-mcp.tielogic.xyz} {
log {
output stdout
format json
+1 -1
@@ -15,7 +15,7 @@ target-version = "py313"
[tool.ruff.lint]
select = ["E", "F", "I", "W", "UP", "B", "SIM"]
-ignore = ["E501"]
+ignore = ["E501", "E741"]
[tool.ruff.lint.flake8-bugbear]
extend-immutable-calls = [
+90
@@ -0,0 +1,90 @@
#!/usr/bin/env bash
# Cerbero_mcp: build & push images to the Gitea registry from the local machine.
#
# Replaces the CI job `build-and-push` from .gitea/workflows/ci.yml.
# Use it after `git push` (or without, if you want to push a "dirty" build).
# Watchtower on the VPS pulls automatically within WATCHTOWER_POLL_INTERVAL.
#
# Prerequisites:
#   - docker + buildx
#   - Gitea PAT with scope `write:package` in env $GITEA_PAT
#   - $GITEA_USER (default: adriano)
#
# Usage:
#   ./scripts/build-push.sh               # all images
#   ./scripts/build-push.sh base gateway  # only specific ones
set -euo pipefail
REGISTRY="${REGISTRY:-git.tielogic.xyz}"
IMAGE_PREFIX="${IMAGE_PREFIX:-$REGISTRY/adriano/cerbero-mcp}"
GITEA_USER="${GITEA_USER:-adriano}"
SHA="$(git rev-parse --short HEAD)"
# Build order: base first (parent of the mcp-* images), then the rest.
ALL_TARGETS=(base gateway mcp-deribit mcp-bybit mcp-hyperliquid mcp-alpaca mcp-macro mcp-sentiment)
TARGETS=("${@:-${ALL_TARGETS[@]}}")
command -v docker >/dev/null || { echo "FATAL: docker not installed"; exit 1; }
docker buildx version >/dev/null || { echo "FATAL: docker buildx not available"; exit 1; }

# Log in only if not already authenticated to the registry. For the first login:
#   echo "<PAT>" | docker login $REGISTRY -u $GITEA_USER --password-stdin
if grep -q "\"$REGISTRY\"" ~/.docker/config.json 2>/dev/null; then
  echo "=== docker already logged in to $REGISTRY (skipping login) ==="
elif [ -n "${GITEA_PAT:-}" ]; then
  echo "=== docker login $REGISTRY ==="
  echo "$GITEA_PAT" | docker login "$REGISTRY" -u "$GITEA_USER" --password-stdin
else
  echo "FATAL: not authenticated to $REGISTRY and GITEA_PAT is not set."
  echo "       Run once: docker login $REGISTRY -u $GITEA_USER"
  exit 1
fi
build_one() {
local name="$1"
local context file
case "$name" in
base)
context="."; file="docker/base.Dockerfile" ;;
gateway)
context="./gateway"; file="gateway/Dockerfile" ;;
mcp-*)
context="."; file="docker/${name}.Dockerfile" ;;
    *)
      echo "FATAL: unknown target '$name'"; exit 1 ;;
  esac

  if [ ! -f "$file" ]; then
    echo "FATAL: Dockerfile not found: $file"; exit 1
  fi
local tag_latest="$IMAGE_PREFIX/$name:latest"
local tag_sha="$IMAGE_PREFIX/$name:sha-$SHA"
echo "=== [$name] build & push ==="
local args=(buildx build --push
-f "$file"
-t "$tag_latest"
-t "$tag_sha"
)
if [[ "$name" == mcp-* ]]; then
args+=(--build-arg "BASE_IMAGE=$IMAGE_PREFIX/base"
--build-arg "BASE_TAG=latest")
fi
args+=("$context")
docker "${args[@]}"
echo " pushed: $tag_latest"
echo " pushed: $tag_sha"
}
for t in "${TARGETS[@]}"; do
build_one "$t"
done
echo
echo "=== Everything pushed (commit $SHA) ==="
echo "Watchtower on the VPS will pull within WATCHTOWER_POLL_INTERVAL (default 5min)."
echo "To force it immediately:"
echo "  ssh <vps> 'cd /docker/cerbero_mcp && docker compose -f docker-compose.prod.yml pull && docker compose -f docker-compose.prod.yml up -d'"
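Since every push above also tags `:sha-<short_HEAD>`, a pinpoint rollback on the VPS can be sketched as below. The tag value is illustrative, not a real build; the sketch only prints the command, relying on the fact that a shell-exported variable overrides the same name from `--env-file` in compose interpolation:

```shell
# Hypothetical rollback sketch: pin a previous build via its immutable sha tag.
ROLLBACK_TAG="sha-7fa269d"   # illustrative; pick a tag that exists in the registry
echo "IMAGE_TAG=$ROLLBACK_TAG docker compose -f docker-compose.prod.yml --env-file .env up -d"
```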
+202
@@ -0,0 +1,202 @@
#!/usr/bin/env bash
# Cerbero_mcp: deploy script for the production VPS.
#
# The repo is NOT cloned on the VPS: the script downloads only the files
# strictly needed at runtime (compose, Caddyfile, public assets) via raw
# HTTP from Gitea. Images are pulled pre-built from the Gitea registry
# (built on the dev laptop with scripts/build-push.sh).
#
# Prerequisites on the VPS (NOT handled by this script):
#   1. Docker Engine ≥ 24 + docker compose plugin installed.
#   2. DNS A record `cerbero-mcp.tielogic.xyz` → VPS IP (warn-only).
#   3. Ports 80 and 443 open on the firewall (for ACME + HTTPS traffic).
#   4. Gitea PAT with scope `read:package`, stored in env `$GITEA_PAT`.
#   5. Gitea username in env `$GITEA_USER` (default: adriano).
#   6. Exchange secret JSONs + bearer tokens available in $SECRETS_SRC
#      (default: $DEPLOY_DIR/secrets/); the script copies them into
#      $DEPLOY_DIR/secrets/ with 600 permissions (skipped if SECRETS_SRC == DEPLOY_DIR/secrets).
#
# Idempotent: re-runnable for updates (re-downloads the config files from
# the current branch, does NOT touch an existing .env).
set -euo pipefail
DEPLOY_DIR="${DEPLOY_DIR:-/docker/cerbero_mcp}"
SECRETS_SRC="${SECRETS_SRC:-$DEPLOY_DIR/secrets}"
GITEA_USER="${GITEA_USER:-adriano}"
GITEA_RAW_BASE="${GITEA_RAW_BASE:-https://git.tielogic.xyz/Adriano/Cerbero-mcp/raw/branch/main}"
REGISTRY="${REGISTRY:-git.tielogic.xyz}"
DOMAIN="${DOMAIN:-cerbero-mcp.tielogic.xyz}"
AUDIT_LOG_DIR="${AUDIT_LOG_DIR:-/var/log/cerbero-mcp}"
echo "=== Cerbero_mcp deploy (no-clone) → $DEPLOY_DIR (domain $DOMAIN) ==="

# ──────────────────────────────────────────────────────────────
# 1. Check prerequisites
# ──────────────────────────────────────────────────────────────
command -v docker >/dev/null || { echo "FATAL: docker not installed"; exit 1; }
command -v curl >/dev/null || { echo "FATAL: curl not installed"; exit 1; }
docker compose version >/dev/null || { echo "FATAL: docker compose plugin missing"; exit 1; }

if [ -z "${GITEA_PAT:-}" ]; then
  echo "FATAL: env GITEA_PAT not set. Export a PAT with scope read:package first."
  exit 1
fi

# DNS resolution check (warning only, non-blocking)
ip_resolved=$(getent hosts "$DOMAIN" | awk '{print $1}' | head -1 || true)
if [ -z "$ip_resolved" ]; then
  echo "WARN: $DOMAIN does not resolve via DNS: Let's Encrypt TLS will fail until DNS propagates."
else
  echo "DNS $DOMAIN → $ip_resolved"
fi
fi
# ──────────────────────────────────────────────────────────────
# 2. Log in to the container registry
# ──────────────────────────────────────────────────────────────
echo "=== docker login $REGISTRY ==="
echo "$GITEA_PAT" | docker login "$REGISTRY" -u "$GITEA_USER" --password-stdin

# ──────────────────────────────────────────────────────────────
# 3. Set up dirs + download the config files from the repo (no clone)
# ──────────────────────────────────────────────────────────────
sudo mkdir -p "$DEPLOY_DIR"
sudo chown "$USER:$USER" "$DEPLOY_DIR"
mkdir -p "$DEPLOY_DIR/secrets" "$DEPLOY_DIR/gateway/public"

# Config files needed at runtime, downloaded raw from Gitea.
# Idempotent: always re-fetches the version on main.
download() {
  local rel="$1"
  local dst="$DEPLOY_DIR/$rel"
  echo "  fetch: $rel"
  curl -fsSL -o "$dst" "$GITEA_RAW_BASE/$rel" \
    || { echo "FATAL: download of $rel failed"; exit 1; }
}

echo "=== Downloading config from $GITEA_RAW_BASE ==="
download docker-compose.prod.yml
download docker-compose.traefik.yml
download gateway/Caddyfile
download gateway/public/index.html
download gateway/public/status.js
download gateway/public/style.css
cd "$DEPLOY_DIR"
# ──────────────────────────────────────────────────────────────
# 4. Copy secrets with 600 permissions
# ──────────────────────────────────────────────────────────────
if [ "$(realpath "$SECRETS_SRC")" != "$(realpath "$DEPLOY_DIR/secrets")" ]; then
  if [ ! -d "$SECRETS_SRC" ]; then
    echo "FATAL: secrets src dir $SECRETS_SRC does not exist."
    echo "       Expected to contain: deribit.json bybit.json hyperliquid.json alpaca.json"
    echo "       macro.json sentiment.json core.token observer.token"
    exit 1
  fi
  echo "=== Copying secrets from $SECRETS_SRC ==="
  for f in deribit.json bybit.json hyperliquid.json alpaca.json macro.json sentiment.json core.token observer.token; do
    if [ -f "$SECRETS_SRC/$f" ]; then
      cp "$SECRETS_SRC/$f" "secrets/$f"
      chmod 600 "secrets/$f"
      echo "  ok: secrets/$f"
    else
      echo "  WARN: $SECRETS_SRC/$f missing: the corresponding service will fail at boot."
    fi
  done
else
  echo "=== Secrets already in $DEPLOY_DIR/secrets: chmod 600 only ==="
  for f in deribit.json bybit.json hyperliquid.json alpaca.json macro.json sentiment.json core.token observer.token; do
    [ -f "secrets/$f" ] && chmod 600 "secrets/$f" && echo "  ok: secrets/$f" \
      || echo "  WARN: secrets/$f missing: the corresponding service will fail at boot."
  done
fi
# ──────────────────────────────────────────────────────────────
# 5. Create/update .env (preserve an existing one)
# ──────────────────────────────────────────────────────────────
if [ ! -f .env ]; then
  echo "=== Creating initial .env (testnet by default) ==="
  cat > .env <<EOF
# Cerbero_mcp deploy config: edit to switch to mainnet
ACME_EMAIL=adrianodalpastro@tielogic.com
GATEWAY_HTTP_PORT=80
GATEWAY_HTTPS_PORT=443
WRITE_ALLOWLIST="127.0.0.1/32 ::1/128 172.16.0.0/12"
IMAGE_TAG=latest
IMAGE_PREFIX=git.tielogic.xyz/adriano/cerbero-mcp
# Exchange environment (true=testnet, false=mainnet).
# IMPORTANT: for mainnet also add "environment":"mainnet" to the matching
# secret JSON, otherwise boot aborts for safety (see consistency_check).
DERIBIT_TESTNET=true
BYBIT_TESTNET=true
HYPERLIQUID_TESTNET=true
ALPACA_PAPER=true
# Set to false to allow mainnet without an explicit creds["environment"]="mainnet" (discouraged).
STRICT_MAINNET=true
# Persistent audit log for write endpoints (place_order, cancel, etc.).
AUDIT_LOG_DIR=$AUDIT_LOG_DIR
# Watchtower auto-update polling (sec).
WATCHTOWER_POLL_INTERVAL=300
EOF
  echo "  $DEPLOY_DIR/.env created. Review it before the first up."
else
  echo "=== Pre-existing .env: not overwritten ==="
fi
# ──────────────────────────────────────────────────────────────
# 6. Host audit log dir (bind volume)
# ──────────────────────────────────────────────────────────────
sudo mkdir -p "$AUDIT_LOG_DIR"
sudo chown 1000:1000 "$AUDIT_LOG_DIR"
echo "Audit log dir: $AUDIT_LOG_DIR (chown 1000:1000)"

# ──────────────────────────────────────────────────────────────
# 7. Pull images + up
# ──────────────────────────────────────────────────────────────
COMPOSE_FILES=("-f" "docker-compose.prod.yml")
if [ "${BEHIND_TRAEFIK:-false}" = "true" ]; then
  echo "=== Behind-traefik mode enabled (network ${TRAEFIK_NETWORK:-gitea_traefik-public}) ==="
  COMPOSE_FILES+=("-f" "docker-compose.traefik.yml")
fi

# Machine-specific local override (e.g. a watchtower DOCKER_API_VERSION fix).
# Not versioned (in .gitignore), created by hand on the VPS when needed.
if [ -f "docker-compose.local.yml" ]; then
  echo "=== Local override detected: docker-compose.local.yml ==="
  COMPOSE_FILES+=("-f" "docker-compose.local.yml")
fi

echo "=== docker compose pull + up ==="
docker compose "${COMPOSE_FILES[@]}" --env-file .env pull
docker compose "${COMPOSE_FILES[@]}" --env-file .env up -d
# ──────────────────────────────────────────────────────────────
# 8. Check status
# ──────────────────────────────────────────────────────────────
sleep 5
echo "=== Container status ==="
docker compose "${COMPOSE_FILES[@]}" --env-file .env ps

echo
echo "=== Smoke test (health check via the public gateway) ==="
sleep 10
if curl -sf -o /dev/null -m 10 "https://$DOMAIN/mcp-macro/health"; then
  echo "  OK: https://$DOMAIN/mcp-macro/health → 200"
else
  echo "  WARN: https://$DOMAIN/mcp-macro/health not responding (DNS or cert not ready yet?)"
  echo "        Retry in 30s or check: docker compose -f docker-compose.prod.yml logs gateway"
fi

echo
echo "=== Deploy complete ==="
echo "Useful commands (compose files: ${COMPOSE_FILES[*]}):"
echo "  Logs:    docker compose ${COMPOSE_FILES[*]} --env-file .env logs -f <service>"
echo "  Audit:   tail -f $AUDIT_LOG_DIR/*.audit.jsonl"
echo "  Restart: docker compose ${COMPOSE_FILES[*]} --env-file .env restart <service>"
echo "  Stop:    docker compose ${COMPOSE_FILES[*]} --env-file .env down"
echo "  Update:  re-run this script (re-downloads config + pulls images)"
+10 -1
@@ -24,7 +24,11 @@ import uvicorn
from mcp_common.auth import load_token_store_from_files
from mcp_common.env_validation import fail_fast_if_missing, require_env, summarize
from mcp_common.environment import EnvironmentInfo, resolve_environment
from mcp_common.environment import (
EnvironmentInfo,
consistency_check,
resolve_environment,
)
from mcp_common.logging import configure_root_logging
@@ -69,6 +73,11 @@ def run_exchange_main(spec: ExchangeAppSpec) -> None:
default_base_url_testnet=spec.default_base_url_testnet,
)
# Safety: prevents accidental switches to mainnet without explicit confirmation
# in the secret. Raises EnvironmentMismatchError → boot aborts on mismatch.
strict_mainnet = os.environ.get("STRICT_MAINNET", "true").lower() not in ("0", "false", "no")
consistency_check(env_info, creds, strict_mainnet=strict_mainnet)
client = spec.build_client(creds, env_info)
token_store = load_token_store_from_files(
+53 -6
@@ -1,22 +1,68 @@
"""Structured audit log for MCP write endpoints (place_order, cancel,
set_*, close_*, transfer_*). Uses a dedicated logger `mcp.audit` on a JSON
-stream: in deployment it can be redirected to a separate file/syslog/SIEM.
+stream.

Logic:
- `audit_write_op(principal, action, exchange, target, payload, result)`
  emits ONE JSON record per operation, with its outcome (ok/error).
-- Sensitive payload (api_key, secret) already filtered by the SecretsFilter

+Sinks:
+- stdout/stderr (always): via the root JSON logger configured by
+  `mcp_common.logging.configure_root_logging`.
+- Persistent JSONL file (optional): if the env var `AUDIT_LOG_FILE` is
+  set, adds a `TimedRotatingFileHandler` that rotates at midnight with
+  `AUDIT_LOG_BACKUP_DAYS` of retention (default 30). One JSON line per
+  record (`.jsonl` format).
+
+For the production VPS: set `AUDIT_LOG_FILE=/var/log/cerbero-mcp/<service>.audit.jsonl`
+with a bind mount of the `/var/log/cerbero-mcp` volume in the docker-compose.
+
+Sensitive payload (api_key, secret) is already filtered by the global
+SecretsFilter; creds are not included here.
"""
from __future__ import annotations
import logging
import os
from logging.handlers import TimedRotatingFileHandler
from typing import Any
from mcp_common.auth import Principal
-from mcp_common.logging import get_json_logger
+from mcp_common.logging import SecretsFilter, get_json_logger
try:
from pythonjsonlogger.json import JsonFormatter as _JsonFormatter # noqa: N813
except ImportError:
from pythonjsonlogger.jsonlogger import JsonFormatter as _JsonFormatter # noqa: N813
_logger = get_json_logger("mcp.audit", level=logging.INFO)
_file_handler_attached = False
def _configure_audit_sink() -> None:
"""Adds a FileHandler to the mcp.audit logger if AUDIT_LOG_FILE is set.

Idempotent: called the first time by audit_write_op, then a no-op.
"""
global _file_handler_attached
if _file_handler_attached:
return
file_path = os.environ.get("AUDIT_LOG_FILE", "").strip()
if not file_path:
_file_handler_attached = True
return
backup_days = int(os.environ.get("AUDIT_LOG_BACKUP_DAYS", "30"))
os.makedirs(os.path.dirname(file_path) or ".", exist_ok=True)
handler = TimedRotatingFileHandler(
file_path,
when="midnight",
interval=1,
backupCount=backup_days,
encoding="utf-8",
utc=True,
)
handler.setFormatter(_JsonFormatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
handler.addFilter(SecretsFilter())
_logger.addHandler(handler)
_file_handler_attached = True
def audit_write_op(
@@ -40,6 +86,7 @@ def audit_write_op(
result: client output (order_id, status, etc.).
error: error string if the operation failed.
"""
_configure_audit_sink()
record: dict[str, Any] = {
"audit_event": "write_op",
"action": action,
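The file sink described in the docstring above can be exercised with a standalone sketch. This is an illustration under assumptions, not the module's code: the python-json-logger formatter is replaced by plain `json.dumps` to stay dependency-free, and the logger name is hypothetical:

```python
import json
import logging
import os
import tempfile
from logging.handlers import TimedRotatingFileHandler

# Minimal sketch of an AUDIT_LOG_FILE-style sink: a midnight-rotating JSONL
# file fed by a dedicated logger (names are illustrative).
audit_file = os.path.join(tempfile.mkdtemp(), "demo.audit.jsonl")
logger = logging.getLogger("mcp.audit.demo")
logger.setLevel(logging.INFO)
handler = TimedRotatingFileHandler(
    audit_file, when="midnight", backupCount=30, encoding="utf-8", utc=True
)
logger.addHandler(handler)

# One JSON line per record, as in the .jsonl format described above.
logger.info(json.dumps({"audit_event": "write_op", "action": "place_order"}))
handler.flush()

record = json.loads(open(audit_file, encoding="utf-8").read().splitlines()[-1])
print(record["action"])  # prints "place_order"
```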
@@ -1,18 +1,31 @@
"""Environment resolver (testnet/mainnet) for MCP exchanges.

Precedence: env var > secret field > default True (testnet).

Safety: `consistency_check` prevents accidental switches to mainnet without
explicit confirmation in the secret JSON.
"""
from __future__ import annotations
import logging
import os
from dataclasses import dataclass
from typing import Literal
logger = logging.getLogger(__name__)
Environment = Literal["testnet", "mainnet"]
Source = Literal["env", "credentials", "default"]
TRUTHY = {"1", "true", "yes", "on"}
# Tokens in the base_url that indicate a testnet endpoint (case-insensitive).
TESTNET_URL_HINTS = ("test", "testnet", "paper")
class EnvironmentMismatchError(RuntimeError):
"""Boot abort: the resolved environment does not match the explicit confirmation in the secret."""
@dataclass(frozen=True)
class EnvironmentInfo:
@@ -67,3 +80,59 @@ def resolve_environment(
env_value=env_value,
base_url=base_url,
)
def consistency_check(
env_info: EnvironmentInfo,
creds: dict,
*,
strict_mainnet: bool = True,
) -> list[str]:
"""Checks consistency of the resolved environment vs the secret JSON.
Returns a list of warning strings. Raises EnvironmentMismatchError for
blocking mismatches.

Rules:
- If `creds["environment"]` is present and DIFFERENT from `env_info.environment`:
  → raise EnvironmentMismatchError (declared vs resolved mismatch).
- If `env_info.environment == "mainnet"` and `creds.get("environment") !=
  "mainnet"`: with `strict_mainnet=True` → raise (explicit confirmation
  required). With `strict_mainnet=False` → warning.
- If `env_info.base_url` contains a testnet token ("test", "testnet",
  "paper") but `env_info.environment == "mainnet"` (or vice versa): warning
  (URL/environment inconsistent).
"""
warnings: list[str] = []
declared = creds.get("environment")
if declared and declared != env_info.environment:
raise EnvironmentMismatchError(
f"{env_info.exchange}: secret declared environment={declared!r} "
f"but resolver resolved environment={env_info.environment!r}"
)
if env_info.environment == "mainnet" and declared != "mainnet":
msg = (
f"{env_info.exchange}: resolved mainnet without explicit confirmation "
"in secret. Add `\"environment\": \"mainnet\"` to the credentials JSON."
)
if strict_mainnet:
raise EnvironmentMismatchError(msg)
warnings.append(msg)
url_lower = (env_info.base_url or "").lower()
has_test_hint = any(token in url_lower for token in TESTNET_URL_HINTS)
if env_info.environment == "mainnet" and has_test_hint:
warnings.append(
f"{env_info.exchange}: environment=mainnet but base_url contains "
f"testnet hint ({env_info.base_url!r})"
)
if env_info.environment == "testnet" and not has_test_hint and url_lower:
warnings.append(
f"{env_info.exchange}: environment=testnet but base_url does not "
f"appear to be a testnet endpoint ({env_info.base_url!r})"
)
for w in warnings:
logger.warning("environment consistency: %s", w)
return warnings
+2 -3
@@ -21,6 +21,7 @@ Claude Code config example:
"""
from __future__ import annotations
import contextlib
from typing import Any
import httpx
@@ -63,10 +64,8 @@ def _derive_input_schemas(app: FastAPI, tool_names: list[str]) -> dict[str, dict
if pname == "return":
continue
if isinstance(ann, type) and issubclass(ann, BaseModel):
-            try:
+            with contextlib.suppress(Exception):
                 out[name] = ann.model_json_schema()
-            except Exception:
-                pass
break
return out
@@ -36,10 +36,7 @@ def orderbook_imbalance(
ask_vol = sum(q for _, q in top_asks)
total = bid_vol + ask_vol
-    if total == 0:
-        ratio = None
-    else:
-        ratio = (bid_vol - ask_vol) / total
+    ratio = None if total == 0 else (bid_vol - ask_vol) / total
# Microprice: best bid, best ask. Weighted by opposite-side size.
microprice = None
+33 -1
@@ -3,6 +3,7 @@ from __future__ import annotations
import json
from unittest.mock import MagicMock, patch
import pytest
from mcp_common.app_factory import ExchangeAppSpec, run_exchange_main
from mcp_common.environment import EnvironmentInfo
@@ -75,7 +76,9 @@ def test_run_exchange_main_uses_default_port(tmp_path, monkeypatch):
def test_run_exchange_main_env_var_overrides_creds(tmp_path, monkeypatch):
creds_file = tmp_path / "creds.json"
-    creds_file.write_text(json.dumps({"testnet": True}))
+    # `environment: mainnet` is explicit because the env var override → mainnet
+    # and consistency_check requires confirmation to avoid an accidental switch.
+    creds_file.write_text(json.dumps({"testnet": True, "environment": "mainnet"}))
monkeypatch.setenv("TESTEX_CREDENTIALS_FILE", str(creds_file))
monkeypatch.setenv("TESTEX_TESTNET", "false")
@@ -95,6 +98,35 @@ def test_run_exchange_main_env_var_overrides_creds(tmp_path, monkeypatch):
assert captured["env_info"].source == "env"
def test_run_exchange_main_aborts_on_mainnet_without_confirmation(tmp_path, monkeypatch):
"""Mainnet without creds['environment']='mainnet' → fail-fast boot abort."""
from mcp_common.environment import EnvironmentMismatchError
creds_file = tmp_path / "creds.json"
creds_file.write_text(json.dumps({"testnet": False}))
monkeypatch.setenv("TESTEX_CREDENTIALS_FILE", str(creds_file))
monkeypatch.delenv("TESTEX_TESTNET", raising=False)
monkeypatch.delenv("STRICT_MAINNET", raising=False)
spec = _make_spec()
with (
pytest.raises(EnvironmentMismatchError),
patch("mcp_common.app_factory.uvicorn.run"),
):
run_exchange_main(spec)
def test_run_exchange_main_strict_mainnet_disabled_via_env(tmp_path, monkeypatch):
"""STRICT_MAINNET=false allows mainnet without confirmation (warning only)."""
creds_file = tmp_path / "creds.json"
creds_file.write_text(json.dumps({"testnet": False}))
monkeypatch.setenv("TESTEX_CREDENTIALS_FILE", str(creds_file))
monkeypatch.setenv("STRICT_MAINNET", "false")
spec = _make_spec()
with patch("mcp_common.app_factory.uvicorn.run"):
run_exchange_main(spec)  # does not raise
def test_run_exchange_main_missing_creds_file_exits(monkeypatch):
monkeypatch.delenv("TESTEX_CREDENTIALS_FILE", raising=False)
+58
@@ -95,3 +95,61 @@ def test_audit_write_op_no_principal(captured_records):
)
rec = captured_records[0]
assert rec.principal is None
def test_audit_write_op_writes_to_file_when_AUDIT_LOG_FILE_set(tmp_path, monkeypatch):
"""With env AUDIT_LOG_FILE set, a JSON line appears in the file."""
import json
from mcp_common import audit as audit_mod
audit_file = tmp_path / "audit.jsonl"
monkeypatch.setenv("AUDIT_LOG_FILE", str(audit_file))
# Reset the idempotency flag so the test re-runs the setup
audit_mod._file_handler_attached = False
# Remove pre-existing handlers from the logger (it may hold an old file)
for h in list(audit_mod._logger.handlers):
from logging.handlers import TimedRotatingFileHandler
if isinstance(h, TimedRotatingFileHandler):
audit_mod._logger.removeHandler(h)
audit_write_op(
principal=Principal("core", {"core"}),
action="place_order",
exchange="bybit",
target="BTCUSDT",
payload={"side": "Buy", "qty": 0.01},
result={"order_id": "abc123", "status": "submitted"},
)
# Force a flush of the file handlers
for h in audit_mod._logger.handlers:
h.flush()
assert audit_file.exists()
content = audit_file.read_text().strip()
assert content, "audit file empty"
record = json.loads(content.splitlines()[-1])
assert record["audit_event"] == "write_op"
assert record["action"] == "place_order"
assert record["exchange"] == "bybit"
assert record["target"] == "BTCUSDT"
assert record["principal"] == "core"
def test_audit_no_file_when_env_unset(tmp_path, monkeypatch):
"""Without AUDIT_LOG_FILE, no file is created."""
from mcp_common import audit as audit_mod
monkeypatch.delenv("AUDIT_LOG_FILE", raising=False)
audit_mod._file_handler_attached = False
audit_write_op(
principal=Principal("core", {"core"}),
action="cancel_order",
exchange="bybit",
target="ord-1",
payload={},
)
# No file created in tmp_path
files = list(tmp_path.iterdir())
assert files == []
+74 -1
@@ -1,7 +1,12 @@
from __future__ import annotations
import pytest
from mcp_common.environment import resolve_environment
from mcp_common.environment import (
EnvironmentInfo,
EnvironmentMismatchError,
consistency_check,
resolve_environment,
)
def test_env_var_overrides_secret(monkeypatch):
@@ -114,3 +119,71 @@ def test_alpaca_paper_flag_key(monkeypatch):
)
assert info.environment == "mainnet"
assert info.source == "credentials"
# ───────── consistency_check ─────────
def _info(env: str, exchange: str = "deribit") -> EnvironmentInfo:
"""Helper that builds an EnvironmentInfo for tests."""
return EnvironmentInfo(
exchange=exchange,
environment=env,
source="env",
env_value="false" if env == "mainnet" else "true",
base_url=f"https://api.{exchange}.com" if env == "mainnet" else f"https://test.{exchange}.com",
)
def test_consistency_check_testnet_no_confirmation_ok():
"""Testnet without explicit confirmation → ok, returns []. Safe default."""
info = _info("testnet")
creds = {"api_key": "k", "api_secret": "s"}
warnings = consistency_check(info, creds)
assert warnings == []
def test_consistency_check_mainnet_no_confirmation_raises():
"""Mainnet without an explicit creds['environment']='mainnet' → fail-fast."""
info = _info("mainnet")
creds = {"api_key": "k", "api_secret": "s"}
with pytest.raises(EnvironmentMismatchError, match="mainnet.*explicit confirmation"):
consistency_check(info, creds)
def test_consistency_check_mainnet_with_confirmation_ok():
info = _info("mainnet")
creds = {"api_key": "k", "api_secret": "s", "environment": "mainnet"}
warnings = consistency_check(info, creds)
assert warnings == []
def test_consistency_check_explicit_mismatch_raises():
"""Secret declares mainnet but the resolver resolves testnet → fail-fast."""
info = _info("testnet")
creds = {"environment": "mainnet"}
with pytest.raises(EnvironmentMismatchError, match="declared.*resolved"):
consistency_check(info, creds)
def test_consistency_check_strict_mainnet_disabled():
"""With strict_mainnet=False, mainnet without confirmation logs a warning but does not raise."""
info = _info("mainnet")
creds = {"api_key": "k", "api_secret": "s"}
warnings = consistency_check(info, creds, strict_mainnet=False)
assert any("mainnet" in w for w in warnings)
def test_consistency_check_url_does_not_match_environment_warns():
"""Base URL contains 'test' but environment='mainnet' → warning."""
from mcp_common.environment import EnvironmentInfo
info = EnvironmentInfo(
exchange="bybit",
environment="mainnet",
source="env",
env_value="false",
base_url="https://api-testnet.bybit.com", # url DICE testnet ma resolver MAINNET
)
creds = {"environment": "mainnet"}
warnings = consistency_check(info, creds)
assert any("base_url" in w.lower() for w in warnings)
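Read together, these tests pin down the whole contract of `consistency_check`. A minimal self-contained sketch consistent with them — the real implementation, signature details, and message wording live in `mcp_common.environment`; the dataclass fields and exception name below are inferred only from the tests above:

```python
from dataclasses import dataclass


class EnvironmentMismatchError(Exception):
    pass


@dataclass
class EnvironmentInfo:
    exchange: str
    environment: str  # "mainnet" | "testnet"
    source: str
    env_value: str
    base_url: str


def consistency_check(info: EnvironmentInfo, creds: dict,
                      strict_mainnet: bool = True) -> list[str]:
    """Return a list of warnings; raise on dangerous mismatches."""
    warnings: list[str] = []
    declared = creds.get("environment")
    # Secret explicitly declares an environment that contradicts the resolver.
    if declared and declared != info.environment:
        raise EnvironmentMismatchError(
            f"declared {declared!r} but resolved {info.environment!r}"
        )
    # Mainnet is opt-in: without explicit confirmation, fail fast (or warn).
    if info.environment == "mainnet" and declared != "mainnet":
        msg = "mainnet requires explicit confirmation (creds['environment']='mainnet')"
        if strict_mainnet:
            raise EnvironmentMismatchError(msg)
        warnings.append(msg)
    # URL sanity: a 'test' host paired with environment='mainnet' is suspicious.
    if info.environment == "mainnet" and "test" in info.base_url:
        warnings.append(f"base_url {info.base_url!r} looks like a testnet endpoint")
    return warnings
```

Note the ordering: the declared-vs-resolved mismatch is checked before the mainnet-confirmation gate, so an outright contradiction always raises regardless of `strict_mainnet`.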
@@ -4,7 +4,6 @@ import asyncio
import httpx
import pytest
from mcp_common.http import async_client, call_with_retry
@@ -112,7 +112,7 @@ def test_vol_cone_returns_percentiles_per_window():
     closes = _gbm_series(mu=0.0, sigma=0.5, n=400)
     out = vol_cone(closes, windows=[10, 30, 60])
     assert set(out.keys()) == {10, 30, 60}
-    for w, stats in out.items():
+    for _w, stats in out.items():
         assert "current" in stats
         assert "p10" in stats and "p50" in stats and "p90" in stats
         assert stats["p10"] <= stats["p50"] <= stats["p90"]
@@ -200,7 +200,7 @@ def test_autocorrelation_white_noise_low():
     assert len(out) == 5
     # white noise → all autocorr ≈ 0 (within ±2/sqrt(N))
     bound = 2.0 / math.sqrt(len(rets))
-    for lag, val in out.items():
+    for _lag, val in out.items():
         assert abs(val) < bound * 2  # generous
@@ -71,13 +71,13 @@ def _asset_class_enum(ac: str) -> AssetClass:
 def _serialize(obj: Any) -> Any:
     """Recursively convert pydantic/datetime objects → json-safe."""
-    if obj is None or isinstance(obj, (str, int, float, bool)):
+    if obj is None or isinstance(obj, str | int | float | bool):
         return obj
-    if isinstance(obj, (_dt.datetime, _dt.date)):
+    if isinstance(obj, _dt.datetime | _dt.date):
         return obj.isoformat()
     if isinstance(obj, dict):
         return {k: _serialize(v) for k, v in obj.items()}
-    if isinstance(obj, (list, tuple)):
+    if isinstance(obj, list | tuple):
         return [_serialize(v) for v in obj]
     if hasattr(obj, "model_dump"):
         return _serialize(obj.model_dump())
@@ -1,10 +1,10 @@
from __future__ import annotations
import contextlib
import time
from dataclasses import dataclass, field
from typing import Any
import httpx
from mcp_common import indicators as ind
from mcp_common import microstructure as micro
from mcp_common import options as opt
@@ -196,10 +196,8 @@ class DeribitClient:
             name = s.get("instrument_name")
             oi = s.get("open_interest")
             if name and oi is not None:
-                try:
+                with contextlib.suppress(TypeError, ValueError):
                     oi_by_name[name] = float(oi)
-                except (TypeError, ValueError):
-                    pass
         all_items = raw.get("result") or []
         filtered: list[dict] = []
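The try/except-pass → `contextlib.suppress` rewrites here and below are behavior-preserving: `suppress` catches exactly the listed exception types and lets anything else propagate, same as the original `except ...: pass`. A small standalone illustration (the sample data is invented):

```python
import contextlib

oi_by_name: dict[str, float] = {}
# One convertible value, one None (TypeError), one junk string (ValueError).
samples = [("BTC-PERPETUAL", "123.5"), ("ETH-PERPETUAL", None), ("BAD", "n/a")]

for name, oi in samples:
    # Equivalent to: try: ... = float(oi)  except (TypeError, ValueError): pass
    with contextlib.suppress(TypeError, ValueError):
        oi_by_name[name] = float(oi)

# Only the convertible sample made it in; the other two were silently skipped.
assert oi_by_name == {"BTC-PERPETUAL": 123.5}
```

Beyond saving two lines per site, `suppress` makes the suppressed-exception set explicit at the top of the block instead of after the body.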
@@ -882,8 +880,7 @@ class DeribitClient:
             shape = "backwardation"
         short_term = next((x for x in ts if 8 <= x["dte"] <= 14), None)
         mid_term = next((x for x in ts if 35 <= x["dte"] <= 45), None)
-        if short_term and mid_term:
-            if mid_term["atm_iv"] - short_term["atm_iv"] > 5:
+        if short_term and mid_term and mid_term["atm_iv"] - short_term["atm_iv"] > 5:
             contango_steep = True
             calendar_opp = True
@@ -1131,7 +1128,7 @@ class DeribitClient:
         structure = self._guess_structure(enriched)
-        notional = sum(l["quantity"] * spot for l in enriched) if spot else 0.0
+        sum(l["quantity"] * spot for l in enriched) if spot else 0.0
         fee_per_leg = min(0.0003 * (spot or 1) * sum(l["quantity"] for l in enriched),
                           0.125 * abs(net_premium)) if spot else 0.0
         fees_open = round(fee_per_leg, 4)
@@ -1,5 +1,6 @@
 from __future__ import annotations
+import contextlib
 import os
 from fastapi import Depends, FastAPI, HTTPException
@@ -272,10 +273,8 @@ def create_app(
     @asynccontextmanager
     async def _lifespan(_app: FastAPI):
         for inst in ("BTC-PERPETUAL", "ETH-PERPETUAL"):
-            try:
+            with contextlib.suppress(Exception):
                 await client.set_leverage(inst, cap_default)
-            except Exception:
-                pass
         yield
     app = build_app(
@@ -551,10 +550,8 @@ def create_app(
         _check(principal, core=True)
         lev = _enforce_leverage(body.leverage, creds=creds, exchange="deribit")
         if lev != cap_default:
-            try:
+            with contextlib.suppress(Exception):
                 await client.set_leverage(body.instrument_name, lev)
-            except Exception:
-                pass
         result = await client.place_order(
             instrument_name=body.instrument_name,
             side=body.side,
@@ -582,10 +579,8 @@ def create_app(
         lev = _enforce_leverage(body.leverage, creds=creds, exchange="deribit")
         if lev != cap_default:
             for leg in body.legs:
-                try:
+                with contextlib.suppress(Exception):
                     await client.set_leverage(leg.instrument_name, lev)
-                except Exception:
-                    pass
         result = await client.place_combo_order(
             legs=[leg.model_dump() for leg in body.legs],
             side=body.side,
@@ -6,7 +6,6 @@ import asyncio
import datetime as _dt
from typing import Any
import httpx
from mcp_common import indicators as ind
from mcp_common.http import async_client
@@ -5,6 +5,7 @@ from typing import Any
import httpx
from mcp_common.http import async_client
from mcp_macro.cot import classify_extreme, compute_percentile, parse_disagg_row, parse_tff_row
from mcp_macro.cot_contracts import (
ALL_DISAGG_SYMBOLS,
@@ -1,6 +1,11 @@
 from __future__ import annotations
-from mcp_macro.cot import classify_extreme, compute_percentile
+from mcp_macro.cot import (
+    classify_extreme,
+    compute_percentile,
+    parse_disagg_row,
+    parse_tff_row,
+)
 def test_compute_percentile_basic():
def test_compute_percentile_basic():
@@ -44,9 +49,6 @@ def test_classify_extreme_none_input():
assert classify_extreme(None) == "neutral"
from mcp_macro.cot import parse_disagg_row, parse_tff_row
# Payload Socrata reale (subset campi rilevanti, valori arbitrari per test)
TFF_SOCRATA_ROW = {
"report_date_as_yyyy_mm_dd": "2026-04-22T00:00:00.000",
@@ -366,6 +366,7 @@ async def test_fetch_cot_disagg_unknown_symbol():
 async def test_fetch_cot_extreme_positioning_flags_outliers(monkeypatch):
     """Mock fetch_cot_tff and fetch_cot_disagg to simulate the history and the latest point."""
     from unittest.mock import AsyncMock
+
     from mcp_macro import fetchers as f
     # Simulate an ES series whose latest lev_funds_net is at the low end (extreme_short)
@@ -127,7 +127,6 @@ def test_get_market_overview_no_auth_401(http):
     assert r.status_code == 401
-from unittest.mock import AsyncMock, patch
 def test_get_cot_tff_core_ok(http):
@@ -5,7 +5,6 @@ import re
import xml.etree.ElementTree as ET
from typing import Any
import httpx
from mcp_common.http import async_client
CRYPTOPANIC_URL = "https://cryptopanic.com/api/v1/posts/"
@@ -9,8 +9,6 @@ from mcp_common.mcp_bridge import mount_mcp_endpoint
 from mcp_common.server import build_app
 from pydantic import BaseModel
-logger = logging.getLogger(__name__)
 from mcp_sentiment.fetchers import (
     fetch_cointegration_pairs,
     fetch_cross_exchange_funding,
@@ -23,6 +21,8 @@ from mcp_sentiment.fetchers import (
     fetch_world_news,
 )
+logger = logging.getLogger(__name__)
 # --- Body models ---
 class GetCryptoNewsReq(BaseModel):