feat(mcp+runtime): align with Cerbero MCP V2 and operational flags

Adapts Cerbero Bite to the new 2.0.0 release of the unified MCP server
(per-token testnet/mainnet routing, mandatory X-Bot-Tag header) and
introduces two independent operational switches that separate data
collection from strategy execution.

Auth and MCP wiring
- The bearer token is now read from the new CERBERO_BITE_MCP_TOKEN
  variable; its value selects the upstream environment (testnet vs
  mainnet) on the server. Removed file-based loading
  (`secrets/core.token`, CERBERO_BITE_CORE_TOKEN_FILE, the Docker
  secret /run/secrets/core_token).
- Added the X-Bot-Tag header (default `BOT__CERBERO_BITE`, override via
  CERBERO_BITE_MCP_BOT_TAG) on every MCP call, with client-side
  validation (non-empty, ≤ 64 characters).
- Removed the `secrets/` directory, cleaned up `.gitignore`, and
  updated the Dockerfile and docker-compose.yml with env passthrough
  and a fail-fast check when the token is missing.
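
The client-side X-Bot-Tag checks described above (non-empty, ≤ 64 characters) can be sketched as a standalone helper; the function name `validate_bot_tag` is a hypothetical illustration, only the two constraints come from this commit:

```python
_BOT_TAG_MAX_LEN = 64


def validate_bot_tag(bot_tag: str) -> str:
    """Return the cleaned tag, or raise ValueError when it fails validation."""
    cleaned = bot_tag.strip()
    if not cleaned:
        raise ValueError("bot_tag must be a non-empty string")
    if len(cleaned) > _BOT_TAG_MAX_LEN:
        raise ValueError(f"bot_tag exceeds {_BOT_TAG_MAX_LEN} characters")
    return cleaned
```

Validating eagerly at client construction (rather than on first call) means a misconfigured tag fails the boot instead of the first MCP request.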

Operating mode (RuntimeFlags)
- New `config/runtime_flags.py` module with `RuntimeFlags(
  data_analysis_enabled, strategy_enabled)` and a loader that parses
  CERBERO_BITE_ENABLE_DATA_ANALYSIS and CERBERO_BITE_ENABLE_STRATEGY
  (true/false/yes/no/on/off/enabled/disabled, case-insensitive).
- The orchestrator exposes the flags, audits and logs the mode at boot
  (`engine started: env=… data_analysis=… strategy=…`), and in
  `install_scheduler` skips the `entry`/`monitor` jobs when strategy is
  off and the `market_snapshot` job when data analysis is off. The
  infrastructure jobs (health, backup, manual_actions) always stay
  active.
- Default profile = "data analysis only" (data_analysis=true,
  strategy=false), intended for the post-deploy soak window.
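
A minimal sketch of the loader's boolean parsing, under the assumptions stated in this commit (accepted truthy/falsy literals, blank values fall back to the default, anything else raises); the internal helper name `_parse_bool` is illustrative:

```python
import os
from dataclasses import dataclass

_TRUTHY = {"1", "true", "yes", "on", "enabled"}
_FALSY = {"0", "false", "no", "off", "disabled"}


@dataclass(frozen=True)
class RuntimeFlags:
    data_analysis_enabled: bool
    strategy_enabled: bool


def _parse_bool(name: str, default: bool) -> bool:
    raw = os.environ.get(name, "").strip().lower()
    if not raw:
        return default  # unset or blank → keep the default
    if raw in _TRUTHY:
        return True
    if raw in _FALSY:
        return False
    raise ValueError(f"{name}: invalid boolean value {raw!r}")


def load_runtime_flags() -> RuntimeFlags:
    # Defaults match the "data analysis only" soak profile.
    return RuntimeFlags(
        data_analysis_enabled=_parse_bool("CERBERO_BITE_ENABLE_DATA_ANALYSIS", True),
        strategy_enabled=_parse_bool("CERBERO_BITE_ENABLE_STRATEGY", False),
    )
```

Failing loudly on an unrecognised value (instead of silently defaulting) is what lets a typo like `ENABLE_STRATEGY=ture` stop the boot rather than run with the wrong mode.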

Balances GUI
- `gui/live_data.py::_fetch_deribit_currency` recognises the soft
  `error` field in the V2 payload (HTTP 200 with `error` populated by
  the server when Deribit auth fails) and propagates it as
  `BalanceRow.error`, instead of showing a misleading equity of 0.00.

CLI
- Replaced the `--token-file` option with `--token` (string) on the
  start/dry-run/ping commands; the default comes from the env. Calls
  to the orchestrator builder now also pass `bot_tag` and `flags`.

Documentation
- `docs/04-mcp-integration.md`: describes the new V2 auth flow
  (token = environment, X-Bot-Tag in the audit) and the unified
  routers.
- `docs/06-operational-flow.md`: new "Operating mode" section with the
  three canonical profiles and a gating table for every job; added
  `market_snapshot` to the cron summary.
- `docs/10-config-spec.md`: new tabular "Environment variables"
  section covering every env var, including the operational flag
  booleans.
- `docs/02-architecture.md`: updated repo layout (`secrets/` removed,
  `runtime_flags.py` added), extended description of `config/`.

Tests
- 5 new tests for `_fetch_deribit_currency` (soft error, clean
  payload, exception, blank error, signature parity).
- 7 new tests for `load_runtime_flags` (defaults, overrides,
  truthy/falsy parsing, blank fallback, invalid value).
- 4 new tests for `HttpToolClient` (default and custom X-Bot-Tag;
  blank and over-long values rejected).
- 3 new integration tests on the orchestrator (flag-based job gating).
- Existing token/CLI ping/orchestrator tests updated to the new
  scheme. Full suite: 404 passed, 1 skipped (sqlite3 CLI missing on
  the dev host).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
commit ce158a92dd (parent d9454fc996)
2026-05-01 17:14:40 +02:00
23 changed files with 757 additions and 200 deletions

+27 -6
@@ -3,8 +3,8 @@
 # Copy: `cp .env.example .env` and fill in the real values.
 # --- MCP endpoints ---
-# Default Docker network (internal to the Cerbero_mcp suite):
-# CERBERO_BITE_MCP_DERIBIT_URL=http://mcp-deribit:9011
+# Default Docker network (internal to the Cerbero_mcp V2 suite):
+# CERBERO_BITE_MCP_DERIBIT_URL=http://cerbero-mcp:9000/mcp-deribit
 # ...
 # Public gateway (host outside the Docker network):
 CERBERO_BITE_MCP_DERIBIT_URL=https://cerbero-mcp.tielogic.xyz/mcp-deribit
@@ -12,11 +12,32 @@ CERBERO_BITE_MCP_HYPERLIQUID_URL=https://cerbero-mcp.tielogic.xyz/mcp-hyperliqui
 CERBERO_BITE_MCP_MACRO_URL=https://cerbero-mcp.tielogic.xyz/mcp-macro
 CERBERO_BITE_MCP_SENTIMENT_URL=https://cerbero-mcp.tielogic.xyz/mcp-sentiment
+# --- MCP bearer token ---
+# Cerbero MCP V2 picks the upstream environment (testnet vs mainnet)
+# based on the token presented in the Authorization header. To switch
+# to mainnet, replace the value with the MAINNET_TOKEN issued by the
+# Cerbero_mcp cluster and restart the bot. The token is NEVER logged.
+CERBERO_BITE_MCP_TOKEN=
+# --- Bot tag (X-Bot-Tag header) ---
+# Identifies the bot in the MCP server's audit log. Project-wide
+# default: `BOT__CERBERO_BITE`. Redefine it only for alternative
+# environments (e.g. shadow run, replay).
+CERBERO_BITE_MCP_BOT_TAG=BOT__CERBERO_BITE
+# --- Operating mode ---
+# Two independent switches that decide what the bot does on each pass
+# of the decision loop:
+#   * ENABLE_DATA_ANALYSIS=true → MCP data collection, market
+#     snapshots, indicator computation, logging and audit ACTIVE
+#   * ENABLE_STRATEGY=true → evaluation of rules §2-§9 and
+#     proposal/execution of entries/exits ACTIVE
+# Initial period ("data analysis only"): keep
+# ENABLE_DATA_ANALYSIS=true and ENABLE_STRATEGY=false.
+CERBERO_BITE_ENABLE_DATA_ANALYSIS=true
+CERBERO_BITE_ENABLE_STRATEGY=false
 # --- Telegram (notify-only) ---
 # Leave commented out for disabled mode (no notifications).
 # CERBERO_BITE_TELEGRAM_BOT_TOKEN=123456:ABC-DEF...
 # CERBERO_BITE_TELEGRAM_CHAT_ID=-1001234567890
-# --- MCP core token ---
-# Alternative to --token-file. Default: /run/secrets/core_token (Docker).
-# CERBERO_BITE_CORE_TOKEN_FILE=secrets/core.token
-3

@@ -43,6 +43,3 @@ data/
 .env
 .env.*
 !.env.example
-secrets/*
-!secrets/.gitkeep
-!secrets/README.md
+1 -2

@@ -34,8 +34,7 @@ WORKDIR /app
 ENV PATH=/opt/venv/bin:$PATH \
     PYTHONDONTWRITEBYTECODE=1 \
-    PYTHONUNBUFFERED=1 \
-    CERBERO_BITE_CORE_TOKEN_FILE=/run/secrets/core_token
+    PYTHONUNBUFFERED=1
 COPY --from=builder /opt/venv /opt/venv
 COPY --from=builder /app/src /app/src
+21 -18

@@ -1,24 +1,24 @@
 # docker-compose.yml — Cerbero Bite
 #
 # Bite runs in its own Compose project but joins the same Docker
-# network used by Cerbero_mcp so it can resolve `mcp-deribit`,
-# `mcp-macro` and friends by their service name (see the gateway
-# Caddyfile in Cerbero_mcp).
+# network used by Cerbero MCP V2 so it can resolve the in-cluster
+# service name when running co-located, and otherwise reaches the
+# public gateway (`https://cerbero-mcp.tielogic.xyz`) over the host
+# network.
 #
 # The shared network is declared as external here. Create it once on
 # the host with `docker network create cerbero-suite` (or rename the
 # Cerbero_mcp network to `cerbero-suite` and mark it external).
 #
-# Secrets are read from ./secrets/, which is .gitignore'd.
+# Authentication: a single bearer token is passed through from the
+# host `.env` file via `CERBERO_BITE_MCP_TOKEN`. The Cerbero MCP V2
+# server uses the token to decide whether the upstream environment
+# is testnet or mainnet; switching environment = switching token.
 networks:
   cerbero-suite:
     external: true
-secrets:
-  core_token:
-    file: ./secrets/core.token
 volumes:
   bite-data:
@@ -33,17 +33,20 @@ services:
     cap_drop: [ALL]
     security_opt:
       - no-new-privileges:true
-    secrets:
-      - core_token
     environment:
-      CERBERO_BITE_CORE_TOKEN_FILE: /run/secrets/core_token
-      # Service URLs — the defaults below match the cerbero-suite
-      # network DNS. Override per service if you need to point at a
-      # different host (dev only).
-      CERBERO_BITE_MCP_DERIBIT_URL: http://mcp-deribit:9011
-      CERBERO_BITE_MCP_HYPERLIQUID_URL: http://mcp-hyperliquid:9012
-      CERBERO_BITE_MCP_MACRO_URL: http://mcp-macro:9013
-      CERBERO_BITE_MCP_SENTIMENT_URL: http://mcp-sentiment:9014
+      # MCP auth — token is sourced from the host .env (compose
+      # interpolation). The `X-Bot-Tag` value below is the audit
+      # identifier the MCP server logs for every write call.
+      CERBERO_BITE_MCP_TOKEN: ${CERBERO_BITE_MCP_TOKEN:?missing CERBERO_BITE_MCP_TOKEN}
+      CERBERO_BITE_MCP_BOT_TAG: ${CERBERO_BITE_MCP_BOT_TAG:-BOT__CERBERO_BITE}
+      # Service URLs — defaults below match the in-cluster cerbero-suite
+      # network DNS (V2 unified image listening on port 9000). Override
+      # any of them to point at the public gateway, a custom host, or
+      # localhost for dev work.
+      CERBERO_BITE_MCP_DERIBIT_URL: ${CERBERO_BITE_MCP_DERIBIT_URL:-http://cerbero-mcp:9000/mcp-deribit}
+      CERBERO_BITE_MCP_HYPERLIQUID_URL: ${CERBERO_BITE_MCP_HYPERLIQUID_URL:-http://cerbero-mcp:9000/mcp-hyperliquid}
+      CERBERO_BITE_MCP_MACRO_URL: ${CERBERO_BITE_MCP_MACRO_URL:-http://cerbero-mcp:9000/mcp-macro}
+      CERBERO_BITE_MCP_SENTIMENT_URL: ${CERBERO_BITE_MCP_SENTIMENT_URL:-http://cerbero-mcp:9000/mcp-sentiment}
      # Telegram and Portfolio are no longer shared MCP services. The
      # bot now calls the Telegram Bot API directly and aggregates
      # portfolio in-process from Deribit + Hyperliquid + Macro.
+7 -5

@@ -88,9 +88,9 @@ Cerbero_Bite/
 ├── strategy.yaml               # golden config + execution.environment
 ├── strategy.local.yaml.example # local override (gitignored)
 ├── Dockerfile                  # runtime image + HEALTHCHECK
-├── docker-compose.yml          # external cerbero-suite network + secrets
+├── docker-compose.yml          # external cerbero-suite network, env passthrough
+├── .env.example                # variable template (MCP token, bot tag, mode)
 ├── docs/                       # this documentation
-├── secrets/                    # gitignored (.gitkeep + README only)
 ├── src/cerbero_bite/
 │   ├── __init__.py
 │   ├── __main__.py             # CLI entry point
@@ -135,7 +135,8 @@ Cerbero_Bite/
 │   ├── config/                 # yaml loading and validation
 │   │   ├── schema.py
 │   │   ├── loader.py
-│   │   └── mcp_endpoints.py    # URL + token loader
+│   │   ├── mcp_endpoints.py    # URLs + token + bot tag (from .env)
+│   │   └── runtime_flags.py    # ENABLE_DATA_ANALYSIS / ENABLE_STRATEGY
 │   ├── reporting/              # human-readable reports (Phase 5)
 │   ├── gui/                    # Streamlit dashboard (Phase 4.5)
 │   └── safety/                 # kill switch, dead man, audit
@@ -170,8 +171,9 @@ Cerbero_Bite/
   side effects. Exposes `Orchestrator` as a façade for the CLI.
 - **`state/`** persistence. Never business logic. CRUD only.
 - **`config/`** loads `strategy.yaml`, validates it, and exposes the
-  parameters immutably. Resolves the MCP URLs and reads the bearer
-  token at boot.
+  parameters immutably. Resolves the MCP URLs, reads the bearer
+  token + bot tag at boot, and exposes the two operational switches
+  `RuntimeFlags(data_analysis_enabled, strategy_enabled)`.
 - **`safety/`** cross-cutting controls (see `07-risk-controls.md`).
 - **`reporting/`** generation of Telegram strings. No trading logic,
   formatting only.
+32 -13

@@ -1,11 +1,15 @@
 # 04 — MCP Integration
-Cerbero Bite consumes four MCP HTTP services from the suite
-(`Cerbero_mcp`): `cerbero-deribit`, `cerbero-hyperliquid`,
-`cerbero-macro`, `cerbero-sentiment`. It does not use the `mcp`
-Python SDK: each server exposes the REST endpoints
-`POST <base_url>/tools/<tool_name>` with Bearer authentication, and
-Cerbero Bite connects to them through a long-lived
-`httpx.AsyncClient` (`clients/_base.py`).
+Cerbero Bite consumes four MCP HTTP routers from the Cerbero MCP V2
+suite (`Cerbero_mcp`): `mcp-deribit`, `mcp-hyperliquid`, `mcp-macro`,
+`mcp-sentiment`. Since V2 the four routers live in the same FastAPI
+process behind the same host (in-cluster default
+`http://cerbero-mcp:9000/mcp-{exchange}`, public gateway
+`https://cerbero-mcp.tielogic.xyz/mcp-{exchange}`). Cerbero Bite does
+not use the `mcp` Python SDK: each router exposes the REST endpoints
+`POST <base_url>/tools/<tool_name>` with Bearer authentication and the
+`X-Bot-Tag` header, and Cerbero Bite connects to them through a
+long-lived `httpx.AsyncClient` (`clients/_base.py`).
 Telegram and Portfolio, formerly exposed as shared MCP services, have
 been removed from the MCP layer and are handled **in-process** by each
 bot
@@ -22,13 +26,19 @@ with defaults matching the Docker network DNS
 etc.). Each service can be overridden by a dedicated environment
 variable, useful in development:
-| Service | Environment variable | Default Docker DNS |
+| Service | Environment variable | Legacy Docker DNS default |
 |---|---|---|
 | Deribit | `CERBERO_BITE_MCP_DERIBIT_URL` | `http://mcp-deribit:9011` |
 | Hyperliquid | `CERBERO_BITE_MCP_HYPERLIQUID_URL` | `http://mcp-hyperliquid:9012` |
 | Macro | `CERBERO_BITE_MCP_MACRO_URL` | `http://mcp-macro:9013` |
 | Sentiment | `CERBERO_BITE_MCP_SENTIMENT_URL` | `http://mcp-sentiment:9014` |
+The defaults shown above are the legacy of the V1 topology (one
+container per service). On the unified V2 every URL must include the
+router prefix, e.g. `http://cerbero-mcp:9000/mcp-deribit` or
+`https://cerbero-mcp.tielogic.xyz/mcp-deribit`. The effective URLs are
+configured in `.env`.
 Telegram (notify-only) is configured directly via two environment
 variables, read at boot by the in-process client:
@@ -40,17 +50,26 @@ variables, read at boot by the in-process client:
 When either of the two is missing, the Telegram client enters
 **disabled** mode and every `notify_*` becomes a no-op at DEBUG level.
-The bearer token for the calls is the token with `core` capability
-read from `secrets/core.token` (path configurable via
-`CERBERO_BITE_CORE_TOKEN_FILE`, default `/run/secrets/core_token`
-inside the container). It is not logged.
+The bearer token for the calls is read from the environment variable
+`CERBERO_BITE_MCP_TOKEN` (see `.env`). On V2 the token value decides
+which upstream environment serves the request: the same MCP server
+fronts testnet and mainnet at the same time, and you switch from one
+to the other simply by replacing the variable value and restarting
+the bot. The token is never logged.
+On every call Cerbero Bite also adds the `X-Bot-Tag` header, with
+default value `BOT__CERBERO_BITE` (override via
+`CERBERO_BITE_MCP_BOT_TAG`). The MCP server writes the value into the
+audit record of every write operation, so each write stays
+attributable to the originating bot.
 ```python
 # clients/_base.py — summary
 class HttpToolClient:
     service: str   # "deribit", "macro", ...
-    base_url: str  # "http://mcp-deribit:9011"
-    token: str     # bearer
+    base_url: str  # "https://cerbero-mcp.tielogic.xyz/mcp-deribit"
+    token: str     # bearer (testnet or mainnet, chosen by env)
+    bot_tag: str = "BOT__CERBERO_BITE"  # X-Bot-Tag header
     timeout_s: float = 8.0
     retry_max: int = 3  # exponential 1s/5s/30s
     client: httpx.AsyncClient | None  # shared by the RuntimeContext
+54

@@ -223,7 +223,61 @@ proposed
 | `0 2,14 * * *` | Position monitoring | 2× per day |
 | `0 12 1 * *` | Kelly recalibration | Monthly |
 | `*/5 * * * *` | Health check | 5 min |
+| `*/15 * * * *` | Market snapshot (threshold calibration) | 15 min |
 | `0 0 * * *` | SQLite backup + log rotation | Daily |
 | `0 8 * * *` | Daily Telegram digest | Daily |
 All times are in UTC.
+## Operating mode (`RuntimeFlags` switches)
+The bot recognises two independent switches, read from `.env` at boot
+via `cerbero_bite.config.runtime_flags.load_runtime_flags()`:
+| Environment variable | Default | What it enables |
+|---|---|---|
+| `CERBERO_BITE_ENABLE_DATA_ANALYSIS` | `true` | `market_snapshot` job every 15 min: MCP data collection, writes to the `market_snapshots` table, threshold calibration. |
+| `CERBERO_BITE_ENABLE_STRATEGY` | `false` | `entry` (Monday 14:00 UTC) and `monitor` (2× per day) jobs: evaluation of rules §2-§9 of `01-strategy-rules.md` and order proposal/execution. |
+The infrastructure jobs (`health`, `backup`, `manual_actions`) are
+**always active**, regardless of the flags, because they keep the
+kill switch and persistence alive.
+### "Data analysis only" profile (default)
+Standard configuration for the post-deploy soak period:
+```env
+CERBERO_BITE_ENABLE_DATA_ANALYSIS=true
+CERBERO_BITE_ENABLE_STRATEGY=false
+```
+Effect: the bot collects market snapshots and feeds `market_snapshots`,
+but does **not** submit entries or close positions on its own. The
+`run_entry`/`run_monitor` methods remain callable manually from the CLI
+(`cerbero-bite dry-run --cycle entry|monitor`) and via `manual_actions`
+for testing and validation.
+### "Active trading" profile
+```env
+CERBERO_BITE_ENABLE_DATA_ANALYSIS=true
+CERBERO_BITE_ENABLE_STRATEGY=true
+```
+Effect: every canonical job is installed in the scheduler. The switch
+should only be flipped after the quality of the collected data has
+been validated and Adriano gives explicit consent to the transition.
+### Fully disabling data analysis
+Exceptional case (maintenance, MCP problem):
+```env
+CERBERO_BITE_ENABLE_DATA_ANALYSIS=false
+CERBERO_BITE_ENABLE_STRATEGY=false
+```
+The bot stays alive for health checks and manual-action intake, but
+does not query MCP for market data and does not trade. The kill switch
+remains operational.
+28

@@ -307,3 +307,31 @@ It is not permitted to parameterise:
   upper bounds, which cannot be loosened any further).
 - The **scheduler** for tighter intervals (an optimisation that is
   not done via config).
+## Environment variables
+`strategy.yaml` defines **what** the bot does when it is on. The
+environment variables in `.env` define **how** it connects to the
+outside world and **which operational switches** are active.
+These live outside `strategy.yaml` because they change per environment
+(testnet vs mainnet, soak vs trading) but not per strategy rule.
+| Variable | Type | Default | Use |
+|---|---|---|---|
+| `CERBERO_BITE_MCP_TOKEN` | string (required) | — | Bearer token presented to Cerbero MCP V2. The value decides the upstream environment (testnet or mainnet). Changing the value = changing the environment. |
+| `CERBERO_BITE_MCP_BOT_TAG` | string ≤ 64 chars | `BOT__CERBERO_BITE` | `X-Bot-Tag` header recorded in the MCP server's audit log for every write. |
+| `CERBERO_BITE_MCP_DERIBIT_URL` | URL | public gateway | Override for the Deribit router URL. |
+| `CERBERO_BITE_MCP_HYPERLIQUID_URL` | URL | public gateway | Override for the Hyperliquid router URL. |
+| `CERBERO_BITE_MCP_MACRO_URL` | URL | public gateway | Override for the Macro router URL. |
+| `CERBERO_BITE_MCP_SENTIMENT_URL` | URL | public gateway | Override for the Sentiment router URL. |
+| `CERBERO_BITE_ENABLE_DATA_ANALYSIS` | bool (`true`/`false`) | `true` | Enables the `market_snapshot` job (MCP data collection every 15 min). |
+| `CERBERO_BITE_ENABLE_STRATEGY` | bool (`true`/`false`) | `false` | Enables the `entry` and `monitor` jobs (execution of rules §2-§9). |
+| `CERBERO_BITE_TELEGRAM_BOT_TOKEN` | string | — | Telegram bot token (notify-only). Without it the client runs in disabled mode. |
+| `CERBERO_BITE_TELEGRAM_CHAT_ID` | string | — | Chat ID that receives the Telegram notifications. |
+Boolean values accept `1`/`0`, `true`/`false`, `yes`/`no`, `on`/`off`,
+`enabled`/`disabled` (case-insensitive) as input. Any other value
+fails the boot with a `ValueError`.
+See `06-operational-flow.md` §"Operating mode" for the canonical
+profiles of `ENABLE_DATA_ANALYSIS` and `ENABLE_STRATEGY`.
-28

@@ -1,28 +0,0 @@
-# `secrets/`
-Runtime directory for sensitive credentials. Every file in this
-directory is `.gitignore`d except this README and `.gitkeep`.
-## Expected contents
-| File | Origin | Use |
-|---|---|---|
-| `core.token` | copy of `Cerbero_mcp/secrets/core.token` | bearer token with the `core` capability for calling MCP tools. Read once at container boot. |
-## Setup
-```bash
-cp /path/to/Cerbero_mcp/secrets/core.token secrets/core.token
-chmod 600 secrets/core.token
-```
-Cerbero Bite's `docker-compose.yml` mounts `secrets/core.token` as a
-Docker secret at `/run/secrets/core_token` inside the container, and
-the `CERBERO_BITE_CORE_TOKEN_FILE` environment variable points there
-by default.
-## Rotation
-When the core token is rotated on the Cerbero_mcp cluster, replace the
-local copy as well. The container must be restarted because the token
-is read only at startup.
+28 -16

@@ -31,9 +31,11 @@ from cerbero_bite.clients.sentiment import SentimentClient
 from cerbero_bite.config.loader import compute_config_hash, load_strategy
 from cerbero_bite.config.mcp_endpoints import (
     DEFAULT_ENDPOINTS,
+    load_bot_tag,
     load_endpoints,
     load_token,
 )
+from cerbero_bite.config.runtime_flags import load_runtime_flags
 from cerbero_bite.logging import configure as configure_logging
 from cerbero_bite.logging import get_logger
 from cerbero_bite.runtime.orchestrator import Orchestrator, make_orchestrator
@@ -205,9 +207,14 @@ def _engine_options(func: Callable[..., Any]) -> Callable[..., Any]:
         show_default=True,
     ),
     click.option(
-        "--token-file",
-        type=click.Path(dir_okay=False, path_type=Path),
+        "--token",
+        type=str,
         default=None,
+        help=(
+            "MCP bearer token (overrides CERBERO_BITE_MCP_TOKEN). "
+            "The server uses the token to choose between testnet "
+            "and mainnet upstream environments."
+        ),
     ),
     click.option(
         "--db",
@@ -243,7 +250,7 @@ def _engine_options(func: Callable[..., Any]) -> Callable[..., Any]:
 def _build_orchestrator(
     *,
     strategy_path: Path,
-    token_file: Path | None,
+    token: str | None,
     db: Path,
     audit: Path,
     environment: str,
@@ -251,7 +258,7 @@ def _build_orchestrator(
     enforce_hash: bool = True,
 ) -> Orchestrator:
     loaded = load_strategy(strategy_path, enforce_hash=enforce_hash)
-    token = load_token(path=token_file)
+    resolved_token = load_token(value=token)
     # Strategy file values win over the CLI defaults; explicit overrides
     # via env-style values (CLI flags) still apply when the user provides
     # them — Click signals "default" via Click's resilient_parsing flag,
@@ -270,11 +277,13 @@ def _build_orchestrator(
     return make_orchestrator(
         cfg=loaded.config,
         endpoints=load_endpoints(),
-        token=token,
+        token=resolved_token,
         db_path=db,
         audit_path=audit,
         expected_environment=chosen_env,  # type: ignore[arg-type]
         eur_to_usd=chosen_fx,
+        bot_tag=load_bot_tag(),
+        flags=load_runtime_flags(),
     )
@@ -282,7 +291,7 @@ def _build_orchestrator(
 @_engine_options
 def start(
     strategy_path: Path,
-    token_file: Path | None,
+    token: str | None,
     db: Path,
     audit: Path,
     environment: str,
@@ -292,7 +301,7 @@ def start(
     try:
         orch = _build_orchestrator(
             strategy_path=strategy_path,
-            token_file=token_file,
+            token=token,
             db=db,
             audit=audit,
             environment=environment,
@@ -322,7 +331,7 @@ def start(
 )
 def dry_run(
     strategy_path: Path,
-    token_file: Path | None,
+    token: str | None,
     db: Path,
     audit: Path,
     environment: str,
@@ -332,7 +341,7 @@ def dry_run(
     """Execute one cycle without starting the scheduler."""
     orch = _build_orchestrator(
         strategy_path=strategy_path,
-        token_file=token_file,
+        token=token,
         db=db,
         audit=audit,
         environment=environment,
@@ -506,10 +515,13 @@ def kill_switch_status(db: Path) -> None:
 @main.command()
 @click.option(
-    "--token-file",
-    type=click.Path(dir_okay=False, path_type=Path),
+    "--token",
+    type=str,
     default=None,
-    help="Path to the bearer token file (default: secrets/core_token).",
+    help=(
+        "MCP bearer token (overrides CERBERO_BITE_MCP_TOKEN). The "
+        "server uses the token to choose between testnet and mainnet."
+    ),
 )
 @click.option(
     "--timeout",
@@ -518,16 +530,16 @@ def kill_switch_status(db: Path) -> None:
     show_default=True,
     help="Per-service timeout in seconds for the ping call.",
 )
-def ping(token_file: Path | None, timeout: float) -> None:
+def ping(token: str | None, timeout: float) -> None:
     """Print health status for every MCP service Cerbero Bite uses."""
     try:
-        token = load_token(path=token_file)
-    except (FileNotFoundError, ValueError) as exc:
+        resolved_token = load_token(value=token)
+    except ValueError as exc:
         console.print(f"[red]token error[/red]: {exc}")
         sys.exit(1)
     endpoints = load_endpoints()
-    rows = asyncio.run(_ping_all(endpoints, token=token, timeout=timeout))
+    rows = asyncio.run(_ping_all(endpoints, token=resolved_token, timeout=timeout))
     table = Table(title="MCP services")
     table.add_column("service")
+31 -5
View File
@@ -1,10 +1,13 @@
"""HTTP tool client common to every MCP wrapper. """HTTP tool client common to every MCP wrapper.
Each MCP service exposes ``POST <base_url>/tools/<tool_name>`` with a Each MCP service exposes ``POST <base_url>/tools/<tool_name>`` with a
JSON body and a ``Bearer <core_token>`` header. ``HttpToolClient`` is a JSON body, a ``Bearer <token>`` header (the token decides the upstream
thin wrapper around :class:`httpx.AsyncClient` that: environment, testnet or mainnet, on the Cerbero MCP V2 server), and an
``X-Bot-Tag`` header that identifies the calling bot in the audit log.
``HttpToolClient`` is a thin wrapper around :class:`httpx.AsyncClient`
that:
* Adds the auth header. * Adds the auth and bot-tag headers.
* Applies the project-wide timeout (default 8 s, see * Applies the project-wide timeout (default 8 s, see
``docs/10-config-spec.md`` ``mcp.call_timeout_s``). ``docs/10-config-spec.md`` ``mcp.call_timeout_s``).
* Retries the call on transient failures with exponential backoff * Retries the call on transient failures with exponential backoff
@@ -44,7 +47,7 @@ from cerbero_bite.clients._exceptions import (
McpToolError, McpToolError,
) )
__all__ = ["HttpToolClient"] __all__ = ["DEFAULT_BOT_TAG", "HttpToolClient"]
_log = logging.getLogger("cerbero_bite.clients") _log = logging.getLogger("cerbero_bite.clients")
@@ -53,6 +56,12 @@ _RETRYABLE: tuple[type[BaseException], ...] = (
McpServerError, McpServerError,
) )
# Bot identifier sent on every MCP call via the ``X-Bot-Tag`` header.
# The Cerbero MCP V2 server logs this value in the audit record so each
# write operation can be traced back to the originating bot.
DEFAULT_BOT_TAG = "BOT__CERBERO_BITE"
_BOT_TAG_MAX_LEN = 64
class HttpToolClient: class HttpToolClient:
"""Async client for ``POST <base>/tools/<tool>`` style MCP services. """Async client for ``POST <base>/tools/<tool>`` style MCP services.
@@ -61,7 +70,14 @@ class HttpToolClient:
service: short service identifier (``"deribit"``, ``"macro"`` …). service: short service identifier (``"deribit"``, ``"macro"`` …).
base_url: e.g. ``"http://mcp-deribit:9011"``. Trailing slash base_url: e.g. ``"http://mcp-deribit:9011"``. Trailing slash
is stripped. is stripped.
token: bearer token for the ``Authorization`` header. token: bearer token for the ``Authorization`` header. On
Cerbero MCP V2 the value of the token decides whether the
upstream environment is testnet or mainnet; the bot does
not need to know which is which.
bot_tag: value of the ``X-Bot-Tag`` header. Defaults to
:data:`DEFAULT_BOT_TAG` (``"BOT__CERBERO_BITE"``). The
server rejects requests with a missing/empty/over-long
value with HTTP 400.
timeout_s: per-request timeout, default 8 seconds.
retry_max: max number of attempts (1 = no retry).
retry_base_delay: base delay for exponential backoff.
@@ -74,15 +90,24 @@ class HttpToolClient:
service: str,
base_url: str,
token: str,
bot_tag: str = DEFAULT_BOT_TAG,
timeout_s: float = 8.0,
retry_max: int = 3,
retry_base_delay: float = 1.0,
sleep: Callable[[int | float], Awaitable[None] | None] | None = None,
client: httpx.AsyncClient | None = None,
) -> None:
cleaned_tag = bot_tag.strip()
if not cleaned_tag:
raise ValueError("bot_tag must be a non-empty string")
if len(cleaned_tag) > _BOT_TAG_MAX_LEN:
raise ValueError(
f"bot_tag exceeds {_BOT_TAG_MAX_LEN} characters: {cleaned_tag!r}"
)
self._service = service
self._base_url = base_url.rstrip("/")
self._token = token
self._bot_tag = cleaned_tag
self._timeout = httpx.Timeout(timeout_s)
self._retry_max = max(1, retry_max)
self._retry_base_delay = retry_base_delay
@@ -114,6 +139,7 @@ class HttpToolClient:
headers = {
"Authorization": f"Bearer {self._token}",
"Content-Type": "application/json",
"X-Bot-Tag": self._bot_tag,
}
payload = body or {}
+66 -29
@@ -1,30 +1,40 @@
-"""Resolve MCP service URLs and the bearer token.
"""Resolve MCP service URLs, the bearer token and the bot tag.
-Cerbero Bite runs in its own Docker container that joins the
-``cerbero-suite`` network: every MCP service is reachable by the
-container DNS name plus its internal port (``mcp-deribit:9011`` etc.).
Cerbero MCP V2 (a single FastAPI image fronting Deribit, Hyperliquid,
Macro, Sentiment and friends) is deployed on a dedicated VPS and reached
through the public gateway at ``https://cerbero-mcp.tielogic.xyz``. The
server decides the upstream environment (testnet vs mainnet) entirely
from the bearer token attached to each request — Cerbero Bite does not
have to be told which is which: swapping the token in ``.env`` is enough
to switch environments.
-The resolver supports two layers of override:
The resolver supports the following layers of override:
-1. Per-service environment variables (``CERBERO_BITE_MCP_DERIBIT_URL``,
-``CERBERO_BITE_MCP_MACRO_URL``…). Useful for dev when running
-outside Docker — point at ``http://localhost:9011`` etc.
-2. ``CERBERO_BITE_CORE_TOKEN_FILE`` env var: path to the file that
-stores the bearer token (default ``/run/secrets/core_token``). The
-file is read at boot, the trailing whitespace is stripped, and the
-value is *not* logged.
1. Per-service URL env vars (``CERBERO_BITE_MCP_DERIBIT_URL``,
``CERBERO_BITE_MCP_HYPERLIQUID_URL``, ``CERBERO_BITE_MCP_MACRO_URL``,
``CERBERO_BITE_MCP_SENTIMENT_URL``). Useful for local dev when the
bot must talk to a same-host MCP server (``http://localhost:9000``)
instead of the public gateway.
2. ``CERBERO_BITE_MCP_TOKEN`` env var: the bearer token used on every
request. The token's value is *never* logged.
3. ``CERBERO_BITE_MCP_BOT_TAG`` env var: identifier sent on the
``X-Bot-Tag`` header (default ``BOT__CERBERO_BITE``). Must be a
non-empty string of at most 64 characters.
"""
from __future__ import annotations
import os
from dataclasses import dataclass
-from pathlib import Path
from cerbero_bite.clients._base import DEFAULT_BOT_TAG
__all__ = [
"DEFAULT_BOT_TAG",
"DEFAULT_ENDPOINTS",
"MCP_SERVICES",
"McpEndpoints",
"load_bot_tag",
"load_endpoints",
"load_token",
]
@@ -78,31 +88,58 @@ def load_endpoints(env: dict[str, str] | None = None) -> McpEndpoints:
return McpEndpoints(**resolved)
-_DEFAULT_TOKEN_FILE = "/run/secrets/core_token"
-_TOKEN_FILE_ENV = "CERBERO_BITE_CORE_TOKEN_FILE"
_TOKEN_ENV = "CERBERO_BITE_MCP_TOKEN"
_BOT_TAG_ENV = "CERBERO_BITE_MCP_BOT_TAG"
_BOT_TAG_MAX_LEN = 64
def load_token(
*,
-path: str | Path | None = None,
value: str | None = None,
env: dict[str, str] | None = None,
) -> str:
-"""Read the bearer token from disk and return it stripped.
"""Return the MCP bearer token, stripped of surrounding whitespace.
Resolution order:
-1. explicit ``path`` argument;
-2. ``CERBERO_BITE_CORE_TOKEN_FILE`` env var;
-3. ``/run/secrets/core_token`` (Docker secrets default).
1. explicit ``value`` argument (e.g. from a CLI flag);
2. ``CERBERO_BITE_MCP_TOKEN`` env var.
"""
if value is not None:
token = value.strip()
if not token:
raise ValueError("explicit MCP token is empty")
return token
e = env if env is not None else os.environ
-target = (
-Path(path)
-if path is not None
-else Path(e.get(_TOKEN_FILE_ENV, _DEFAULT_TOKEN_FILE))
-)
-if not target.is_file():
-raise FileNotFoundError(f"core token file not found: {target}")
-token = target.read_text(encoding="utf-8").strip()
raw = e.get(_TOKEN_ENV, "")
token = raw.strip()
if not token:
-raise ValueError(f"core token file is empty: {target}")
raise ValueError(
f"{_TOKEN_ENV} is unset or empty; set it in .env to the testnet or "
"mainnet bearer issued by Cerbero MCP"
)
return token
def load_bot_tag(
*,
value: str | None = None,
env: dict[str, str] | None = None,
) -> str:
"""Return the ``X-Bot-Tag`` value, with the project default as fallback.
Resolution order:
1. explicit ``value`` argument;
2. ``CERBERO_BITE_MCP_BOT_TAG`` env var;
3. :data:`DEFAULT_BOT_TAG` (``"BOT__CERBERO_BITE"``).
"""
raw = value if value is not None else (env if env is not None else os.environ).get(
_BOT_TAG_ENV, ""
)
cleaned = raw.strip() if raw else ""
if not cleaned:
return DEFAULT_BOT_TAG
if len(cleaned) > _BOT_TAG_MAX_LEN:
raise ValueError(
f"{_BOT_TAG_ENV} exceeds {_BOT_TAG_MAX_LEN} characters: {cleaned!r}"
)
return cleaned
+78
@@ -0,0 +1,78 @@
"""Operational mode flags read from the environment.
Cerbero Bite supports two independent runtime switches:
* ``CERBERO_BITE_ENABLE_DATA_ANALYSIS`` — when ``true``, the periodic
market-snapshot job is scheduled and writes 15-minute snapshots to
``market_snapshots``; when ``false``, the bot still pings MCP for
health and reconciliation but does not record any market dataset.
* ``CERBERO_BITE_ENABLE_STRATEGY`` — when ``true``, the entry and
monitor cycles are scheduled and may propose/execute trades; when
``false``, no entry or monitor logic runs autonomously (the methods
remain callable from the CLI ``dry-run`` and via manual actions, so
the operator can still test code paths on demand).
The default profile is "analysis only": data analysis on, strategy off.
This is the mode used during the post-deploy soak window where the
team observes data quality before opening any position.
"""
from __future__ import annotations
import os
from dataclasses import dataclass
__all__ = [
"DATA_ANALYSIS_ENV",
"STRATEGY_ENV",
"RuntimeFlags",
"load_runtime_flags",
]
DATA_ANALYSIS_ENV = "CERBERO_BITE_ENABLE_DATA_ANALYSIS"
STRATEGY_ENV = "CERBERO_BITE_ENABLE_STRATEGY"
_TRUE_TOKENS = frozenset({"1", "true", "yes", "on", "enabled"})
_FALSE_TOKENS = frozenset({"0", "false", "no", "off", "disabled"})
@dataclass(frozen=True)
class RuntimeFlags:
"""Boolean switches that gate optional cycles.
Both fields default to the canonical "analysis only" profile.
"""
data_analysis_enabled: bool = True
strategy_enabled: bool = False
def _parse_bool(raw: str, *, var: str, default: bool) -> bool:
cleaned = raw.strip().lower()
if not cleaned:
return default
if cleaned in _TRUE_TOKENS:
return True
if cleaned in _FALSE_TOKENS:
return False
raise ValueError(
f"{var}: expected one of "
f"{sorted(_TRUE_TOKENS | _FALSE_TOKENS)}, got {raw!r}"
)
def load_runtime_flags(env: dict[str, str] | None = None) -> RuntimeFlags:
"""Build a :class:`RuntimeFlags` from environment variables."""
e = env if env is not None else os.environ
return RuntimeFlags(
data_analysis_enabled=_parse_bool(
e.get(DATA_ANALYSIS_ENV, ""),
var=DATA_ANALYSIS_ENV,
default=True,
),
strategy_enabled=_parse_bool(
e.get(STRATEGY_ENV, ""),
var=STRATEGY_ENV,
default=False,
),
)
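The token table above can be exercised with a standalone sketch; ``parse_flag`` is a re-implementation of ``_parse_bool`` for illustration, not an import of ``runtime_flags``:

```python
_TRUE = {"1", "true", "yes", "on", "enabled"}
_FALSE = {"0", "false", "no", "off", "disabled"}


def parse_flag(raw: str, default: bool) -> bool:
    """Case-insensitive boolean parse; blank falls back, unknown raises."""
    cleaned = raw.strip().lower()
    if not cleaned:
        return default
    if cleaned in _TRUE:
        return True
    if cleaned in _FALSE:
        return False
    raise ValueError(f"unrecognised boolean token: {raw!r}")


env = {"CERBERO_BITE_ENABLE_STRATEGY": " ON "}
strategy = parse_flag(env.get("CERBERO_BITE_ENABLE_STRATEGY", ""), default=False)
data = parse_flag(env.get("CERBERO_BITE_ENABLE_DATA_ANALYSIS", ""), default=True)
print(strategy, data)  # True True
```

An unset variable keeps the "analysis only" defaults, while a typo such as ``maybe`` fails fast instead of being silently coerced.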
+21 -11
@@ -6,7 +6,7 @@ constraint is relaxed: the dashboard fetches balances on demand,
caches the result with Streamlit's TTL cache, and never holds the
async client open between renders. Every fetch is a one-shot:
-* read endpoints + token from env / file (same path used by the CLI),
* read endpoints + token from env (same path used by the CLI),
* spin up a short-lived ``httpx.AsyncClient``,
* query Deribit `get_account_summary` for both ``USDC`` and ``USDT``,
* query Hyperliquid `get_account_summary` (returns ``spot_usdc``,
@@ -21,11 +21,9 @@ and the others are still rendered.
from __future__ import annotations
import asyncio
-import os
from dataclasses import dataclass
from datetime import UTC, datetime
from decimal import Decimal
-from pathlib import Path
from typing import Any
import httpx
@@ -90,14 +88,12 @@ def _decimal_or_none(value: Any) -> Decimal | None:
def _resolve_token() -> str:
-"""Read the bearer token from disk, mirroring the CLI default chain."""
-explicit = os.environ.get("CERBERO_BITE_CORE_TOKEN_FILE")
-if explicit:
-return load_token(path=Path(explicit))
-# Fallback: project-relative `secrets/core.token` (typical local dev).
-local = Path("secrets") / "core.token"
-if local.is_file():
-return load_token(path=local)
"""Read the MCP bearer token from the environment.
The token is sourced from ``CERBERO_BITE_MCP_TOKEN``; on Cerbero MCP
V2 the same single token decides whether the upstream environment
is testnet or mainnet.
"""
return load_token()
@@ -115,6 +111,20 @@ async def _fetch_deribit_currency(
unrealized_pnl=None,
error=f"{type(exc).__name__}: {exc}",
)
# Cerbero MCP V2 returns HTTP 200 with a soft ``error`` field when
# the upstream Deribit call failed (e.g. invalid credentials). Treat
# that as a row-level failure so the dashboard surfaces the cause
# instead of showing a misleading equity=0.
soft_error = summary.get("error")
if soft_error:
return BalanceRow(
exchange="deribit",
currency=currency,
equity=None,
available=None,
unrealized_pnl=None,
error=str(soft_error),
)
return BalanceRow(
exchange="deribit",
currency=currency,
+3 -1
@@ -16,7 +16,7 @@ from pathlib import Path
import httpx
-from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients._base import DEFAULT_BOT_TAG, HttpToolClient
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.clients.hyperliquid import HyperliquidClient
from cerbero_bite.clients.macro import MacroClient
@@ -78,6 +78,7 @@ def build_runtime(
token: str,
db_path: Path | str,
audit_path: Path | str,
bot_tag: str = DEFAULT_BOT_TAG,
timeout_s: float = 8.0,
retry_max: int = 3,
clock: Callable[[], datetime] | None = None,
@@ -140,6 +141,7 @@ def build_runtime(
service=service,
base_url=endpoints.for_service(service),
token=token,
bot_tag=bot_tag,
timeout_s=timeout_s,
retry_max=retry_max,
client=http_client,
+63 -23
@@ -23,6 +23,7 @@ import structlog
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from cerbero_bite.config.mcp_endpoints import McpEndpoints
from cerbero_bite.config.runtime_flags import RuntimeFlags
from cerbero_bite.config.schema import StrategyConfig
from cerbero_bite.runtime.dependencies import RuntimeContext, build_runtime
from cerbero_bite.runtime.entry_cycle import EntryCycleResult, run_entry_cycle
@@ -70,10 +71,12 @@ class Orchestrator:
*,
expected_environment: Environment,
eur_to_usd: Decimal,
flags: RuntimeFlags | None = None,
) -> None:
self._ctx = ctx
self._expected_env = expected_environment
self._eur_to_usd = eur_to_usd
self._flags = flags or RuntimeFlags()
self._health = HealthCheck(ctx, expected_environment=expected_environment)
self._scheduler: AsyncIOScheduler | None = None
@@ -85,6 +88,10 @@ class Orchestrator:
def expected_environment(self) -> Environment:
return self._expected_env
@property
def flags(self) -> RuntimeFlags:
return self._flags
# ------------------------------------------------------------------
# Boot
# ------------------------------------------------------------------
@@ -113,9 +120,18 @@ class Orchestrator:
"environment": info.environment,
"health": health.state,
"config_version": self._ctx.cfg.config_version,
"data_analysis_enabled": self._flags.data_analysis_enabled,
"strategy_enabled": self._flags.strategy_enabled,
},
now=when,
)
_log.info(
"engine started: env=%s health=%s data_analysis=%s strategy=%s",
info.environment,
health.state,
self._flags.data_analysis_enabled,
self._flags.strategy_enabled,
)
return _BootResult(environment=info.environment, health=health)
# ------------------------------------------------------------------
@@ -266,24 +282,40 @@ class Orchestrator:
await _safe("market_snapshot", _do)
-self._scheduler = build_scheduler(
-[
-JobSpec(name="entry", cron=entry_cron, coro_factory=_entry),
-JobSpec(name="monitor", cron=monitor_cron, coro_factory=_monitor),
-JobSpec(name="health", cron=health_cron, coro_factory=_health),
-JobSpec(name="backup", cron=backup_cron, coro_factory=_backup),
-JobSpec(
-name="manual_actions",
-cron=manual_actions_cron,
-coro_factory=_manual_actions,
-),
-JobSpec(
-name="market_snapshot",
-cron=market_snapshot_cron,
-coro_factory=_market_snapshot,
-),
-]
-)
jobs: list[JobSpec] = [
JobSpec(name="health", cron=health_cron, coro_factory=_health),
JobSpec(name="backup", cron=backup_cron, coro_factory=_backup),
JobSpec(
name="manual_actions",
cron=manual_actions_cron,
coro_factory=_manual_actions,
),
]
if self._flags.strategy_enabled:
jobs.append(JobSpec(name="entry", cron=entry_cron, coro_factory=_entry))
jobs.append(
JobSpec(name="monitor", cron=monitor_cron, coro_factory=_monitor)
)
else:
_log.warning(
"strategy disabled (CERBERO_BITE_ENABLE_STRATEGY=false): "
"entry and monitor cycles are NOT scheduled"
)
if self._flags.data_analysis_enabled:
jobs.append(
JobSpec(
name="market_snapshot",
cron=market_snapshot_cron,
coro_factory=_market_snapshot,
)
)
else:
_log.warning(
"data analysis disabled (CERBERO_BITE_ENABLE_DATA_ANALYSIS="
"false): market_snapshot job is NOT scheduled"
)
self._scheduler = build_scheduler(jobs)
return self._scheduler
async def run_forever(self, *, lock_path: Path | None = None) -> None:
@@ -376,17 +408,25 @@ def make_orchestrator(
audit_path: Path,
expected_environment: Environment,
eur_to_usd: Decimal,
bot_tag: str | None = None,
flags: RuntimeFlags | None = None,
clock: Callable[[], datetime] | None = None,
) -> Orchestrator:
"""Build a fresh :class:`Orchestrator` ready for ``boot``/``run_*``."""
-ctx = build_runtime(
-cfg=cfg,
-endpoints=endpoints,
-token=token,
-db_path=db_path,
-audit_path=audit_path,
-clock=clock or (lambda: datetime.now(UTC)),
-)
build_kwargs: dict[str, object] = {
"cfg": cfg,
"endpoints": endpoints,
"token": token,
"db_path": db_path,
"audit_path": audit_path,
"clock": clock or (lambda: datetime.now(UTC)),
}
if bot_tag is not None:
build_kwargs["bot_tag"] = bot_tag
ctx = build_runtime(**build_kwargs)  # type: ignore[arg-type]
return Orchestrator(
-ctx, expected_environment=expected_environment, eur_to_usd=eur_to_usd
ctx,
expected_environment=expected_environment,
eur_to_usd=eur_to_usd,
flags=flags,
)
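The gating rules in ``install_scheduler`` reduce to a small pure function over the two flags. The sketch below uses hypothetical names (``Flags``, ``select_jobs``), not the production classes, to show which job sets each flag combination yields:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Flags:
    # Mirrors the RuntimeFlags defaults: the "analysis only" profile.
    data_analysis_enabled: bool = True
    strategy_enabled: bool = False


def select_jobs(flags: Flags) -> list[str]:
    jobs = ["health", "backup", "manual_actions"]  # infra jobs always run
    if flags.strategy_enabled:
        jobs += ["entry", "monitor"]
    if flags.data_analysis_enabled:
        jobs.append("market_snapshot")
    return jobs


print(select_jobs(Flags()))  # ['health', 'backup', 'manual_actions', 'market_snapshot']
```

Because the two switches are independent, all four combinations are valid; the tests below pin down the default profile plus the two single-flag cases.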
+47 -1
@@ -11,6 +11,7 @@ from pytest_httpx import HTTPXMock
from cerbero_bite.config import golden_config
from cerbero_bite.config.mcp_endpoints import load_endpoints
from cerbero_bite.config.runtime_flags import RuntimeFlags
from cerbero_bite.runtime import Orchestrator
from cerbero_bite.runtime.dependencies import build_runtime
@@ -58,7 +59,12 @@ def _wire_health_probes(httpx_mock: HTTPXMock) -> None:
)
-def _build_orch(tmp_path: Path, *, expected: str = "testnet") -> Orchestrator:
def _build_orch(
tmp_path: Path,
*,
expected: str = "testnet",
flags: RuntimeFlags | None = None,
) -> Orchestrator:
ctx = build_runtime(
cfg=golden_config(),
endpoints=load_endpoints(env={}),
@@ -72,6 +78,8 @@ def _build_orch(tmp_path: Path, *, expected: str = "testnet") -> Orchestrator:
ctx,
expected_environment=expected,  # type: ignore[arg-type]
eur_to_usd=Decimal("1.075"),
flags=flags
or RuntimeFlags(data_analysis_enabled=True, strategy_enabled=True),
)
@@ -122,3 +130,41 @@ def test_install_scheduler_registers_canonical_jobs(tmp_path: Path) -> None:
"manual_actions",
"market_snapshot",
}
def test_install_scheduler_skips_strategy_jobs_when_disabled(tmp_path: Path) -> None:
orch = _build_orch(
tmp_path,
flags=RuntimeFlags(data_analysis_enabled=True, strategy_enabled=False),
)
sched = orch.install_scheduler()
job_ids = {j.id for j in sched.get_jobs()}
assert "entry" not in job_ids
assert "monitor" not in job_ids
# data analysis stays on, plus the always-on infra jobs.
assert {"health", "backup", "manual_actions", "market_snapshot"}.issubset(job_ids)
def test_install_scheduler_skips_market_snapshot_when_data_analysis_off(
tmp_path: Path,
) -> None:
orch = _build_orch(
tmp_path,
flags=RuntimeFlags(data_analysis_enabled=False, strategy_enabled=True),
)
sched = orch.install_scheduler()
job_ids = {j.id for j in sched.get_jobs()}
assert "market_snapshot" not in job_ids
assert {"entry", "monitor", "health", "backup", "manual_actions"}.issubset(
job_ids
)
def test_install_scheduler_analysis_only_default(tmp_path: Path) -> None:
"""The default RuntimeFlags profile (analysis only) drops entry/monitor."""
orch = _build_orch(tmp_path, flags=RuntimeFlags())
sched = orch.install_scheduler()
job_ids = {j.id for j in sched.get_jobs()}
assert "entry" not in job_ids
assert "monitor" not in job_ids
assert "market_snapshot" in job_ids
+11 -21
@@ -7,25 +7,14 @@ contains the expected statuses.
from __future__ import annotations
-from pathlib import Path
import pytest
from click.testing import CliRunner
from pytest_httpx import HTTPXMock
from cerbero_bite.cli import main as cli_main
-def _seed_token(tmp_path: Path) -> Path:
-target = tmp_path / "core_token"
-target.write_text("super-secret\n", encoding="utf-8")
-return target
-def test_ping_reports_each_service(
-tmp_path: Path, httpx_mock: HTTPXMock
-) -> None:
-token_file = _seed_token(tmp_path)
def test_ping_reports_each_service(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="http://mcp-deribit:9011/tools/environment_info",
json={
@@ -51,7 +40,7 @@ def test_ping_reports_each_service(
)
result = CliRunner().invoke(
-cli_main, ["ping", "--token-file", str(token_file), "--timeout", "1.0"]
cli_main, ["ping", "--token", "super-secret", "--timeout", "1.0"]
)
assert result.exit_code == 0, result.output
assert "deribit" in result.output
@@ -65,9 +54,8 @@ def test_ping_reports_each_service(
def test_ping_reports_failure_when_service_unreachable(
-tmp_path: Path, httpx_mock: HTTPXMock
httpx_mock: HTTPXMock,
) -> None:
-token_file = _seed_token(tmp_path)
httpx_mock.add_response(
url="http://mcp-deribit:9011/tools/environment_info",
status_code=500,
@@ -88,15 +76,17 @@ def test_ping_reports_failure_when_service_unreachable(
)
result = CliRunner().invoke(
-cli_main, ["ping", "--token-file", str(token_file), "--timeout", "1.0"]
cli_main, ["ping", "--token", "super-secret", "--timeout", "1.0"]
)
assert result.exit_code == 0
assert "FAIL" in result.output
-def test_ping_token_missing_exits_nonzero(tmp_path: Path) -> None:
-result = CliRunner().invoke(
-cli_main, ["ping", "--token-file", str(tmp_path / "nope")]
-)
def test_ping_token_missing_exits_nonzero(
monkeypatch: pytest.MonkeyPatch,
) -> None:
# Ensure no env var leaks into the CLI invocation.
monkeypatch.delenv("CERBERO_BITE_MCP_TOKEN", raising=False)
result = CliRunner().invoke(cli_main, ["ping"])
assert result.exit_code == 1
assert "token error" in result.output
+22
@@ -47,6 +47,28 @@ async def test_call_attaches_bearer_token(httpx_mock: HTTPXMock) -> None:
assert request is not None
assert request.headers["Authorization"] == "Bearer abc123"
assert request.headers["Content-Type"] == "application/json"
# Default bot tag is sent on every request.
assert request.headers["X-Bot-Tag"] == "BOT__CERBERO_BITE"
@pytest.mark.asyncio
async def test_call_attaches_custom_bot_tag(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={"ok": True})
client = _make_client(bot_tag="BOT__SHADOW")
await client.call("any")
request = httpx_mock.get_request()
assert request is not None
assert request.headers["X-Bot-Tag"] == "BOT__SHADOW"
def test_init_rejects_blank_bot_tag() -> None:
with pytest.raises(ValueError, match="non-empty"):
_make_client(bot_tag=" ")
def test_init_rejects_too_long_bot_tag() -> None:
with pytest.raises(ValueError, match="64"):
_make_client(bot_tag="x" * 65)
@pytest.mark.asyncio
+99
@@ -0,0 +1,99 @@
"""Tests for the GUI live-balances fetcher (soft-error handling)."""
from __future__ import annotations
from decimal import Decimal
from typing import Any
import pytest
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.gui.live_data import _fetch_deribit_currency
class _FakeDeribit:
def __init__(self, payload: dict[str, Any] | Exception) -> None:
self._payload = payload
async def get_account_summary(self, currency: str) -> dict[str, Any]:
del currency # not used by the fake; kept for signature parity
if isinstance(self._payload, Exception):
raise self._payload
return self._payload
@pytest.mark.asyncio
async def test_soft_error_payload_becomes_row_error() -> None:
"""MCP V2 returns 200 + ``error`` field when upstream auth fails."""
fake = _FakeDeribit(
{
"equity": 0,
"balance": 0,
"available_funds": 0,
"unrealized_pnl": 0,
"error": "Deribit auth failed (code=13004): invalid_credentials",
}
)
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.exchange == "deribit"
assert row.currency == "USDC"
assert row.equity is None
assert row.available is None
assert row.unrealized_pnl is None
assert row.error is not None
assert "invalid_credentials" in row.error
@pytest.mark.asyncio
async def test_clean_payload_populates_balance_fields() -> None:
fake = _FakeDeribit(
{
"equity": "12.5",
"available_funds": "10.0",
"unrealized_pnl": "-0.25",
}
)
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.error is None
assert row.equity == Decimal("12.5")
assert row.available == Decimal("10.0")
assert row.unrealized_pnl == Decimal("-0.25")
@pytest.mark.asyncio
async def test_exception_becomes_row_error() -> None:
fake = _FakeDeribit(RuntimeError("boom"))
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.equity is None
assert row.error is not None
assert "RuntimeError" in row.error
assert "boom" in row.error
@pytest.mark.asyncio
async def test_blank_error_field_is_ignored() -> None:
"""An ``error`` field that is empty/None must not trigger the soft-error path."""
fake = _FakeDeribit(
{"equity": "1.0", "available_funds": "1.0", "unrealized_pnl": "0.0", "error": None}
)
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.error is None
assert row.equity == Decimal("1.0")
# Sanity-check: the production class signature is what we expect to be drop-in
# replaceable by ``_FakeDeribit``.
def test_fake_matches_production_signature() -> None:
assert hasattr(DeribitClient, "get_account_summary")
+55 -18
@@ -1,14 +1,14 @@
-"""Tests for the MCP endpoint and token resolver."""
"""Tests for the MCP endpoint, token and bot-tag resolver."""
from __future__ import annotations
-from pathlib import Path
import pytest
from cerbero_bite.config.mcp_endpoints import (
DEFAULT_BOT_TAG,
DEFAULT_ENDPOINTS,
MCP_SERVICES,
load_bot_tag,
load_endpoints,
load_token,
)
@@ -46,29 +46,66 @@ def test_for_service_unknown_raises_key_error() -> None:
endpoints.for_service("nope")
-def test_load_token_uses_explicit_path(tmp_path: Path) -> None:
-target = tmp_path / "core.token"
-target.write_text("abcdef\n", encoding="utf-8")
-assert load_token(path=target) == "abcdef"
-def test_load_token_uses_env_var(tmp_path: Path) -> None:
-target = tmp_path / "core.token"
-target.write_text("xyz", encoding="utf-8")
-token = load_token(env={"CERBERO_BITE_CORE_TOKEN_FILE": str(target)})
-assert token == "xyz"
-def test_load_token_raises_when_file_missing(tmp_path: Path) -> None:
-with pytest.raises(FileNotFoundError):
-load_token(path=tmp_path / "missing")
-def test_load_token_raises_when_file_empty(tmp_path: Path) -> None:
-target = tmp_path / "empty"
-target.write_text("", encoding="utf-8")
-with pytest.raises(ValueError, match="empty"):
-load_token(path=target)
def test_load_token_uses_explicit_value() -> None:
assert load_token(value="abcdef") == "abcdef"
def test_load_token_strips_whitespace_in_explicit_value() -> None:
assert load_token(value=" abcdef\n") == "abcdef"
def test_load_token_uses_env_var() -> None:
token = load_token(env={"CERBERO_BITE_MCP_TOKEN": "xyz"})
assert token == "xyz"
def test_load_token_strips_whitespace_in_env_var() -> None:
token = load_token(env={"CERBERO_BITE_MCP_TOKEN": " xyz\n"})
assert token == "xyz"
def test_load_token_raises_when_missing() -> None:
with pytest.raises(ValueError, match="CERBERO_BITE_MCP_TOKEN"):
load_token(env={})
def test_load_token_raises_when_empty() -> None:
with pytest.raises(ValueError, match="CERBERO_BITE_MCP_TOKEN"):
load_token(env={"CERBERO_BITE_MCP_TOKEN": " "})
def test_load_token_raises_when_explicit_value_blank() -> None:
with pytest.raises(ValueError, match="empty"):
load_token(value=" ")
def test_load_bot_tag_default_when_unset() -> None:
assert load_bot_tag(env={}) == DEFAULT_BOT_TAG
def test_load_bot_tag_explicit_value_overrides_env() -> None:
tag = load_bot_tag(value="BOT__CUSTOM", env={"CERBERO_BITE_MCP_BOT_TAG": "x"})
assert tag == "BOT__CUSTOM"
def test_load_bot_tag_uses_env_when_set() -> None:
tag = load_bot_tag(env={"CERBERO_BITE_MCP_BOT_TAG": "BOT__SHADOW"})
assert tag == "BOT__SHADOW"
def test_load_bot_tag_strips_whitespace() -> None:
tag = load_bot_tag(env={"CERBERO_BITE_MCP_BOT_TAG": " BOT__X\n"})
assert tag == "BOT__X"
def test_load_bot_tag_falls_back_to_default_when_blank_env() -> None:
tag = load_bot_tag(env={"CERBERO_BITE_MCP_BOT_TAG": " "})
assert tag == DEFAULT_BOT_TAG
def test_load_bot_tag_rejects_too_long() -> None:
with pytest.raises(ValueError, match="exceeds 64"):
load_bot_tag(value="x" * 65)
def test_mcp_services_table_is_complete() -> None:
+63
@@ -0,0 +1,63 @@
"""Tests for the runtime flag loader."""
from __future__ import annotations
import pytest
from cerbero_bite.config.runtime_flags import (
DATA_ANALYSIS_ENV,
STRATEGY_ENV,
RuntimeFlags,
load_runtime_flags,
)
def test_default_profile_is_analysis_only() -> None:
flags = load_runtime_flags(env={})
assert flags == RuntimeFlags(
data_analysis_enabled=True, strategy_enabled=False
)
def test_strategy_can_be_explicitly_enabled() -> None:
flags = load_runtime_flags(env={STRATEGY_ENV: "true"})
assert flags.strategy_enabled is True
assert flags.data_analysis_enabled is True
def test_data_analysis_can_be_disabled() -> None:
flags = load_runtime_flags(env={DATA_ANALYSIS_ENV: "false"})
assert flags.data_analysis_enabled is False
assert flags.strategy_enabled is False
@pytest.mark.parametrize(
"raw,expected",
[
("1", True),
("0", False),
("yes", True),
("no", False),
("on", True),
("OFF", False),
("ENABLED", True),
("Disabled", False),
("True", True),
("False", False),
(" true ", True),
],
)
def test_parses_common_truthy_falsy_tokens(raw: str, expected: bool) -> None:
flags = load_runtime_flags(env={STRATEGY_ENV: raw})
assert flags.strategy_enabled is expected
def test_blank_value_falls_back_to_default() -> None:
flags = load_runtime_flags(env={DATA_ANALYSIS_ENV: " ", STRATEGY_ENV: ""})
assert flags.data_analysis_enabled is True
assert flags.strategy_enabled is False
def test_unknown_token_raises() -> None:
with pytest.raises(ValueError, match=DATA_ANALYSIS_ENV):
load_runtime_flags(env={DATA_ANALYSIS_ENV: "maybe"})