12 Commits

Author SHA1 Message Date
root 18cc27a76e feat(gui): P/L simulation with the effects of the FDAC + IV-RV improvements
Extends the "💰 P/L atteso" panel of the `📚 Strategia` page to
apply the estimated effects of the IV-RV gate, A (dynamic delta),
D (vol-harvest) and F (auto-pause), reading them directly from each
profile's `strategy.*.yaml`.

- New `_detect_features(strategy)` that inspects the config:
    A → `short_strike.delta_by_dvol` non-empty
    D → `exit.vol_harvest_dvol_decrease > 0`
    F → `auto_pause.enabled`
    IV → `entry.iv_minus_rv_filter_enabled`
- `_compute_pl` now accepts an optional `features` dict and applies:
    IV: +5 pp win rate, −25% trades/year (aggressive skip-week)
    A: +1.5 pp win rate, sl_loss × 0.95 (better strike picking)
    D: 5% of trades converted from loss to harvest exit (+0.20 × credit)
    F: −8% trades/year (skip-week after a streak)
- `_render_profile_card` now shows:
    a "🟢 Miglioramenti attivi" badge with the per-profile list,
    delta vs base for E[trade] and annual P/L,
    help text with effective win_rate / prob_loss / trades per year.
- "Applica effetti dei miglioramenti" checkbox (default ON) to
  switch between the realistic simulation and the base formula.
- New "Contributo marginale di ogni feature" mini-table: for each
  improvement it shows annual ΔP/L and ΔAPR isolating the effect of
  that single feature, with an "attiva nel YAML" marker.
- The win-rate sensitivity now applies the active features to both profiles.
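The detection rules above reduce to a small predicate map. A hypothetical sketch follows — the real `_detect_features` reads the validated strategy object rather than a raw dict, and the function/field names here simply mirror the commit text:

```python
# Hypothetical re-implementation of the feature-detection step: one
# boolean per improvement, derived from the profile's config fields.

def detect_features(strategy: dict) -> dict[str, bool]:
    """Map each improvement flag to whether the profile enables it."""
    return {
        "A": bool(strategy.get("short_strike", {}).get("delta_by_dvol")),
        "D": strategy.get("exit", {}).get("vol_harvest_dvol_decrease", 0) > 0,
        "F": strategy.get("auto_pause", {}).get("enabled", False),
        "IV": strategy.get("entry", {}).get("iv_minus_rv_filter_enabled", False),
    }

# An aggressive-style profile enables A, D and F but not the IV gate:
aggressive = {
    "short_strike": {"delta_by_dvol": [{"dvol_under": 50, "delta": 0.15}]},
    "exit": {"vol_harvest_dvol_decrease": 15},
    "auto_pause": {"enabled": True},
    "entry": {"iv_minus_rv_filter_enabled": False},
}
print(detect_features(aggressive))  # {'A': True, 'D': True, 'F': True, 'IV': False}
```

A golden-style config with everything disabled (or missing) yields all-False, which is what keeps the base formula untouched when the checkbox effects are applied.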

The effects are declared as **ex-ante estimates** from the systematic
short-vol literature; the point values (+5 pp win rate, etc.) will need
calibration on the accumulated dataset.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 20:17:24 +00:00
root 1c6baaee83 feat(strategy): F+D+A improvements — auto-pause, vol-harvest, dynamic delta
Implements three improvements from the "📚 Strategia" roadmap plus scaffolding for a fourth.
All backward compatible: the golden-config defaults disable the new features,
so current behaviour stays unchanged until the operator explicitly turns them
on in `strategy.yaml`. The `strategy.aggressiva.yaml` profile opts in to the
highest-impact increments.

**F — Auto-pause on rolling drawdown (§7-bis)**

A circuit breaker on top of the technical kill-switch. When the last N closed
positions have accumulated losses beyond `max_drawdown_pct × current_capital`,
the engine pauses itself for `pause_weeks` weeks. It defends against regime
changes the quant filters fail to detect — if the filters are failing
systematically, stopping beats continuing to bleed.

- `AutoPauseConfig` + `cfg.auto_pause` (top-level, default disabled).
- SQL migration `0004_auto_pause.sql`: `system_state.auto_pause_until`
  and `auto_pause_reason` (NULL = engine active).
- New pure module `runtime/auto_pause.py` with `is_paused()` (I/O-free gate)
  and `evaluate_drawdown_breach()` (decides whether to arm).
- `entry_cycle` consults `is_paused` right after the kill-switch and arms
  the pause after computing capital; new status `_STATUS_AUTO_PAUSED`.
- Repository: `set_auto_pause`, `recent_closed_position_pnls_usd`.
- 12 unit tests: gate filter on/off, insufficient lookback, exact
  threshold, invalid capital, paused → not-paused transitions.
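The breach decision can be sketched in a few lines. This is illustrative, not the real `evaluate_drawdown_breach()`: the signature, the strict-`>` comparison at the exact threshold, and the never-arm guards for invalid capital or insufficient history are assumptions based on the test list above:

```python
# Sketch of the drawdown-breach check: sum the last N closed P&Ls and
# arm the pause when cumulative losses exceed
# max_drawdown_pct × current capital.

def drawdown_breached(
    recent_pnls_usd: list[float],
    lookback: int,
    max_drawdown_pct: float,
    capital_usd: float,
) -> bool:
    if capital_usd <= 0 or len(recent_pnls_usd) < lookback:
        return False  # invalid capital or insufficient history: never arm
    window = recent_pnls_usd[-lookback:]
    drawdown = -sum(window)  # positive when the window is a net loss
    return drawdown > max_drawdown_pct * capital_usd

# A 5-trade window losing $1,600 on $10,000 capital breaches a 15% limit:
print(drawdown_breached([-500, -400, 100, -300, -500], 5, 0.15, 10_000))  # True
```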

**D — Vol-collapse harvest (§7-bis)**

Opportunistic exit: when DVOL has dropped by a given number of points since
entry and the position is in profit, exit immediately. The IV-RV edge has been
captured; there is no reason to hold until the profit-take. New
`ExitAction = "CLOSE_VOL_HARVEST"`, gate `exit.vol_harvest_dvol_decrease`
(default 0 = off). 5 unit tests.
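The harvest gate is essentially one conjunction. A minimal sketch, with illustrative names (the real check lives in the exit-evaluation code and operates on the position model):

```python
# Vol-harvest exit gate: fire only when the feature is enabled
# (threshold > 0), DVOL has collapsed by at least the configured
# points since entry, and the position is currently in profit.

def should_vol_harvest(
    dvol_entry: float,
    dvol_now: float,
    unrealized_pnl_usd: float,
    vol_harvest_dvol_decrease: float,  # 0 = feature off (golden default)
) -> bool:
    if vol_harvest_dvol_decrease <= 0:
        return False
    dvol_drop = dvol_entry - dvol_now
    return dvol_drop >= vol_harvest_dvol_decrease and unrealized_pnl_usd > 0

print(should_vol_harvest(72.0, 55.0, 35.0, 15.0))  # True: DVOL −17 pts, in profit
print(should_vol_harvest(72.0, 55.0, -5.0, 15.0))  # False: not in profit
```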

**A — Dynamic delta target by DVOL regime (§3.2)**

The short strike adapts to volatility: at low DVOL the OTM margin is
generous ⇒ take more premium (delta 0.15); at high DVOL we want more
safety distance (delta 0.10). New `DeltaByDvolBand` (step function);
when `delta_by_dvol` is populated, `_select_short` takes the first
ascending band with `dvol_now ≤ dvol_under`. Empty default =
behaviour unchanged. `select_strikes` accepts a new kwarg
`dvol_now`, propagated from `entry_cycle`. 4 unit tests.
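The step function described above can be sketched as a first-match scan over ascending bands. Band shape and fallback behaviour are assumptions inferred from the commit text (`DeltaByDvolBand` ≈ a `dvol_under` bound plus a `delta`); the real field names may differ:

```python
# Step-function strike targeting: pick the first ascending band whose
# upper bound covers dvol_now; fall back to a static default delta
# when the band list is empty (golden-config behaviour).

def target_delta(
    dvol_now: float,
    bands: list[tuple[float, float]],  # (dvol_under, delta) pairs
    default: float,
) -> float:
    for dvol_under, delta in sorted(bands):
        if dvol_now <= dvol_under:
            return delta
    return default

bands = [(45.0, 0.15), (65.0, 0.12), (float("inf"), 0.10)]
print(target_delta(40.0, bands, 0.12))  # 0.15 — calm regime, richer premium
print(target_delta(80.0, bands, 0.12))  # 0.10 — stressed regime, more distance
```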

**C — Gradual profit-take scaffolding (§7.1bis)**

Schema in place but the runtime is not wired yet. Adds `PartialProfitLevel`
and `exit.profit_take_partial_levels` (default empty). New
`ExitAction = "CLOSE_PROFIT_PARTIAL"` in the Literal. The partial-close
pipeline in the runtime (entry_cycle / repository / clients) requires a
refactor of the position model — left as a TODO for a dedicated PR. The
schema is ready to accept the future config without further breaking
changes.

**Updated profiles**

- `strategy.yaml` (golden, 1.2.0): everything disabled by default.
- `strategy.conservativa.yaml` (1.2.0-cons): identical to golden.
- `strategy.aggressiva.yaml` (1.2.0-aggr): A+D+F enabled
  (delta_by_dvol 0.15/0.12/0.10, vol_harvest at 15 vol points,
  auto_pause @ 15% DD over 5 trades, 2-week pause).

Versions bumped 1.1.0 → 1.2.0, hashes recomputed, pinning test updated.

Suite: 426 passed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 20:07:25 +00:00
root 21e865ffb0 feat(gui+infra): Strategia page, parametric P/L, Conservativa/Aggressiva profiles, dashboard via Traefik
Exposes the Streamlit GUI at https://cerbero-bite.tielogic.xyz through the
Traefik instance already running on the host (labels aligned with the
cerbero-mcp pattern, TLS via Let's Encrypt, websocket pass-through). Adds:

- new `📚 Strategia` tab with the live status of the §2 gates compared
  against the latest market_snapshots tick, a parametric P/L panel with
  Conservativa and Aggressiva side by side, a win-rate → APR
  sensitivity table, and rendering of the extended canonical document.
- doc `13-strategia-spiegata.md` tying every §2-§9 rule to the
  market_snapshots field that feeds it, with sections §4-bis (realistic
  expected P/L, empirical win rate, drawdown, Sharpe) and §4-ter
  (comparison of the two profiles and when to switch between them).
- `strategy.conservativa.yaml` (golden config v1.0.0 made explicit) and
  `strategy.aggressiva.yaml` (cap_per_trade 4×, max_concurrent 2×,
  max_contracts 4×, documented §11 waiver) with valid config_hash values.
- in the compose file: dedicated `cerbero-bite-gui` service (Streamlit on
  0.0.0.0:8765, healthcheck on /_stcore/health, Traefik labels), env
  shared via the `x-bite-env` YAML anchor, `--environment mainnet`
  passed to `start` to align the boot check with the token in .env (it
  was testnet vs mainnet → kill switch armed at startup).
- the Dockerfile also installs the `gui` extra (streamlit) and copies
  `docs/` + the two new profiles into the image; `.dockerignore` no
  longer excludes `docs/` (the cause of the first silent build).

Bonus fix: `_try_load` in the page returned `LoadedConfig`, but the GUI
read `.sizing.*` directly — the `except: pass` masked the
AttributeError, so both the P/L panel and the gate status silently fell
back to the conservative defaults (the same pattern existed in the
Calibrazione page). It now returns `.config`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 18:20:23 +00:00
Adriano ce158a92dd feat(mcp+runtime): alignment with Cerbero MCP V2 and operational flags
Adapts Cerbero Bite to the new 2.0.0 version of the unified MCP server
(testnet/mainnet routing by token, mandatory X-Bot-Tag header) and
introduces two independent operational switches to separate data
collection from strategy execution.

MCP auth and wiring
- The bearer token is read from the new CERBERO_BITE_MCP_TOKEN variable;
  its value selects the upstream environment (testnet vs mainnet) on the
  server. Removed file-based loading (`secrets/core.token`,
  CERBERO_BITE_CORE_TOKEN_FILE, Docker secret /run/secrets/core_token).
- Added the X-Bot-Tag header (default `BOT__CERBERO_BITE`, override via
  CERBERO_BITE_MCP_BOT_TAG) on every MCP call, with client-side
  validation (non-empty, ≤ 64 characters).
- `secrets/` folder removed, `.gitignore` cleaned up, Dockerfile and
  docker-compose.yml updated with env passthrough and fail-fast when the
  token is missing.
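The client-side tag validation amounts to two checks. A hypothetical standalone helper (the real validation sits inside the HTTP client; the header dict below only illustrates where the value ends up):

```python
# X-Bot-Tag validation as described in the commit: non-empty after
# stripping, at most 64 characters. Raises on invalid input so the
# client fails fast instead of sending a malformed header.

def validate_bot_tag(tag: str) -> str:
    tag = tag.strip()
    if not tag:
        raise ValueError("X-Bot-Tag must not be blank")
    if len(tag) > 64:
        raise ValueError("X-Bot-Tag must be <= 64 characters")
    return tag

headers = {
    "Authorization": "Bearer <CERBERO_BITE_MCP_TOKEN>",  # placeholder value
    "X-Bot-Tag": validate_bot_tag("BOT__CERBERO_BITE"),
}
print(headers["X-Bot-Tag"])  # BOT__CERBERO_BITE
```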

Operating mode (RuntimeFlags)
- New module `config/runtime_flags.py` with `RuntimeFlags(
  data_analysis_enabled, strategy_enabled)` and a loader that parses
  CERBERO_BITE_ENABLE_DATA_ANALYSIS and CERBERO_BITE_ENABLE_STRATEGY
  (true/false/yes/no/on/off/enabled/disabled, case-insensitive).
- The orchestrator exposes the flags, audits and logs the mode at boot
  (`engine started: env=… data_analysis=… strategy=…`), and in
  `install_scheduler` excludes the `entry`/`monitor` jobs when strategy
  is off and the `market_snapshot` job when data analysis is off. The
  infrastructure jobs (health, backup, manual_actions) always stay
  active.
- Default profile = "data analysis only" (data_analysis=true,
  strategy=false), intended for the post-deploy soak window.
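The loader's boolean parsing can be sketched as follows. This is an illustrative re-implementation of `load_runtime_flags`, assuming the accepted spellings and the blank-value fallback listed in the commit; the real module may differ in detail:

```python
# Boolean env parsing for the two operational switches: case-insensitive
# truthy/falsy spellings, fall back to the default on blank/missing,
# raise on anything unrecognised.
import os
from dataclasses import dataclass

_TRUTHY = {"true", "yes", "on", "enabled"}
_FALSY = {"false", "no", "off", "disabled"}


def _parse_bool(name: str, default: bool) -> bool:
    raw = os.environ.get(name, "").strip().lower()
    if not raw:
        return default  # blank fallback
    if raw in _TRUTHY:
        return True
    if raw in _FALSY:
        return False
    raise ValueError(f"{name}: invalid boolean {raw!r}")


@dataclass(frozen=True)
class RuntimeFlags:
    data_analysis_enabled: bool
    strategy_enabled: bool


def load_runtime_flags() -> RuntimeFlags:
    # Default profile is "data analysis only": analysis on, strategy off.
    return RuntimeFlags(
        data_analysis_enabled=_parse_bool("CERBERO_BITE_ENABLE_DATA_ANALYSIS", True),
        strategy_enabled=_parse_bool("CERBERO_BITE_ENABLE_STRATEGY", False),
    )

print(load_runtime_flags())
```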

GUI balances
- `gui/live_data.py::_fetch_deribit_currency` recognises the soft
  `error` field in the V2 payload (HTTP 200 with `error` set by the
  server when Deribit auth fails) and propagates it as
  `BalanceRow.error`, instead of showing a misleading equity of 0.00.

CLI
- Replaced the `--token-file` option with `--token` (string) on the
  start/dry-run/ping commands; the default comes from the env. Calls to
  the orchestrator builder now also pass `bot_tag` and `flags`.

Documentation
- `docs/04-mcp-integration.md`: description of the new V2 auth flow
  (token = environment, X-Bot-Tag in the audit) and unified routers.
- `docs/06-operational-flow.md`: new "Modalità operativa" section with
  the three canonical profiles and a gating table for every job;
  `market_snapshot` added to the cron summary.
- `docs/10-config-spec.md`: new tabular "Variabili d'ambiente" section
  covering every env var, including the operational flag booleans.
- `docs/02-architecture.md`: repo layout updated (`secrets/` removed,
  `runtime_flags.py` added), `config/` description extended.

Tests
- 5 new tests on `_fetch_deribit_currency` (soft error, clean payload,
  exception, blank error, signature parity).
- 7 new tests on `load_runtime_flags` (defaults, overrides,
  truthy/falsy parsing, blank fallback, invalid value).
- 4 new tests on `HttpToolClient` (default and custom X-Bot-Tag, blank
  and over-length rejected).
- 3 new integration tests on the orchestrator (job gating based on the
  flags).
- Existing token/CLI ping/orchestrator tests updated to the new schema.
  Full suite: 404 passed, 1 skipped (sqlite3 CLI missing on the dev
  host).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 17:14:40 +02:00
Adriano d9454fc996 feat(state+runtime+gui): market_snapshots — threshold calibration from data
A dedicated data-collection system for choosing the filter thresholds
from real percentiles instead of gut-feel values.

New components:

* state/migrations/0003_market_snapshots.sql — table + index, composite
  PK (timestamp, asset). Every numeric column is NULL-able so the
  continuity of the series is preserved when a single MCP fails.
* state/models.py — MarketSnapshotRecord Pydantic model.
* state/repository.py — record_market_snapshot, list_market_snapshots,
  _row_to_market_snapshot.
* runtime/market_snapshot_cycle.py — best-effort collector that calls
  spot/dvol/realized_vol/dealer_gamma/funding_perp/funding_cross/
  liquidation_heatmap/macro for each asset; it gathers errors in
  fetch_errors_json and marks fetch_ok=false but persists the row
  anyway.
* clients/deribit.py — generalised dealer_gamma_profile(currency),
  realized_vol(currency), spot_perp_price(asset). dealer_gamma_profile_eth
  remains as an alias for the entry-cycle call.
* runtime/orchestrator.py — new APScheduler job `market_snapshot`,
  cron */15, with configurable assets (default ETH+BTC); the
  manual_actions consumer now also dispatches kind=run_cycle
  cycle=market_snapshot for the GUI.
* gui/data_layer.py — load_market_snapshots; enqueue_run_cycle accepts
  market_snapshot; MarketSnapshotRecord type exposed.
* gui/pages/6_📐_Calibrazione.py — asset + window selection, fetch_ok
  count, and per metric: histogram, the strategy.yaml threshold as a
  red vline, P5/P10/P25/P50/P75/P90/P95 percentiles, and the % of
  ticks the threshold would have filtered.
* gui/pages/1_📊_Status.py — "📐 Forza snapshot" button (4th in the
  "Forza ciclo" panel) to populate the table without waiting for the
  cron.
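The two per-metric numbers the calibration page shows — percentiles of the observed series and the share of ticks a candidate threshold would have filtered — can be sketched in pure Python. This assumes a filter that rejects ticks above the threshold; the direction depends on the metric in the real page:

```python
# Percentiles plus threshold hit-rate over a window of snapshot values.
from statistics import quantiles


def calibrate(samples: list[float], threshold: float) -> dict[str, float]:
    pct = quantiles(samples, n=100)  # pct[k-1] ≈ k-th percentile
    return {
        "P50": round(pct[49], 2),
        "P90": round(pct[89], 2),
        "filtered_share": sum(s > threshold for s in samples) / len(samples),
    }


dvol_ticks = [float(v) for v in range(1, 101)]  # synthetic series 1..100
print(calibrate(dvol_ticks, threshold=75.0))
# {'P50': 50.5, 'P90': 90.9, 'filtered_share': 0.25}
```

Seeing that a proposed threshold sits at, say, P90 of the real series immediately tells the operator the filter would have vetoed 10% of historical ticks — the whole point of choosing thresholds from data instead of instinct.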

5 new tests on the collector (happy path, fault tolerance, asset
switch, macro failure, empty assets); test_orchestrator job set updated.
368/368 tests pass; ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 14:39:09 +02:00
Adriano 63d1aa4262 feat(gui): Italian translation, Cerbero logo, live balances and Forza ciclo
* Italian localisation of all pages (Stato, Audit, Equity, Storico,
  Posizione) and of the home; relative dates ("5s fa", "12m fa").
* Cerbero logo (three-headed dog) in src/cerbero_bite/gui/assets/
  cerbero_logo.png — replaces the 🐺 emoji (a wolf, semantically
  wrong) both as favicon (`page_icon`) and in the sidebar and header.
* Automatic loading of `.env` from the CWD at CLI startup (skipped
  under pytest via PYTEST_CURRENT_TEST), avoiding having to export the
  4 MCP URLs manually. python-dotenv added as a dependency,
  `.env.example` committed as a template, `.env` stays ignored by git.
* Stato page: new "Saldi exchange" panel that fetches balances live
  via the MCP gateway (Deribit USDC + USDT, Hyperliquid USDC +
  optional USDT spot) with a 60s TTL cache and a refresh button;
  summary tiles for total USD / EUR / FX rate.
* Stato page: new "Forza ciclo" panel with three buttons
  (entry/monitor/health) that enqueue `run_cycle` actions into the
  manual_actions table; the engine's consumer — when running —
  dispatches to the corresponding `Orchestrator.run_*`.
* manual_actions: new `kind="run_cycle"` in the ManualAction schema;
  the consumer accepts a dict of cycle_runners that the orchestrator
  populates in install_scheduler. 3 new tests (entry dispatch, unknown
  cycle, fallback without runner).
* gui/live_data.py — dedicated module for MCP fetches from the GUI
  (a controlled relaxation of the "no MCP from GUI" rule, for balances
  only, not for trading data).

363/363 tests pass; ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 14:11:40 +02:00
Adriano da88e7f746 docs: align 05/06/09/11 with implemented GUI Phases A–D
* docs/11-gui-streamlit.md — replaces the original spec with what was
  actually built: implementation status table, real page filenames
  (1_Status, 2_Audit, 3_Equity, 4_History, 5_Position), per-page
  inventory of implemented vs deferred sections, GUI ↔ engine table
  showing arm_kill/disarm_kill via manual_actions and the
  not_supported markers for force_close + approve/reject_proposal,
  consumer signature with cron */1, lock model clarified (no GUI
  lockfile), DoD updated with current state.
* docs/05-data-model.md — manual_actions is no longer "pianificata":
  populated by gui/data_layer.py, drained by the manual_actions job;
  per-kind status table (arm/disarm OK, others not_supported).
* docs/09-development-roadmap.md — Phase 4.5 marked implemented with
  per-task markers for the deferred items (auto-refresh,
  AppTest, force-close hook).
* docs/06-operational-flow.md — adds Flusso 5b describing the
  manual_actions consumer pattern (enqueue → KillSwitch transition →
  audit log linkage).

360/360 tests still pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 13:31:25 +02:00
Adriano e8345a29c8 feat(gui+runtime): Phase D — kill-switch arm/disarm from the dashboard
Wires the GUI's first write path through the manual_actions queue:

* runtime/manual_actions_consumer.py — drains the queue and
  dispatches arm_kill / disarm_kill via KillSwitch (preserving the
  audit chain). Unsupported kinds (force_close, approve/reject_proposal)
  are marked result="not_supported" so they don't sit forever.
* runtime/orchestrator.py — adds a `manual_actions` job at */1 cron
  to the canonical scheduler manifest.
* gui/data_layer.py — write helpers enqueue_arm_kill /
  enqueue_disarm_kill (the only write path the GUI uses) plus
  load_pending_manual_actions for the pending strip.
* gui/pages/1_📊_Status.py — kill-switch arm/disarm panel with typed
  confirmation ("yes I am sure") + reason field; pending-actions table
  rendered when the queue is non-empty.
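The dispatch step's shape — supported kinds go to the kill switch, anything else is stamped `not_supported` so rows never sit in the queue forever — can be sketched as below. Simplified and illustrative: the real consumer also persists results and preserves the audit chain, and `FakeKillSwitch` is a stand-in for the real KillSwitch:

```python
# Minimal dispatch over a manual_actions row (here a plain dict).

def dispatch(action: dict, kill_switch) -> str:
    kind = action["kind"]
    if kind == "arm_kill":
        kill_switch.arm(reason=action.get("reason", "manual_gui"))
        return "ok"
    if kind == "disarm_kill":
        kill_switch.disarm(reason=action.get("reason", "manual_gui"))
        return "ok"
    return "not_supported"  # force_close, approve/reject_proposal, ...


class FakeKillSwitch:
    def __init__(self) -> None:
        self.state = "disarmed"

    def arm(self, reason: str) -> None:
        self.state = "armed"

    def disarm(self, reason: str) -> None:
        self.state = "disarmed"


ks = FakeKillSwitch()
print(dispatch({"kind": "arm_kill", "reason": "ops drill"}, ks), ks.state)  # ok armed
print(dispatch({"kind": "force_close"}, ks))  # not_supported
```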

End-to-end smoke against the testnet state.sqlite:
  GUI enqueue → consumer dispatch → KillSwitch transition → audit
  chain hash linkage holds, "source":"manual_gui" recorded.

7 new unit tests for the consumer (arm, disarm, drain, unsupported,
default-reason, KillSwitchError handling, empty queue); 360/360 pass.
ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 12:33:58 +02:00
Adriano 6f6dd4c8dd feat(gui): Phase C — Position drilldown with payoff diagram
* gui/data_layer.py — adds load_position_by_id, load_decisions_for_position,
  compute_payoff_curve (pure math: bull_put / bear_call piecewise linear
  P&L at expiry, with breakeven), compute_distance_metrics (OTM%,
  days-to-expiry, days-held, width%).
* gui/pages/5_💼_Position.py — selector across open + 10 most-recent
  closed positions (with deep-link support via ?proposal_id=…), header
  metrics, distance summary, leg snapshot table (entry-time only —
  the GUI never calls MCP), plotly payoff diagram with strike/breakeven/
  entry-spot annotations and max profit/max loss tiles, decision
  history table from the decisions table.
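The payoff math named above (piecewise-linear expiry P&L with breakeven = short strike − credit) can be sketched per spread type; here only the bull_put branch, with a hypothetical signature and illustrative numbers — not the commit's own synthetic trade:

```python
# Expiry payoff of a bull put credit spread, per the stated formulas:
# full credit kept above the short strike, credit minus width lost
# below the long strike, linear in between.

def bull_put_payoff(spot: float, short_k: float, long_k: float,
                    credit_per_contract: float, n: int) -> float:
    width = short_k - long_k
    if spot >= short_k:
        per_contract = credit_per_contract            # max profit
    elif spot <= long_k:
        per_contract = credit_per_contract - width    # max loss
    else:
        per_contract = credit_per_contract - (short_k - spot)
    return per_contract * n

# Illustrative 2475/2350 spread, credit 22.5 per contract, 2 contracts:
print(bull_put_payoff(2500.0, 2475.0, 2350.0, 22.5, 2))   # 45.0 (max profit)
print(bull_put_payoff(2452.5, 2475.0, 2350.0, 22.5, 2))   # 0.0 (breakeven)
print(bull_put_payoff(2300.0, 2475.0, 2350.0, 22.5, 2))   # -205.0 (max loss)
```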

Live greeks/mid are deliberately not pulled: per docs/11-gui-streamlit.md
the GUI reads SQLite + audit log only and lets the engine refresh data.

Validated math against a synthetic bull_put 2475/2350 × 2 contracts:
breakeven 2452.50, max profit $45, max loss $-160 — all matching the
expected formulas (credit, width × n − credit).

353/353 tests still pass; ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 12:28:26 +02:00
Adriano db888ce0e8 feat(gui): Phase B — Equity + History pages
Adds the analytics surface of the dashboard:

* gui/data_layer.py — extended with load_closed_positions (windowed
  filter on closed_at) and three pure-function aggregators:
  compute_equity_curve, compute_kpis, compute_monthly_stats. Drawdown
  is measured against the running peak of cumulative realised P&L.
* gui/pages/3_📈_Equity.py — KPI strip, plotly cumulative-PnL line,
  drawdown area below, P&L histogram by close_reason, per-month table
  with win-rate.
* gui/pages/4_📜_History.py — windowed table of closed trades with
  multiselect close-reason and winners/losers radio filters, six-tile
  KPI strip, CSV export button.
* pyproject.toml — relax mypy on plotly + pandas (no shipped stubs).
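The drawdown definition above — measured against the running peak of cumulative realised P&L — reduces to a short scan. A minimal sketch over closed-trade P&Ls, reusing the commit's own synthetic validation numbers (3 trades, $50 total, max drawdown $30):

```python
# Max drawdown versus the running peak of the cumulative P&L curve.
from itertools import accumulate


def max_drawdown(pnls: list[float]) -> float:
    peak, worst = 0.0, 0.0
    for point in accumulate(pnls):
        peak = max(peak, point)
        worst = max(worst, peak - point)
    return worst


trades = [40.0, -30.0, 40.0]  # 2 wins / 3 trades ≈ 67% win rate
print(sum(trades), max_drawdown(trades))  # 50.0 30.0
```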

Validated with synthetic data: 3 trades, 67% win rate, $50 total,
max drawdown $30 — all matching expected math. GUI launches, HTTP 200
on / and /_stcore/health.

353/353 tests still pass; ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 12:11:02 +02:00
Adriano 1af983aff1 feat(gui): Phase A — read-only Streamlit dashboard (Status + Audit)
Implements the foundation of the local observation dashboard described
in docs/11-gui-streamlit.md:

* gui/data_layer.py — read-only wrappers over Repository (system_state,
  open positions) and audit_log (tail iteration, chain verify). The GUI
  never imports runtime/ nor calls MCP services.
* gui/main.py — Streamlit entry point with sidebar (engine health
  badge, kill switch banner, last health check age), home overview.
* gui/pages/1_📊_Status.py — engine status with colored health banner,
  kill switch detail, audit anchor, open positions table.
* gui/pages/2_🔍_Audit.py — live audit log stream (newest-first),
  event filters, hash-chain integrity verify button.
* cli.py gui — replaces the placeholder with os.execvpe to
  `python -m streamlit run` bound to 127.0.0.1, --browser.gatherUsageStats
  false; --db / --audit paths exported via env to the GUI process.
* pyproject.toml — N999 ignore for src/cerbero_bite/gui/pages/* (Streamlit
  auto-discovers pages whose filename contains numbers and emoji icons).

Smoke test: GUI launches, HTTP 200 on / and /_stcore/health, data layer
correctly reflects current testnet state (engine=running, kill_switch
disarmed, 0 open positions, audit chain intact with 7 entries).

353/353 tests still pass; ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 12:07:23 +02:00
Adriano abf5a140e2 refactor: telegram + portfolio in-process (drop shared MCP)
Each bot now manages its own notification + portfolio aggregation:

* TelegramClient calls the public Bot API directly via httpx, reading
  CERBERO_BITE_TELEGRAM_BOT_TOKEN / CERBERO_BITE_TELEGRAM_CHAT_ID from
  env. No credentials → silent disabled mode.
* PortfolioClient composes DeribitClient + HyperliquidClient + the new
  MacroClient.get_asset_price/eur_usd_rate to expose equity (EUR) and
  per-asset exposure as the bot's own slice (no cross-bot view).
* mcp-telegram and mcp-portfolio removed from MCP_SERVICES / McpEndpoints
  and the cerbero-bite ping CLI; health_check no longer probes portfolio.

Docs (02/04/06/07) and docker-compose updated to reflect the new
architecture.

353/353 tests pass; ruff clean; mypy src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 00:31:20 +02:00
77 changed files with 8094 additions and 780 deletions
-1
@@ -5,7 +5,6 @@
 .pytest_cache/
 __pycache__/
 data/
-docs/
 tests/
 .coverage
 htmlcov/
+43
@@ -0,0 +1,43 @@
# Template for `.env` (this file is committed; `.env` is not).
#
# Copy: `cp .env.example .env` and fill in the real values.

# --- MCP endpoints ---
# Docker-network defaults (inside the Cerbero_mcp V2 suite):
# CERBERO_BITE_MCP_DERIBIT_URL=http://cerbero-mcp:9000/mcp-deribit
# ...
# Public gateway (host outside the Docker network):
CERBERO_BITE_MCP_DERIBIT_URL=https://cerbero-mcp.tielogic.xyz/mcp-deribit
CERBERO_BITE_MCP_HYPERLIQUID_URL=https://cerbero-mcp.tielogic.xyz/mcp-hyperliquid
CERBERO_BITE_MCP_MACRO_URL=https://cerbero-mcp.tielogic.xyz/mcp-macro
CERBERO_BITE_MCP_SENTIMENT_URL=https://cerbero-mcp.tielogic.xyz/mcp-sentiment

# --- MCP bearer token ---
# Cerbero MCP V2 selects the upstream environment (testnet vs mainnet)
# based on the token presented in the Authorization header. To switch to
# mainnet, replace the value with the MAINNET_TOKEN issued by the
# Cerbero_mcp cluster and restart the bot. The token is NEVER logged.
CERBERO_BITE_MCP_TOKEN=

# --- Bot tag (X-Bot-Tag header) ---
# Identifies the bot in the MCP server's audit log. Project-wide
# default: `BOT__CERBERO_BITE`. Redefine it only for alternative
# environments (e.g. shadow run, replay).
CERBERO_BITE_MCP_BOT_TAG=BOT__CERBERO_BITE

# --- Operating mode ---
# Two independent switches that decide what the bot does on each pass
# of the decision loop:
# * ENABLE_DATA_ANALYSIS=true → MCP data collection, market
#   snapshots, indicator computation, logging and audit ACTIVE
# * ENABLE_STRATEGY=true → evaluation of rules §2-§9 and
#   proposal/execution of entries/exits ACTIVE
# Initial period ("data analysis only"): keep
# ENABLE_DATA_ANALYSIS=true and ENABLE_STRATEGY=false.
CERBERO_BITE_ENABLE_DATA_ANALYSIS=true
CERBERO_BITE_ENABLE_STRATEGY=false

# --- Telegram (notify-only) ---
# Leave commented out for disabled mode (no notifications).
# CERBERO_BITE_TELEGRAM_BOT_TOKEN=123456:ABC-DEF...
# CERBERO_BITE_TELEGRAM_CHAT_ID=-1001234567890
-3
@@ -43,6 +43,3 @@ data/
 .env
 .env.*
 !.env.example
-secrets/*
-!secrets/.gitkeep
-!secrets/README.md
+9 -4
@@ -14,12 +14,12 @@ ENV UV_PROJECT_ENVIRONMENT=/opt/venv \
 # Install only the dependencies first so the layer is cached when the
 # source tree changes.
 COPY pyproject.toml uv.lock ./
-RUN uv sync --frozen --no-dev --no-install-project
+RUN uv sync --frozen --no-dev --no-install-project --extra gui
 # Now copy the source tree and install the project itself.
 COPY src ./src
 COPY README.md ./
-RUN uv sync --frozen --no-dev
+RUN uv sync --frozen --no-dev --extra gui

 FROM python:3.13-slim AS runtime
@@ -34,13 +34,18 @@ WORKDIR /app
 ENV PATH=/opt/venv/bin:$PATH \
     PYTHONDONTWRITEBYTECODE=1 \
-    PYTHONUNBUFFERED=1 \
-    CERBERO_BITE_CORE_TOKEN_FILE=/run/secrets/core_token
+    PYTHONUNBUFFERED=1
 COPY --from=builder /opt/venv /opt/venv
 COPY --from=builder /app/src /app/src
 COPY scripts /app/scripts
 COPY strategy.yaml /app/strategy.yaml
+# Alternative profiles compared in the "📚 Strategia" page.
+COPY strategy.conservativa.yaml /app/strategy.conservativa.yaml
+COPY strategy.aggressiva.yaml /app/strategy.aggressiva.yaml
+# Documentation is shipped at runtime so the Streamlit "Strategia"
+# page can render the canonical strategy doc directly.
+COPY docs /app/docs
 # Persistent state + audit go into /app/data, mounted as a volume in
 # docker-compose.yml.
+100 -28
@@ -1,27 +1,48 @@
# docker-compose.yml — Cerbero Bite
#
# Bite runs in its own Compose project but joins the same Docker
# network used by Cerbero_mcp so it can resolve `mcp-deribit`,
# `mcp-macro` and friends by their service name (see the gateway
# Caddyfile in Cerbero_mcp).
# network used by Cerbero MCP V2 and Traefik (`traefik`) so it can
# either resolve the in-cluster service name (`cerbero-mcp:9000`)
# or reach the public gateway (`https://cerbero-mcp.tielogic.xyz`)
# transparently.
#
# The shared network is declared as external here. Create it once on
# the host with `docker network create cerbero-suite` (or rename the
# Cerbero_mcp network to `cerbero-suite` and mark it external).
# The reverse-proxy network (`traefik`) is declared as external
# here. It is created by the Traefik stack at /opt/docker/traefik
# and shared by every web-facing service on the host.
#
# Secrets are read from ./secrets/, which is .gitignore'd.
# Authentication: a single bearer token is passed through from the
# host `.env` file via `CERBERO_BITE_MCP_TOKEN`. The Cerbero MCP V2
# server uses the token to decide whether the upstream environment
# is testnet or mainnet; switching environment = switching token.
#
# Two services are defined:
# * `cerbero-bite` — the trading engine / CLI worker
# * `cerbero-bite-gui` — the Streamlit dashboard, exposed by
# Traefik at https://cerbero-bite.<DOMAIN>
networks:
cerbero-suite:
traefik:
external: true
secrets:
core_token:
file: ./secrets/core.token
volumes:
bite-data:
x-bite-env: &bite-env
CERBERO_BITE_MCP_TOKEN: ${CERBERO_BITE_MCP_TOKEN:?missing CERBERO_BITE_MCP_TOKEN}
CERBERO_BITE_MCP_BOT_TAG: ${CERBERO_BITE_MCP_BOT_TAG:-BOT__CERBERO_BITE}
# Two independent runtime flags that decide what each cycle does.
# Initial period ("data-only"): DATA_ANALYSIS=true, STRATEGY=false.
CERBERO_BITE_ENABLE_DATA_ANALYSIS: ${CERBERO_BITE_ENABLE_DATA_ANALYSIS:-true}
CERBERO_BITE_ENABLE_STRATEGY: ${CERBERO_BITE_ENABLE_STRATEGY:-false}
# Service URLs — defaults below match the in-cluster Traefik network
# DNS (V2 unified image listening on port 9000). Override any of
# them via .env to point at the public gateway, a custom host, or
# localhost for dev work.
CERBERO_BITE_MCP_DERIBIT_URL: ${CERBERO_BITE_MCP_DERIBIT_URL:-http://cerbero-mcp:9000/mcp-deribit}
CERBERO_BITE_MCP_HYPERLIQUID_URL: ${CERBERO_BITE_MCP_HYPERLIQUID_URL:-http://cerbero-mcp:9000/mcp-hyperliquid}
CERBERO_BITE_MCP_MACRO_URL: ${CERBERO_BITE_MCP_MACRO_URL:-http://cerbero-mcp:9000/mcp-macro}
CERBERO_BITE_MCP_SENTIMENT_URL: ${CERBERO_BITE_MCP_SENTIMENT_URL:-http://cerbero-mcp:9000/mcp-sentiment}
services:
cerbero-bite:
build:
@@ -29,23 +50,18 @@ services:
dockerfile: Dockerfile
image: cerbero-bite:dev
restart: unless-stopped
networks: [cerbero-suite]
networks: [traefik]
cap_drop: [ALL]
security_opt:
- no-new-privileges:true
secrets:
- core_token
environment:
CERBERO_BITE_CORE_TOKEN_FILE: /run/secrets/core_token
# Service URLs — the defaults below match the cerbero-suite
# network DNS. Override per service if you need to point at a
# different host (dev only).
CERBERO_BITE_MCP_DERIBIT_URL: http://mcp-deribit:9011
CERBERO_BITE_MCP_HYPERLIQUID_URL: http://mcp-hyperliquid:9012
CERBERO_BITE_MCP_MACRO_URL: http://mcp-macro:9013
CERBERO_BITE_MCP_SENTIMENT_URL: http://mcp-sentiment:9014
CERBERO_BITE_MCP_TELEGRAM_URL: http://mcp-telegram:9017
CERBERO_BITE_MCP_PORTFOLIO_URL: http://mcp-portfolio:9018
<<: *bite-env
# Telegram and Portfolio are no longer shared MCP services. The
# bot now calls the Telegram Bot API directly and aggregates
# portfolio in-process from Deribit + Hyperliquid + Macro.
# Set the two env vars below to enable Telegram notifications.
# CERBERO_BITE_TELEGRAM_BOT_TOKEN: ...
# CERBERO_BITE_TELEGRAM_CHAT_ID: ...
volumes:
- bite-data:/app/data
healthcheck:
@@ -55,6 +71,62 @@ services:
timeout: 5s
retries: 3
start_period: 120s
# Default command runs the engine status check; override with the
# CLI subcommand of choice (start, ping, dry-run, ...).
command: ["status"]
# Engine main loop (scheduler + monitoring). Switch to `status`,
# `ping`, `dry-run`, ... for one-shot diagnostics. The MCP token in
# `.env` decides the upstream environment server-side; the `start`
# flag below tells the local boot check what to expect (must match,
# otherwise the engine arms the kill switch).
command: ["start", "--environment", "mainnet"]
# Streamlit dashboard published by Traefik on
# https://cerbero-bite.${DOMAIN_NAME:-tielogic.xyz}
#
# The CLI sub-command `cerbero-bite gui` hard-codes the listen
# address to 127.0.0.1, so we bypass the entrypoint and invoke
# Streamlit directly. The two `CERBERO_BITE_GUI_*` env vars match
# what the CLI normally injects (see src/cerbero_bite/cli.py).
cerbero-bite-gui:
image: cerbero-bite:dev
restart: unless-stopped
depends_on:
- cerbero-bite
networks: [traefik]
cap_drop: [ALL]
security_opt:
- no-new-privileges:true
environment:
<<: *bite-env
CERBERO_BITE_GUI_DB: /app/data/state.sqlite
CERBERO_BITE_GUI_AUDIT: /app/data/log/audit.jsonl
volumes:
- bite-data:/app/data
entrypoint:
- python
- -m
- streamlit
- run
- /app/src/cerbero_bite/gui/main.py
- --server.address=0.0.0.0
- --server.port=8765
- --server.headless=true
- --browser.gatherUsageStats=false
command: []
healthcheck:
test:
- "CMD"
- "python"
- "-c"
- "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8765/_stcore/health', timeout=3).close()"
interval: 30s
timeout: 5s
retries: 3
start_period: 30s
labels:
- traefik.enable=true
- traefik.docker.network=traefik
- "traefik.http.routers.cerbero-bite.rule=Host(`cerbero-bite.${DOMAIN_NAME:-tielogic.xyz}`)"
- traefik.http.routers.cerbero-bite.tls=true
- traefik.http.routers.cerbero-bite.entrypoints=websecure
- traefik.http.routers.cerbero-bite.tls.certresolver=mytlschallenge
- traefik.http.services.cerbero-bite.loadbalancer.server.port=8765
- com.centurylinklabs.watchtower.enable=true
+8 -6
@@ -75,7 +75,7 @@ post-fact events (entry placed, exit filled, alert).
 | Format/lint | `ruff` | Project standard |
 | Dependency manager | `uv` | Consistent with `Cerbero_mcp` |
 | MCP client | long-lived `httpx.AsyncClient` (pooling) + `tenacity` for retries | Direct HTTP REST, not the `mcp` SDK |
-| Notifications | MCP `cerbero-telegram` (notify-only) | Reuses the existing channel |
+| Notifications | In-process Telegram Bot API (notify-only) | Token and chat-id from env, no-op when unconfigured |
 | GUI | `streamlit` ≥ 1.40 + `plotly` (Phase 4.5) | Local dashboard, separate process |

 ## Folder layout
@@ -88,9 +88,9 @@ Cerbero_Bite/
 ├── strategy.yaml               # golden config + execution.environment
 ├── strategy.local.yaml.example # local override (gitignored)
 ├── Dockerfile                  # runtime image + HEALTHCHECK
-├── docker-compose.yml          # external cerbero-suite network + secrets
+├── docker-compose.yml          # external cerbero-suite network, env passthrough
+├── .env.example                # variable template (MCP token, bot tag, operating mode)
 ├── docs/                       # this documentation
-├── secrets/                    # gitignored (.gitkeep + README only)
 ├── src/cerbero_bite/
 │   ├── __init__.py
 │   ├── __main__.py             # CLI entry point
@@ -135,7 +135,8 @@ Cerbero_Bite/
│ ├── config/ # caricamento e validazione yaml
│ │ ├── schema.py
│ │ ├── loader.py
│ │ ── mcp_endpoints.py # URL + token loader
│ │ ── mcp_endpoints.py # URL + token + bot tag (da .env)
│ │ └── runtime_flags.py # ENABLE_DATA_ANALYSIS / ENABLE_STRATEGY
│ ├── reporting/ # report umani (Fase 5)
│ ├── gui/ # Streamlit dashboard (Fase 4.5)
│ └── safety/ # kill switch, dead man, audit
@@ -170,8 +171,9 @@ Cerbero_Bite/
effetti collaterali. Espone `Orchestrator` come façade per il CLI.
- **`state/`** persistenza. Mai logica di business. Solo CRUD.
- **`config/`** caricamento di `strategy.yaml`, validazione,
esposizione immutabile dei parametri. Risolve gli URL MCP e legge
il bearer token al boot.
esposizione immutabile dei parametri. Risolve gli URL MCP, legge
il bearer token + il bot tag al boot ed espone i due interruttori
operativi `RuntimeFlags(data_analysis_enabled, strategy_enabled)`.
- **`safety/`** controlli trasversali (vedere `07-risk-controls.md`).
- **`reporting/`** generazione di stringhe per Telegram. Niente
logica di trading, solo formatting.
+81 -28
@@ -1,10 +1,22 @@
# 04 — MCP Integration
Cerbero Bite consumes six MCP HTTP services of the suite (`Cerbero_mcp`).
It does not use the Python `mcp` SDK: each server exposes the REST endpoints
`POST <base_url>/tools/<tool_name>` with Bearer authentication, and Cerbero
Bite connects to them through a long-lived `httpx.AsyncClient`
(`clients/_base.py`).
Cerbero Bite consumes four MCP HTTP routers of the Cerbero MCP V2 suite
(`Cerbero_mcp`): `mcp-deribit`, `mcp-hyperliquid`, `mcp-macro`,
`mcp-sentiment`. Since V2 the four routers live in the same FastAPI
process behind the same host (in-cluster default
`http://cerbero-mcp:9000/mcp-{exchange}`, public gateway
`https://cerbero-mcp.tielogic.xyz/mcp-{exchange}`). Cerbero Bite does not
use the Python `mcp` SDK: each router exposes the REST endpoints
`POST <base_url>/tools/<tool_name>` with Bearer authentication and an
`X-Bot-Tag` header, and Cerbero Bite connects through a long-lived
`httpx.AsyncClient` (`clients/_base.py`).
Telegram and Portfolio, formerly exposed as shared MCP services,
have been removed from the MCP layer and are handled **in-process** by
each bot of the suite: the Telegram client calls the public Bot API
directly, and the portfolio aggregator composes equity and exposures
from the exchange clients (Deribit + Hyperliquid), converting to EUR
via `cerbero-macro.get_asset_price("EURUSD")`.
## Connection configuration
@@ -14,26 +26,50 @@ with defaults matching the DNS of the Docker network
etc.). Each service can be overridden by a dedicated environment
variable, useful in development:
| Service | Environment variable | Docker DNS default |
| Service | Environment variable | Legacy Docker DNS default |
|---|---|---|
| Deribit | `CERBERO_BITE_MCP_DERIBIT_URL` | `http://mcp-deribit:9011` |
| Hyperliquid | `CERBERO_BITE_MCP_HYPERLIQUID_URL` | `http://mcp-hyperliquid:9012` |
| Macro | `CERBERO_BITE_MCP_MACRO_URL` | `http://mcp-macro:9013` |
| Sentiment | `CERBERO_BITE_MCP_SENTIMENT_URL` | `http://mcp-sentiment:9014` |
| Telegram | `CERBERO_BITE_MCP_TELEGRAM_URL` | `http://mcp-telegram:9017` |
| Portfolio | `CERBERO_BITE_MCP_PORTFOLIO_URL` | `http://mcp-portfolio:9018` |
The bearer token for the calls is the `core`-capability token read
from `secrets/core.token` (path configurable via
`CERBERO_BITE_CORE_TOKEN_FILE`, default `/run/secrets/core_token` in the
container). It is never logged.
The defaults shown above are the legacy of the V1 topology (one container
per service). On the unified V2 every URL must include the router
prefix, e.g. `http://cerbero-mcp:9000/mcp-deribit` or
`https://cerbero-mcp.tielogic.xyz/mcp-deribit`. The effective URLs are
configured in `.env`.
Telegram (notify-only) is configured directly through two
environment variables, read at boot by the in-process client:
| Variable | Use |
|---|---|
| `CERBERO_BITE_TELEGRAM_BOT_TOKEN` | Bot token issued by BotFather |
| `CERBERO_BITE_TELEGRAM_CHAT_ID` | ID of the destination chat or group |
When either of the two is missing, the Telegram client enters
**disabled** mode and every `notify_*` becomes a DEBUG-level no-op.
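A minimal sketch of that disabled-mode contract (the class shape is illustrative; only the env-variable names and the no-op behavior come from the doc, the real `clients/telegram.py` API may differ):

```python
import logging
import os

logger = logging.getLogger("cerbero_bite.telegram")


class TelegramClient:
    """Notify-only client; degrades to a no-op when env vars are missing."""

    def __init__(self) -> None:
        self.token = os.environ.get("CERBERO_BITE_TELEGRAM_BOT_TOKEN")
        self.chat_id = os.environ.get("CERBERO_BITE_TELEGRAM_CHAT_ID")
        self.disabled = not (self.token and self.chat_id)

    def notify(self, message: str, priority: str = "MEDIUM") -> bool:
        if self.disabled:
            # no-op at DEBUG level: the decision loop is never blocked
            logger.debug("telegram disabled, dropping: %s", message)
            return False
        # the real client POSTs to https://api.telegram.org/bot<TOKEN>/sendMessage
        raise NotImplementedError("network send elided in this sketch")
```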
The bearer token for the calls is read from the environment variable
`CERBERO_BITE_MCP_TOKEN` (see `.env`). On V2 the token value
decides which upstream environment serves the request: the same MCP
server fronts testnet and mainnet simultaneously, and you switch from
one to the other simply by replacing the variable's value and
restarting the bot. The token is never logged.
With every call Cerbero Bite also adds the `X-Bot-Tag` header, with
default value `BOT__CERBERO_BITE` (override via
`CERBERO_BITE_MCP_BOT_TAG`). The MCP server writes the value into the
audit record of every write operation, so each write stays
attributable to its originating bot.
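The Bearer + `X-Bot-Tag` header pair described above, and the client's 1s/5s/30s retry ladder, reduce to two pure helpers (names are ours; the real client wires these values into `httpx`/`tenacity`):

```python
def auth_headers(token: str, bot_tag: str = "BOT__CERBERO_BITE") -> dict:
    """Headers attached to every MCP call: Bearer auth + write attribution."""
    return {"Authorization": f"Bearer {token}", "X-Bot-Tag": bot_tag}


def backoff_schedule(retry_max: int = 3) -> list:
    """Exponential-ish retry ladder from the doc: 1s, 5s, 30s."""
    ladder = [1.0, 5.0, 30.0]
    return ladder[:retry_max]
```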
```python
# clients/_base.py — summary
class HttpToolClient:
    service: str  # "deribit", "macro", ...
    base_url: str  # "http://mcp-deribit:9011"
    token: str  # bearer
    base_url: str  # "https://cerbero-mcp.tielogic.xyz/mcp-deribit"
    token: str  # bearer (testnet or mainnet, chosen via env)
    bot_tag: str = "BOT__CERBERO_BITE"  # X-Bot-Tag header
    timeout_s: float = 8.0
    retry_max: int = 3  # exponential 1s/5s/30s
    client: httpx.AsyncClient | None  # shared via the RuntimeContext
```
@@ -100,22 +136,35 @@ Cerbero Bite is deterministic and does not interpret free text.
| Tool | Use |
|---|---|
| `get_macro_calendar(days, country_filter, importance_min)` | Entry filter §2.5: zero `high` events in `country_filter` (default `["US","EU"]`) within the DTE window |
| `get_asset_price(ticker="EURUSD")` | EUR/USD exchange rate used by the portfolio aggregator to convert the exchanges' USD equity into EUR |
### `cerbero-portfolio`
## In-process components
| Tool | Use |
### Portfolio aggregator (`clients/portfolio.py`)
The `PortfolioClient` no longer calls a dedicated MCP service; it
composes the data of the two exchanges used by the bot and applies the
EUR/USD rate read from `cerbero-macro`.
| Method | Behavior |
|---|---|
| `get_total_portfolio_value(currency="EUR")` | Base capital for the sizing engine, after currency conversion |
| `get_holdings()` | Manual aggregation of `current_value_eur` for the tickers containing `"ETH"`, used by filter §2.7 (`eth_holdings_pct_max`) |
| `total_equity_eur()` | Sums the USD `equity` of Deribit (USDC) and Hyperliquid, then divides by `EURUSD` to obtain the EUR capital consumed by the sizing engine |
| `asset_pct_of_portfolio(ticker)` | Sums the absolute USD notional of the open positions on both exchanges whose `instrument`/`coin` contains `ticker`, and divides it by the total USD equity. Used by filter §2.7 (`eth_holdings_pct_max`) |
### `cerbero-telegram`
**Scope note**: the view is the single bot's *slice*. Holdings on
external exchanges, in cold storage, or managed by other bots of the
suite are not counted. Filter §2.7 must therefore be read as a
per-bot cap, not a suite-wide cap.
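Under the conventions above (per-exchange USD equity, `EURUSD` quoted as USD per EUR), the two aggregator methods reduce to simple arithmetic. A sketch with plain values in place of the exchange clients (function bodies are illustrative, not the actual `PortfolioClient` code):

```python
def total_equity_eur(deribit_equity_usd: float,
                     hyperliquid_equity_usd: float,
                     eurusd: float) -> float:
    """Sum the USD equity of both exchanges and convert to EUR (USD / EURUSD)."""
    return (deribit_equity_usd + hyperliquid_equity_usd) / eurusd


def asset_pct_of_portfolio(position_notionals_usd: list,
                           total_equity_usd: float) -> float:
    """Absolute USD notional of the matching positions over total USD equity."""
    return sum(abs(n) for n in position_notionals_usd) / total_equity_usd
```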
Cerbero Bite uses Telegram in **notify-only** mode: no manual
confirmation, no callbacks. The engine opens and closes positions
automatically when the rules are met; Telegram is informed
post-fact.
### Telegram client (`clients/telegram.py`)
| Tool | Use |
Cerbero Bite uses Telegram in **notify-only** mode: no manual
confirmation, no callbacks. The engine opens and closes the
positions automatically when the rules are met; the client delivers
the message to the configured `chat_id` by calling
`https://api.telegram.org/bot<TOKEN>/sendMessage` directly.
| Method | Use |
|---|---|
| `notify(message, priority, tag)` | MEDIUM alerts or informational messages |
| `notify_position_opened(instrument, side, size, strategy, greeks, expected_pnl)` | Entry-placed notification |
@@ -123,16 +172,20 @@ informed post-fact.
| `notify_alert(source, message, priority)` | HIGH alerts (kill switch) |
| `notify_system_error(message, component, priority)` | CRITICAL alerts |
When the env credentials are not configured, the client is in
disabled mode and every send becomes a silent no-op: the decision
loop is never blocked.
## Errors and degradation
| Server down | Behavior |
| Component down | Behavior |
|---|---|
| `cerbero-deribit` | **Hard fail**: without market data and an execution channel the cycle is skipped; in monitor the existing positions stay in their current state, HIGH alert and kill switch |
| `cerbero-hyperliquid` | Skip the funding filter §2.6 with a warning; the cycle proceeds if the other conditions are met |
| `cerbero-sentiment` | Bias §3.1 falls back to `no_entry` by default (without cross funding the bias cannot fix the direction) |
| `cerbero-macro` | Hard fail for filter §2.5; without the calendar no entry is opened |
| `cerbero-portfolio` | Skip the filters §2.7 with a warning; sizing uses the last known capital from SQLite |
| `cerbero-telegram` | Skip the post-fact notifications; the decision loop is not blocked (the engine does not wait for replies) |
| `cerbero-macro` | Hard fail for filter §2.5 and for the portfolio aggregator's EUR/USD conversion; without calendar/FX no entry is opened |
| Portfolio aggregator (deribit or hyperliquid down) | The `PortfolioClient` methods propagate the underlying exchange's exception; the sizing engine behaves as for an MCP failure of the lower layer |
| Telegram client | HTTP error or `ok=false` from the Bot API → `TelegramError` propagated to the caller. In disabled mode (env missing) all `notify_*` are silent no-ops and the decision loop proceeds |
HIGH and CRITICAL triggers arm the kill switch and propagate an alert
into the audit chain.
+23 -10
@@ -152,27 +152,40 @@ CREATE TABLE dvol_history (
### `manual_actions`
Queue of manual actions generated by the Streamlit GUI (see
`11-gui-streamlit.md`). Schema planned ahead of Phase 4.5; for the
moment the GUI is not implemented and the table stays empty.
`11-gui-streamlit.md`). The table is populated by the
`gui/data_layer.py` layer (`enqueue_arm_kill`, `enqueue_disarm_kill`) and
drained by the APScheduler job `manual_actions`
(`runtime/manual_actions_consumer.consume_manual_actions`, cron
`*/1 * * * *`).
```sql
CREATE TABLE manual_actions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
kind TEXT NOT NULL, -- approve_proposal, reject_proposal,
-- force_close, arm_kill, disarm_kill
kind TEXT NOT NULL, -- arm_kill, disarm_kill,
-- force_close, approve_proposal, reject_proposal
proposal_id TEXT, -- NULL if the action is not tied to a proposal
payload_json TEXT, -- JSON with motive, typed confirmation, etc.
payload_json TEXT, -- JSON with reason, typed confirmation, etc.
created_at TEXT NOT NULL,
consumed_at TEXT, -- NULL = not yet processed
consumed_by TEXT,
result TEXT
consumed_by TEXT, -- "engine" when applied by the consumer
result TEXT -- "ok" / "not_supported" / "error: ..."
);
CREATE INDEX idx_manual_actions_unconsumed ON manual_actions(consumed_at);
```
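A minimal in-memory exercise of this schema (helper names follow the doc's `enqueue_*` / `next_unconsumed_action` vocabulary; the exact signatures in `gui/data_layer.py` may differ):

```python
import json
import sqlite3
from datetime import datetime, timezone

SCHEMA = """
CREATE TABLE manual_actions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    kind TEXT NOT NULL,
    proposal_id TEXT,
    payload_json TEXT,
    created_at TEXT NOT NULL,
    consumed_at TEXT,
    consumed_by TEXT,
    result TEXT
);
CREATE INDEX idx_manual_actions_unconsumed ON manual_actions(consumed_at);
"""


def enqueue_arm_kill(db: sqlite3.Connection, reason: str) -> int:
    """Queue an arm_kill action; the engine-side consumer drains it later."""
    cur = db.execute(
        "INSERT INTO manual_actions (kind, payload_json, created_at) "
        "VALUES (?, ?, ?)",
        ("arm_kill", json.dumps({"reason": reason}),
         datetime.now(timezone.utc).isoformat()),
    )
    return cur.lastrowid


def next_unconsumed_action(db: sqlite3.Connection):
    """Oldest-first row still waiting for the consumer (or None)."""
    return db.execute(
        "SELECT id, kind, payload_json FROM manual_actions "
        "WHERE consumed_at IS NULL ORDER BY id LIMIT 1"
    ).fetchone()
```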
The `manual_actions` do not bypass the risk controls: the consumer
(once it exists) will apply the same `safety.system_healthy()`
checks before executing.
Implementation status per `kind`:
| `kind` | Implemented | Effect |
|---|---|---|
| `arm_kill` | ✅ | `KillSwitch.arm(reason, source="manual_gui")` |
| `disarm_kill` | ✅ | `KillSwitch.disarm(reason, source="manual_gui")` |
| `force_close` | ⏳ | Marked `result="not_supported"` until the orchestrator exposes `handle_force_close` |
| `approve_proposal` / `reject_proposal` | ⏳ | Same |
The `manual_actions` do **not** bypass the risk controls: every
kill-switch action goes through the `KillSwitch` class, which validates
the state and appends the corresponding event to the audit chain. The
typed confirmation on the GUI side is gating before the enqueue.
### `system_state`
+75 -1
@@ -140,7 +140,7 @@ Trigger: every 5 minutes.
- macro.get_macro_calendar(days=1)
- sentiment.get_cross_exchange_funding (no asset filter)
- hyperliquid.get_funding_rate("ETH")
- portfolio.get_total_portfolio_value
- portfolio: skip (in-process component, covered indirectly by the deribit/hyperliquid/macro probes)
- telegram: skip (notify-only, no non-invasive probe)
2. SQLite read-write probe (dummy transaction)
3. Lock file still valid
@@ -154,6 +154,26 @@ Trigger: every 5 minutes.
The dead man (`scripts/dead_man.sh`) watches that `HEALTH_OK` keeps
being written: silence > 15 min → kill switch via SQLite and alert.
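The dead-man rule is just a timestamp-age check; a Python rendition of the shell logic (threshold from the doc, function name ours):

```python
from datetime import datetime, timedelta, timezone


def dead_man_tripped(last_health_ok: datetime,
                     now: datetime,
                     max_silence: timedelta = timedelta(minutes=15)) -> bool:
    """True when HEALTH_OK has been silent longer than the allowed window."""
    return (now - last_health_ok) > max_silence
```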
## Flow 5b — Manual actions consumer
Trigger: cron `*/1 * * * *` (APScheduler job `manual_actions`).
```
1. While the queue has unconsumed rows:
- read `next_unconsumed_action` (oldest-first)
- dispatch by kind:
arm_kill → KillSwitch.arm(reason, source="manual_gui")
disarm_kill → KillSwitch.disarm(reason, source="manual_gui")
force_close / approve_proposal / reject_proposal → result="not_supported"
- mark_action_consumed with consumed_by="engine" and result
2. Typical end-to-end latency (GUI enqueue → effect): ≤ 60 sec.
```
The consumer is the **single channel** for writes from the GUI into the
runtime: every kill-switch transition goes through the `KillSwitch`
class to keep SQLite and the audit chain in lock-step. See
`runtime/manual_actions_consumer.py` and `docs/11-gui-streamlit.md`.
## Flow 6 — Recovery after a crash
At startup or after a container restart:
@@ -203,7 +223,61 @@ proposed
| `0 2,14 * * *` | Position monitoring | 2× daily |
| `0 12 1 * *` | Kelly recalibration | Monthly |
| `*/5 * * * *` | Health check | 5 min |
| `*/15 * * * *` | Market snapshot (threshold calibration) | 15 min |
| `0 0 * * *` | SQLite backup + log rotation | Daily |
| `0 8 * * *` | Daily Telegram digest | Daily |
All times are UTC.
## Operating mode (`RuntimeFlags` switches)
The bot recognizes two independent switches, read from
`.env` at boot through `cerbero_bite.config.runtime_flags.load_runtime_flags()`:
| Environment variable | Default | What it enables |
|---|---|---|
| `CERBERO_BITE_ENABLE_DATA_ANALYSIS` | `true` | `market_snapshot` job every 15 min: MCP data collection, writes to the `market_snapshots` table, threshold calibration. |
| `CERBERO_BITE_ENABLE_STRATEGY` | `false` | `entry` (Monday 14:00 UTC) and `monitor` (2× daily) jobs: evaluation of rules §2-§9 of `01-strategy-rules.md` and order proposal/execution. |
The infrastructure jobs (`health`, `backup`, `manual_actions`) are
**always active**, regardless of the flags, because they keep the
kill switch and persistence alive.
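A sketch of the loader contract (the dataclass shape and defaults come from the doc; the accepted boolean spellings are the ones listed in `10-…` config docs, and the exact implementation in `cerbero_bite.config.runtime_flags` may differ):

```python
import os
from dataclasses import dataclass

_TRUE = {"1", "true", "yes", "on", "enabled"}
_FALSE = {"0", "false", "no", "off", "disabled"}


@dataclass(frozen=True)
class RuntimeFlags:
    data_analysis_enabled: bool
    strategy_enabled: bool


def load_runtime_flags(env=os.environ) -> RuntimeFlags:
    def as_bool(name: str, default: bool) -> bool:
        raw = env.get(name)
        if raw is None:
            return default
        value = raw.strip().lower()
        if value in _TRUE:
            return True
        if value in _FALSE:
            return False
        # any other value fails the boot
        raise ValueError(f"{name}: unrecognized boolean {raw!r}")

    return RuntimeFlags(
        data_analysis_enabled=as_bool("CERBERO_BITE_ENABLE_DATA_ANALYSIS", True),
        strategy_enabled=as_bool("CERBERO_BITE_ENABLE_STRATEGY", False),
    )
```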
### "Data analysis only" profile (default)
Standard configuration for the post-deploy soak period:
```env
CERBERO_BITE_ENABLE_DATA_ANALYSIS=true
CERBERO_BITE_ENABLE_STRATEGY=false
```
Effect: the bot collects market snapshots and feeds `market_snapshots`,
but does **not** place entries nor close positions autonomously. The
`run_entry`/`run_monitor` methods remain callable manually from the CLI
(`cerbero-bite dry-run --cycle entry|monitor`) and through `manual_actions`
for testing and validation.
### "Active trading" profile
```env
CERBERO_BITE_ENABLE_DATA_ANALYSIS=true
CERBERO_BITE_ENABLE_STRATEGY=true
```
Effect: all the canonical jobs are installed in the scheduler. The
switch should be flipped only after the quality of the collected data
has been validated and Adriano gives explicit consent to the transition.
### Full shutdown of data analysis
Exceptional case (maintenance, MCP problem):
```env
CERBERO_BITE_ENABLE_DATA_ANALYSIS=false
CERBERO_BITE_ENABLE_STRATEGY=false
```
The bot stays alive for health checks and for receiving manual actions,
but does not query MCP for market data and does not trade. The kill
switch remains operational.
+1 -1
@@ -34,7 +34,7 @@ infrastructure failures or misplaced human decisions.
| Cause | Auto-arm | Implemented | Notes |
|---|---|---|---|
| MCP `cerbero-deribit` not responding for 3 consecutive health checks | Yes | `runtime/health_check.py` | Severity HIGH |
| MCP `cerbero-macro` / `cerbero-portfolio` / `cerbero-hyperliquid` / `cerbero-sentiment` not responding for 3 consecutive health checks | Yes | `runtime/health_check.py` | Severity HIGH |
| MCP `cerbero-macro` / `cerbero-hyperliquid` / `cerbero-sentiment` not responding for 3 consecutive health checks | Yes | `runtime/health_check.py` | Severity HIGH |
| `mcp-deribit.environment_info.environment` ≠ `strategy.execution.environment` | Yes | `runtime/orchestrator.boot` + health check | Severity CRITICAL at boot, HIGH at runtime |
| Mismatch between the tail of `data/audit.log` and `system_state.last_audit_hash` (truncation or tampering) | Yes | `runtime/orchestrator._verify_audit_anchor` | Severity CRITICAL at boot |
| SQLite state inconsistent with the broker (recovery not conclusive) | Yes | `runtime/recovery.py` | Severity CRITICAL at boot |
+25 -16
@@ -126,29 +126,38 @@ Definition of Done:
- Engine can run in `--dry-run` for 24h without errors
- Logs are readable and complete
## Phase 4.5 — Streamlit GUI (4 days)
## Phase 4.5 — Streamlit GUI (4 days) ✅ implemented
**Goal:** local dashboard for observation and manual actions. Detailed
spec in `11-gui-streamlit.md`.
Tasks:
Implemented in four rounds (A→D):
1. Setup `gui/main.py` + sidebar nav + auto-refresh
2. Status page (engine, capital, MCP health, kill switch panel)
3. Equity page (curves, drawdown, monthly stats)
4. Position page (legs, plotly payoff, decision history, force-close)
5. History page (filters, KPI, CSV export)
6. Audit page (live log, verify chain, search)
7. `manual_actions` table + APScheduler consumer job in the engine
8. Integration tests with `streamlit.testing.v1.AppTest`
1. `gui/main.py` + sidebar nav (active auto-refresh not wired; the
Streamlit re-render is sufficient for the typical frequency)
2. Status page (engine state, kill switch panel with typed
confirmation, audit anchor, open positions)
3. Equity page (cumulative P&L, drawdown, P&L distribution per
close reason, per-month stats)
4. ✅ Position page (legs from the entry snapshot, plotly payoff for
bull_put/bear_call with annotations, decision history) — live
greeks and force-close deferred
5. ✅ History page (window/reason/winners-losers filters, KPI strip,
CSV export)
6. ✅ Audit page (live log stream, chain verify, event filter)
7. ✅ `runtime/manual_actions_consumer.py` consumer with APScheduler job
`*/1` for arm/disarm (force_close = `not_supported` for now)
8. ⏳ Integration tests with `streamlit.testing.v1.AppTest`
Definition of Done:
Definition of Done — status:
- `cerbero-bite gui` launches the dashboard on `127.0.0.1:8765`
- All 5 pages reachable and populated
- Disarm from the GUI logged in the audit chain and effective within 30 sec
- Force-close from the GUI consumed by the engine within 30 sec
- Integration tests passing on every page
- `cerbero-bite gui` launches the dashboard on `127.0.0.1:8765`
- All 5 pages reachable and populated
- Disarm from the GUI logged in the audit chain (`source="manual_gui"`) and
effective within ~1 minute
- ⏳ Force-close from the GUI: the enqueue works but the orchestrator has
yet to expose `handle_force_close`
- ⏳ AppTest integration tests: not written
## Phase 5 — Reporting and UX (3-5 days)
+28
@@ -307,3 +307,31 @@ It is not permitted to parametrize:
upper limits, not further relaxable).
- The **scheduler** for tighter intervals (an optimization that
is not done via config).
## Environment variables
`strategy.yaml` defines **what** the bot does when it is on. The
environment variables in `.env` define **how** it connects to the
outside world and **which operational switches** are active.
These live outside `strategy.yaml` because they change per environment
(testnet vs mainnet, soak vs trading) but not per strategy rule.
| Variable | Type | Default | Use |
|---|---|---|---|
| `CERBERO_BITE_MCP_TOKEN` | string (required) | — | Bearer token presented to Cerbero MCP V2. Its value decides the upstream environment (testnet or mainnet). Change the value = change the environment. |
| `CERBERO_BITE_MCP_BOT_TAG` | string ≤ 64 chars | `BOT__CERBERO_BITE` | `X-Bot-Tag` header recorded in the MCP server's audit log for every write. |
| `CERBERO_BITE_MCP_DERIBIT_URL` | URL | public gateway | Override for the Deribit router URL. |
| `CERBERO_BITE_MCP_HYPERLIQUID_URL` | URL | public gateway | Override for the Hyperliquid router URL. |
| `CERBERO_BITE_MCP_MACRO_URL` | URL | public gateway | Override for the Macro router URL. |
| `CERBERO_BITE_MCP_SENTIMENT_URL` | URL | public gateway | Override for the Sentiment router URL. |
| `CERBERO_BITE_ENABLE_DATA_ANALYSIS` | bool (`true`/`false`) | `true` | Enables the `market_snapshot` job (MCP data collection every 15 min). |
| `CERBERO_BITE_ENABLE_STRATEGY` | bool (`true`/`false`) | `false` | Enables the `entry` and `monitor` jobs (execution of rules §2-§9). |
| `CERBERO_BITE_TELEGRAM_BOT_TOKEN` | string | — | Telegram bot token (notify-only). Without it, the client is in disabled mode. |
| `CERBERO_BITE_TELEGRAM_CHAT_ID` | string | — | Chat ID that receives the Telegram notifications. |
Boolean values accept `1`/`0`, `true`/`false`, `yes`/`no`,
`on`/`off`, `enabled`/`disabled` (case-insensitive) as input. Any other
value fails the boot with a `ValueError`.
See `06-operational-flow.md` §"Operating mode" for the canonical
profiles of `ENABLE_DATA_ANALYSIS` and `ENABLE_STRATEGY`.
+183 -134
@@ -46,129 +46,152 @@ uv run streamlit run src/cerbero_bite/gui/main.py \
--browser.gatherUsageStats false
```
## Implementation status
The dashboard was built in four incremental phases:
| Phase | Content | Status |
|---|---|---|
| A | Status + Audit (basic observation) | ✅ |
| B | Equity + History (analytics + CSV export) | ✅ |
| C | Position drilldown with plotly payoff + decision history | ✅ |
| D | Kill-switch arm/disarm from the dashboard via the `manual_actions` queue | ✅ |
By scope choice, the first iteration leaves out: force-close from
the GUI (requires a `handle_force_close` hook in the orchestrator),
approve/reject of a proposal (the bot decides autonomously, there is no
pending-proposal flow) and active auto-refresh via
`st_autorefresh`. The `manual_actions` consumer already recognizes the
corresponding `kind`s and archives them with `result="not_supported"`
until the flows are wired.
## Folder layout
```
src/cerbero_bite/gui/
├── __init__.py
├── main.py # streamlit entry point, sidebar nav
├── pages/
│ ├── 1_📊_status.py
│ ├── 2_📈_equity.py
│ ├── 3_💼_position.py
│ ├── 4_📜_history.py
│ └── 5_🔍_audit.py
├── components/
│ ├── kill_switch_panel.py
│ ├── mcp_health_grid.py
│ ├── pending_proposal_card.py
│ ├── payoff_chart.py
│ └── greeks_panel.py
└── data_layer.py # read-only wrapper over state.repository
├── main.py # Streamlit entry, sidebar, home
├── data_layer.py # read-only wrapper + write helpers
└── pages/
├── 1_📊_Status.py # health, kill switch, audit anchor
├── 2_🔍_Audit.py # log stream + chain integrity
├── 3_📈_Equity.py # cumulative P&L + drawdown
├── 4_📜_History.py # closed trades + KPI + CSV
└── 5_💼_Position.py # drilldown + plotly payoff
```
The reusable components described in the original spec
(`kill_switch_panel`, `payoff_chart`, etc.) were not extracted into
separate files: each page is self-contained and keeps its own UI inline,
so evolution stays local to the single file. Promotion to separate
components is justified only when several pages share the
same widget — which is not the case at the moment.
## Pages
### 1. 📊 Status (home)
At-a-glance view of the current state.
Current state and kill-switch controls.
Sections:
Implemented sections:
- **Engine status**: green/yellow/red badge (running/degraded/killed),
uptime, last health check, kill_switch state, kill_reason if armed.
- **Capital**: current equity from `cerbero-portfolio` (cache of the last
known value + timestamp), % change vs previous day, vs week,
vs month.
- **Active position**: card with a summary (proposal_id, expiry, credit,
estimated unrealized P&L, days_to_expiry) or "no open position".
- **MCP health grid**: 8 boxes, one per server, with latency in ms and a traffic light.
- **Pending action**: if the engine has a proposal awaiting confirmation
and the Telegram timeout has expired, a card appears here with `Approve`/`Reject`.
Effect: the decision is written to the queue and the decision orchestrator
reads it at the next health check.
- **Big buttons**: `🟢 Disarm` / `🔴 Arm Kill Switch` (with typed
confirmation `"yes I am sure"`).
- **Engine status banner** colored by the health derived from the
combination of `system_state.kill_switch` + age of `last_health_check`
(`running`/`degraded`/`stopped`/`killed`/`unknown`).
- **Top metric tiles**: open positions, age of the last health check,
`started_at`, `config_version`.
- **Kill switch controls**: arm/disarm form with typed confirmation
(`"yes I am sure"`) + mandatory reason. Submission writes
an action into `manual_actions`; the consumer applies it within a minute.
- **Pending manual actions**: table of the queued actions not yet
consumed (visible only when the queue is non-empty).
- **Audit anchor**: hash chain head persisted in `system_state`.
- **Open positions table**: spread type, contracts, credit, max loss,
strikes, status, opened/expiry.
Auto-refresh: 5 seconds.
Sections not yet implemented vs the original spec: capital
with % changes, MCP health grid (the probes are done by the engine and
visible in audit), pending-proposal card. Refresh is
manual (the page updates on navigation or on Streamlit's spontaneous
re-render).
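The banner derivation from `kill_switch` + health-check age boils down to a few threshold comparisons. A sketch under assumed thresholds (the doc only names the five states; the cut-offs below are illustrative, chosen around the 5-minute health cadence):

```python
from typing import Optional


def engine_status(kill_switch: bool,
                  health_age_s: Optional[float],
                  degraded_after_s: float = 600.0,
                  stopped_after_s: float = 3600.0) -> str:
    """Derive the banner state; thresholds are assumptions, not the real ones."""
    if kill_switch:
        return "killed"
    if health_age_s is None:
        return "unknown"  # no health check recorded yet
    if health_age_s > stopped_after_s:
        return "stopped"
    if health_age_s > degraded_after_s:
        return "degraded"  # at least two 5-min probes missed
    return "running"
```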
### 2. 📈 Equity
### 2. 🔍 Audit
Capital history chart and analytics.
Live log stream + hash-chain integrity verification.
Sections:
Implemented sections:
- **Equity curve** (line chart): capital over time since tracking
began. Daily resolution. Optional overlays:
- Monte Carlo P5/P50/P95 band (static, from the document)
- DVOL over time (secondary Y axis)
- macro events (vertical lines on FOMC/CPI days)
- **Rolling drawdown** (below the curve): area chart of the current DD%.
- **P&L distribution** (histogram): closed trades grouped by outcome
(profit_take, stop_loss, vol_stop, time_stop, etc.).
- **Monthly table**: for each month — n trades, win rate, P&L, max DD.
- **Chain integrity verify**: button that calls `verify_chain` and
reports the number of verified entries or the mismatch error.
- **Filters**: limit (10–500) + event filter (auto-populated from the events
actually present in the tail).
- **Event-count strip**: `Counter` of the event types in the window.
- **Tail table**: timestamp, event, canonical JSON payload, abbreviated
hash — newest-first.
Filters: time range, asset (ETH only for now).
### 3. 📈 Equity
Auto-refresh: 30 seconds (it changes rarely).
Cumulative P&L curve and closed-trade analytics.
### 3. 💼 Position
Implemented sections:
Drill-down on the currently open position (if any).
- **KPI strip**: closed trades, win rate, total P&L, edge per trade,
max drawdown (USD + %).
- **Cumulative P&L** (Plotly): filled to zero, with a zero reference
line.
- **Drawdown** (Plotly area chart, inverted axis).
- **P&L distribution by close reason**: overlaid Plotly histograms
with trade counts per reason in metric tiles.
- **Per-month stats**: UTC-aggregated table (month, n trades, winners,
win rate, total P&L, average P&L).
Sections:
- **Header**: proposal_id, opened_at, expiry, days_left, status.
- **Legs table**: instrument, side, size, current mid, delta,
theta, vega — periodic refresh via `clients.deribit`.
- **Aggregate greeks**: net delta/theta/vega.
- **Payoff diagram** (plotly): P&L vs ETH spot at expiry, with
breakeven, max profit, max loss, current spot as a marker.
- **Decision history**: table with all the `exit_check` `decisions`
for this position, in chronological order, with
HOLD / CLOSE_* outcome.
- **Distance metrics**: short strike at `X% OTM`, current delta,
distance in sigma.
- **Force close** (collapsible): typed confirmation + reason field.
On submit: writes a `manual_close` action to the queue; the engine
consumes it at the next monitor cycle.
Auto-refresh: 10 seconds.
Window picker: All time, last 30/90 days, year-to-date. Monte
Carlo band, DVOL overlay and macro-event lines are not implemented yet.
### 4. 📜 History
Closed-trade history.
Closed-trade history with filters and export.
Sections:
Implemented sections:
- **Filters**: time range, outcome (multiselect), P&L > 0 / < 0 / all.
- **Closed trades table** (sortable `st.dataframe`): proposal_id,
opened_at, closed_at, expiry, n_contracts, credit_usd, debit_paid_usd,
pnl_usd, outcome, days_held.
- **KPI strip**: n trades, win rate, avg win, avg loss, edge per trade,
cumulative edge.
- **Monte Carlo comparison**: side-by-side of the real metrics vs those
expected from simulation, with deltas in %.
- **CSV export**: download button for tax purposes.
- **Window picker**: All time, last 7/30/90 days, year-to-date.
- **Detail filters**: multiselect on `close_reason`, winners/losers/all
radio.
- **Six-tile KPI strip**: trades, win rate, total P&L, avg win,
avg loss, edge per trade.
- **Closed trades table**: proposal_id (short), spread type, asset,
contracts, strikes, credit/max_loss, P&L, close_reason, days_held,
opened/closed/expiry.
- **CSV export**: direct download via `st.download_button`.
Auto-refresh: manual (button).
Side-by-side Monte Carlo comparison not implemented yet.
### 5. 🔍 Audit
### 5. 💼 Position
Logs and audit chain.
Drilldown on a specific position (open, or the last 10 closed).
Sections:
Implemented sections:
- **Live log stream**: last 100 events, filter by `level` and `event`.
Auto-refresh 5 sec.
- **Audit chain status**: `Verify` button. Shows "✅ chain intact
up to 14,382 events" or "❌ tampering detected at event N".
- **Search**: text search over the last 30 days of logs.
- **Engine stats**: number of kill-switch arms in the last month, MCP
failure count per server, average decision loop latency.
- **Log export**: `.jsonl.gz` download for forensic analysis.
- **Position selector** with label `proposal_id · spread_type ·
short/long · status`. Supports deep-linking via the query string
`?proposal_id=…`.
- **Header tiles**: status, spread, contracts, credit USD; caption with
the full proposal_id + opened/expiry.
- **Distance metrics**: short strike OTM%, days-to-expiry, days-held,
delta at entry, width % of spot.
- **Legs table** (snapshot taken at entry, not live): leg, instrument,
strike, side, size, delta. A caption reminds that live mid and greeks
are not fetched by the GUI.
- **Payoff at expiry** (Plotly): P&L curve with annotations for short
strike, long strike, breakeven, entry spot. Summary tiles for
max profit, max loss, breakeven. Implemented for `bull_put` and
`bear_call`; iron condors fall back to a flat curve (placeholder).
- **Decision history**: table of the `decisions` rows tied to the
`proposal_id`, newest-first, with canonical JSON outputs.
Auto-refresh: manual.
Live greeks/mid and manual force-close require the engine to expose,
respectively, a persisted snapshot and the `handle_force_close`
hook — out of scope for the first iteration.
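The payoff curve rendered on this page reduces, for a `bull_put`, to per-contract intrinsic value at expiry. A sketch (per single contract, contract multiplier elided; the function name is ours):

```python
def bull_put_payoff(spot: float, short_strike: float, long_strike: float,
                    credit: float) -> float:
    """P&L at expiry of a bull put spread: short put at the higher strike,
    long put at the lower strike, credit received upfront."""
    short_put = max(short_strike - spot, 0.0)  # obligation on the short leg
    long_put = max(long_strike - spot, 0.0)    # protection from the long leg
    return credit - short_put + long_put
```

At or above the short strike the spread keeps the full credit (max profit); at or below the long strike the loss caps at the strike width minus the credit.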
## Comunicazione GUI ↔ Engine
@@ -177,53 +200,70 @@ MCP. Tutto passa via:
| GUI action | Effect |
|---|---|
| State view | Read from `state/repository.py` (SQLite) via `gui/data_layer.py` |
| Equity / history | Read from SQLite (`positions` with `status='closed'`) + audit log |
| MCP health | Indirect read of `system_state.last_health_check` (the engine runs the probe) |
| **Disarm kill switch** | `enqueue_disarm_kill(reason)` → row in `manual_actions` with `kind="disarm_kill"`; the consumer calls `KillSwitch.disarm` (audit `KILL_SWITCH_DISARMED`, `source="manual_gui"`) |
| **Arm kill switch** | `enqueue_arm_kill(reason)` → row with `kind="arm_kill"`; the consumer calls `KillSwitch.arm` |
| Force close | Planned: `kind="force_close"`. Today the consumer marks it `result="not_supported"`; it requires the `Orchestrator.handle_force_close` hook |
| Approve / reject pending proposal | Planned: `kind="approve_proposal"` / `"reject_proposal"`. Same status (not implemented on the orchestrator side) |
The GUI does **not** write directly to `system_state`: every
kill-switch transition goes through the consumer and the `KillSwitch`
class, so SQLite and the audit chain stay synchronized exactly as for
automatic transitions.

**SQLite schema** (see `05-data-model.md`):
```sql
CREATE TABLE manual_actions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    kind TEXT NOT NULL,        -- arm_kill, disarm_kill, force_close, approve_proposal, ...
    proposal_id TEXT,
    payload_json TEXT,
    created_at TEXT NOT NULL,
    consumed_at TEXT,          -- NULL = not yet processed
    consumed_by TEXT,
    result TEXT
);

CREATE INDEX idx_manual_actions_unconsumed ON manual_actions(consumed_at);
```
**Consumer**: `runtime/manual_actions_consumer.consume_manual_actions`.
Registered as the APScheduler job `manual_actions` with cron
`*/1 * * * *` (latency ≤ 1 minute, sufficient for the kill switch). On
every tick the consumer drains the whole queue and, for each action,
sets `consumed_at`, `consumed_by="engine"` and `result` (`"ok"`,
`"not_supported"` or `"error: …"`).
```python
# src/cerbero_bite/runtime/manual_actions_consumer.py — abridged
async def consume_manual_actions(ctx, *, now=None):
    while (action := ctx.repository.next_unconsumed_action(...)) is not None:
        payload = json.loads(action.payload_json or "{}")
        result = "ok"
        if action.kind == "arm_kill":
            ctx.kill_switch.arm(reason=payload.get("reason"), source="manual_gui")
        elif action.kind == "disarm_kill":
            ctx.kill_switch.disarm(reason=payload.get("reason"), source="manual_gui")
        else:
            result = "not_supported"
        ctx.repository.mark_action_consumed(...)
```
Write actions do **not** bypass the risk controls: the transition
always goes through `KillSwitch.arm/disarm`, which validates the state
and writes to the audit log. The typed confirmation
(`"yes I am sure"`) is gated on the GUI side, before the enqueue.
## Locking and concurrency
- The engine holds an exclusive `data/.lockfile` via `runtime/lockfile.py`.
- The GUI does **not** acquire a dedicated lock; multiple simultaneous
  Streamlit tabs are possible (discouraged but not prevented). The
  single-writer constraint on SQLite is preserved because every write
  goes through a `manual_actions` row (auto-increment) and the
  engine-side consumer.
- Both can read SQLite (connections are short-lived: opened per call
  and closed immediately).
- `manual_actions` is the shared **write channel**, with an
  auto-increment primary key and the `consumed_at` flag for idempotent
  consumption.
For clarity:
Telegram remains the primary channel; the GUI is the **fallback**
channel for when Adriano is at the laptop rather than on the phone.
## Effort estimate (historical)

Phase 4.5 was implemented over four rounds (A–D). The original spec
estimated ~4 days and was delivered in line with that estimate, with
the caveats that `streamlit.testing.v1.AppTest` is not wired up yet
(pages are validated manually via HTTP smoke tests) and that
force-close + approve/reject remain out of scope.
| Task | Estimated days | Status |
|---|---|---|
| Setup `gui/main.py` + sidebar nav + autorefresh | 0.5 | ✅ (autorefresh not active) |
| Status page + kill_switch panel | 0.5 | ✅ (MCP health grid not implemented) |
| Equity page + drawdown + monthly plots | 0.5 | ✅ |
| Position page + Plotly payoff + decision history | 1.0 | ✅ (live greeks and force-close deferred) |
| History page + filters + CSV export | 0.5 | ✅ |
| Audit page + verify chain | 0.5 | ✅ (search and gz export deferred) |
| `manual_actions` consumer + APScheduler | 0.5 | ✅ (arm/disarm; force_close = `not_supported`) |
| Integration tests (Streamlit AppTest) | 0.5 | ⏳ |
| **Estimated total** | **~4 days** | |
Definition of Done (current status):

- ✅ `cerbero-bite gui` launches the dashboard on `127.0.0.1:8765`
- ✅ All 5 pages reachable and populated from runtime data
- ✅ GUI disarm logged in the audit chain (`source="manual_gui"`) and
  effective within ~1 minute (cron `*/1`)
- ⏳ GUI force-close: the enqueue works, but the orchestrator does not
  yet have `handle_force_close`; the consumer marks `result="not_supported"`
- ⏳ Integration tests with `streamlit.testing.v1.AppTest`: not written

The open items are isolated follow-ups and do not block daily use of
the dashboard as an observation board.
# 13 — Strategy explained: from rules to data

> An operational document that ties every rule-engine decision to the
> observable data it depends on. Written for whoever looks at the
> dashboard and wants to understand **what the metrics collected every
> 15 minutes** in the `market_snapshots` table **are for**. The
> canonical, immutable version of the rules stays in
> `01-strategy-rules.md`; this document is the descriptive guide to
> read before touching the thresholds in `strategy.yaml`.
---
## TL;DR

Cerbero Bite sells **weekly credit spreads on ETH/Deribit** when
implied volatility is **high enough to pay well**, the market is not
in **liquidation stress**, no **major macro events** fall inside the
window, and the directional bias is **clear** (bull or bear). All the
rest of the time, the engine **does not trade**: the discipline is the
strategy.
Every 15 minutes it collects 1 row per asset (ETH and BTC) in the
`market_snapshots` table. That data feeds three distinct goals:

1. **Live decisions**: the Monday 14:00 UTC entry cycle reads the
   freshest fields to say "go/no-go".
2. **Continuous monitoring**: the active-management decision loop
   compares the current situation with the one at open.
3. **Calibration**: the `📐 Calibrazione` page uses the historical
   distribution of each field to pick thresholds based on the real
   percentiles of your own environment, not on gut feeling.
---
## 1. What's in `market_snapshots` (1 row every 15 min, per asset)

| Field | Unit | MCP source | Role in the strategy |
|---|---|---|---|
| `timestamp` | UTC ISO | scheduler | time-series indexing |
| `asset` | ETH / BTC | scheduler | partitioning (ETH = traded underlying, BTC = macro control) |
| `spot` | USD | mcp-deribit `spot_perp_price` | 30d trend (§3.1), strike distance % (§3.2/3.3), general context |
| `dvol` | 0–200 index | mcp-deribit `latest_dvol` | entry gate §2.3–§2.4 (35 ≤ DVOL ≤ 90), sizing adjustment §5.3, vol-stop §7.3 |
| `realized_vol_30d` | annualized % | mcp-deribit `realized_vol` | comparison with DVOL → mean-reversion edge |
| `iv_minus_rv` | vol points | derived | IV richness: > 0 = "rich" premium to sell |
| `funding_perp_annualized` | fraction | mcp-hyperliquid `funding_rate_annualized` | entry gate §2.6 (\|f\| ≤ 80% annualized), bias §3.1 |
| `funding_cross_annualized` | fraction | mcp-sentiment `funding_cross_median_annualized` | directional bias §3.1 (median of the 4 largest exchanges) |
| `dealer_net_gamma` | USD | mcp-deribit `dealer_gamma_profile` | quant filter §2.8 (a long-gamma regime suppresses vol → ideal for selling spreads) |
| `gamma_flip_level` | USD | mcp-deribit `dealer_gamma_profile` | spot level beyond which the gamma regime flips |
| `oi_delta_pct_4h` | % | mcp-sentiment `liquidation_heatmap` | proxy for leverage building up / unwinding over the last 4h |
| `liquidation_long_risk` | low / med / high | mcp-sentiment `liquidation_heatmap` | risk of an imminent long squeeze |
| `liquidation_short_risk` | low / med / high | mcp-sentiment `liquidation_heatmap` | risk of an imminent short squeeze |
| `macro_days_to_event` | days | mcp-macro `next_high_severity_within` | gate §2.5 (no entry if a macro event falls within DTE) |
| `fetch_ok` | bool | scheduler | row quality (true = all sub-calls succeeded) |
| `fetch_errors_json` | json or NULL | scheduler | error map for best-effort debugging |

> A `NULL` field does not invalidate the row: collection is
> **best-effort**, one MCP being down does not block the others.
> Distributions are computed over the available fields; the entry
> cycle, however, refuses to enter if a field it needs is `NULL`
> (safety: better to skip a trade than to trade blind).
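A minimal sketch of that asymmetry (the field names come from the table above; the function name and the exact required-field set are illustrative): calibration tolerates missing fields, the entry gate does not.

```python
REQUIRED_FOR_ENTRY = ("dvol", "funding_perp_annualized", "dealer_net_gamma")


def entry_allowed(row: dict) -> tuple[bool, str]:
    """Refuse entry if any required field is missing (None = its MCP was down)."""
    for field in REQUIRED_FOR_ENTRY:
        if row.get(field) is None:
            return False, f"missing {field}: skip the trade, don't trade blind"
    return True, "all required fields present"


ok, why = entry_allowed(
    {"dvol": 52.0, "funding_perp_annualized": 0.10, "dealer_net_gamma": None}
)
# refused: dealer_net_gamma is NULL in this row
```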
---
## 2. The six data families and the "why"

### 2.1 — Implied volatility (DVOL, realized vol, IV−RV)

**What it measures.** DVOL is Deribit's 30d implied-volatility index.
Realized 30d is the annualized standard deviation of spot returns. The
difference `IV − RV` quantifies how much options are **paying above**
the volatility the market has actually realized:
**this is the raw material of the credit-spread seller.**

**How the engine uses it.**

- §2.3 / §2.4: `dvol_min = 35`, `dvol_max = 90`: below 35 the premium
  is too thin relative to fees + slippage; above 90 you are in a
  stress regime (gap risk > edge).
- §5.3: `dvol_adjustment` shrinks the size as DVOL rises
  (×1.0 below 45, ×0.85 between 45–60, ×0.65 between 60–80, no entry > 80).
- §7.3: `vol_stop_dvol_increase = 10`: if DVOL rises 10 points above
  its entry level while the position is open, close it.

**What the collected data calibrates.** A month of ticks gives you the
DVOL distribution in YOUR regime (testnet vs mainnet, bull vs bear).
The P25/P50/P75 percentiles on the `📐 Calibrazione` page tell you
whether 35 really is the "floor" or should be raised.
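The percentile check can be sketched with the standard library alone (the snapshot query itself is elided; here the observed DVOL series is passed in directly, and the helper name is illustrative):

```python
from statistics import quantiles


def dvol_percentiles(dvol_series: list[float]) -> dict[str, float]:
    """P25/P50/P75 of the observed DVOL distribution, as on the calibration page."""
    p25, p50, p75 = quantiles(dvol_series, n=4)  # quartile cut points
    return {"P25": p25, "P50": p50, "P75": p75}


# If P25 sits well above 35, the dvol_min threshold is rarely binding
# in your regime and could be raised once enough history accumulates.
```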
### 2.2 — Funding rate (perpetual + cross-exchange median)

**What it measures.** The annualized funding of the ETH-PERP perpetual
(mainly Hyperliquid) and the median funding across the 4 largest
exchanges. Funding is the periodic fee paid by the imbalanced side of
the perp: **it is the most direct thermometer of the market's
leveraged positioning.**

**How the engine uses it.**

- §2.6: `funding_perp_abs_max_annualized = 0.80`: funding above 80%
  annualized (in absolute value) = cascading liquidations imminent,
  no entry.
- §3.1: the **directional bias** depends on cross funding:
  - `funding_bull_threshold_annualized = 0.20` ⇒ bull bias if
    cross funding ≥ +20%.
  - `funding_bear_threshold_annualized = -0.20` ⇒ bear bias if
    ≤ −20%.
  - In between + neutral trend = Iron Condor candidate.
  - Trend and funding in disagreement = no entry.

**Why two different fundings.** The Hyperliquid perp answers "is the
close executable?" (ETH-PERP is the practical hedging venue). The
cross-exchange median is the macro signal, "where the global market
stands": more robust to manipulation or local spikes.
### 2.3 — Dealer gamma (net gamma + flip level)

**What it measures.** The net gamma exposure of option dealers on
Deribit, reconstructed from open interest per strike and direction.
When `dealer_net_gamma > 0` (long gamma), dealers' hedging
**suppresses** realized volatility (they sell into rallies, buy into
dips). When it is negative, they **amplify** every move.

**How the engine uses it.**

- §2.8: `dealer_gamma_min = 0`, `dealer_gamma_filter_enabled = true`:
  entry only in a long-gamma regime. Selling credit spreads while
  dealers are short gamma is statistically a losing trade.
- `gamma_flip_level` is the spot price at which the regime would flip.
  If spot is 1% from the flip, the safety margin is thin even if the
  sign is positive.

**What the collected data calibrates.** The distribution of
`dealer_net_gamma` in your own universe (a few billion USD on mainnet,
different orders of magnitude on testnet) suggests whether `min = 0`
is too permissive: on mainnet the sign often stays positive for long
stretches, so a higher threshold can make sense there.
### 2.4 — Liquidation heatmap (OI delta + long/short squeeze risk)

**What it measures.** From `mcp-sentiment`:

- `oi_delta_pct_4h`: % change of aggregate open interest over the last
  4h. A positive spike → leverage flowing in (fragility risk); a
  negative spike → a squeeze just happened.
- `liquidation_long_risk` / `liquidation_short_risk`: qualitative
  classification (`low` / `med` / `high`) of the density of
  liquidation levels near spot.

**How the engine uses it.**

- §2.8 (`liquidation_filter_enabled = true`): the entry cycle discards
  setups with `_risk = high` on the side we would take (e.g. a bull
  put spread in a `long_risk = high` regime is exposed to a downward
  long squeeze).
- Outside entry, these two columns act as a "reality filter" for
  monitoring: if squeeze risk flips from low to high while a position
  is open, it is an early signal of an incoming vol stop.
### 2.5 — Macro calendar (days to the next event)

**What it measures.** `mcp-macro` returns the number of days to the
next high-severity event (FOMC, US CPI, NFP, ECB, Powell speech) for
US/EU. `NULL` = no event within the DTE window.

**How the engine uses it.**

- §2.5: if `macro_days_to_event ≤ dte_target = 18`, no entry. Macro
  releases turn into volatility gaps that eat three weeks of credit in
  an hour.
- Entries remain possible shortly after the event (elevated vol right
  after + RV about to compress → high IV−RV = textbook setup).
### 2.6 — ETH spot (with BTC as control)

**What it measures.** Last/perp price of ETH (and BTC as a control).

**How the engine uses it.**

- §3.1: 30d trend computed as `(spot_now / spot_30d_ago - 1)`.
  ±5% thresholds define the bull / bear / neutral bias.
- §3.2: % distance of the short strikes from spot (15–25% OTM).
- §7.6: `adverse_move_4h_pct = 0.05`: close on an adverse move
  ≥ 5% in 4h.

**Why BTC too.** ETH is the traded underlying; BTC is the
**crypto macro thermometer**: in high-correlation regimes, a BTC move
that ETH is not following is a divergence signal that often precedes
an abrupt realignment.
---
## 3. The decision flow, mapped to the data

What follows is the "readable" version of rules §2–§9 of
`01-strategy-rules.md`. Each step cites the `market_snapshots` fields
that feed it.
### Phase 1 — Trigger (Monday 14:00 UTC, Italian holidays excluded)

```
IF NO position is open
AND capital ≥ 720 USD
AND 35 ≤ dvol ≤ 90                                  # market_snapshots.dvol
AND |funding_perp_annualized| ≤ 0.80                # market_snapshots.funding_perp_annualized
AND macro_days_to_event > dte_target (or NULL)      # market_snapshots.macro_days_to_event
AND ETH holdings in cerbero-portfolio ≤ 30%
AND (quant filters: dealer_net_gamma > 0,
     liquidation_*_risk ≠ high)                     # market_snapshots.dealer_net_gamma + liquidation_*
THEN
    proceed to Phase 2
ELSE
    no entry, log the reason, retry next week
```
### Phase 2 — Bias and structure

```
trend_30d = spot_now / spot_30d_ago - 1   # market_snapshots.spot
funding_x = funding_cross_annualized      # market_snapshots.funding_cross_annualized

IF trend_30d ≥ +5% AND funding_x ≥ +20%:
    structure = Bull Put Spread
IF trend_30d ≤ -5% AND funding_x ≤ -20%:
    structure = Bear Call Spread
IF |trend_30d| < 5% AND |funding_x| < 20%
   AND dvol ≥ 55 AND ADX(14) < 20:
    structure = Iron Condor
ELSE:
    no entry (market undecided or conflicting)
```
### Phase 3 — Strike selection (delta target + % distance from spot)

The short strike is the one at target delta ≈ 0.12 (tolerance
0.10–0.15) **and** 15–25% OTM. The long strike sits 4% of spot
further out (3–5% acceptable). All numbers are parameterized in
`strategy.yaml > structure`. The current `spot` for the computation
comes from `market_snapshots.spot`.
### Phase 4 — Sizing (fractional Kelly + aggregate cap + DVOL clamp)

```
risk_target = capital * 0.13                     # quarter Kelly
risk_target = min(risk_target, 200 EUR)          # per-trade cap
n = floor(risk_target / max_loss_per_contract)
n = min(n, 4, aggregate constraint 1000 EUR)
n = round_down(n * dvol_multiplier)              # market_snapshots.dvol → §5.3
```
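The same pipeline as a runnable sketch (EUR/USD conversion ignored for simplicity, so the 200 EUR cap is treated as 200; the DVOL multiplier bands are the §5.3 values quoted in §2.1, and the function name is illustrative):

```python
import math


def contracts(capital: float, max_loss_per_contract: float, dvol: float) -> int:
    """Quarter-Kelly risk target, clamped by the per-trade cap and DVOL bands."""
    risk_target = min(capital * 0.13, 200.0)           # per-trade cap (EUR ≈ USD here)
    n = math.floor(risk_target / max_loss_per_contract)
    n = min(n, 4)                                      # max contracts per trade
    if dvol > 80:
        return 0                                       # no entry above 80
    mult = 1.0 if dvol < 45 else 0.85 if dvol <= 60 else 0.65
    return math.floor(n * mult)


# contracts(3000, 78, 52): floor(min(390, 200) / 78) = 2, then ×0.85 → 1
```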
### Phase 5 — Execution (combo limit GTC at mid)

Limit order at the combo mid, repriced +1 tick / 30 min for up to 3
steps. On urgent triggers (CLOSE_STOP / CLOSE_VOL / CLOSE_DELTA) the
engine accepts up to 5 steps of slippage, because urgency outranks
price.
### Phase 6 — Monitoring (active-management cron, default every 12h)

For each open position, in **order** (first trigger wins):

| # | Trigger | Source data |
|---|---|---|
| 1 | Profit take: mark ≤ 50% of credit | combo mark via deribit |
| 2 | Stop loss: mark ≥ 250% of credit | combo mark via deribit |
| 3 | Vol stop: dvol_now ≥ dvol_entry + 10 | `market_snapshots.dvol` |
| 4 | Time stop: dte ≤ 7 (skipped if ≥ 70% profit) | structure expiry |
| 5 | Delta breach: \|delta_short\| ≥ 0.30 | option chain via deribit |
| 6 | Adverse move: \|return_4h_ETH\| ≥ 5% against | `market_snapshots.spot` |
| 7 | Otherwise | HOLD |

Monitoring does NOT consult `market_snapshots` for option prices (it
reads them live), but it does consult it for `dvol` and `spot`,
gaining an already normalized, auditable historical series.
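The first-trigger-wins ordering can be sketched as a strict cascade (signal names and the parameter list are illustrative; the thresholds are the ones in the table above):

```python
def monitor_decision(mark_ratio: float, dvol_now: float, dvol_entry: float,
                     dte: int, profit_pct: float, delta_short: float,
                     ret_4h: float) -> str:
    """Return the first matching trigger, in strict priority order."""
    if mark_ratio <= 0.50:
        return "CLOSE_PROFIT"
    if mark_ratio >= 2.50:
        return "CLOSE_STOP"
    if dvol_now >= dvol_entry + 10:
        return "CLOSE_VOL"
    if dte <= 7 and profit_pct < 0.70:
        return "CLOSE_TIME"
    if abs(delta_short) >= 0.30:
        return "CLOSE_DELTA"
    if abs(ret_4h) >= 0.05:
        return "CLOSE_ADVERSE"
    return "HOLD"


# A position hitting both the vol stop and a delta breach closes as
# CLOSE_VOL: the earlier rule wins.
```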
---
## 4. What the bot does TODAY in "data-only" mode

The bot is currently in **data-collection mode**
(`ENABLE_DATA_ANALYSIS=true`, `ENABLE_STRATEGY=false`). Which means:

- The `market_snapshot` job (cron `*/15`) runs: it writes new rows
  into SQLite, feeding calibration and historical monitoring.
- The `health` job (`*/5`) checks MCP availability and the Deribit
  environment; it arms the kill switch if anything is off.
- The `backup` job (`0 *`) snapshots state every hour.
- The `manual_actions` job (`*/1`) consumes commands from the GUI.
- The `entry` and `monitor` cycles **are not even scheduled**: no
  order can go out, no strike is ever read.

To move to the operational phase (paper trading or mainnet):

1. Fill `strategy.yaml` with **thresholds calibrated** on the real
   percentiles from the `📐 Calibrazione` page (don't leave the
   defaults on gut feeling).
2. Bump `config_version` + regenerate `config_hash` with
   `cerbero-bite config hash --file strategy.yaml`.
3. Set `ENABLE_STRATEGY=true` in `.env` and recreate the container.
4. Disarm the kill switch from the GUI or CLI with an explicit reason.
5. **One week of paper trading** (mainnet with orders disabled, or
   testnet) before flipping the final flag.
---
## 4-bis. Expected P/L (realistic)

The numbers below are **ex-ante estimates**, not promises. They exist
to align expectations with the geometry of the strategy: to understand
**how little is risked per trade**, **how rarely it enters**, and
**why the edge is structurally thin**.

> **The honest question anyone looking at the numbers should ask:** if
> at a 70–72% win rate the per-trade expectation is roughly zero,
> **what is the point of the strategy?**
>
> **Answer:** naked vol selling really is neutral at that win rate.
> **Cerbero Bite's edge is not "selling vol"; it is "selling vol only
> when the quant filters push the win rate above 75%".** The §2 gates
> (DVOL band, dealer gamma > 0, no macro within DTE, liquidation risk
> ≠ high, trend × funding bias in agreement) are **built to skip
> precisely the statistically losing windows** and trade only the
> favorable ones. The `📚 Strategia` page has a sensitivity table
> showing how APR goes from ≈0% (win 0.72) to +3–5% (win 0.78–0.80):
> exactly the distance the filters must cover. That is why the first
> days of data collection exist to **measure** whether the filters
> actually lift the win rate, before committing capital.
### Per single trade (reference: ETH spot ≈ 3000 USD)

| Item | Formula / source | Typical value |
|---|---|---|
| Spread width | 4% × spot | **120 USD / contract** |
| Credit received | ≥ 30% × width | **36–48 USD / contract** |
| Theoretical max profit | = credit (OTM at expiry) | 36–48 USD / contract |
| **Profit take §7.1 (50% of credit)** | 0.5 × credit | **+18–24 USD / contract** |
| **Stop loss §7.2 (mark = 2.5× credit)** | 1.5 × credit | **−54 to −72 USD / contract** |
| Margin locked | ≈ width | 120 USD / contract |
| Deribit fees | 0.03% notional × 2 legs | ~1–2 USD / contract / trade |

> On lower spot (2000 USD) the width drops to 80 USD/contract and the
> absolute numbers scale proportionally.
### Typical sizing vs capital

Sizing is governed by quarter Kelly **plus the 200 EUR (~215 USD)
per-trade cap**. Above a certain threshold the cap dominates: raising
capital **does not increase** contracts per trade.

| Capital | risk_target (Kelly) | Effective risk (post-cap) | Typical contracts (spot=3000) |
|---|---|---|---|
| 720 USD (minimum) | 94 USD | 94 USD | **0–1** (entry often skipped on sizing) |
| 1 500 USD | 195 USD | 195 USD | **1** |
| 3 000 USD | 390 USD | **215 USD** (cap) | **1** |
| 10 000 USD | 1 300 USD | **215 USD** (cap) | **1** |
| 50 000 USD+ | 6 500 USD | **215 USD** (cap) | **1** (aggregate cap 1 075 USD = max 4 open trades, but `max_concurrent_positions: 1`) |

> With the current caps the strategy is **sized for small capital
> (1.5–10 k USD)**: beyond that, return on total capital scales
> sub-linearly and tends to zero.
### Realistic entry frequency

The rule is evaluated once a week, but most Mondays are skipped
because of:

| Skip reason | Typical frequency |
|---|---|
| DVOL out of band (35–90) | 25–40% |
| Unclear bias (trend × funding conflicting, or both neutral without IC) | 25–35% |
| Macro within DTE | 10–20% |
| Funding or liquidation risk out of bounds | 5–15% |
| Insufficient capital or sizing | 0–5% |

**Net result: 30–50% of weeks end in an actual entry ⇒ 15–25 trades /
year** (52 Mondays × 30–50%). The other weeks the bot sits still. That
is the design.
### Expected win rate (short delta 0.12 + 50% profit take)

Literature and backtests on credit spreads at short delta 0.10–0.15
with TP@50% and SL@1.5×:

| Outcome | Typical probability | Result |
|---|---|---|
| Profit take at 50% of credit | **~70–75%** | +18–24 USD/contract |
| Stop loss at 1.5× credit | ~15–20% | −54 to −72 USD/contract |
| Time stop or DTE-7d exit | ~5–10% | small positive (~+5–10 USD) |
| Vol/delta/macro stop | ~3–5% | variable, neutral on average |

Average expectation per contract:

```
E[trade] ≈ 0.72 × 21 + 0.18 × (−63) + 0.07 × 7 + 0.03 × 0
         ≈ 15.1 − 11.3 + 0.5 + 0
         ≈ +4.3 USD gross / contract
```

**Net of fees (~1.5 USD round-trip) and slippage (~5% of credit
≈ 2 USD): E[trade] ≈ +1–3 USD per contract.**
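The arithmetic above, checked mechanically (probabilities and payoffs are the midpoint values from the outcome table):

```python
outcomes = [  # (probability, payoff in USD per contract)
    (0.72, 21.0),   # profit take at 50% of credit
    (0.18, -63.0),  # stop loss at 1.5x credit
    (0.07, 7.0),    # time stop / DTE exit, small positive
    (0.03, 0.0),    # other stops, neutral on average
]

gross = sum(p * x for p, x in outcomes)  # ≈ +4.3 USD gross per contract
net = gross - 1.5 - 2.0                  # minus fees and slippage
```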
### Annual projection (1 average contract per trade)

| Scenario | Trades/yr | Net E[trade] | Gross annual P/L | On 1 500 USD | On 3 000 USD |
|---|---|---|---|---|---|
| **Pessimistic** (low vol, bear-vol regime) | 12 | +1 USD | **+12 USD** | +0.8% | +0.4% |
| **Realistic average** | 18 | +2.5 USD | **+45 USD** | +3% | +1.5% |
| **Good** (favorable regime, high IV−RV) | 22 | +4 USD | **+88 USD** | +5.9% | +2.9% |
| **Excellent** (ex-post cherry-picking) | 25 | +6 USD | **+150 USD** | +10% | +5% |

**Realistically: +1.5% to +5% APR on total capital**, with the current
caps. That is in line with the literature on systematic short vol with
stop discipline. **This is not a "double your capital" strategy.** It
is a strategy that aims to harvest the **volatility risk premium** in
a controlled way.
### Drawdown and tail risk

- **Realistic losing streak**: 3–5 consecutive stop losses do happen.
  Drawdown on 1 contract: 150–300 USD absolute.
- **On 1 500 USD capital** that is a 10–20% drawdown of total
  capital. Expect it; it is within the design.
- **Tail risk:** an overnight gap event (SEC ruling, exchange hack,
  major default) can push the mark to 100% of the width before the
  stop is executable. **Real maximum loss per trade = the full width**
  (`width - initial_credit`), i.e. 72–96 USD/contract, not the
  54–72 USD of the stop-loss model.
- The **quant filters** (`dealer_gamma_min`, `liquidation_filter`) and
  the **macro filter** were introduced **to cut the tail**, not to
  improve the average expectation.
### Expected Sharpe

Systematic short-vol strategies with discipline show:

- **Sharpe 0.8–1.5** in favorable regimes (slow market + high IV).
- **Sharpe 0.3–0.8** in normal regimes.
- **Negative Sharpe** in vol-of-vol regimes (e.g. Q1 2020, May 2021,
  FTX week). The filters mitigate them, they do not cancel them.
### What changes with `ENABLE_STRATEGY=true`

In data-only mode (today) the expected P/L is **0**: the engine
**does not trade**. The value of today's collection is:

1. **Calibrating** thresholds on real percentiles → a more realistic
   expected P/L at go-live.
2. **Validating** the quant filters by observing ex-post how many
   ticks would have been filtered (see the `📐 Calibrazione` page,
   "% blocked by threshold" column).
3. **Measuring** the actual share of Mondays that pass the filters in
   your own regime, before committing capital.

> Tip: 4 weeks of data = 4 Mondays × entry probability = 1–2 actual
> candidate entries. **Waiting at least 8 weeks** before tuning the
> thresholds gives a history with enough dispersion for non-noisy
> decisions.
---
## 4-ter. Two profiles: Conservative vs Aggressive

The §4-bis P/L assumes the golden config v1.0.0 caps
(`cap_per_trade_eur: 200`, `max_concurrent_positions: 1`,
`max_contracts_per_trade: 4`). On that profile the absolute P/L is
small by design: the strategy is sized as a **capital-preservation
machine** with a modest premium over T-bills.

For meaningful returns, the repo includes a second config file,
`strategy.aggressiva.yaml`, which **explicitly derogates** from §11 of
`01-strategy-rules.md` by widening the three dominant levers:

| Lever | Conservative | Aggressive | Effect on P/L |
|---|---|---|---|
| `cap_per_trade_eur` | 200 | **800** | 4× the per-trade size |
| `cap_aggregate_open_eur` | 1 000 | **3 200** | 4× the aggregate risk |
| `max_concurrent_positions` | 1 | **2** | 2× the simultaneous open positions |
| `max_contracts_per_trade` | 4 | **16** | removes the aggregate bind even at larger capital |
| `kelly_fraction` | 0.13 | **0.13** | unchanged (Kelly discipline stays) |
| Quant filters (gamma, liquidation, macro) | ON | **ON** | unchanged (the edge lives here; don't touch it) |

**Expected result (same filters and win rate):** P/L ≈ 4–8× the
conservative profile. Expected drawdown scales by the same factor
(20–40% of deployed capital in adverse streaks, versus 10–20% for the
conservative profile). The `📚 Strategia` page has a side-by-side
panel computing both from the same sliders.

**The flip side.**

- The §11 derogation must be **explicitly authorized** in the commit
  that switches the config; three weeks of dedicated paper trading are
  recommended.
- The larger drawdown demands "growth" capital, not parked capital.
- The quant filters stay **identical**: there is no "more aggressive"
  setting on the entry triggers, because there is no alpha to squeeze
  there without hurting the win rate.

**Multi-asset (ETH + BTC): caveat.**

The additional 2× multiplier mentioned in §4-bis (multi-asset) is
**not enabled** by the config change alone: the current rule engine is
single-asset (`asset.symbol`). Extending it requires changes in:

- `cerbero_bite/runtime/entry_cycle.py` (loop over symbols)
- `cerbero_bite/state/repository.py` (multi-position keyed per asset)
- `cerbero_bite/runtime/orchestrator.py` (scheduler one-asset → N)

The data-collection job is already multi-asset (`DEFAULT_ASSETS = ("ETH",
"BTC")`), so the dataset needed to validate the extension is already
being accumulated. It is a well-scoped piece of work for a dedicated
branch, once the calibration dataset is plentiful.

**When to move from the conservative to the aggressive profile.**

Only if **all** of the following hold:

1. ≥ 8 weeks of data collected on mainnet (≥ ~2k snapshots).
2. Empirical win rate (paper trading or backtest on the collected
   ticks) **≥ 0.75**.
3. Expected APR of the aggressive profile (see the GUI panel) **≥ 8%**
   net at that win rate.
4. The committed capital is **growth capital**, not a tactical reserve.
5. You can emotionally sustain a double-digit drawdown without
   manually disarming the strategy mid-streak.

If even one of the 5 is missing → **stay on the conservative
profile**; that is the one the system starts out executing.
---
## 5. How to read the data day by day

Three operational heuristics over the collected fields:

1. **"Rich" premium:** `iv_minus_rv` consistently > 5 points for N
   days → the regime is paying well for selling vol. These are the
   periods where the strategy has the most edge.
2. **"Thin" premium:** `dvol < 35` for several days → the Monday
   window gets skipped. That is not a failure: it is the discipline
   working.
3. **Imminent stress:** `liquidation_*_risk = high` or a spike in
   `oi_delta_pct_4h` (> 5% in absolute value) + funding near the
   bounds → expect vol stops / time stops to fire in the next cycles,
   even if the position is in profit.

On **macro-event days** (small `macro_days_to_event`) the useful play
is: wait for the event, let `dvol` come down once `realized_vol_30d`
fails to materialize, and catch the classic **post-event** setup.
---
## 6. Quick glossary
- **Credit spread:** selling one option and buying a further-OTM option
  with the same expiry to cap the risk. You collect a credit and win if
  the underlying does not break the short strike.
- **Bull put / Bear call:** directional credit spreads (bullish /
  bearish respectively).
- **Iron condor:** bull put + bear call on the same underlying and
  expiry. Wins in a sideways regime.
- **DVOL:** Deribit index of 30-day ATM IV, on a 0–200 scale.
- **Realized vol 30d:** annualised σ of spot returns over a rolling
  30-day window.
- **IV − RV:** difference between implied vol (DVOL) and RV; > 0 means a
  positive "premium" for the vol seller.
- **Annualised funding:** perp funding rate multiplied by the standard
  windows (usually 8h × 3 per day × 365).
- **Dealer net gamma:** sum of gamma across all strikes, weighted by
  dealer direction (long = dampens vol, short = amplifies it).
- **OI delta % 4h:** % change of aggregate open interest over the last
  4 hours.
- **DTE:** Days To Expiry, days until the option expires.
- **Kill switch:** persistent flag that blocks the opening of new
  positions; armed automatically on environment mismatch or repeated
  failures, disarmed only manually with a stated reason.
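Two of the quantities above reduce to one-line formulas. A quick numeric sketch (the input values are illustrative, not taken from live data):

```python
from decimal import Decimal

# Annualised funding: per-window perp rate x 3 windows/day x 365 days.
funding_8h = Decimal("0.0001")           # 0.01% per 8h funding window
funding_annualized = funding_8h * 3 * 365
print(format(funding_annualized, ".2%"))  # 10.95%

# Credit spread risk cap: max loss = strike width - credit received.
width = Decimal("100")                    # e.g. short 3000 / long 2900 put
credit = Decimal("22")                    # premium collected at entry
max_loss = width - credit
print(f"max loss per spread: ${max_loss}")  # max loss per spread: $78
```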
---
## 7. Cross-references
- Canonical, immutable rules: `01-strategy-rules.md`
- Persistent data schema: `05-data-model.md`
- Algorithms (trend computation, IV−RV, etc.): `03-algorithms.md`
- MCP integration details: `04-mcp-integration.md`
- Calibration GUI page: `📐 Calibrazione`
- Collector source: `src/cerbero_bite/runtime/market_snapshot_cycle.py`
- Pydantic row model: `cerbero_bite.state.models.MarketSnapshotRecord`
@@ -20,6 +20,7 @@ dependencies = [
"httpx>=0.27",
"tenacity>=9.0",
"python-dateutil>=2.9",
"python-dotenv>=1.2.2",
]
[project.optional-dependencies]
@@ -96,6 +97,9 @@ ignore = [
[tool.ruff.lint.per-file-ignores]
"tests/**" = ["PLR2004", "ARG", "S101", "ERA001", "B017"]
# Streamlit auto-discovers pages whose file names start with a number and
# may contain icons; the convention conflicts with N999.
"src/cerbero_bite/gui/pages/*" = ["N999"]
[tool.ruff.format]
quote-style = "double"
@@ -113,7 +117,7 @@ no_implicit_reexport = true
files = ["src/cerbero_bite"]
[[tool.mypy.overrides]]
module = ["apscheduler.*"]
module = ["apscheduler.*", "plotly.*", "pandas.*"]
ignore_missing_imports = true
[tool.pytest.ini_options]
@@ -1,28 +0,0 @@
# `secrets/`
Runtime directory for sensitive credentials. Every file in this
directory is `.gitignore`d except this README and `.gitkeep`.
## Expected content
| File | Source | Use |
|---|---|---|
| `core.token` | copy of `Cerbero_mcp/secrets/core.token` | bearer token with the `core` capability used to call the MCP tools. Read once at container boot. |
## Setup
```bash
cp /path/to/Cerbero_mcp/secrets/core.token secrets/core.token
chmod 600 secrets/core.token
```
The Cerbero Bite `docker-compose.yml` mounts `secrets/core.token` as a
Docker secret at `/run/secrets/core_token` inside the container, and
the `CERBERO_BITE_CORE_TOKEN_FILE` environment variable points there
by default.
## Rotation
When the core token is rotated on the Cerbero_mcp cluster, replace the
local copy as well. The container must be restarted because the token
is only read at startup.
@@ -10,6 +10,7 @@ without changing the surface.
from __future__ import annotations
import asyncio
import os
import sys
from collections.abc import Callable
from datetime import UTC, datetime
@@ -26,14 +27,15 @@ from cerbero_bite.clients import HttpToolClient, McpError
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.clients.hyperliquid import HyperliquidClient
from cerbero_bite.clients.macro import MacroClient
from cerbero_bite.clients.portfolio import PortfolioClient
from cerbero_bite.clients.sentiment import SentimentClient
from cerbero_bite.config.loader import compute_config_hash, load_strategy
from cerbero_bite.config.mcp_endpoints import (
DEFAULT_ENDPOINTS,
load_bot_tag,
load_endpoints,
load_token,
)
from cerbero_bite.config.runtime_flags import load_runtime_flags
from cerbero_bite.logging import configure as configure_logging
from cerbero_bite.logging import get_logger
from cerbero_bite.runtime.orchestrator import Orchestrator, make_orchestrator
@@ -74,6 +76,14 @@ def _phase0_notice(action: str) -> None:
@click.pass_context
def main(ctx: click.Context, log_dir: Path, log_level: str) -> None:
"""Cerbero Bite — rule-based ETH credit spread engine."""
# Load `.env` once at CLI entry, unless we are running under
# pytest (which sets ``PYTEST_CURRENT_TEST`` for the duration of
# the test). Existing env vars win over the file (override=False).
if "PYTEST_CURRENT_TEST" not in os.environ:
from dotenv import load_dotenv # noqa: PLC0415
load_dotenv(Path.cwd() / ".env", override=False)
configure_logging(log_dir=log_dir, level=log_level.upper())
ctx.ensure_object(dict)
ctx.obj["log_dir"] = log_dir
@@ -197,9 +207,14 @@ def _engine_options(func: Callable[..., Any]) -> Callable[..., Any]:
show_default=True,
),
click.option(
"--token-file",
type=click.Path(dir_okay=False, path_type=Path),
"--token",
type=str,
default=None,
help=(
"MCP bearer token (overrides CERBERO_BITE_MCP_TOKEN). "
"The server uses the token to choose between testnet "
"and mainnet upstream environments."
),
),
click.option(
"--db",
@@ -235,7 +250,7 @@ def _engine_options(func: Callable[..., Any]) -> Callable[..., Any]:
def _build_orchestrator(
*,
strategy_path: Path,
token_file: Path | None,
token: str | None,
db: Path,
audit: Path,
environment: str,
@@ -243,7 +258,7 @@ def _build_orchestrator(
enforce_hash: bool = True,
) -> Orchestrator:
loaded = load_strategy(strategy_path, enforce_hash=enforce_hash)
token = load_token(path=token_file)
resolved_token = load_token(value=token)
# Strategy file values win over the CLI defaults; explicit overrides
# via env-style values (CLI flags) still apply when the user provides
# them — Click signals "default" via Click's resilient_parsing flag,
@@ -262,11 +277,13 @@ def _build_orchestrator(
return make_orchestrator(
cfg=loaded.config,
endpoints=load_endpoints(),
token=token,
token=resolved_token,
db_path=db,
audit_path=audit,
expected_environment=chosen_env, # type: ignore[arg-type]
eur_to_usd=chosen_fx,
bot_tag=load_bot_tag(),
flags=load_runtime_flags(),
)
@@ -274,7 +291,7 @@ def _build_orchestrator(
@_engine_options
def start(
strategy_path: Path,
token_file: Path | None,
token: str | None,
db: Path,
audit: Path,
environment: str,
@@ -284,7 +301,7 @@ def start(
try:
orch = _build_orchestrator(
strategy_path=strategy_path,
token_file=token_file,
token=token,
db=db,
audit=audit,
environment=environment,
@@ -314,7 +331,7 @@ def start(
)
def dry_run(
strategy_path: Path,
token_file: Path | None,
token: str | None,
db: Path,
audit: Path,
environment: str,
@@ -324,7 +341,7 @@ def dry_run(
"""Execute one cycle without starting the scheduler."""
orch = _build_orchestrator(
strategy_path=strategy_path,
token_file=token_file,
token=token,
db=db,
audit=audit,
environment=environment,
@@ -498,10 +515,13 @@ def kill_switch_status(db: Path) -> None:
@main.command()
@click.option(
"--token-file",
type=click.Path(dir_okay=False, path_type=Path),
"--token",
type=str,
default=None,
help="Path to the bearer token file (default: secrets/core_token).",
help=(
"MCP bearer token (overrides CERBERO_BITE_MCP_TOKEN). The "
"server uses the token to choose between testnet and mainnet."
),
)
@click.option(
"--timeout",
@@ -510,16 +530,16 @@ def kill_switch_status(db: Path) -> None:
show_default=True,
help="Per-service timeout in seconds for the ping call.",
)
def ping(token_file: Path | None, timeout: float) -> None:
def ping(token: str | None, timeout: float) -> None:
"""Print health status for every MCP service Cerbero Bite uses."""
try:
token = load_token(path=token_file)
except (FileNotFoundError, ValueError) as exc:
resolved_token = load_token(value=token)
except ValueError as exc:
console.print(f"[red]token error[/red]: {exc}")
sys.exit(1)
endpoints = load_endpoints()
rows = asyncio.run(_ping_all(endpoints, token=token, timeout=timeout))
rows = asyncio.run(_ping_all(endpoints, token=resolved_token, timeout=timeout))
table = Table(title="MCP services")
table.add_column("service")
@@ -560,12 +580,6 @@ async def _ping_one(
if service == "hyperliquid":
await HyperliquidClient(http).funding_rate_annualized("ETH")
return "ok", "ETH-PERP reachable"
if service == "portfolio":
await PortfolioClient(http).total_equity_eur()
return "ok", "portfolio reachable"
if service == "telegram":
# Notify-only: no read tool. Skip without hitting the bot.
return "skipped", "notify-only client (no health probe)"
return "skipped", "no probe defined" # pragma: no cover
except McpError as exc:
return "fail", f"{type(exc).__name__}: {exc}"
@@ -587,9 +601,70 @@ async def _ping_all(
@main.command()
def gui() -> None:
"""Launch the Streamlit dashboard."""
_phase0_notice("gui command not yet implemented (will run streamlit on 127.0.0.1:8765).")
@click.option(
"--db",
type=click.Path(path_type=Path),
default=_DEFAULT_DB_PATH,
show_default=True,
help="SQLite state file the dashboard reads.",
)
@click.option(
"--audit",
type=click.Path(path_type=Path),
default=_DEFAULT_AUDIT_PATH,
show_default=True,
help="Audit log file the dashboard streams.",
)
@click.option(
"--port",
type=int,
default=8765,
show_default=True,
help="Local port to bind (always 127.0.0.1).",
)
@click.option(
"--headless/--no-headless",
default=True,
show_default=True,
help="When true, do not auto-open the browser.",
)
def gui(db: Path, audit: Path, port: int, headless: bool) -> None:
"""Launch the Streamlit dashboard (read-only, localhost only)."""
try:
import streamlit # noqa: F401, PLC0415
except ImportError:
click.echo(
"streamlit not installed. Run `uv sync --extra gui` first.",
err=True,
)
sys.exit(1)
main_path = Path(__file__).parent / "gui" / "main.py"
if not main_path.is_file():
click.echo(f"GUI entry point not found: {main_path}", err=True)
sys.exit(1)
env = os.environ.copy()
env["CERBERO_BITE_GUI_DB"] = str(db.resolve())
env["CERBERO_BITE_GUI_AUDIT"] = str(audit.resolve())
cmd = [
sys.executable,
"-m",
"streamlit",
"run",
str(main_path),
"--server.address",
"127.0.0.1",
"--server.port",
str(port),
"--server.headless",
"true" if headless else "false",
"--browser.gatherUsageStats",
"false",
]
click.echo(f"Launching GUI on http://127.0.0.1:{port}")
os.execvpe(cmd[0], cmd, env)
@main.command()
@@ -1,10 +1,13 @@
"""HTTP tool client common to every MCP wrapper.
Each MCP service exposes ``POST <base_url>/tools/<tool_name>`` with a
JSON body and a ``Bearer <core_token>`` header. ``HttpToolClient`` is a
thin wrapper around :class:`httpx.AsyncClient` that:
JSON body, a ``Bearer <token>`` header (the token decides the upstream
environment, testnet or mainnet, on the Cerbero MCP V2 server), and an
``X-Bot-Tag`` header that identifies the calling bot in the audit log.
``HttpToolClient`` is a thin wrapper around :class:`httpx.AsyncClient`
that:
* Adds the auth header.
* Adds the auth and bot-tag headers.
* Applies the project-wide timeout (default 8 s, see
``docs/10-config-spec.md`` ``mcp.call_timeout_s``).
* Retries the call on transient failures with exponential backoff
@@ -44,7 +47,7 @@ from cerbero_bite.clients._exceptions import (
McpToolError,
)
__all__ = ["HttpToolClient"]
__all__ = ["DEFAULT_BOT_TAG", "HttpToolClient"]
_log = logging.getLogger("cerbero_bite.clients")
@@ -53,6 +56,12 @@ _RETRYABLE: tuple[type[BaseException], ...] = (
McpServerError,
)
# Bot identifier sent on every MCP call via the ``X-Bot-Tag`` header.
# The Cerbero MCP V2 server logs this value in the audit record so each
# write operation can be traced back to the originating bot.
DEFAULT_BOT_TAG = "BOT__CERBERO_BITE"
_BOT_TAG_MAX_LEN = 64
class HttpToolClient:
"""Async client for ``POST <base>/tools/<tool>`` style MCP services.
@@ -61,7 +70,14 @@ class HttpToolClient:
service: short service identifier (``"deribit"``, ``"macro"`` …).
base_url: e.g. ``"http://mcp-deribit:9011"``. Trailing slash
is stripped.
token: bearer token for the ``Authorization`` header.
token: bearer token for the ``Authorization`` header. On
Cerbero MCP V2 the value of the token decides whether the
upstream environment is testnet or mainnet; the bot does
not need to know which is which.
bot_tag: value of the ``X-Bot-Tag`` header. Defaults to
:data:`DEFAULT_BOT_TAG` (``"BOT__CERBERO_BITE"``). The
server rejects requests with a missing/empty/over-long
value with HTTP 400.
timeout_s: per-request timeout, default 8 seconds.
retry_max: max number of attempts (1 = no retry).
retry_base_delay: base delay for exponential backoff.
@@ -74,15 +90,24 @@ class HttpToolClient:
service: str,
base_url: str,
token: str,
bot_tag: str = DEFAULT_BOT_TAG,
timeout_s: float = 8.0,
retry_max: int = 3,
retry_base_delay: float = 1.0,
sleep: Callable[[int | float], Awaitable[None] | None] | None = None,
client: httpx.AsyncClient | None = None,
) -> None:
cleaned_tag = bot_tag.strip()
if not cleaned_tag:
raise ValueError("bot_tag must be a non-empty string")
if len(cleaned_tag) > _BOT_TAG_MAX_LEN:
raise ValueError(
f"bot_tag exceeds {_BOT_TAG_MAX_LEN} characters: {cleaned_tag!r}"
)
self._service = service
self._base_url = base_url.rstrip("/")
self._token = token
self._bot_tag = cleaned_tag
self._timeout = httpx.Timeout(timeout_s)
self._retry_max = max(1, retry_max)
self._retry_base_delay = retry_base_delay
@@ -114,6 +139,7 @@ class HttpToolClient:
headers = {
"Authorization": f"Bearer {self._token}",
"Content-Type": "application/json",
"X-Bot-Tag": self._bot_tag,
}
payload = body or {}
@@ -303,14 +303,15 @@ class DeribitClient:
return Decimal(str(entry["close"]))
return None
async def dealer_gamma_profile_eth(
async def dealer_gamma_profile(
self,
currency: str,
*,
expiry_from: datetime | None = None,
expiry_to: datetime | None = None,
top_n_strikes: int = 50,
) -> DealerGammaSnapshot:
"""Return the aggregated dealer net gamma snapshot for ETH options.
"""Return the aggregated dealer net gamma snapshot for ``currency``.
Long-gamma regime (``total_net_dealer_gamma > 0``) is associated
with vol-suppressing dealer hedging — the entry filter §2.8 uses
@@ -318,7 +319,7 @@ class DeribitClient:
(vol-amplifying dealer flow).
"""
body: dict[str, Any] = {
"currency": "ETH",
"currency": currency.upper(),
"top_n_strikes": top_n_strikes,
}
if expiry_from is not None:
@@ -347,6 +348,68 @@ class DeribitClient:
strikes_analyzed=int(raw.get("strikes_analyzed") or 0),
)
async def dealer_gamma_profile_eth(
self,
*,
expiry_from: datetime | None = None,
expiry_to: datetime | None = None,
top_n_strikes: int = 50,
) -> DealerGammaSnapshot:
"""Backwards-compatible alias of :py:meth:`dealer_gamma_profile`."""
return await self.dealer_gamma_profile(
"ETH",
expiry_from=expiry_from,
expiry_to=expiry_to,
top_n_strikes=top_n_strikes,
)
async def realized_vol(
self,
currency: str,
*,
windows: tuple[int, ...] = (14, 30),
) -> dict[str, Decimal | None]:
"""Annualised realised vol for ``currency`` plus IV-RV spread.
Returns ``{"rv_14d", "rv_30d", "iv_minus_rv_30d", "iv_current"}``
(``None`` for any missing field). Pure read-only — no side
effects on the engine.
"""
raw = await self._http.call(
"get_realized_vol",
{"currency": currency.upper(), "windows": list(windows)},
)
if not isinstance(raw, dict):
return {}
rv = raw.get("realized_vol_pct") or {}
spread = raw.get("iv_minus_rv_pct") or {}
return {
"rv_14d": _to_decimal(rv.get("14d")),
"rv_30d": _to_decimal(rv.get("30d")),
"iv_current": _to_decimal(raw.get("iv_current_pct")),
"iv_minus_rv_30d": _to_decimal(spread.get("30d")),
"iv_minus_rv_14d": _to_decimal(spread.get("14d")),
}
async def spot_perp_price(self, asset: str) -> Decimal:
"""Mark price of ``<ASSET>-PERPETUAL`` (cheap proxy for spot)."""
instrument = f"{asset.upper()}-PERPETUAL"
raw = await self._http.call("get_ticker", {"instrument": instrument})
if not isinstance(raw, dict):
raise McpDataAnomalyError(
f"get_ticker: unexpected shape for {instrument}",
service=self.SERVICE,
tool="get_ticker",
)
mark = raw.get("mark_price") or raw.get("last_price")
if mark is None:
raise McpDataAnomalyError(
f"get_ticker: missing mark_price for {instrument}",
service=self.SERVICE,
tool="get_ticker",
)
return Decimal(str(mark))
async def adx_14(
self,
*,
@@ -1,13 +1,17 @@
"""Wrapper around ``mcp-hyperliquid``.
Cerbero Bite consumes a single tool: ``get_funding_rate`` for ETH-PERP,
used by entry filter §2.6 of ``docs/01-strategy-rules.md`` (cap on the
absolute annualised funding rate).
Cerbero Bite consumes:
* ``get_funding_rate`` — entry filter §2.6 cap on absolute annualised
funding rate (``docs/01-strategy-rules.md``).
* ``get_account_summary`` and ``get_positions`` — feed the in-process
portfolio aggregator (equity + ETH/BTC exposure on the perp side).
"""
from __future__ import annotations
from decimal import Decimal
from typing import Any
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients._exceptions import McpDataAnomalyError
@@ -47,3 +51,19 @@ class HyperliquidClient:
tool="get_funding_rate",
)
return Decimal(str(rate)) * Decimal(HOURLY_FUNDING_PERIODS_PER_YEAR)
async def get_account_summary(self) -> dict[str, Any]:
"""Account equity and balances (USD)."""
raw: Any = await self._http.call("get_account_summary", {})
return raw if isinstance(raw, dict) else {}
async def get_positions(self) -> list[dict[str, Any]]:
"""Open perp positions (list of dicts)."""
raw: Any = await self._http.call("get_positions", {})
if isinstance(raw, list):
return raw
if isinstance(raw, dict):
inner = raw.get("positions")
if isinstance(inner, list):
return inner
return []
@@ -9,11 +9,13 @@ the requested window. The orchestrator feeds the result straight into
from __future__ import annotations
from datetime import UTC, datetime
from decimal import Decimal
from typing import Any
from pydantic import BaseModel, ConfigDict
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients._exceptions import McpDataAnomalyError
__all__ = ["MacroClient", "MacroEvent"]
@@ -71,6 +73,34 @@ class MacroClient:
)
return out
async def get_asset_price(self, ticker: str) -> Decimal:
"""Return the latest cross-asset price for ``ticker`` (e.g. ``EURUSD``)."""
raw = await self._http.call("get_asset_price", {"ticker": ticker})
if not isinstance(raw, dict):
raise McpDataAnomalyError(
f"macro get_asset_price unexpected shape: {type(raw).__name__}",
service=self.SERVICE,
tool="get_asset_price",
)
if raw.get("error"):
raise McpDataAnomalyError(
f"macro get_asset_price error for {ticker}: {raw['error']}",
service=self.SERVICE,
tool="get_asset_price",
)
price = raw.get("price")
if price is None:
raise McpDataAnomalyError(
f"macro get_asset_price missing 'price' for {ticker}",
service=self.SERVICE,
tool="get_asset_price",
)
return Decimal(str(price))
async def eur_usd_rate(self) -> Decimal:
"""Return EUR→USD spot rate (i.e. ``EURUSD`` price)."""
return await self.get_asset_price("EURUSD")
async def next_high_severity_within(
self,
*,
@@ -1,92 +1,157 @@
"""Wrapper around ``mcp-portfolio``.
"""In-process portfolio aggregator.
Cerbero Bite uses two pieces of information from this service:
Each Cerbero Suite bot now manages its own portfolio view: instead of
calling a shared ``mcp-portfolio`` service, this client composes the
account summaries and open positions from the exchanges the bot
actually uses (Deribit options + Hyperliquid perps) and converts them
to EUR via the macro service.
* total portfolio value (EUR) — fed to the sizing engine after FX
conversion to USD;
* exposure of a specific asset as percentage of the total portfolio —
used by entry filter §2.7 (``eth_holdings_pct_max``).
Two values are exposed:
The portfolio service stores everything in EUR. The orchestrator is
responsible for the EUR→USD conversion using a live FX rate.
* :py:meth:`total_equity_eur` — sum of USDC equity on Deribit and USD
equity on Hyperliquid, converted to EUR using the live ``EURUSD``
rate from ``mcp-macro``.
* :py:meth:`asset_pct_of_portfolio` — fraction (0..1) of total USD
equity exposed to a specific ticker via open positions on the two
exchanges. Used by entry filter §2.7 (``eth_holdings_pct_max``).
**Scope note**: this is the bot's own slice. Holdings on other
exchanges, in cold storage, or held by other bots in the suite are
*not* counted. The §2.7 limit is therefore a per-bot cap, not a
suite-wide one.
"""
from __future__ import annotations
import asyncio
from collections.abc import Iterable
from decimal import Decimal
from typing import Any
from typing import Any, cast
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients._exceptions import McpDataAnomalyError
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.clients.hyperliquid import HyperliquidClient
from cerbero_bite.clients.macro import MacroClient
__all__ = ["PortfolioClient"]
class PortfolioClient:
SERVICE = "portfolio"
def _decimal_or_zero(value: Any) -> Decimal:
if value is None:
return Decimal(0)
try:
return Decimal(str(value))
except (ValueError, ArithmeticError):
return Decimal(0)
def __init__(self, http: HttpToolClient) -> None:
if http.service != self.SERVICE:
raise ValueError(
f"PortfolioClient requires service '{self.SERVICE}', got '{http.service}'"
def _position_notional_usd(pos: dict[str, Any]) -> Decimal:
"""Best-effort USD notional of an open position.
Prefers an explicit ``notional_usd`` / ``size_usd`` / ``value_usd``
field. Falls back to ``|size × mark_price|`` (or ``index_price`` if
mark is missing). Returns 0 on malformed entries.
"""
for key in ("notional_usd", "size_usd", "value_usd", "position_value"):
v = pos.get(key)
if v is not None:
return abs(_decimal_or_zero(v))
size = _decimal_or_zero(pos.get("size") or pos.get("szi"))
mark = _decimal_or_zero(
pos.get("mark_price")
or pos.get("entry_price")
or pos.get("index_price")
)
self._http = http
return abs(size * mark)
def _instrument_label(pos: dict[str, Any]) -> str:
for key in ("instrument_name", "instrument", "symbol", "coin", "asset"):
v = pos.get(key)
if v is not None:
return str(v).upper()
return ""
class PortfolioClient:
"""Aggregates equity + asset exposure across the bot's exchange accounts."""
def __init__(
self,
*,
deribit: DeribitClient,
hyperliquid: HyperliquidClient,
macro: MacroClient,
) -> None:
self._deribit = deribit
self._hyperliquid = hyperliquid
self._macro = macro
async def _equity_usd_components(self) -> tuple[Decimal, Decimal]:
"""Concurrent fetch of (deribit_equity_usd, hyperliquid_equity_usd)."""
deribit_summary, hl_summary = await asyncio.gather(
self._deribit.get_account_summary(currency="USDC"),
self._hyperliquid.get_account_summary(),
)
deribit_eq = _decimal_or_zero(deribit_summary.get("equity"))
hl_eq = _decimal_or_zero(hl_summary.get("equity"))
return deribit_eq, hl_eq
async def total_equity_usd(self) -> Decimal:
"""Sum equity USD across the bot's exchange accounts."""
deribit_eq, hl_eq = await self._equity_usd_components()
return deribit_eq + hl_eq
async def total_equity_eur(self) -> Decimal:
"""Return the aggregate portfolio value in EUR."""
raw = await self._http.call(
"get_total_portfolio_value", {"currency": "EUR"}
)
if not isinstance(raw, dict):
"""Return aggregate bot equity in EUR.
Concurrent: account summaries × FX. Raises
:class:`McpDataAnomalyError` if the FX rate is non-positive.
"""
components_t = asyncio.create_task(self._equity_usd_components())
fx_t = asyncio.create_task(self._macro.eur_usd_rate())
await asyncio.gather(components_t, fx_t)
deribit_eq, hl_eq = components_t.result()
fx = fx_t.result()
if fx <= 0:
raise McpDataAnomalyError(
f"portfolio total_value_eur unexpected shape: {type(raw).__name__}",
service=self.SERVICE,
tool="get_total_portfolio_value",
f"non-positive EURUSD rate: {fx}",
service="macro",
tool="get_asset_price",
)
value = raw.get("total_value_eur")
if value is None:
raise McpDataAnomalyError(
"portfolio response missing 'total_value_eur'",
service=self.SERVICE,
tool="get_total_portfolio_value",
)
return Decimal(str(value))
usd_total = deribit_eq + hl_eq
return usd_total / fx
async def asset_pct_of_portfolio(self, ticker: str) -> Decimal:
"""Return the fraction (0..1) of the portfolio held in ``ticker``.
"""Fraction of bot equity (USD) exposed to ``ticker``.
Iterates the holdings list and aggregates ``current_value_eur``
for any holding whose ticker contains ``ticker`` (case-insensitive).
Empty portfolio → 0.
Sums absolute USD notional of open positions whose instrument
label contains ``ticker`` (case-insensitive) on Deribit and
Hyperliquid, divided by the bot's total USD equity. Returns 0
when there is no equity or no exposure.
"""
holdings = await self._http.call("get_holdings", {"min_value_eur": 0})
if not isinstance(holdings, list):
raise McpDataAnomalyError(
f"portfolio get_holdings unexpected shape: {type(holdings).__name__}",
service=self.SERVICE,
tool="get_holdings",
)
target = ticker.upper()
matching_value = Decimal("0")
total_value = Decimal("0")
for entry in holdings:
if not isinstance(entry, dict):
continue
value = entry.get("current_value_eur")
if value is None:
continue
value_dec = Decimal(str(value))
total_value += value_dec
entry_ticker = str(entry.get("ticker") or "").upper()
if target in entry_ticker:
matching_value += value_dec
deribit_pos_t = asyncio.create_task(
self._deribit.get_positions(currency="USDC")
)
hl_pos_t = asyncio.create_task(self._hyperliquid.get_positions())
equity_t = asyncio.create_task(self._equity_usd_components())
await asyncio.gather(deribit_pos_t, hl_pos_t, equity_t)
if total_value == 0:
return Decimal("0")
return matching_value / total_value
exposure_usd = Decimal(0)
for raw_pos in cast(Iterable[Any], deribit_pos_t.result()):
if not isinstance(raw_pos, dict):
continue
if target in _instrument_label(raw_pos):
exposure_usd += _position_notional_usd(raw_pos)
for raw_pos in cast(Iterable[Any], hl_pos_t.result()):
if not isinstance(raw_pos, dict):
continue
if target in _instrument_label(raw_pos):
exposure_usd += _position_notional_usd(raw_pos)
async def health(self) -> dict[str, Any]:
"""Lightweight call used by ``cerbero-bite ping``."""
result: Any = await self._http.call("get_last_update_info", {})
return result if isinstance(result, dict) else {}
deribit_eq, hl_eq = equity_t.result()
total_eq = deribit_eq + hl_eq
if total_eq <= 0:
return Decimal(0)
return exposure_usd / total_eq
@@ -1,41 +1,115 @@
"""Wrapper around ``mcp-telegram`` (notify-only mode).
"""Direct Telegram Bot API client (notify-only).
Cerbero Bite during the testnet phase (and through the soft launch) is
fully autonomous: Telegram is used purely to *notify* Adriano of what
the engine has done, never to gate execution. As a consequence:
Cerbero Bite is fully autonomous: Telegram is used solely to *notify*
the operator of what the engine has done — there is no inbound queue
and no confirmation logic.
* No ``send_with_buttons`` and no callback queue.
* Confirmation timeouts are handled inside the orchestrator's own
state machine, not by waiting on Telegram replies.
* All notifications go through one of the typed endpoints
(``notify``, ``notify_position_opened``, ``notify_position_closed``,
``notify_alert``, ``notify_system_error``) — the formatting lives
on the server side.
Credentials are read from the environment:
* ``CERBERO_BITE_TELEGRAM_BOT_TOKEN`` — bot token from BotFather.
* ``CERBERO_BITE_TELEGRAM_CHAT_ID`` — destination chat id.
If either is missing the client runs in **disabled** mode: every
``notify_*`` becomes a no-op logged at DEBUG. This keeps unconfigured
deployments and the test environment harmless.
"""
from __future__ import annotations
import logging
import os
from decimal import Decimal
from typing import Any
from cerbero_bite.clients._base import HttpToolClient
import httpx
__all__ = ["TelegramClient"]
__all__ = [
"TELEGRAM_BOT_TOKEN_ENV",
"TELEGRAM_CHAT_ID_ENV",
"TelegramClient",
"TelegramError",
"load_telegram_credentials",
]
def _to_float(value: Decimal | float) -> float:
return float(value) if isinstance(value, Decimal) else value
TELEGRAM_BOT_TOKEN_ENV = "CERBERO_BITE_TELEGRAM_BOT_TOKEN"
TELEGRAM_CHAT_ID_ENV = "CERBERO_BITE_TELEGRAM_CHAT_ID"
_log = logging.getLogger("cerbero_bite.clients.telegram")
class TelegramError(RuntimeError):
"""Raised when the Telegram Bot API rejects a sendMessage call."""
def _to_float(value: Decimal | float | int) -> float:
return float(value)
def load_telegram_credentials(
env: dict[str, str] | None = None,
) -> tuple[str | None, str | None]:
"""Return ``(bot_token, chat_id)`` from env. Empty strings → ``None``."""
e = env if env is not None else os.environ
token = (e.get(TELEGRAM_BOT_TOKEN_ENV) or "").strip() or None
chat = (e.get(TELEGRAM_CHAT_ID_ENV) or "").strip() or None
return token, chat
class TelegramClient:
SERVICE = "telegram"
"""Notify-only client over the public Telegram Bot API."""
def __init__(self, http: HttpToolClient) -> None:
if http.service != self.SERVICE:
raise ValueError(
f"TelegramClient requires service '{self.SERVICE}', got '{http.service}'"
BASE_URL = "https://api.telegram.org"
def __init__(
self,
*,
bot_token: str | None,
chat_id: str | None,
http_client: httpx.AsyncClient | None = None,
timeout_s: float = 5.0,
parse_mode: str = "HTML",
) -> None:
self._token = (bot_token or "").strip() or None
self._chat_id = (str(chat_id).strip() if chat_id is not None else "") or None
self._client = http_client
self._timeout = timeout_s
self._parse_mode = parse_mode
@property
def enabled(self) -> bool:
return self._token is not None and self._chat_id is not None
async def _send(self, text: str) -> None:
if not self.enabled:
_log.debug("telegram disabled, dropping message: %s", text[:120])
return
url = f"{self.BASE_URL}/bot{self._token}/sendMessage"
payload: dict[str, Any] = {
"chat_id": self._chat_id,
"text": text,
"parse_mode": self._parse_mode,
"disable_web_page_preview": True,
}
client = self._client
owns = client is None
if client is None:
client = httpx.AsyncClient(timeout=self._timeout)
try:
resp = await client.post(url, json=payload, timeout=self._timeout)
finally:
if owns:
await client.aclose()
if resp.status_code != 200:
raise TelegramError(
f"telegram HTTP {resp.status_code}: {resp.text[:200]}"
)
self._http = http
data = resp.json()
if not isinstance(data, dict) or not data.get("ok", False):
desc = (
data.get("description", "?") if isinstance(data, dict) else str(data)
)
raise TelegramError(f"telegram api error: {desc}")
async def notify(
self,
@@ -44,10 +118,10 @@ class TelegramClient:
priority: str = "normal",
tag: str | None = None,
) -> None:
body: dict[str, Any] = {"message": message, "priority": priority}
if tag is not None:
body["tag"] = tag
await self._http.call("notify", body)
prefix = f"[{priority.upper()}]"
if tag:
prefix = f"{prefix}[{tag}]"
await self._send(f"{prefix} {message}")
async def notify_position_opened(
self,
@@ -59,17 +133,19 @@ class TelegramClient:
greeks: dict[str, Decimal | float] | None = None,
expected_pnl_usd: Decimal | float | None = None,
) -> None:
body: dict[str, Any] = {
"instrument": instrument,
"side": side,
"size": float(size),
"strategy": strategy,
}
if greeks is not None:
body["greeks"] = {k: _to_float(v) for k, v in greeks.items()}
lines = [
"<b>POSITION OPENED</b>",
f"instrument: <code>{instrument}</code>",
f"side: {side} | size: {size} | strategy: {strategy}",
]
if greeks:
joined = ", ".join(
f"{k}={_to_float(v):+.4f}" for k, v in greeks.items()
)
lines.append(f"greeks: {joined}")
if expected_pnl_usd is not None:
body["expected_pnl"] = _to_float(expected_pnl_usd)
await self._http.call("notify_position_opened", body)
lines.append(f"expected pnl: ${_to_float(expected_pnl_usd):+.2f}")
await self._send("\n".join(lines))
async def notify_position_closed(
self,
@@ -78,13 +154,12 @@ class TelegramClient:
realized_pnl_usd: Decimal | float,
reason: str,
) -> None:
await self._http.call(
"notify_position_closed",
{
"instrument": instrument,
"realized_pnl": _to_float(realized_pnl_usd),
"reason": reason,
},
pnl = _to_float(realized_pnl_usd)
await self._send(
"<b>POSITION CLOSED</b>\n"
f"instrument: <code>{instrument}</code>\n"
f"realized pnl: ${pnl:+.2f}\n"
f"reason: {reason}"
)
async def notify_alert(
@@ -94,9 +169,10 @@ class TelegramClient:
message: str,
priority: str = "high",
) -> None:
await self._http.call(
"notify_alert",
{"source": source, "message": message, "priority": priority},
await self._send(
f"<b>ALERT [{priority.upper()}]</b>\n"
f"source: {source}\n"
f"{message}"
)
async def notify_system_error(
@@ -106,7 +182,8 @@ class TelegramClient:
component: str | None = None,
priority: str = "critical",
) -> None:
body: dict[str, Any] = {"message": message, "priority": priority}
if component is not None:
body["component"] = component
await self._http.call("notify_system_error", body)
text = f"<b>SYSTEM ERROR [{priority.upper()}]</b>\n"
if component:
text += f"component: {component}\n"
text += message
await self._send(text)
@@ -1,43 +1,55 @@
-"""Resolve MCP service URLs and the bearer token.
+"""Resolve MCP service URLs, the bearer token and the bot tag.

-Cerbero Bite runs in its own Docker container that joins the
-``cerbero-suite`` network: every MCP service is reachable by the
-container DNS name plus its internal port (``mcp-deribit:9011`` etc.).
+Cerbero MCP V2 (a single FastAPI image fronting Deribit, Hyperliquid,
+Macro, Sentiment and friends) is deployed on a dedicated VPS and reached
+through the public gateway at ``https://cerbero-mcp.tielogic.xyz``. The
+server decides the upstream environment (testnet vs mainnet) entirely
+from the bearer token attached to each request — Cerbero Bite does not
+have to be told which is which: swapping the token in ``.env`` is enough
+to switch environments.
-The resolver supports two layers of override:
+The resolver supports the following layers of override:

-1. Per-service environment variables (``CERBERO_BITE_MCP_DERIBIT_URL``,
-   ``CERBERO_BITE_MCP_MACRO_URL``…). Useful for dev when running
-   outside Docker — point at ``http://localhost:9011`` etc.
-2. ``CERBERO_BITE_CORE_TOKEN_FILE`` env var: path to the file that
-   stores the bearer token (default
-   ``/run/secrets/core_token``). The file is read at boot, the
-   trailing whitespace is stripped, and the value is *not* logged.
+1. Per-service URL env vars (``CERBERO_BITE_MCP_DERIBIT_URL``,
+   ``CERBERO_BITE_MCP_HYPERLIQUID_URL``, ``CERBERO_BITE_MCP_MACRO_URL``,
+   ``CERBERO_BITE_MCP_SENTIMENT_URL``). Useful for local dev when the
+   bot must talk to a same-host MCP server (``http://localhost:9000``)
+   instead of the public gateway.
+2. ``CERBERO_BITE_MCP_TOKEN`` env var: the bearer token used on every
+   request. The token's value is *never* logged.
+3. ``CERBERO_BITE_MCP_BOT_TAG`` env var: identifier sent on the
+   ``X-Bot-Tag`` header (default ``BOT__CERBERO_BITE``). Must be a
+   non-empty string of at most 64 characters.
"""
from __future__ import annotations
import os
from dataclasses import dataclass
-from pathlib import Path
+from cerbero_bite.clients._base import DEFAULT_BOT_TAG
__all__ = [
"DEFAULT_BOT_TAG",
"DEFAULT_ENDPOINTS",
"MCP_SERVICES",
"McpEndpoints",
"load_bot_tag",
"load_endpoints",
"load_token",
]
# Service identifier → (default Docker DNS host, default port, env var name)
#
# Telegram and Portfolio used to be shared MCP services; both are now
# in-process per bot (Telegram → public Bot API, Portfolio → aggregator
# over Deribit + Hyperliquid + Macro). They are no longer listed here.
MCP_SERVICES: dict[str, tuple[str, int, str]] = {
"deribit": ("mcp-deribit", 9011, "CERBERO_BITE_MCP_DERIBIT_URL"),
"hyperliquid": ("mcp-hyperliquid", 9012, "CERBERO_BITE_MCP_HYPERLIQUID_URL"),
"macro": ("mcp-macro", 9013, "CERBERO_BITE_MCP_MACRO_URL"),
"sentiment": ("mcp-sentiment", 9014, "CERBERO_BITE_MCP_SENTIMENT_URL"),
-    "telegram": ("mcp-telegram", 9017, "CERBERO_BITE_MCP_TELEGRAM_URL"),
-    "portfolio": ("mcp-portfolio", 9018, "CERBERO_BITE_MCP_PORTFOLIO_URL"),
}
@@ -58,8 +70,6 @@ class McpEndpoints:
hyperliquid: str
macro: str
sentiment: str
-    telegram: str
-    portfolio: str
def for_service(self, name: str) -> str:
try:
@@ -78,31 +88,58 @@ def load_endpoints(env: dict[str, str] | None = None) -> McpEndpoints:
return McpEndpoints(**resolved)
-_DEFAULT_TOKEN_FILE = "/run/secrets/core_token"
-_TOKEN_FILE_ENV = "CERBERO_BITE_CORE_TOKEN_FILE"
+_TOKEN_ENV = "CERBERO_BITE_MCP_TOKEN"
+_BOT_TAG_ENV = "CERBERO_BITE_MCP_BOT_TAG"
+_BOT_TAG_MAX_LEN = 64
def load_token(
*,
-    path: str | Path | None = None,
+    value: str | None = None,
env: dict[str, str] | None = None,
) -> str:
-    """Read the bearer token from disk and return it stripped.
+    """Return the MCP bearer token, stripped of surrounding whitespace.

    Resolution order:

-    1. explicit ``path`` argument;
-    2. ``CERBERO_BITE_CORE_TOKEN_FILE`` env var;
-    3. ``/run/secrets/core_token`` (Docker secrets default).
+    1. explicit ``value`` argument (e.g. from a CLI flag);
+    2. ``CERBERO_BITE_MCP_TOKEN`` env var.
"""
-    e = env if env is not None else os.environ
-    target = (
-        Path(path)
-        if path is not None
-        else Path(e.get(_TOKEN_FILE_ENV, _DEFAULT_TOKEN_FILE))
-    )
-    if not target.is_file():
-        raise FileNotFoundError(f"core token file not found: {target}")
-    token = target.read_text(encoding="utf-8").strip()
-    if not token:
-        raise ValueError(f"core token file is empty: {target}")
-    return token
+    if value is not None:
+        token = value.strip()
+        if not token:
+            raise ValueError("explicit MCP token is empty")
+        return token
+    e = env if env is not None else os.environ
+    raw = e.get(_TOKEN_ENV, "")
+    token = raw.strip()
+    if not token:
+        raise ValueError(
+            f"{_TOKEN_ENV} is unset or empty; set it in .env to the testnet or "
+            "mainnet bearer issued by Cerbero MCP"
+        )
+    return token
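The resolution order in the new `load_token` can be sketched standalone (a simplified re-implementation for illustration only; the real function lives in the module above):

```python
import os

TOKEN_ENV = "CERBERO_BITE_MCP_TOKEN"  # env var name taken from the diff above

def resolve_token(value=None, env=None):
    # 1. an explicit value wins; 2. fall back to the env var; empty → error.
    if value is not None:
        token = value.strip()
        if not token:
            raise ValueError("explicit MCP token is empty")
        return token
    e = env if env is not None else os.environ
    token = e.get(TOKEN_ENV, "").strip()
    if not token:
        raise ValueError(f"{TOKEN_ENV} is unset or empty")
    return token

print(resolve_token(value="  abc123  "))           # explicit value, stripped
print(resolve_token(env={TOKEN_ENV: "from-env"}))  # env fallback
```

Swapping the token in `.env` is therefore the whole environment switch: no code path ever inspects which upstream the token belongs to.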
def load_bot_tag(
*,
value: str | None = None,
env: dict[str, str] | None = None,
) -> str:
"""Return the ``X-Bot-Tag`` value, with the project default as fallback.
Resolution order:
1. explicit ``value`` argument;
2. ``CERBERO_BITE_MCP_BOT_TAG`` env var;
3. :data:`DEFAULT_BOT_TAG` (``"BOT__CERBERO_BITE"``).
"""
raw = value if value is not None else (env if env is not None else os.environ).get(
_BOT_TAG_ENV, ""
)
cleaned = raw.strip() if raw else ""
if not cleaned:
return DEFAULT_BOT_TAG
if len(cleaned) > _BOT_TAG_MAX_LEN:
raise ValueError(
f"{_BOT_TAG_ENV} exceeds {_BOT_TAG_MAX_LEN} characters: {cleaned!r}"
)
return cleaned
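The bot-tag fallback chain behaves like this sketch (illustrative re-implementation; the default value and the 64-character limit are taken from the docstring above):

```python
DEFAULT_BOT_TAG = "BOT__CERBERO_BITE"
MAX_LEN = 64

def resolve_bot_tag(value=None, env=None):
    env = env if env is not None else {}
    raw = value if value is not None else env.get("CERBERO_BITE_MCP_BOT_TAG", "")
    cleaned = raw.strip() if raw else ""
    if not cleaned:
        return DEFAULT_BOT_TAG  # unset/blank falls back to the project default
    if len(cleaned) > MAX_LEN:
        raise ValueError(f"bot tag exceeds {MAX_LEN} characters")
    return cleaned

print(resolve_bot_tag())                                        # default
print(resolve_bot_tag(env={"CERBERO_BITE_MCP_BOT_TAG": " gui "}))  # env, stripped
```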
@@ -0,0 +1,78 @@
"""Operational mode flags read from the environment.
Cerbero Bite supports two independent runtime switches:
* ``CERBERO_BITE_ENABLE_DATA_ANALYSIS`` — when ``true``, the periodic
market-snapshot job is scheduled and writes 15-minute snapshots to
``market_snapshots``; when ``false``, the bot still pings MCP for
health and reconciliation but does not record any market dataset.
* ``CERBERO_BITE_ENABLE_STRATEGY`` — when ``true``, the entry and
monitor cycles are scheduled and may propose/execute trades; when
``false``, no entry or monitor logic runs autonomously (the methods
remain callable from the CLI ``dry-run`` and via manual actions, so
the operator can still test code paths on demand).
The default profile is "analysis only": data analysis on, strategy off.
This is the mode used during the post-deploy soak window where the
team observes data quality before opening any position.
"""
from __future__ import annotations
import os
from dataclasses import dataclass
__all__ = [
"DATA_ANALYSIS_ENV",
"STRATEGY_ENV",
"RuntimeFlags",
"load_runtime_flags",
]
DATA_ANALYSIS_ENV = "CERBERO_BITE_ENABLE_DATA_ANALYSIS"
STRATEGY_ENV = "CERBERO_BITE_ENABLE_STRATEGY"
_TRUE_TOKENS = frozenset({"1", "true", "yes", "on", "enabled"})
_FALSE_TOKENS = frozenset({"0", "false", "no", "off", "disabled"})
@dataclass(frozen=True)
class RuntimeFlags:
"""Boolean switches that gate optional cycles.
Both fields default to the canonical "analysis only" profile.
"""
data_analysis_enabled: bool = True
strategy_enabled: bool = False
def _parse_bool(raw: str, *, var: str, default: bool) -> bool:
cleaned = raw.strip().lower()
if not cleaned:
return default
if cleaned in _TRUE_TOKENS:
return True
if cleaned in _FALSE_TOKENS:
return False
raise ValueError(
f"{var}: expected one of "
f"{sorted(_TRUE_TOKENS | _FALSE_TOKENS)}, got {raw!r}"
)
def load_runtime_flags(env: dict[str, str] | None = None) -> RuntimeFlags:
"""Build a :class:`RuntimeFlags` from environment variables."""
e = env if env is not None else os.environ
return RuntimeFlags(
data_analysis_enabled=_parse_bool(
e.get(DATA_ANALYSIS_ENV, ""),
var=DATA_ANALYSIS_ENV,
default=True,
),
strategy_enabled=_parse_bool(
e.get(STRATEGY_ENV, ""),
var=STRATEGY_ENV,
default=False,
),
)
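The token grammar accepted by `_parse_bool` can be exercised in isolation (a sketch; the sets mirror `_TRUE_TOKENS` / `_FALSE_TOKENS` above):

```python
TRUE_TOKENS = {"1", "true", "yes", "on", "enabled"}
FALSE_TOKENS = {"0", "false", "no", "off", "disabled"}

def parse_bool(raw, default):
    cleaned = raw.strip().lower()
    if not cleaned:
        return default  # unset/blank keeps the profile default
    if cleaned in TRUE_TOKENS:
        return True
    if cleaned in FALSE_TOKENS:
        return False
    raise ValueError(f"unrecognised boolean token: {raw!r}")

# "analysis only" profile: data analysis defaults ON, strategy OFF.
print(parse_bool("", default=True), parse_bool("", default=False))  # True False
print(parse_bool(" Enabled ", default=False))                       # True
```

Anything outside the two token sets raises, so a typo in `.env` fails loudly at boot instead of silently flipping a switch.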
@@ -81,6 +81,17 @@ class EntryConfig(BaseModel):
# ---------------------------------------------------------------------------
class DeltaByDvolBand(BaseModel):
"""One band of the delta-target step function, keyed by DVOL regime (§3.2 A)."""
model_config = ConfigDict(frozen=True, extra="forbid")
dvol_under: Decimal
delta_target: Decimal
delta_min: Decimal
delta_max: Decimal
class ShortStrikeSpec(BaseModel):
model_config = ConfigDict(frozen=True, extra="forbid")
@@ -90,6 +101,16 @@ class ShortStrikeSpec(BaseModel):
distance_otm_pct_min: Decimal = Field(default=Decimal("0.15"))
distance_otm_pct_max: Decimal = Field(default=Decimal("0.25"))
    # §3.2 enhancement (A): step function delta-target by DVOL regime.
    # An empty list leaves behaviour unchanged (the single `delta_target`
    # above applies). When populated, the combo_builder picks the first
    # band, sorted ascending on `dvol_under`, with
    # `dvol_now ≤ dvol_under`. Example:
    #   - dvol_under=50 → delta 0.15 (low vol → more premium)
    #   - dvol_under=70 → delta 0.12
    #   - dvol_under=90 → delta 0.10 (high vol → more safety)
delta_by_dvol: list[DeltaByDvolBand] = Field(default_factory=list)
class SpreadWidthSpec(BaseModel):
model_config = ConfigDict(frozen=True, extra="forbid")
@@ -165,6 +186,25 @@ class SizingConfig(BaseModel):
# ---------------------------------------------------------------------------
class PartialProfitLevel(BaseModel):
    """One level of the graduated profit-take ladder (§7.1bis C).

    `mark_at_pct_credit`: the level triggers when
    `mark_combo ≤ mark_at_pct_credit × initial_credit` (e.g. 0.25 =
    25% of the credit = 75% profit on the portion being closed).

    `close_pct_of_initial_contracts`: fraction of the contracts
    INITIALLY opened to close at this level (e.g. 0.50 = close half).

    Fractions are cumulative; closing more than the remaining
    contracts is a no-op.
"""
model_config = ConfigDict(frozen=True, extra="forbid")
mark_at_pct_credit: Decimal
close_pct_of_initial_contracts: Decimal
class ExitConfig(BaseModel):
model_config = ConfigDict(frozen=True, extra="forbid")
@@ -176,6 +216,29 @@ class ExitConfig(BaseModel):
delta_breach_threshold: Decimal = Field(default=Decimal("0.30"))
adverse_move_4h_pct: Decimal = Field(default=Decimal("0.05"))
    # §7.1ter (D): vol-collapse harvest. Exit in profit even before the
    # profit-take is hit when DVOL has dropped by this many points from
    # the entry level (the IV-RV edge is captured, the expected vol has
    # already come back in). 0 = filter disabled.
vol_harvest_dvol_decrease: Decimal = Field(default=Decimal("0"))
    # §7.1bis (C): graduated profit-take ladder. Empty list = behaviour
    # unchanged (atomic close at `profit_take_pct_of_credit`). When
    # populated, the engine reads it as "close N% of the initial
    # contracts at mark level M% × credit". Entries are ordered from
    # the highest mark (triggered first) down to the lowest. See
    # `core/exit_decision.py` for the exact semantics.
    #
    # WARNING: this feature requires partial-close support in the
    # runtime (entry_cycle / repository / clients). Until the
    # partial-close pipeline is merged, the engine maps it to an
    # atomic CLOSE_PROFIT at the first triggered level (see the
    # comment in `evaluate`). Default empty = no-op.
profit_take_partial_levels: list[PartialProfitLevel] = Field(
default_factory=list
)
monitor_cron: str = "0 2,14 * * *"
user_confirmation_timeout_min: int = 30
escalate_on_timeout: list[str] = Field(
@@ -183,6 +246,36 @@ class ExitConfig(BaseModel):
)
# ---------------------------------------------------------------------------
# Auto-pause (F): circuit breaker on rolling drawdown
# ---------------------------------------------------------------------------
class AutoPauseConfig(BaseModel):
    """Configuration for the drawdown circuit breaker.

    When enabled, the rule engine evaluates, before every entry, the
    cumulative P/L of the last `lookback_trades` closed positions as a
    proportion of current capital. If the loss exceeds the threshold,
    the engine pauses itself for `pause_weeks` weeks (skip-week). The
    pause lifts automatically at expiry, or manually via a GUI command.

    It defends against regime changes that the quant filters fail to
    detect: if the filters are failing systematically, it is worth
    stopping and waiting for conditions to change instead of continuing
    to bleed. It is a conservative extension of the kill switch (which
    today reacts only to technical errors).
"""
model_config = ConfigDict(frozen=True, extra="forbid")
enabled: bool = False
lookback_trades: int = 5
max_drawdown_pct: Decimal = Field(default=Decimal("0.10"))
pause_weeks: int = 2
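A minimal sketch of the circuit-breaker arithmetic the docstring describes (illustrative only; the field defaults are taken from `AutoPauseConfig`, and the engine-side consumer is not shown in this diff):

```python
from decimal import Decimal

def should_auto_pause(recent_pnls, capital,
                      lookback_trades=5,
                      max_drawdown_pct=Decimal("0.10")):
    # Cumulative P/L of the last N closed positions, as a share of capital.
    window = recent_pnls[-lookback_trades:]
    cum = sum(window, Decimal(0))
    return cum < 0 and (-cum / capital) > max_drawdown_pct

# 10% of 10_000 = 1_000: a 1_200 loss over the window trips the pause.
print(should_auto_pause(
    [Decimal("-300"), Decimal("-400"), Decimal("-500")], Decimal("10000")
))  # True
```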
# ---------------------------------------------------------------------------
# Kelly recalibration
# ---------------------------------------------------------------------------
@@ -256,6 +349,7 @@ class StrategyConfig(BaseModel):
sizing: SizingConfig = Field(default_factory=SizingConfig)
exit: ExitConfig = Field(default_factory=ExitConfig)
kelly_recalibration: KellyConfig = Field(default_factory=KellyConfig)
auto_pause: AutoPauseConfig = Field(default_factory=AutoPauseConfig)
execution: ExecutionConfig = Field(default_factory=ExecutionConfig)
monitoring: MonitoringConfig = Field(default_factory=MonitoringConfig)
@@ -83,26 +83,49 @@ def _pick_expiry(
return min(candidates, key=lambda exp: abs(candidates[exp] - sc.dte_target))
def _resolve_delta_band(
sc: object, dvol_now: Decimal | None
) -> tuple[Decimal, Decimal, Decimal]:
    """Return (delta_target, delta_min, delta_max) for the current DVOL regime.

    When ``sc.delta_by_dvol`` is populated and ``dvol_now`` is available,
    pick the first band (sorted ascending on ``dvol_under``) whose
    ``dvol_under ≥ dvol_now``. Otherwise fall back to the static values
    on ``sc``.
"""
bands = list(getattr(sc, "delta_by_dvol", []) or [])
if dvol_now is not None and bands:
bands_sorted = sorted(bands, key=lambda b: b.dvol_under)
for band in bands_sorted:
if dvol_now <= band.dvol_under:
return band.delta_target, band.delta_min, band.delta_max
last = bands_sorted[-1]
return last.delta_target, last.delta_min, last.delta_max
return sc.delta_target, sc.delta_min, sc.delta_max
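With the example bands from the config comment (a hypothetical setup, not shipped defaults), the selection rule reduces to:

```python
from decimal import Decimal

# (dvol_under, delta_target) pairs, ascending on dvol_under.
BANDS = [(Decimal("50"), Decimal("0.15")),
         (Decimal("70"), Decimal("0.12")),
         (Decimal("90"), Decimal("0.10"))]

def pick_delta_target(dvol_now):
    for dvol_under, target in sorted(BANDS):
        if dvol_now <= dvol_under:  # first band covering the current DVOL
            return target
    return BANDS[-1][1]             # above every band → most defensive delta
```

So DVOL 45 lands in the first band (0.15), DVOL 65 in the second (0.12), and anything above 90 keeps the most defensive 0.10.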
def _select_short(
quotes: list[OptionQuote],
*,
spot: Decimal,
cfg: StrategyConfig,
dvol_now: Decimal | None = None,
) -> OptionQuote | None:
"""Pick the short-leg quote with delta closest to target inside both bands."""
sc = cfg.structure.short_strike
delta_target, delta_min, delta_max = _resolve_delta_band(sc, dvol_now)
eligible: list[OptionQuote] = []
for q in quotes:
dist = (q.strike - spot).copy_abs() / spot
if not (sc.distance_otm_pct_min <= dist <= sc.distance_otm_pct_max):
continue
abs_delta = q.delta.copy_abs()
-        if not (sc.delta_min <= abs_delta <= sc.delta_max):
+        if not (delta_min <= abs_delta <= delta_max):
continue
eligible.append(q)
if not eligible:
return None
-    return min(eligible, key=lambda q: abs(q.delta.copy_abs() - sc.delta_target))
+    return min(eligible, key=lambda q: abs(q.delta.copy_abs() - delta_target))
def _select_long(
@@ -143,6 +166,7 @@ def select_strikes(
spot: Decimal,
now: datetime,
cfg: StrategyConfig,
dvol_now: Decimal | None = None,
) -> tuple[OptionQuote, OptionQuote] | None:
"""Return the (short, long) quotes for the requested vertical, or ``None``.
@@ -161,7 +185,7 @@ def select_strikes(
if not typed:
return None
-    short = _select_short(typed, spot=spot, cfg=cfg)
+    short = _select_short(typed, spot=spot, cfg=cfg, dvol_now=dvol_now)
if short is None:
return None
@@ -28,8 +28,10 @@ __all__ = ["ExitAction", "ExitDecisionResult", "PositionSnapshot", "evaluate"]
ExitAction = Literal[
"HOLD",
"CLOSE_PROFIT",
"CLOSE_PROFIT_PARTIAL",
"CLOSE_STOP",
"CLOSE_VOL",
"CLOSE_VOL_HARVEST",
"CLOSE_TIME",
"CLOSE_DELTA",
"CLOSE_AVERSE",
@@ -115,6 +117,22 @@ def evaluate(snapshot: PositionSnapshot, cfg: StrategyConfig) -> ExitDecisionRes
            f"mark {debit} ≤ {ec.profit_take_pct_of_credit:.0%} of credit {credit}",
)
    # 1bis. Vol-collapse harvest (D): we are IN profit (debit < credit)
    # and DVOL has dropped by enough points from the entry level. The
    # IV-RV edge is already captured, so there is no reason to hold out
    # for profit_take. Exit opportunistically when the vol regime that
    # justified the entry is gone.
if (
ec.vol_harvest_dvol_decrease > 0
and debit < credit
and snapshot.dvol_now <= snapshot.dvol_at_entry - ec.vol_harvest_dvol_decrease
):
return _result(
"CLOSE_VOL_HARVEST",
            f"DVOL {snapshot.dvol_now} ≤ entry {snapshot.dvol_at_entry} − "
            f"{ec.vol_harvest_dvol_decrease}, harvest while in profit",
)
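The three-part guard can be checked in isolation with made-up marks and DVOL readings (a sketch of the condition above, not the engine code):

```python
from decimal import Decimal

def harvest_triggered(debit, credit, dvol_now, dvol_at_entry, dvol_decrease):
    return (
        dvol_decrease > 0                              # feature enabled
        and debit < credit                             # position is in profit
        and dvol_now <= dvol_at_entry - dvol_decrease  # vol collapsed enough
    )

# Entered at DVOL 62, now 48, configured decrease 10 → 48 ≤ 52: harvest.
print(harvest_triggered(Decimal("0.010"), Decimal("0.018"),
                        Decimal("48"), Decimal("62"), Decimal("10")))  # True
```

With the golden-config default of 0 the first clause short-circuits, so existing deployments see no behaviour change.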
# 2. Stop loss
if debit >= stop_thresh:
return _result(
Binary file not shown.
@@ -0,0 +1,750 @@
"""Read-only data access for the Streamlit GUI.
The GUI MUST NOT import ``runtime/`` modules nor make MCP calls. Every
piece of information shown on screen is derived from:
* SQLite (``data/state.sqlite``) via :class:`Repository`.
* The audit log (``data/audit.log``) via the parsing helpers in
:mod:`cerbero_bite.safety.audit_log`.
The module exposes small frozen dataclasses purpose-built for rendering
so each Streamlit page can grab a snapshot in one call instead of
poking at the repository directly.
"""
from __future__ import annotations
import json
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from decimal import Decimal
from pathlib import Path
from typing import Literal
from uuid import UUID
from cerbero_bite.safety.audit_log import (
AuditChainError,
AuditEntry,
iter_entries,
verify_chain,
)
from cerbero_bite.state import Repository, connect, transaction
from cerbero_bite.state.models import (
DecisionRecord,
ManualAction,
MarketSnapshotRecord,
PositionRecord,
SystemStateRecord,
)
from cerbero_bite.state.repository import _row_to_manual
__all__ = [
"DEFAULT_AUDIT_PATH",
"DEFAULT_DB_PATH",
"AuditChainStatus",
"EngineHealth",
"EngineSnapshot",
"EquityPoint",
"MonthlyStats",
"PayoffCurve",
"PortfolioKpis",
"PositionDistanceMetrics",
"compute_distance_metrics",
"compute_equity_curve",
"compute_kpis",
"compute_monthly_stats",
"compute_payoff_curve",
"enqueue_arm_kill",
"enqueue_disarm_kill",
"enqueue_run_cycle",
"load_audit_chain_status",
"load_audit_tail",
"load_closed_positions",
"load_decisions_for_position",
"load_engine_snapshot",
"load_market_snapshots",
"load_open_positions",
"load_pending_manual_actions",
"load_position_by_id",
]
DEFAULT_DB_PATH = Path("data/state.sqlite")
DEFAULT_AUDIT_PATH = Path("data/audit.log")
EngineHealth = Literal["running", "degraded", "killed", "stopped", "unknown"]
@dataclass(frozen=True)
class EngineSnapshot:
"""One-shot snapshot used by the Status page."""
health: EngineHealth
kill_switch_armed: bool
kill_reason: str | None
kill_at: datetime | None
last_health_check: datetime | None
last_health_check_age_s: float | None
started_at: datetime | None
config_version: str | None
last_audit_hash: str | None
open_positions: int
@property
def health_label(self) -> str:
return {
"running": "ATTIVO",
"degraded": "DEGRADATO",
"killed": "KILL SWITCH ARMATO",
"stopped": "FERMO",
"unknown": "SCONOSCIUTO",
}[self.health]
@dataclass(frozen=True)
class AuditChainStatus:
"""Result of calling ``verify_chain`` on the audit log."""
ok: bool
entries_verified: int
error: str | None
def load_engine_snapshot(
*,
db_path: Path | str = DEFAULT_DB_PATH,
now: datetime | None = None,
stale_after_s: float = 600.0,
) -> EngineSnapshot:
"""Read system_state + open positions count and derive engine health.
Health rules:
* kill switch armed → ``killed``
* no system_state row → ``unknown`` (engine never started)
* last health check older than ``stale_after_s`` → ``stopped``
* last health check older than 2× cycle (10 min) but younger than
``stale_after_s`` → ``degraded``
* fresh health check → ``running``
"""
db_path = Path(db_path)
if not db_path.exists():
return EngineSnapshot(
health="unknown",
kill_switch_armed=False,
kill_reason=None,
kill_at=None,
last_health_check=None,
last_health_check_age_s=None,
started_at=None,
config_version=None,
last_audit_hash=None,
open_positions=0,
)
repo = Repository()
conn = connect(db_path)
try:
state: SystemStateRecord | None = repo.get_system_state(conn)
open_pos = len(repo.list_open_positions(conn))
finally:
conn.close()
if state is None:
return EngineSnapshot(
health="unknown",
kill_switch_armed=False,
kill_reason=None,
kill_at=None,
last_health_check=None,
last_health_check_age_s=None,
started_at=None,
config_version=None,
last_audit_hash=None,
open_positions=open_pos,
)
reference = (now or datetime.now(UTC)).astimezone(UTC)
last_check = state.last_health_check
age = (reference - last_check).total_seconds() if last_check else None
if state.kill_switch:
health: EngineHealth = "killed"
elif age is None:
health = "unknown"
elif age > stale_after_s:
health = "stopped"
elif age > 600: # over 10 minutes since last health probe
health = "degraded"
else:
health = "running"
return EngineSnapshot(
health=health,
kill_switch_armed=bool(state.kill_switch),
kill_reason=state.kill_reason,
kill_at=state.kill_at,
last_health_check=last_check,
last_health_check_age_s=age,
started_at=state.started_at,
config_version=state.config_version,
last_audit_hash=state.last_audit_hash,
open_positions=open_pos,
)
def load_open_positions(
*, db_path: Path | str = DEFAULT_DB_PATH
) -> list[PositionRecord]:
db_path = Path(db_path)
if not db_path.exists():
return []
repo = Repository()
conn = connect(db_path)
try:
return repo.list_open_positions(conn)
finally:
conn.close()
def load_closed_positions(
*,
db_path: Path | str = DEFAULT_DB_PATH,
start: datetime | None = None,
end: datetime | None = None,
) -> list[PositionRecord]:
"""Return positions with status ``closed`` (sorted oldest → newest).
The optional ``start`` / ``end`` window filters by ``closed_at``.
Positions still in flight (open / awaiting_fill / closing /
cancelled) are excluded. ``cancelled`` positions are also excluded
since they never had P&L impact.
"""
db_path = Path(db_path)
if not db_path.exists():
return []
repo = Repository()
conn = connect(db_path)
try:
rows = repo.list_positions(conn, status="closed")
finally:
conn.close()
out: list[PositionRecord] = []
for r in rows:
if r.closed_at is None:
continue
if start is not None and r.closed_at < start:
continue
if end is not None and r.closed_at > end:
continue
out.append(r)
out.sort(key=lambda p: p.closed_at) # type: ignore[arg-type, return-value]
return out
# ---------------------------------------------------------------------------
# Analytics
# ---------------------------------------------------------------------------
@dataclass(frozen=True)
class EquityPoint:
"""One point on the cumulative-PnL curve."""
timestamp: datetime
realized_pnl_usd: Decimal
cumulative_pnl_usd: Decimal
drawdown_usd: Decimal
drawdown_pct: float
@dataclass(frozen=True)
class MonthlyStats:
"""Aggregated stats for a calendar month."""
year_month: str # "2026-04"
n_trades: int
n_wins: int
win_rate: float
pnl_usd: Decimal
avg_pnl_usd: Decimal
@dataclass(frozen=True)
class PortfolioKpis:
"""High-level KPI strip for the History/Equity pages."""
n_trades: int
n_wins: int
win_rate: float
total_pnl_usd: Decimal
avg_win_usd: Decimal
avg_loss_usd: Decimal
edge_per_trade_usd: Decimal
max_drawdown_usd: Decimal
max_drawdown_pct: float
def compute_equity_curve(positions: list[PositionRecord]) -> list[EquityPoint]:
"""Build a cumulative PnL series from closed positions.
Drawdown is measured against the running peak of cumulative PnL
(so it accounts for past wins). ``drawdown_pct`` is expressed
relative to the peak — undefined when peak ≤ 0 (returns 0.0).
"""
if not positions:
return []
points: list[EquityPoint] = []
cumulative = Decimal(0)
peak = Decimal(0)
for pos in positions:
if pos.pnl_usd is None or pos.closed_at is None:
continue
cumulative += pos.pnl_usd
peak = max(peak, cumulative)
dd_usd = peak - cumulative
dd_pct = float(dd_usd / peak) if peak > 0 else 0.0
points.append(
EquityPoint(
timestamp=pos.closed_at,
realized_pnl_usd=pos.pnl_usd,
cumulative_pnl_usd=cumulative,
drawdown_usd=dd_usd,
drawdown_pct=dd_pct,
)
)
return points
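The running-peak drawdown rule is easiest to see on a tiny hand-made P/L series (plain floats here for brevity; the function above uses `Decimal`):

```python
pnls = [100.0, -40.0, 60.0, -150.0]
cumulative = peak = 0.0
drawdowns = []
for p in pnls:
    cumulative += p
    peak = max(peak, cumulative)         # running high-water mark
    drawdowns.append(peak - cumulative)  # distance below the peak
print(drawdowns)  # [0.0, 40.0, 0.0, 150.0]
```

Note that the final drawdown (150) exceeds any single loss because it is measured from the peak of 120, not from zero.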
def compute_kpis(positions: list[PositionRecord]) -> PortfolioKpis:
"""Aggregate KPI strip across the supplied closed positions."""
pnls = [p.pnl_usd for p in positions if p.pnl_usd is not None]
n = len(pnls)
if n == 0:
zero = Decimal(0)
return PortfolioKpis(
n_trades=0,
n_wins=0,
win_rate=0.0,
total_pnl_usd=zero,
avg_win_usd=zero,
avg_loss_usd=zero,
edge_per_trade_usd=zero,
max_drawdown_usd=zero,
max_drawdown_pct=0.0,
)
wins = [p for p in pnls if p > 0]
losses = [p for p in pnls if p < 0]
total = sum(pnls, Decimal(0))
avg_win = sum(wins, Decimal(0)) / Decimal(len(wins)) if wins else Decimal(0)
avg_loss = sum(losses, Decimal(0)) / Decimal(len(losses)) if losses else Decimal(0)
curve = compute_equity_curve(positions)
if curve:
max_dd = max((p.drawdown_usd for p in curve), default=Decimal(0))
max_dd_pct = max((p.drawdown_pct for p in curve), default=0.0)
else: # pragma: no cover — defensive, curve is empty iff pnls empty
max_dd = Decimal(0)
max_dd_pct = 0.0
return PortfolioKpis(
n_trades=n,
n_wins=len(wins),
win_rate=len(wins) / n,
total_pnl_usd=total,
avg_win_usd=avg_win,
avg_loss_usd=avg_loss,
edge_per_trade_usd=total / Decimal(n),
max_drawdown_usd=max_dd,
max_drawdown_pct=max_dd_pct,
)
def compute_monthly_stats(positions: list[PositionRecord]) -> list[MonthlyStats]:
"""Aggregate per calendar month (UTC), oldest → newest."""
buckets: dict[str, list[Decimal]] = {}
for pos in positions:
if pos.pnl_usd is None or pos.closed_at is None:
continue
key = pos.closed_at.astimezone(UTC).strftime("%Y-%m")
buckets.setdefault(key, []).append(pos.pnl_usd)
out: list[MonthlyStats] = []
for key in sorted(buckets):
pnls = buckets[key]
n = len(pnls)
wins = sum(1 for p in pnls if p > 0)
total = sum(pnls, Decimal(0))
out.append(
MonthlyStats(
year_month=key,
n_trades=n,
n_wins=wins,
win_rate=wins / n if n else 0.0,
pnl_usd=total,
avg_pnl_usd=total / Decimal(n) if n else Decimal(0),
)
)
return out
def load_position_by_id(
proposal_id: UUID,
*,
db_path: Path | str = DEFAULT_DB_PATH,
) -> PositionRecord | None:
db_path = Path(db_path)
if not db_path.exists():
return None
repo = Repository()
conn = connect(db_path)
try:
return repo.get_position(conn, proposal_id)
finally:
conn.close()
def load_decisions_for_position(
proposal_id: UUID,
*,
db_path: Path | str = DEFAULT_DB_PATH,
limit: int = 200,
) -> list[DecisionRecord]:
"""Decisions for ``proposal_id`` newest-first."""
db_path = Path(db_path)
if not db_path.exists():
return []
repo = Repository()
conn = connect(db_path)
try:
return repo.list_decisions(conn, proposal_id=proposal_id, limit=limit)
finally:
conn.close()
# ---------------------------------------------------------------------------
# Payoff math (pure, no live data)
# ---------------------------------------------------------------------------
@dataclass(frozen=True)
class PayoffCurve:
"""At-expiry P&L curve for a credit spread."""
spreads_type: str # "bull_put" / "bear_call" / "iron_condor"
spot_grid: list[float]
pnl_grid_usd: list[float]
breakeven: float | None
max_profit_usd: float
max_loss_usd: float
short_strike: float
long_strike: float
spot_at_entry: float
def compute_payoff_curve(
position: PositionRecord,
*,
grid_points: int = 60,
margin_pct: float = 0.15,
) -> PayoffCurve:
"""Build the at-expiry payoff for a credit spread.
Supported spreads (Cerbero Bite scope):
* ``bull_put``: short put @ ``short_strike``, long put @
      ``long_strike`` (lower). Max profit = credit. Max loss = width −
      credit. Breakeven = short_strike − credit_per_contract.
* ``bear_call``: short call @ ``short_strike``, long call @
``long_strike`` (higher). Symmetric to bull_put around the strikes.
* Other types fall back to a flat zero curve to avoid breaking the
page if/when iron condors are implemented later.
"""
short = float(position.short_strike)
long_ = float(position.long_strike)
n = position.n_contracts
width_usd = float(position.spread_width_usd)
credit_total_usd = float(position.credit_usd)
credit_per_contract = credit_total_usd / n if n > 0 else 0.0
spot = float(position.eth_price_at_entry)
lo = min(short, long_, spot) * (1 - margin_pct)
hi = max(short, long_, spot) * (1 + margin_pct)
step = (hi - lo) / max(grid_points - 1, 1)
grid = [lo + i * step for i in range(grid_points)]
if position.spread_type == "bull_put":
# short put at higher strike, long put at lower strike
max_profit = credit_total_usd
max_loss = -(width_usd - credit_total_usd) * n # signed (negative)
breakeven = short - credit_per_contract
pnl = []
for s in grid:
if s >= short:
pnl.append(max_profit)
elif s <= long_:
pnl.append(max_loss)
else:
frac = (s - long_) / (short - long_)
pnl.append(max_loss + frac * (max_profit - max_loss))
elif position.spread_type == "bear_call":
# short call at lower strike, long call at higher strike
max_profit = credit_total_usd
max_loss = -(width_usd - credit_total_usd) * n
breakeven = short + credit_per_contract
pnl = []
for s in grid:
if s <= short:
pnl.append(max_profit)
elif s >= long_:
pnl.append(max_loss)
else:
frac = (s - short) / (long_ - short)
pnl.append(max_profit + frac * (max_loss - max_profit))
else:
max_profit = credit_total_usd
max_loss = -(width_usd - credit_total_usd) * n
breakeven = None
pnl = [0.0 for _ in grid]
return PayoffCurve(
spreads_type=position.spread_type,
spot_grid=grid,
pnl_grid_usd=pnl,
breakeven=breakeven,
max_profit_usd=max_profit,
max_loss_usd=max_loss,
short_strike=short,
long_strike=long_,
spot_at_entry=spot,
)
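A worked instance of the bull_put arithmetic above, with invented numbers (short put 3000, long put 2800, one contract, $45 total credit):

```python
short, long_, n = 3000.0, 2800.0, 1
width_usd = short - long_              # 200.0
credit_total_usd = 45.0
credit_per_contract = credit_total_usd / n

max_profit = credit_total_usd                   # spot ≥ short: keep the credit
max_loss = -(width_usd - credit_total_usd) * n  # spot ≤ long: -155.0
breakeven = short - credit_per_contract         # 2955.0

# Linear interpolation between the strikes, e.g. at spot 2900 (halfway):
frac = (2900.0 - long_) / (short - long_)             # 0.5
pnl_2900 = max_loss + frac * (max_profit - max_loss)  # -55.0
```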
@dataclass(frozen=True)
class PositionDistanceMetrics:
"""Quick distance summary for the position drilldown."""
short_strike_otm_pct: float | None
days_to_expiry: int | None
days_held: int | None
delta_at_entry: float
width_pct_of_spot: float
def compute_distance_metrics(
position: PositionRecord,
*,
now: datetime | None = None,
) -> PositionDistanceMetrics:
spot = float(position.spot_at_entry)
short = float(position.short_strike)
if spot > 0:
if position.spread_type == "bull_put":
otm_pct = (spot - short) / spot
elif position.spread_type == "bear_call":
otm_pct = (short - spot) / spot
else:
otm_pct = None
else:
otm_pct = None
reference = (now or datetime.now(UTC)).astimezone(UTC)
days_to_expiry = (
(position.expiry - reference).days if position.expiry else None
)
days_held = (
(reference - position.opened_at).days if position.opened_at else None
)
return PositionDistanceMetrics(
short_strike_otm_pct=otm_pct,
days_to_expiry=days_to_expiry,
days_held=days_held,
delta_at_entry=float(position.delta_at_entry),
width_pct_of_spot=float(position.spread_width_pct),
)
# ---------------------------------------------------------------------------
# Manual actions queue (the GUI's only write path)
# ---------------------------------------------------------------------------
def _enqueue_action(
*,
db_path: Path | str,
kind: str,
payload: dict[str, object],
proposal_id: UUID | None = None,
) -> int:
"""Insert a row in ``manual_actions``. The engine consumer applies it."""
db_path = Path(db_path)
repo = Repository()
now = datetime.now(UTC)
conn = connect(db_path)
try:
with transaction(conn):
return repo.enqueue_manual_action(
conn,
ManualAction(
kind=kind, # type: ignore[arg-type]
proposal_id=proposal_id,
payload_json=json.dumps(payload),
created_at=now,
),
)
finally:
conn.close()
def enqueue_arm_kill(
*, reason: str, db_path: Path | str = DEFAULT_DB_PATH
) -> int:
"""Queue an ``arm_kill`` action for the engine consumer."""
if not reason or not reason.strip():
raise ValueError("reason is required")
return _enqueue_action(
db_path=db_path,
kind="arm_kill",
payload={"reason": reason.strip()},
)
def enqueue_disarm_kill(
*, reason: str, db_path: Path | str = DEFAULT_DB_PATH
) -> int:
"""Queue a ``disarm_kill`` action for the engine consumer."""
if not reason or not reason.strip():
raise ValueError("reason is required")
return _enqueue_action(
db_path=db_path,
kind="disarm_kill",
payload={"reason": reason.strip()},
)
def enqueue_run_cycle(
*, cycle: str, db_path: Path | str = DEFAULT_DB_PATH
) -> int:
"""Queue a ``run_cycle`` action — engine must be running.
    ``cycle`` must be one of ``entry``, ``monitor``, ``health``,
    ``market_snapshot``. The
engine consumer dispatches the corresponding ``Orchestrator.run_*``
method on the next minute tick.
"""
cycle_norm = cycle.strip().lower()
if cycle_norm not in {"entry", "monitor", "health", "market_snapshot"}:
raise ValueError(
f"cycle must be entry|monitor|health|market_snapshot, "
f"got '{cycle}'"
)
return _enqueue_action(
db_path=db_path,
kind="run_cycle",
payload={"cycle": cycle_norm},
)
def load_market_snapshots(
*,
asset: str,
db_path: Path | str = DEFAULT_DB_PATH,
start: datetime | None = None,
end: datetime | None = None,
limit: int = 5000,
) -> list[MarketSnapshotRecord]:
"""Return market_snapshots rows for the asset, newest-first."""
db_path = Path(db_path)
if not db_path.exists():
return []
repo = Repository()
conn = connect(db_path)
try:
return repo.list_market_snapshots(
conn, asset=asset, start=start, end=end, limit=limit
)
finally:
conn.close()
def load_pending_manual_actions(
*, db_path: Path | str = DEFAULT_DB_PATH
) -> list[ManualAction]:
"""All unconsumed actions, oldest first (used for the pending strip)."""
db_path = Path(db_path)
if not db_path.exists():
return []
conn = connect(db_path)
try:
rows = conn.execute(
"SELECT * FROM manual_actions WHERE consumed_at IS NULL "
"ORDER BY created_at ASC"
).fetchall()
finally:
conn.close()
return [_row_to_manual(row) for row in rows]
def load_audit_tail(
*,
audit_path: Path | str = DEFAULT_AUDIT_PATH,
limit: int = 100,
event_filter: str | None = None,
) -> list[AuditEntry]:
"""Return the most recent audit entries (newest first).
For the GUI we walk the entire file (the audit log is append-only and
bounded by daily rotation; reading 100 lines stays cheap). The
optional ``event_filter`` matches by exact event name.
"""
audit_path = Path(audit_path)
entries: list[AuditEntry] = []
if not audit_path.exists():
return entries
for entry in iter_entries(audit_path):
if event_filter and entry.event != event_filter:
continue
entries.append(entry)
entries.reverse() # newest first
return entries[:limit]
def load_audit_chain_status(
*, audit_path: Path | str = DEFAULT_AUDIT_PATH
) -> AuditChainStatus:
audit_path = Path(audit_path)
try:
n = verify_chain(audit_path)
except AuditChainError as exc:
return AuditChainStatus(ok=False, entries_verified=0, error=str(exc))
except Exception as exc: # pragma: no cover — surface unexpected IO errors
return AuditChainStatus(ok=False, entries_verified=0, error=str(exc))
return AuditChainStatus(ok=True, entries_verified=n, error=None)
def humanize_age(seconds: float | None) -> str:
if seconds is None:
return ""
if seconds < 60:
return f"{int(seconds)}s fa"
if seconds < 3600:
return f"{int(seconds / 60)}m fa"
if seconds < 86400:
return f"{seconds / 3600:.1f}h fa"
return f"{seconds / 86400:.1f}g fa"
def humanize_dt(value: datetime | None) -> str:
if value is None:
return ""
return value.astimezone(UTC).strftime("%Y-%m-%d %H:%M:%S UTC")
def humanize_timedelta(value: timedelta | None) -> str: # pragma: no cover
if value is None:
return ""
return f"{value.total_seconds() / 3600:.1f}h"
@@ -0,0 +1,230 @@
"""Live MCP fetch for the GUI (saldi exchange, FX rate).
The original architecture forbade the GUI from calling MCP services
(`docs/11-gui-streamlit.md`). For the "Saldi exchange" panel that
constraint is relaxed: the dashboard fetches balances on demand,
caches the result with Streamlit's TTL cache, and never holds the
async client open between renders. Every fetch is a one-shot:
* read endpoints + token from env (same path used by the CLI),
* spin up a short-lived ``httpx.AsyncClient``,
* query Deribit `get_account_summary` for both ``USDC`` and ``USDT``,
* query Hyperliquid `get_account_summary` (returns ``spot_usdc``,
``perps_equity`` etc.),
* query Macro `get_asset_price("EURUSD")` for FX,
* close the client and return a frozen dataclass to the page.
If a single exchange call fails, its row is filled with ``error=...``
and the others are still rendered.
"""
from __future__ import annotations
import asyncio
from dataclasses import dataclass
from datetime import UTC, datetime
from decimal import Decimal
from typing import Any
import httpx
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.clients.hyperliquid import HyperliquidClient
from cerbero_bite.clients.macro import MacroClient
from cerbero_bite.config.mcp_endpoints import load_endpoints, load_token
__all__ = [
"BalanceRow",
"BalancesSnapshot",
"fetch_balances_sync",
]
_DERIBIT_CURRENCIES = ("USDC", "USDT")
@dataclass(frozen=True)
class BalanceRow:
"""One row of the balances table."""
exchange: str
currency: str
equity: Decimal | None
available: Decimal | None
unrealized_pnl: Decimal | None
error: str | None = None
@dataclass(frozen=True)
class BalancesSnapshot:
"""Result of one fetch_balances call (rows + meta)."""
rows: list[BalanceRow]
eur_usd_rate: Decimal | None
fetched_at: datetime
fx_error: str | None = None
def total_usd(self) -> Decimal:
total = Decimal(0)
for r in self.rows:
if r.equity is not None:
total += r.equity
return total
def total_eur(self) -> Decimal | None:
if self.eur_usd_rate is None or self.eur_usd_rate <= 0:
return None
return self.total_usd() / self.eur_usd_rate
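The totals use `Decimal` throughout so that equities parsed from heterogeneous JSON payloads (strings, ints, missing fields) sum without float drift. A worked example of the same arithmetic:

```python
from decimal import Decimal

# Mixed payload values, as an MCP summary might return them.
equities = ["1000.10", 250, None, "0.05"]

total = Decimal(0)
for v in equities:
    if v is not None:
        total += Decimal(str(v))  # str() first, as _decimal_or_none does

eur_usd = Decimal("1.0800")
total_eur = total / eur_usd if eur_usd > 0 else None
```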
def _decimal_or_none(value: Any) -> Decimal | None:
if value is None:
return None
try:
return Decimal(str(value))
except (ValueError, ArithmeticError):
return None
def _resolve_token() -> str:
"""Read the MCP bearer token from the environment.
The token is sourced from ``CERBERO_BITE_MCP_TOKEN``; on Cerbero MCP
V2 the same single token decides whether the upstream environment
is testnet or mainnet.
"""
return load_token()
async def _fetch_deribit_currency(
deribit: DeribitClient, currency: str
) -> BalanceRow:
try:
summary = await deribit.get_account_summary(currency=currency)
except Exception as exc:
return BalanceRow(
exchange="deribit",
currency=currency,
equity=None,
available=None,
unrealized_pnl=None,
error=f"{type(exc).__name__}: {exc}",
)
# Cerbero MCP V2 returns HTTP 200 with a soft ``error`` field when
# the upstream Deribit call failed (e.g. invalid credentials). Treat
# that as a row-level failure so the dashboard surfaces the cause
# instead of showing a misleading equity=0.
soft_error = summary.get("error")
if soft_error:
return BalanceRow(
exchange="deribit",
currency=currency,
equity=None,
available=None,
unrealized_pnl=None,
error=str(soft_error),
)
return BalanceRow(
exchange="deribit",
currency=currency,
equity=_decimal_or_none(summary.get("equity")),
available=_decimal_or_none(summary.get("available_funds")),
unrealized_pnl=_decimal_or_none(summary.get("unrealized_pnl")),
)
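The soft-error handling above (HTTP 200 with an `error` field in the body) generalizes to a small mapping step: short-circuit on the error field so a misleading `equity=0` is never fabricated. A standalone version of just that step, with the field names assumed from the code above:

```python
from decimal import Decimal


def summary_to_row(summary: dict) -> dict:
    """Map an account-summary payload to a row, honouring soft errors."""
    if summary.get("error"):
        # Soft failure: surface the cause, leave the numbers blank.
        return {"equity": None, "error": str(summary["error"])}
    raw = summary.get("equity")
    equity = Decimal(str(raw)) if raw is not None else None
    return {"equity": equity, "error": None}
```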
async def _fetch_hyperliquid(hl: HyperliquidClient) -> list[BalanceRow]:
try:
summary = await hl.get_account_summary()
except Exception as exc:
return [
BalanceRow(
exchange="hyperliquid",
currency="USDC",
equity=None,
available=None,
unrealized_pnl=None,
error=f"{type(exc).__name__}: {exc}",
)
]
rows: list[BalanceRow] = [
BalanceRow(
exchange="hyperliquid",
currency="USDC",
equity=_decimal_or_none(summary.get("equity")),
available=_decimal_or_none(summary.get("available_balance")),
unrealized_pnl=_decimal_or_none(summary.get("unrealized_pnl")),
)
]
# Hyperliquid spot may also hold USDT; the MCP server exposes it
# under spot_usdt when present. Add a row only if the field is there
# so we don't render a confusing "0.00" against an asset the account
# never held.
spot_usdt = summary.get("spot_usdt")
if spot_usdt is not None:
rows.append(
BalanceRow(
exchange="hyperliquid",
currency="USDT",
equity=_decimal_or_none(spot_usdt),
available=_decimal_or_none(spot_usdt),
unrealized_pnl=Decimal(0),
)
)
return rows
async def _fetch_balances_async(*, timeout_s: float = 8.0) -> BalancesSnapshot:
endpoints = load_endpoints()
token = _resolve_token()
async with httpx.AsyncClient(timeout=timeout_s) as http_client:
def _client(service: str) -> HttpToolClient:
return HttpToolClient(
service=service,
base_url=endpoints.for_service(service),
token=token,
timeout_s=timeout_s,
retry_max=1,
client=http_client,
)
deribit = DeribitClient(_client("deribit"))
hl = HyperliquidClient(_client("hyperliquid"))
macro = MacroClient(_client("macro"))
deribit_results, hl_rows, (fx_value, fx_error) = await asyncio.gather(
asyncio.gather(
*(
_fetch_deribit_currency(deribit, cur)
for cur in _DERIBIT_CURRENCIES
)
),
_fetch_hyperliquid(hl),
_fetch_eur_usd(macro),
)
deribit_rows = list(deribit_results)
return BalancesSnapshot(
rows=[*deribit_rows, *hl_rows],
eur_usd_rate=fx_value,
fetched_at=datetime.now(UTC),
fx_error=fx_error,
)
async def _fetch_eur_usd(
macro: MacroClient,
) -> tuple[Decimal | None, str | None]:
try:
rate = await macro.eur_usd_rate()
except Exception as exc:
return None, f"{type(exc).__name__}: {exc}"
return rate, None
def fetch_balances_sync(*, timeout_s: float = 8.0) -> BalancesSnapshot:
"""Sync wrapper for Streamlit pages (which run in a sync context)."""
return asyncio.run(_fetch_balances_async(timeout_s=timeout_s))
@@ -0,0 +1,148 @@
"""Streamlit entry point for the Cerbero Bite dashboard.
Launch with::
cerbero-bite gui
or directly::
uv run streamlit run src/cerbero_bite/gui/main.py \
--server.address 127.0.0.1 \
--server.port 8765 \
--server.headless true
The dashboard is **read-mostly**: it reads SQLite + the audit log and
never imports ``runtime/`` modules. Each Streamlit page is in
``gui/pages/`` and Streamlit auto-discovers them.
"""
from __future__ import annotations
import os
from pathlib import Path
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_AUDIT_PATH,
DEFAULT_DB_PATH,
humanize_age,
humanize_dt,
load_engine_snapshot,
)
PAGE_TITLE = "Cerbero Bite — Cruscotto"
PAGE_ICON = str(Path(__file__).parent / "assets" / "cerbero_logo.png")
# ---------------------------------------------------------------------------
# Path resolution
# ---------------------------------------------------------------------------
def _resolve_paths() -> tuple[Path, Path]:
"""Read DB / audit paths from env (settable by ``cerbero-bite gui``)."""
db_path = Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
audit_path = Path(os.environ.get("CERBERO_BITE_GUI_AUDIT", DEFAULT_AUDIT_PATH))
return db_path, audit_path
# ---------------------------------------------------------------------------
# Sidebar
# ---------------------------------------------------------------------------
_HEALTH_BADGES: dict[str, tuple[str, str]] = {
"running": ("🟢", "ATTIVO"),
"degraded": ("🟡", "DEGRADATO"),
"killed": ("🔴", "KILL SWITCH"),
"stopped": ("", "FERMO"),
"unknown": ("", "SCONOSCIUTO"),
}
def _render_sidebar(db_path: Path, audit_path: Path) -> None:
snap = load_engine_snapshot(db_path=db_path)
icon, label = _HEALTH_BADGES.get(snap.health, ("", "SCONOSCIUTO"))
logo_path = Path(__file__).parent / "assets" / "cerbero_logo.png"
if logo_path.is_file():
st.sidebar.image(str(logo_path), use_container_width=True)
st.sidebar.markdown(f"### {icon} {label}")
if snap.kill_switch_armed:
st.sidebar.error(
f"**Kill switch armato**\n\n"
f"motivo: {snap.kill_reason or ''}\n\n"
f"da: {humanize_dt(snap.kill_at)}"
)
st.sidebar.metric(
"Ultimo health check",
humanize_age(snap.last_health_check_age_s),
)
st.sidebar.metric("Posizioni aperte", snap.open_positions)
st.sidebar.caption(f"config: `{snap.config_version or ''}`")
st.sidebar.divider()
st.sidebar.caption("Sola lettura • solo localhost")
st.sidebar.caption(f"db: `{db_path}`")
st.sidebar.caption(f"audit: `{audit_path}`")
# ---------------------------------------------------------------------------
# Home page
# ---------------------------------------------------------------------------
def main() -> None:
st.set_page_config(
page_title=PAGE_TITLE,
page_icon=PAGE_ICON,
layout="wide",
initial_sidebar_state="expanded",
)
db_path, audit_path = _resolve_paths()
_render_sidebar(db_path, audit_path)
logo_path = Path(__file__).parent / "assets" / "cerbero_logo.png"
header_cols = st.columns([1, 6])
if logo_path.is_file():
header_cols[0].image(str(logo_path), use_container_width=True)
header_cols[1].title("Cerbero Bite")
st.caption(
"Motore rule-based per credit spread su ETH — cruscotto in sola lettura"
)
st.markdown(
"""
Usa la barra laterale per navigare:
- **Stato** — salute del motore, kill switch, posizioni aperte, ancora audit
- **Audit** — streaming del registro audit + verifica integrità della catena
- **Equity** — P&L cumulato, drawdown, distribuzione per chiusura, statistiche mensili
- **Storico** — trade chiusi con filtri, KPI, esportazione CSV
- **Posizione** — drilldown sulla singola posizione con grafico payoff
Il cruscotto legge `data/state.sqlite` e `data/audit.log` direttamente;
non interroga mai i servizi MCP né il broker. L'unico canale di
scrittura è la coda `manual_actions` (arm/disarm del kill switch e
forzatura manuale dei cicli).
"""
)
snap = load_engine_snapshot(db_path=db_path)
cols = st.columns(4)
cols[0].metric("Salute motore", _HEALTH_BADGES[snap.health][1])
cols[1].metric(
"Kill switch",
"ARMATO" if snap.kill_switch_armed else "DISARMATO",
)
cols[2].metric("Posizioni aperte", snap.open_positions)
cols[3].metric(
"Ultimo health check",
humanize_age(snap.last_health_check_age_s),
)
if __name__ == "__main__":
main()
@@ -0,0 +1,347 @@
"""Status page — engine health at a glance."""
from __future__ import annotations
import os
from pathlib import Path
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_AUDIT_PATH,
DEFAULT_DB_PATH,
EngineSnapshot,
enqueue_arm_kill,
enqueue_disarm_kill,
enqueue_run_cycle,
humanize_age,
humanize_dt,
load_engine_snapshot,
load_open_positions,
load_pending_manual_actions,
)
from cerbero_bite.gui.live_data import BalancesSnapshot, fetch_balances_sync
def _resolve_paths() -> tuple[Path, Path]:
db_path = Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
audit_path = Path(os.environ.get("CERBERO_BITE_GUI_AUDIT", DEFAULT_AUDIT_PATH))
return db_path, audit_path
_HEALTH_COLORS = {
"running": ("🟢", "success"),
"degraded": ("🟡", "warning"),
"killed": ("🔴", "error"),
"stopped": ("", "warning"),
"unknown": ("", "info"),
}
_TYPED_PHRASE = "confermo"
def _render_force_cycle_panel(db_path: Path) -> None:
st.subheader("Forza ciclo")
st.caption(
"Accoda una richiesta di esecuzione immediata di un ciclo. Funziona "
"solo se il motore è in esecuzione (`cerbero-bite start`); il job "
"`manual_actions` consuma la coda ogni minuto."
)
cols = st.columns(4)
if cols[0].button(
"▶ Forza entry",
use_container_width=True,
help="Esegue subito una valutazione del ciclo entry.",
):
aid = enqueue_run_cycle(cycle="entry", db_path=db_path)
st.success(
f"✅ ciclo entry accodato (id #{aid}). "
"Il motore lo eseguirà entro ~1 minuto."
)
if cols[1].button(
"🔍 Forza monitor",
use_container_width=True,
help="Esegue subito un giro del monitor sulle posizioni aperte.",
):
aid = enqueue_run_cycle(cycle="monitor", db_path=db_path)
st.success(f"✅ ciclo monitor accodato (id #{aid}).")
if cols[2].button(
"💓 Forza health",
use_container_width=True,
help="Esegue subito un health check completo.",
):
aid = enqueue_run_cycle(cycle="health", db_path=db_path)
st.success(f"✅ ciclo health accodato (id #{aid}).")
if cols[3].button(
"📐 Forza snapshot",
use_container_width=True,
help="Esegue subito una raccolta market_snapshot (alimenta Calibrazione).",
):
aid = enqueue_run_cycle(cycle="market_snapshot", db_path=db_path)
st.success(f"✅ snapshot accodato (id #{aid}).")
@st.cache_data(ttl=60, show_spinner=False)
def _cached_balances() -> BalancesSnapshot:
"""Fetch balances at most once per minute per Streamlit session."""
return fetch_balances_sync(timeout_s=10.0)
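Outside Streamlit, the `st.cache_data(ttl=60)` plus manual `.clear()` behaviour can be approximated with a tiny stdlib cache — a sketch under an injectable clock, not the real Streamlit implementation:

```python
import time


class TTLCache:
    """Cache a single zero-arg fetch for ttl_s seconds, with manual invalidation."""

    def __init__(self, fetch, ttl_s: float = 60.0, clock=time.monotonic):
        self._fetch = fetch
        self._ttl_s = ttl_s
        self._clock = clock
        self._value = None
        self._at: float | None = None

    def get(self):
        now = self._clock()
        if self._at is None or now - self._at >= self._ttl_s:
            # Expired (or cleared): fetch fresh and restart the TTL window.
            self._value = self._fetch()
            self._at = now
        return self._value

    def clear(self) -> None:
        """Invalidate so the next get() refetches — like st.cache_data's .clear()."""
        self._at = None
```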
def _render_balances_panel() -> None:
st.subheader("Saldi exchange")
refresh = st.button("🔄 Aggiorna saldi", help="Forza un nuovo fetch dagli MCP.")
if refresh:
_cached_balances.clear()
try:
snap = _cached_balances()
except Exception as exc:
st.error(
f"Impossibile leggere i saldi: {type(exc).__name__}: {exc}"
)
return
rows = []
for r in snap.rows:
rows.append(
{
"exchange": r.exchange,
"valuta": r.currency,
"equity": (
f"{float(r.equity):,.2f}"
if r.equity is not None
else ""
),
"disponibile": (
f"{float(r.available):,.2f}"
if r.available is not None
else ""
),
"P&L non realizzato": (
f"{float(r.unrealized_pnl):+.2f}"
if r.unrealized_pnl is not None
else ""
),
"errore": r.error or "",
}
)
st.dataframe(rows, use_container_width=True, hide_index=True)
cols = st.columns(3)
cols[0].metric("Totale USD", f"${float(snap.total_usd()):,.2f}")
eur = snap.total_eur()
cols[1].metric(
"Totale EUR",
f"{float(eur):,.2f}" if eur is not None else "",
)
cols[2].metric(
"Cambio EUR/USD",
f"{float(snap.eur_usd_rate):.4f}"
if snap.eur_usd_rate is not None
else "",
)
if snap.fx_error:
st.warning(f"FX non disponibile: {snap.fx_error}")
age = f" · letti {humanize_dt(snap.fetched_at)}"
st.caption(
f"Cache TTL 60s · saldi letti dal gateway MCP{age}"
)
def _render_kill_switch_panel(db_path: Path, snap: EngineSnapshot) -> None:
st.subheader("Comandi kill switch")
if snap.kill_switch_armed:
st.warning(
"Kill switch **armato**. Disarmandolo viene accodata una "
"azione `disarm_kill`; il consumer del motore la applica al "
"prossimo tick di un minuto e la transizione viene registrata "
"nella catena audit."
)
with st.form("kill_disarm_form", clear_on_submit=True):
reason = st.text_input(
"Motivo (obbligatorio)",
placeholder="es. finestra macro superata",
)
confirm = st.text_input(
f"Scrivi `{_TYPED_PHRASE}` per confermare",
placeholder=_TYPED_PHRASE,
)
submitted = st.form_submit_button(
"🟢 Accoda disarmo",
type="primary",
use_container_width=True,
)
if submitted:
if confirm.strip() != _TYPED_PHRASE:
st.error(
f"Scrivi esattamente `{_TYPED_PHRASE}` per confermare."
)
elif not reason.strip():
st.error("Il motivo è obbligatorio.")
else:
aid = enqueue_disarm_kill(reason=reason, db_path=db_path)
st.success(
f"✅ disarmo accodato (id #{aid}). "
"Il motore lo applicherà entro ~1 minuto."
)
else:
st.info(
"Kill switch **disarmato**. Armandolo viene accodata una "
"azione `arm_kill`; il consumer del motore la applica al "
"prossimo tick di un minuto."
)
with st.form("kill_arm_form", clear_on_submit=True):
reason = st.text_input(
"Motivo (obbligatorio)",
placeholder="es. shock macro — sospendi trading",
)
confirm = st.text_input(
f"Scrivi `{_TYPED_PHRASE}` per confermare",
placeholder=_TYPED_PHRASE,
)
submitted = st.form_submit_button(
"🔴 Accoda armamento",
type="secondary",
use_container_width=True,
)
if submitted:
if confirm.strip() != _TYPED_PHRASE:
st.error(
f"Scrivi esattamente `{_TYPED_PHRASE}` per confermare."
)
elif not reason.strip():
st.error("Il motivo è obbligatorio.")
else:
aid = enqueue_arm_kill(reason=reason, db_path=db_path)
st.success(
f"✅ armamento accodato (id #{aid}). "
"Il motore lo applicherà entro ~1 minuto."
)
def render() -> None:
st.title("📊 Stato")
st.caption(
"Salute del motore, kill switch, posizioni aperte e ancora audit."
)
db_path, _ = _resolve_paths()
snap = load_engine_snapshot(db_path=db_path)
icon, level = _HEALTH_COLORS.get(snap.health, ("", "info"))
banner = f"{icon} **{snap.health_label}**"
if level == "success":
st.success(banner)
elif level == "warning":
st.warning(banner)
elif level == "error":
st.error(banner)
else:
st.info(banner)
if snap.kill_switch_armed:
st.error(
f"**Kill switch armato** — il motore rifiuterà nuove entrate.\n\n"
f"- motivo: `{snap.kill_reason or ''}`\n"
f"- da: `{humanize_dt(snap.kill_at)}`"
)
# Top metrics
cols = st.columns(4)
cols[0].metric("Posizioni aperte", snap.open_positions)
cols[1].metric(
"Ultimo health check", humanize_age(snap.last_health_check_age_s)
)
cols[2].metric("Avviato il", humanize_dt(snap.started_at))
cols[3].metric("Versione config", snap.config_version or "")
st.divider()
# Saldi exchange (live MCP fetch, TTL 60s)
_render_balances_panel()
st.divider()
# Forza ciclo
_render_force_cycle_panel(db_path)
st.divider()
# Kill switch controls
_render_kill_switch_panel(db_path, snap)
st.divider()
# Azioni manuali pendenti
pending = load_pending_manual_actions(db_path=db_path)
if pending:
st.subheader("Azioni manuali pendenti")
st.caption(
"Accodate da questo cruscotto, non ancora consumate. Il motore "
"drena la coda ogni minuto tramite il job `manual_actions`."
)
rows_pending = [
{
"id": a.id,
"tipo": a.kind,
"payload": a.payload_json or "",
"creata il": humanize_dt(a.created_at),
}
for a in pending
]
st.dataframe(rows_pending, use_container_width=True, hide_index=True)
st.divider()
# Ancora audit
st.subheader("Ancora audit")
if snap.last_audit_hash is None:
st.info("Nessuna ancora registrata.")
else:
short = (
f"{snap.last_audit_hash[:12]}{snap.last_audit_hash[-12:]}"
if len(snap.last_audit_hash) > 24
else snap.last_audit_hash
)
st.code(short, language="text")
st.caption(
"Ultima testa della catena hash persistita in "
"`system_state.last_audit_hash`. All'avvio l'orchestrator la "
"confronta con la coda del file audit; un mismatch arma il "
"kill switch (CRITICAL)."
)
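The anchor described in the caption follows the usual append-only hash-chain scheme: each entry's hash covers its payload plus the previous hash, so altering any past entry breaks every link after it. A minimal illustration with `hashlib` — the real `verify_chain` entry format is not shown here, so the hashing recipe below is an assumption:

```python
import hashlib
import json


def entry_hash(prev_hash: str, payload: dict) -> str:
    """Hash one entry chained to its predecessor."""
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()


def verify(entries: list[dict], hashes: list[str]) -> int:
    """Return the number of verified entries; raise on the first mismatch."""
    prev = ""
    for i, (payload, h) in enumerate(zip(entries, hashes)):
        if entry_hash(prev, payload) != h:
            raise ValueError(f"tampering at entry {i}")
        prev = h
    return len(hashes)


# Build a 2-entry chain, then verify it end to end.
payloads = [{"event": "entry_opened"}, {"event": "entry_closed"}]
chain = []
prev = ""
for p in payloads:
    prev = entry_hash(prev, p)
    chain.append(prev)
```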
st.divider()
# Tabella posizioni aperte
st.subheader("Posizioni aperte")
positions = load_open_positions(db_path=db_path)
if not positions:
st.info("Nessuna posizione aperta.")
else:
rows = [
{
"proposal_id": str(p.proposal_id)[:8],
"spread": p.spread_type,
"asset": p.asset,
"n. contratti": p.n_contracts,
"credito (USD)": f"{p.credit_usd:.2f}",
"max perdita (USD)": f"{p.max_loss_usd:.2f}",
"strike short": f"{p.short_strike}",
"strike long": f"{p.long_strike}",
"stato": p.status,
"aperta il": humanize_dt(p.opened_at),
"scadenza": humanize_dt(p.expiry),
}
for p in positions
]
st.dataframe(rows, use_container_width=True)
render()
@@ -0,0 +1,122 @@
"""Audit page — live audit log stream + chain integrity verification."""
from __future__ import annotations
import json
import os
from collections import Counter
from pathlib import Path
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_AUDIT_PATH,
DEFAULT_DB_PATH,
humanize_dt,
load_audit_chain_status,
load_audit_tail,
)
def _resolve_paths() -> tuple[Path, Path]:
db_path = Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
audit_path = Path(os.environ.get("CERBERO_BITE_GUI_AUDIT", DEFAULT_AUDIT_PATH))
return db_path, audit_path
def render() -> None:
st.title("🔍 Audit")
st.caption(
"Registro audit append-only con hash chain "
"(`data/audit.log`). La lettura non modifica nulla."
)
_, audit_path = _resolve_paths()
col_l, col_r = st.columns([1, 2])
with col_l:
st.subheader("Integrità catena")
if st.button("Verifica catena", type="primary"):
with st.spinner("Sto percorrendo la catena…"):
status = load_audit_chain_status(audit_path=audit_path)
if status.ok:
st.success(
f"✅ catena integra fino a {status.entries_verified} eventi"
)
else:
st.error(
f"❌ tampering rilevato\n\n```\n{status.error}\n```"
)
else:
st.caption(
"Premi per ricalcolare l'hash di ogni riga e verificare il "
"collegamento prev-hash. Mismatch → alert CRITICAL in "
"produzione."
)
with col_r:
st.subheader("Filtri")
limit = st.slider(
"Ultimi N eventi",
min_value=10,
max_value=500,
value=100,
step=10,
)
# Build event list from the available tail
all_recent = load_audit_tail(audit_path=audit_path, limit=limit)
events_present = sorted({e.event for e in all_recent})
event_filter = st.selectbox(
"Filtro per evento",
options=["(tutti)", *events_present],
index=0,
)
st.divider()
# Statistics strip
counter: Counter[str] = Counter(e.event for e in all_recent)
if counter:
cols = st.columns(min(len(counter), 6))
for col, (event, count) in zip(cols, counter.most_common(6), strict=False):
col.metric(event, count)
st.divider()
# Tail filtrata
filtered = (
all_recent
if event_filter == "(tutti)"
else [e for e in all_recent if e.event == event_filter]
)
st.subheader(f"Ultimi eventi ({len(filtered)} mostrati)")
if not filtered:
st.info("Nessun evento corrisponde ai filtri.")
return
rows = []
for entry in filtered:
try:
payload_pretty = json.dumps(
entry.payload, ensure_ascii=False, sort_keys=True
)
except (TypeError, ValueError):
payload_pretty = str(entry.payload)
rows.append(
{
"timestamp": humanize_dt(entry.timestamp),
"evento": entry.event,
"payload": payload_pretty,
"hash": (
f"{entry.hash[:8]}{entry.hash[-8:]}"
if len(entry.hash) > 16
else entry.hash
),
}
)
st.dataframe(rows, use_container_width=True, hide_index=True)
render()
@@ -0,0 +1,178 @@
"""Equity page — cumulative PnL, drawdown, distributions."""
from __future__ import annotations
import os
from collections import Counter
from datetime import UTC, datetime, timedelta
from pathlib import Path
import pandas as pd
import plotly.graph_objects as go
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_DB_PATH,
compute_equity_curve,
compute_kpis,
compute_monthly_stats,
load_closed_positions,
)
def _resolve_db() -> Path:
return Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
def _date_window(label: str) -> tuple[datetime | None, datetime | None]:
"""Selettore della finestra temporale per l'analitica."""
options = {
"Tutto lo storico": (None, None),
"Ultimi 30 giorni": (datetime.now(UTC) - timedelta(days=30), None),
"Ultimi 90 giorni": (datetime.now(UTC) - timedelta(days=90), None),
"Da inizio anno": (
datetime(datetime.now(UTC).year, 1, 1, tzinfo=UTC),
None,
),
}
pick = st.selectbox(label, list(options.keys()), index=0)
return options[pick]
def render() -> None:
st.title("📈 Equity")
st.caption(
"P&L realizzato cumulato, drawdown e distribuzione per trade. "
"Calcolato dalle posizioni chiuse in `data/state.sqlite`."
)
start, end = _date_window("Finestra")
db_path = _resolve_db()
positions = load_closed_positions(db_path=db_path, start=start, end=end)
if not positions:
st.info(
"Nessuna posizione chiusa nella finestra selezionata. "
"La curva equity si popolerà non appena il motore chiuderà "
"il primo trade."
)
return
# Striscia KPI
kpis = compute_kpis(positions)
cols = st.columns(5)
cols[0].metric("Trade chiusi", kpis.n_trades)
cols[1].metric("Win rate", f"{kpis.win_rate:.0%}")
cols[2].metric("P&L totale", f"${float(kpis.total_pnl_usd):+.2f}")
cols[3].metric("Edge / trade", f"${float(kpis.edge_per_trade_usd):+.2f}")
cols[4].metric(
"Max drawdown",
f"${float(kpis.max_drawdown_usd):.2f}",
delta=f"{kpis.max_drawdown_pct:.1%}",
delta_color="inverse",
)
st.divider()
# Equity curve + drawdown
curve = compute_equity_curve(positions)
df = pd.DataFrame(
{
"timestamp": [p.timestamp for p in curve],
"cumulative_pnl_usd": [float(p.cumulative_pnl_usd) for p in curve],
"drawdown_usd": [float(p.drawdown_usd) for p in curve],
"realized_pnl_usd": [float(p.realized_pnl_usd) for p in curve],
}
)
st.subheader("P&L cumulato (USD)")
fig = go.Figure()
fig.add_trace(
go.Scatter(
x=df["timestamp"],
y=df["cumulative_pnl_usd"],
mode="lines+markers",
name="P&L cumulato",
line={"color": "#2ecc71", "width": 2},
)
)
fig.add_hline(y=0, line_dash="dot", line_color="grey", opacity=0.5)
fig.update_layout(
height=320,
margin={"l": 10, "r": 10, "t": 30, "b": 10},
xaxis_title=None,
yaxis_title="USD",
)
st.plotly_chart(fig, use_container_width=True)
st.subheader("Drawdown (USD)")
dd_fig = go.Figure()
dd_fig.add_trace(
go.Scatter(
x=df["timestamp"],
y=-df["drawdown_usd"],
mode="lines",
fill="tozeroy",
name="drawdown",
line={"color": "#e74c3c", "width": 1.5},
)
)
dd_fig.update_layout(
height=220,
margin={"l": 10, "r": 10, "t": 30, "b": 10},
xaxis_title=None,
yaxis_title="USD",
)
st.plotly_chart(dd_fig, use_container_width=True)
# Distribuzione P&L
st.subheader("Distribuzione P&L per motivo di chiusura")
by_reason: dict[str, list[float]] = {}
for pos in positions:
if pos.pnl_usd is None:
continue
by_reason.setdefault(pos.close_reason or "(sconosciuto)", []).append(
float(pos.pnl_usd)
)
counts = Counter(
(pos.close_reason or "(sconosciuto)") for pos in positions
)
cols = st.columns(min(len(counts), 6) or 1)
for col, (reason, count) in zip(cols, counts.most_common(6), strict=False):
col.metric(reason, count)
hist_fig = go.Figure()
for reason, pnls in by_reason.items():
hist_fig.add_trace(
go.Histogram(x=pnls, name=reason, opacity=0.6, nbinsx=30)
)
hist_fig.update_layout(
barmode="overlay",
height=320,
margin={"l": 10, "r": 10, "t": 30, "b": 10},
xaxis_title="P&L (USD)",
yaxis_title="numero trade",
legend={"orientation": "h", "y": 1.1},
)
st.plotly_chart(hist_fig, use_container_width=True)
# Tabella mensile
st.subheader("Statistiche mensili")
months = compute_monthly_stats(positions)
rows = [
{
"mese": m.year_month,
"trade": m.n_trades,
"vittorie": m.n_wins,
"win rate": f"{m.win_rate:.0%}",
"P&L (USD)": f"{float(m.pnl_usd):+.2f}",
"media / trade": f"{float(m.avg_pnl_usd):+.2f}",
}
for m in months
]
st.dataframe(rows, use_container_width=True, hide_index=True)
render()
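The cumulative-P&L and drawdown series plotted on this page reduce to a running sum and a running peak over closed-trade P&L. A stdlib sketch of what `compute_equity_curve` is assumed to produce per point:

```python
def equity_and_drawdown(pnls: list[float]) -> tuple[list[float], list[float]]:
    """Running cumulative P&L and drawdown (peak minus current, >= 0)."""
    cumulative: list[float] = []
    drawdown: list[float] = []
    total = peak = 0.0
    for pnl in pnls:
        total += pnl
        peak = max(peak, total)       # high-water mark so far
        cumulative.append(total)
        drawdown.append(peak - total)  # 0 at a new peak, positive underwater
    return cumulative, drawdown
```

The page plots `-drawdown` so the underwater curve hangs below zero; `max(drawdown)` is the max-drawdown KPI in USD.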
@@ -0,0 +1,135 @@
"""History page — closed-trade table with filters and CSV export."""
from __future__ import annotations
import io
import os
from datetime import UTC, datetime, timedelta
from pathlib import Path
import pandas as pd
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_DB_PATH,
compute_kpis,
humanize_dt,
load_closed_positions,
)
def _resolve_db() -> Path:
return Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
def _date_window() -> tuple[datetime | None, datetime | None]:
presets = {
"Tutto lo storico": (None, None),
"Ultimi 7 giorni": (datetime.now(UTC) - timedelta(days=7), None),
"Ultimi 30 giorni": (datetime.now(UTC) - timedelta(days=30), None),
"Ultimi 90 giorni": (datetime.now(UTC) - timedelta(days=90), None),
"Da inizio anno": (
datetime(datetime.now(UTC).year, 1, 1, tzinfo=UTC),
None,
),
}
pick = st.selectbox("Finestra", list(presets.keys()), index=0)
return presets[pick]
def render() -> None:
st.title("📜 Storico")
st.caption(
"Trade chiusi con filtri, striscia KPI ed esportazione CSV."
)
db_path = _resolve_db()
start, end = _date_window()
positions = load_closed_positions(db_path=db_path, start=start, end=end)
# Sotto-filtri per motivo di chiusura e segno P&L
reason_options = sorted(
{p.close_reason or "(sconosciuto)" for p in positions}
)
chosen_reasons = st.multiselect(
"Motivi di chiusura",
options=reason_options,
default=reason_options,
)
pnl_filter = st.radio(
"Filtro P&L",
options=["tutti", "vincenti", "perdenti"],
horizontal=True,
index=0,
)
filtered = []
for p in positions:
reason = p.close_reason or "(sconosciuto)"
if reason not in chosen_reasons:
continue
if pnl_filter == "vincenti" and (p.pnl_usd is None or p.pnl_usd <= 0):
continue
if pnl_filter == "perdenti" and (p.pnl_usd is None or p.pnl_usd >= 0):
continue
filtered.append(p)
# Striscia KPI
kpis = compute_kpis(filtered)
cols = st.columns(6)
cols[0].metric("Trade", kpis.n_trades)
cols[1].metric("Win rate", f"{kpis.win_rate:.0%}")
cols[2].metric("P&L totale", f"${float(kpis.total_pnl_usd):+.2f}")
cols[3].metric("Vittoria media", f"${float(kpis.avg_win_usd):+.2f}")
cols[4].metric("Perdita media", f"${float(kpis.avg_loss_usd):+.2f}")
cols[5].metric("Edge / trade", f"${float(kpis.edge_per_trade_usd):+.2f}")
st.divider()
if not filtered:
st.info("Nessun trade corrisponde ai filtri correnti.")
return
# DataFrame per visualizzazione + esportazione
rows = []
for p in filtered:
days_held = (
(p.closed_at - p.opened_at).days
if p.opened_at and p.closed_at
else None
)
rows.append(
{
"proposal_id": str(p.proposal_id)[:8],
"spread": p.spread_type,
"asset": p.asset,
"n. contratti": p.n_contracts,
"strike short": float(p.short_strike),
"strike long": float(p.long_strike),
"credito (USD)": float(p.credit_usd),
"max perdita (USD)": float(p.max_loss_usd),
"P&L (USD)": (
float(p.pnl_usd) if p.pnl_usd is not None else None
),
"motivo chiusura": p.close_reason or "(sconosciuto)",
"giorni tenuta": days_held,
"aperta il": humanize_dt(p.opened_at),
"chiusa il": humanize_dt(p.closed_at),
"scadenza": humanize_dt(p.expiry),
}
)
df = pd.DataFrame(rows)
st.dataframe(df, use_container_width=True, hide_index=True)
# Esportazione CSV
buf = io.StringIO()
df.to_csv(buf, index=False)
st.download_button(
"⬇ Scarica CSV",
data=buf.getvalue(),
file_name=f"cerbero_bite_storico_{datetime.now(UTC).date()}.csv",
mime="text/csv",
)
render()
@@ -0,0 +1,244 @@
"""Position page — drilldown on a single open or recently-closed trade."""
from __future__ import annotations
import json
import os
from pathlib import Path
from uuid import UUID
import plotly.graph_objects as go
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_DB_PATH,
compute_distance_metrics,
compute_payoff_curve,
humanize_dt,
load_closed_positions,
load_decisions_for_position,
load_open_positions,
load_position_by_id,
)
from cerbero_bite.state.models import PositionRecord
def _resolve_db() -> Path:
return Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
def _position_label(p: PositionRecord) -> str:
short = (
f"{int(p.short_strike)}/{int(p.long_strike)}"
if p.short_strike and p.long_strike
else ""
)
return f"{str(p.proposal_id)[:8]} · {p.spread_type} · {short} · {p.status}"
def _render_header(position: PositionRecord) -> None:
cols = st.columns(4)
cols[0].metric("stato", position.status)
cols[1].metric("spread", position.spread_type)
cols[2].metric("contratti", position.n_contracts)
cols[3].metric("credito (USD)", f"${float(position.credit_usd):+.2f}")
st.caption(
f"`{position.proposal_id}` · aperta il "
f"{humanize_dt(position.opened_at)} · scadenza "
f"{humanize_dt(position.expiry)}"
)
def _render_legs(position: PositionRecord) -> None:
st.subheader("Gambe (snapshot all'entrata)")
rows = [
{
"gamba": "short",
"strumento": position.short_instrument,
"strike": float(position.short_strike),
"lato": "VENDI",
"size": position.n_contracts,
"delta all'entrata": float(position.delta_at_entry),
},
{
"gamba": "long",
"strumento": position.long_instrument,
"strike": float(position.long_strike),
"lato": "COMPRA",
"size": position.n_contracts,
"delta all'entrata": "",
},
]
st.dataframe(rows, use_container_width=True, hide_index=True)
st.caption(
"Mid e greche live non vengono richiesti agli MCP dal cruscotto. "
"Il refresh è demandato al motore: visibile nella pagina Audit."
)
def _render_distance(position: PositionRecord) -> None:
metrics = compute_distance_metrics(position)
cols = st.columns(5)
cols[0].metric(
"Short OTM %",
f"{metrics.short_strike_otm_pct:.1%}"
if metrics.short_strike_otm_pct is not None
else "",
)
cols[1].metric(
"Giorni a scadenza",
metrics.days_to_expiry if metrics.days_to_expiry is not None else "",
)
cols[2].metric(
"Giorni in tenuta",
metrics.days_held if metrics.days_held is not None else "",
)
cols[3].metric("Δ all'entrata", f"{metrics.delta_at_entry:+.3f}")
cols[4].metric("Larghezza % spot", f"{metrics.width_pct_of_spot:.1%}")
def _render_payoff(position: PositionRecord) -> None:
st.subheader("Payoff a scadenza")
curve = compute_payoff_curve(position)
fig = go.Figure()
fig.add_trace(
go.Scatter(
x=curve.spot_grid,
y=curve.pnl_grid_usd,
mode="lines",
line={"color": "#3498db", "width": 2.5},
name="P&L a scadenza",
fill="tozeroy",
fillcolor="rgba(52,152,219,0.10)",
)
)
fig.add_hline(y=0, line_dash="dot", line_color="grey", opacity=0.5)
fig.add_vline(
x=curve.short_strike,
line_dash="dash",
line_color="#27ae60",
opacity=0.7,
annotation_text=f"short {curve.short_strike:.0f}",
annotation_position="top",
)
fig.add_vline(
x=curve.long_strike,
line_dash="dash",
line_color="#c0392b",
opacity=0.7,
annotation_text=f"long {curve.long_strike:.0f}",
annotation_position="top",
)
if curve.breakeven is not None:
fig.add_vline(
x=curve.breakeven,
line_dash="dot",
line_color="orange",
opacity=0.7,
annotation_text=f"BE {curve.breakeven:.2f}",
annotation_position="bottom",
)
fig.add_vline(
x=curve.spot_at_entry,
line_dash="solid",
line_color="#7f8c8d",
opacity=0.4,
annotation_text=f"spot all'entrata {curve.spot_at_entry:.0f}",
annotation_position="bottom",
)
fig.update_layout(
height=380,
margin={"l": 10, "r": 10, "t": 30, "b": 10},
xaxis_title="ETH spot a scadenza (USD)",
yaxis_title="P&L (USD)",
legend={"orientation": "h", "y": 1.1},
)
st.plotly_chart(fig, use_container_width=True)
cols = st.columns(3)
cols[0].metric("Profitto massimo", f"${curve.max_profit_usd:+.2f}")
cols[1].metric("Perdita massima", f"${curve.max_loss_usd:+.2f}")
cols[2].metric(
"Breakeven",
f"{curve.breakeven:.2f}" if curve.breakeven is not None else "",
)
def _render_decisions(position: PositionRecord) -> None:
st.subheader("Storico decisioni")
decisions = load_decisions_for_position(position.proposal_id)
if not decisions:
st.info("Nessuna decisione registrata per questa posizione.")
return
rows = []
for d in decisions:
try:
outputs = json.loads(d.outputs_json)
except (TypeError, ValueError):
outputs = {}
rows.append(
{
"timestamp": humanize_dt(d.timestamp),
"tipo decisione": d.decision_type,
"azione": d.action_taken or "",
"note": d.notes or "",
"output": json.dumps(outputs, sort_keys=True) if outputs else "",
}
)
st.dataframe(rows, use_container_width=True, hide_index=True)
def render() -> None:
st.title("💼 Posizione")
st.caption(
"Drilldown sul trade: gambe, payoff a scadenza, storico decisioni. "
"Tutti i dati arrivano da SQLite — nessuna chiamata MCP live."
)
db_path = _resolve_db()
open_pos = load_open_positions(db_path=db_path)
closed_recent = load_closed_positions(db_path=db_path)[-10:]
candidates: list[PositionRecord] = list(open_pos) + list(reversed(closed_recent))
if not candidates:
st.info(
"Nessuna posizione da mostrare. La pagina si popolerà non "
"appena il motore aprirà il primo trade."
)
return
labels = {_position_label(p): p for p in candidates}
pick = st.selectbox(
"Posizione",
options=list(labels.keys()),
index=0,
)
position = labels[pick]
# Deep-link via ?proposal_id=…
qp = st.query_params.get("proposal_id")
if qp:
try:
qp_uuid = UUID(qp)
override = load_position_by_id(qp_uuid, db_path=db_path)
if override is not None:
position = override
except ValueError:
st.warning(f"Parametro proposal_id non valido: {qp}")
st.divider()
_render_header(position)
st.divider()
_render_distance(position)
st.divider()
_render_legs(position)
st.divider()
_render_payoff(position)
st.divider()
_render_decisions(position)
render()
@@ -0,0 +1,309 @@
"""Calibrazione page — historical distributions of the signals, for threshold tuning.
Reads from the ``market_snapshots`` table (populated by the dedicated
``*/15`` cron job). For each observable metric it shows:
* histogram + vertical line at the current config threshold,
* P5/P10/P25/P50/P75/P90/P95 percentiles,
* fraction of ticks the current threshold would have filtered out.
The idea is to pick thresholds from the real percentiles of your own
environment (testnet or mainnet), instead of gut-feel constants.
"""
from __future__ import annotations
import os
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from pathlib import Path
import pandas as pd
import plotly.graph_objects as go
import streamlit as st
from cerbero_bite.config.loader import load_strategy
from cerbero_bite.gui.data_layer import (
DEFAULT_DB_PATH,
humanize_dt,
load_market_snapshots,
)
from cerbero_bite.state.models import MarketSnapshotRecord
def _resolve_db() -> Path:
return Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
@dataclass(frozen=True)
class MetricSpec:
"""Descriptor of the metric to plot."""
field: str
title: str
unit: str
threshold_label: str | None
threshold_value: float | None
threshold_direction: str  # "below", "above" or "above_abs": side on which a value counts as filtered
def _metric_specs(strategy: object | None) -> list[MetricSpec]:
"""Builds the specs, reading the current thresholds from strategy.yaml."""
funding_max: float | None = None
dealer_min: float | None = None
dvol_min: float | None = None
if strategy is not None:
try:
funding_max = float(strategy.entry.funding_max_abs_annualized) # type: ignore[attr-defined]
except Exception:
funding_max = None
try:
dealer_min = float(strategy.entry.dealer_gamma_min) # type: ignore[attr-defined]
except Exception:
dealer_min = None
try:
dvol_min = float(strategy.entry.dvol_min) # type: ignore[attr-defined]
except Exception:
dvol_min = None
specs: list[MetricSpec] = [
MetricSpec(
field="dvol",
title="DVOL",
unit="%",
threshold_label=(
f"DVOL min={dvol_min:.0f}" if dvol_min is not None else None
),
threshold_value=dvol_min,
threshold_direction="below",
),
MetricSpec(
field="realized_vol_30d",
title="Realized vol 30d",
unit="%",
threshold_label=None,
threshold_value=None,
threshold_direction="below",
),
MetricSpec(
field="iv_minus_rv",
title="IV − RV (30d)",
unit="%",
threshold_label=None,
threshold_value=None,
threshold_direction="below",
),
MetricSpec(
field="funding_perp_annualized",
title="Funding perp annualized",
unit="frazione",
threshold_label=(
f"|funding| max={funding_max:.2f}"
if funding_max is not None
else None
),
threshold_value=funding_max,
threshold_direction="above_abs",
),
MetricSpec(
field="funding_cross_annualized",
title="Funding cross median annualized",
unit="frazione",
threshold_label=None,
threshold_value=None,
threshold_direction="above_abs",
),
MetricSpec(
field="dealer_net_gamma",
title="Dealer net gamma",
unit="USD",
threshold_label=(
f"min={dealer_min:.0f}"
if dealer_min is not None
else None
),
threshold_value=dealer_min,
threshold_direction="below",
),
MetricSpec(
field="oi_delta_pct_4h",
title="OI delta % (4h)",
unit="%",
threshold_label=None,
threshold_value=None,
threshold_direction="below",
),
]
return specs
def _series(records: list[MarketSnapshotRecord], field: str) -> pd.Series:
values: list[float] = []
for r in records:
v = getattr(r, field, None)
if v is None:
continue
try:
values.append(float(v))
except (TypeError, ValueError):
continue
return pd.Series(values, dtype="float64")
def _percent_blocked(s: pd.Series, spec: MetricSpec) -> float | None:
if spec.threshold_value is None or s.empty:
return None
if spec.threshold_direction == "below":
return float((s < spec.threshold_value).mean())
if spec.threshold_direction == "above_abs":
return float((s.abs() > spec.threshold_value).mean())
if spec.threshold_direction == "above":
return float((s > spec.threshold_value).mean())
return None
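# A minimal worked example of the rule above (hypothetical numbers): for a
# spec with threshold_direction="below" and threshold_value=35, the series
# [30, 40, 50, 20] yields (s < 35).mean() = 0.5, i.e. the current threshold
# would have filtered out half of the collected ticks.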
def _percentiles_strip(s: pd.Series) -> None:
if s.empty:
st.caption("(nessun dato)")
return
quantiles = [0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95]
cols = st.columns(len(quantiles))
for col, q in zip(cols, quantiles, strict=False):
col.metric(f"P{int(q * 100)}", f"{s.quantile(q):.4g}")
def _render_metric(spec: MetricSpec, records: list[MarketSnapshotRecord]) -> None:
s = _series(records, spec.field)
if s.empty:
st.subheader(f"{spec.title}")
st.info(
f"Nessun valore disponibile per `{spec.field}`. "
"Avvia il job `market_snapshot` (engine attivo, cron */15) per "
"popolare la tabella."
)
return
st.subheader(f"{spec.title} ({spec.unit})")
pct_blocked = _percent_blocked(s, spec)
cols = st.columns(4)
cols[0].metric("Tick raccolti", len(s))
cols[1].metric("Min", f"{s.min():.4g}")
cols[2].metric("Max", f"{s.max():.4g}")
cols[3].metric(
"% bloccato dalla soglia",
f"{pct_blocked:.0%}" if pct_blocked is not None else "",
help=(
"Frazione di tick che la soglia di config avrebbe filtrato"
f" se applicata a questa serie ({spec.threshold_direction})."
),
)
fig = go.Figure()
fig.add_trace(go.Histogram(x=s, nbinsx=40, opacity=0.85, name="distrib."))
if spec.threshold_value is not None:
fig.add_vline(
x=spec.threshold_value,
line_dash="dash",
line_color="red",
line_width=2,
annotation_text=spec.threshold_label or f"soglia {spec.threshold_value}",
annotation_position="top",
)
if spec.threshold_direction == "above_abs":
# Also draw the negative bound for symmetric filters.
fig.add_vline(
x=-spec.threshold_value,
line_dash="dash",
line_color="red",
line_width=2,
annotation_text=None,
)
fig.update_layout(
height=280,
margin={"l": 10, "r": 10, "t": 30, "b": 10},
xaxis_title=spec.unit,
yaxis_title="numero tick",
)
st.plotly_chart(fig, use_container_width=True)
_percentiles_strip(s)
def render() -> None:
st.title("📐 Calibrazione")
st.caption(
"Distribuzioni storiche dei segnali raccolti dal job "
"`market_snapshot` (cron */15). Usa i percentili reali per "
"tarare le soglie in `strategy.yaml` invece di valori a istinto."
)
db_path = _resolve_db()
col_a, col_b = st.columns(2)
asset = col_a.selectbox("Asset", options=["ETH", "BTC"], index=0)
window = col_b.selectbox(
"Finestra",
options=[
"Tutto lo storico",
"Ultime 24h",
"Ultimi 7 giorni",
"Ultimi 30 giorni",
],
index=0,
)
now = datetime.now(UTC)
start: datetime | None = None
if window == "Ultime 24h":
start = now - timedelta(hours=24)
elif window == "Ultimi 7 giorni":
start = now - timedelta(days=7)
elif window == "Ultimi 30 giorni":
start = now - timedelta(days=30)
records = load_market_snapshots(
asset=asset, db_path=db_path, start=start, limit=5000
)
if not records:
st.info(
"Nessun snapshot disponibile in questa finestra per "
f"`{asset}`. Avvia l'engine (`cerbero-bite start`) e attendi "
"almeno un tick del job `market_snapshot` (cron */15)."
)
return
st.caption(
f"{len(records)} snapshot · primo {humanize_dt(records[-1].timestamp)} "
f"· ultimo {humanize_dt(records[0].timestamp)}"
)
# fetch_ok count, as a proxy for series quality
n_ok = sum(1 for r in records if r.fetch_ok)
cols = st.columns(3)
cols[0].metric("Snapshot totali", len(records))
cols[1].metric("fetch_ok = true", n_ok)
cols[2].metric(
"Tasso ok",
f"{n_ok / len(records):.0%}" if records else "",
)
st.divider()
# Load strategy.yaml to read the current thresholds (unwrap
# `LoadedConfig` → `StrategyConfig`, as `_metric_specs` reads `entry.*`).
try:
strategy = load_strategy(Path("strategy.yaml")).config
except Exception as exc:
st.warning(
f"Impossibile leggere `strategy.yaml`: {type(exc).__name__}: {exc}"
)
strategy = None
specs = _metric_specs(strategy)
for spec in specs:
_render_metric(spec, records)
st.divider()
render()
@@ -0,0 +1,846 @@
"""Strategia page — operating document + live read of the signals.
Renders the canonical document ``docs/13-strategia-spiegata.md`` and,
above it, a panel showing the latest ``market_snapshots`` tick compared
against the ``strategy.yaml`` thresholds.
The goal is to show immediately, every time the page is opened:
"what the data the bot is collecting right now is for".
The page is read-only: it calls no MCP and does not write to the DB.
"""
from __future__ import annotations
import os
from dataclasses import dataclass
from pathlib import Path
import streamlit as st
from cerbero_bite.config.loader import load_strategy
from cerbero_bite.gui.data_layer import (
DEFAULT_DB_PATH,
humanize_dt,
load_market_snapshots,
)
from cerbero_bite.state.models import MarketSnapshotRecord
_DOC_FILENAME = "13-strategia-spiegata.md"
_DOC_CANDIDATES: tuple[Path, ...] = (
Path("/app/docs") / _DOC_FILENAME, # in-container shipped via Dockerfile
Path(__file__).resolve().parents[4] / "docs" / _DOC_FILENAME, # repo dev
Path(__file__).resolve().parents[3] / "docs" / _DOC_FILENAME,
)
def _resolve_db() -> Path:
return Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
def _load_doc() -> str | None:
for candidate in _DOC_CANDIDATES:
if candidate.is_file():
try:
return candidate.read_text(encoding="utf-8")
except OSError:
continue
return None
@dataclass(frozen=True)
class _GateRow:
label: str
value: str
threshold: str
status: str # "pass" | "fail" | "n/a"
note: str = ""
def _fmt_decimal(v: object, *, fmt: str = "{:.4g}", suffix: str = "") -> str:
if v is None:
return ""
try:
return fmt.format(float(v)) + suffix
except (TypeError, ValueError):
return ""
def _build_gates(
snap: MarketSnapshotRecord, strategy: object
) -> list[_GateRow]:
"""Builds the live-panel rows from the strategy's §2 gates."""
rows: list[_GateRow] = []
entry = getattr(strategy, "entry", None)
structure = getattr(strategy, "structure", None)
# --- DVOL band -------------------------------------------------
dvol_min = float(getattr(entry, "dvol_min", 35.0)) if entry else 35.0
dvol_max = float(getattr(entry, "dvol_max", 90.0)) if entry else 90.0
dvol_v = float(snap.dvol) if snap.dvol is not None else None
if dvol_v is None:
rows.append(
_GateRow(
"DVOL in banda 35–90",
"",
f"{dvol_min:.0f} ≤ DVOL ≤ {dvol_max:.0f}",
"n/a",
"Dato non disponibile in questo tick.",
)
)
else:
ok = dvol_min <= dvol_v <= dvol_max
rows.append(
_GateRow(
"DVOL in banda",
f"{dvol_v:.2f}",
f"{dvol_min:.0f}–{dvol_max:.0f}",
"pass" if ok else "fail",
"Premio adeguato e regime non-stress."
if ok
else "Sotto banda = premio magro; sopra = stress, no entry.",
)
)
# --- Funding perp annualized ----------------------------------
fund_max = (
float(getattr(entry, "funding_perp_abs_max_annualized", 0.80))
if entry
else 0.80
)
fp = (
float(snap.funding_perp_annualized)
if snap.funding_perp_annualized is not None
else None
)
if fp is None:
rows.append(
_GateRow(
"Funding perp |·| ≤ soglia",
"",
f"|f| ≤ {fund_max:.0%}",
"n/a",
)
)
else:
ok = abs(fp) <= fund_max
rows.append(
_GateRow(
"Funding perp |·|",
f"{fp:+.2%}",
f"{fund_max:.0%}",
"pass" if ok else "fail",
"Filtra regimi di liquidazioni a cascata imminenti.",
)
)
# --- Cross-exchange funding (bias) ---------------------------
bull_th = (
float(getattr(entry, "funding_bull_threshold_annualized", 0.20))
if entry
else 0.20
)
bear_th = (
float(getattr(entry, "funding_bear_threshold_annualized", -0.20))
if entry
else -0.20
)
fc = (
float(snap.funding_cross_annualized)
if snap.funding_cross_annualized is not None
else None
)
if fc is None:
bias_funding = ""
rows.append(
_GateRow(
"Funding cross (bias)",
"",
f"bull ≥ {bull_th:+.0%} · bear ≤ {bear_th:+.0%}",
"n/a",
)
)
else:
if fc >= bull_th:
bias_funding = "BULL"
elif fc <= bear_th:
bias_funding = "BEAR"
else:
bias_funding = "NEUTRO"
rows.append(
_GateRow(
"Funding cross (bias)",
f"{fc:+.2%} → {bias_funding}",
f"bull ≥ {bull_th:+.0%} · bear ≤ {bear_th:+.0%}",
"pass" if bias_funding != "NEUTRO" else "fail",
"Mediana 4 maggiori exchange. Discordante col trend = no entry.",
)
)
# --- Macro days to event --------------------------------------
dte_target = (
int(getattr(structure, "dte_target", 18)) if structure else 18
)
macro_d = snap.macro_days_to_event
if macro_d is None:
rows.append(
_GateRow(
"Macro fuori finestra DTE",
"nessun evento",
f"> {dte_target}g",
"pass",
"Nessun evento ad alta severità entro la scadenza target.",
)
)
else:
ok = macro_d > dte_target
rows.append(
_GateRow(
"Macro fuori finestra DTE",
f"{macro_d} g al prossimo",
f"> {dte_target} g",
"pass" if ok else "fail",
"FOMC/CPI/NFP/ECB/Powell entro DTE = no entry.",
)
)
# --- Dealer gamma ---------------------------------------------
gamma_min = (
float(getattr(entry, "dealer_gamma_min", 0.0)) if entry else 0.0
)
gamma_enabled = (
bool(getattr(entry, "dealer_gamma_filter_enabled", True))
if entry
else True
)
g = (
float(snap.dealer_net_gamma)
if snap.dealer_net_gamma is not None
else None
)
if not gamma_enabled:
rows.append(
_GateRow(
"Dealer gamma filter",
_fmt_decimal(g, fmt="{:,.0f}", suffix=" USD")
if g is not None
else "",
"filtro DISABILITATO",
"n/a",
)
)
elif g is None:
rows.append(
_GateRow(
"Dealer net gamma > soglia",
"",
f"> {gamma_min:,.0f} USD",
"n/a",
)
)
else:
ok = g > gamma_min
rows.append(
_GateRow(
"Dealer net gamma",
f"{g:,.0f} USD",
f"> {gamma_min:,.0f} USD",
"pass" if ok else "fail",
"Long-gamma regime sopprime la vol → ideale per vendere spread.",
)
)
# --- Liquidation risks ----------------------------------------
liq_enabled = (
bool(getattr(entry, "liquidation_filter_enabled", True))
if entry
else True
)
long_r = snap.liquidation_long_risk or ""
short_r = snap.liquidation_short_risk or ""
lr_status = "n/a"
if liq_enabled and snap.liquidation_long_risk and snap.liquidation_short_risk:
worst = max(
("low", "med", "high").index(snap.liquidation_long_risk)
if snap.liquidation_long_risk in ("low", "med", "high")
else 0,
("low", "med", "high").index(snap.liquidation_short_risk)
if snap.liquidation_short_risk in ("low", "med", "high")
else 0,
)
lr_status = "fail" if worst == 2 else "pass"
rows.append(
_GateRow(
"Liquidation risk (long / short)",
f"{long_r} / {short_r}",
"non `high`" if liq_enabled else "filtro DISABILITATO",
lr_status,
"Densità di liquidazioni vicino allo spot. `high` su un lato = scarta il setup.",
)
)
# --- IV − RV (richness) — informational only ------------------
rv = (
float(snap.realized_vol_30d) if snap.realized_vol_30d is not None else None
)
iv_minus_rv = (
float(snap.iv_minus_rv) if snap.iv_minus_rv is not None else None
)
rows.append(
_GateRow(
"IV − RV (richness)",
(
f"{iv_minus_rv:+.2f} pt vol"
if iv_minus_rv is not None
else ""
),
"info, > 0 = premio ricco",
"pass" if (iv_minus_rv is not None and iv_minus_rv > 0) else "n/a",
f"RV30={rv:.2f}" if rv is not None else "",
)
)
return rows
def _render_gates(rows: list[_GateRow]) -> None:
icons = {"pass": "✅", "fail": "❌", "n/a": "⚪"}
for r in rows:
icon = icons.get(r.status, "")
col1, col2, col3 = st.columns([4, 4, 4])
col1.markdown(f"{icon} **{r.label}**")
col2.markdown(f"`{r.value}`")
col3.markdown(f"_{r.threshold}_")
if r.note:
st.caption(r.note)
st.divider()
def _profile_caps(strategy: object | None) -> dict[str, float]:
"""Extracts only the sizing levers from a strategy (or falls back to conservative defaults)."""
out = {
"cap_pertrade_eur": 200.0,
"cap_aggregate_eur": 1000.0,
"kelly": 0.13,
"max_n": 4.0,
"max_concurrent": 1.0,
"width_pct": 0.04,
"credit_ratio": 0.30,
"profit_take": 0.50,
"stop_mult": 2.50,
}
if strategy is None:
return out
try:
out["cap_pertrade_eur"] = float(strategy.sizing.cap_per_trade_eur) # type: ignore[attr-defined]
out["cap_aggregate_eur"] = float(strategy.sizing.cap_aggregate_open_eur) # type: ignore[attr-defined]
out["kelly"] = float(strategy.sizing.kelly_fraction) # type: ignore[attr-defined]
out["max_n"] = float(strategy.sizing.max_contracts_per_trade) # type: ignore[attr-defined]
out["max_concurrent"] = float(strategy.sizing.max_concurrent_positions) # type: ignore[attr-defined]
out["width_pct"] = float(strategy.structure.spread_width.target_pct_of_spot) # type: ignore[attr-defined]
out["credit_ratio"] = float(strategy.structure.credit_to_width_ratio_min) # type: ignore[attr-defined]
out["profit_take"] = float(strategy.exit.profit_take_pct_of_credit) # type: ignore[attr-defined]
out["stop_mult"] = float(strategy.exit.stop_loss_mark_x_credit) # type: ignore[attr-defined]
except Exception:
pass
return out
def _detect_features(strategy: object | None) -> dict[str, bool]:
"""Which improvements from the FDAC PR are ACTIVE in this strategy.
- **A** (dynamic delta): `short_strike.delta_by_dvol` non-empty.
- **D** (vol-harvest): `exit.vol_harvest_dvol_decrease > 0`.
- **F** (auto-pause): `auto_pause.enabled = true`.
- **IV** (IV-richness gate, from the previous PR): `entry.iv_minus_rv_filter_enabled`.
"""
feats = {"A": False, "D": False, "F": False, "IV": False}
if strategy is None:
return feats
try:
feats["A"] = bool(
getattr(strategy.structure.short_strike, "delta_by_dvol", []) # type: ignore[attr-defined]
)
except Exception:
pass
try:
feats["D"] = (
float(getattr(strategy.exit, "vol_harvest_dvol_decrease", 0)) > 0 # type: ignore[attr-defined]
)
except Exception:
pass
try:
feats["F"] = bool(
getattr(getattr(strategy, "auto_pause", None), "enabled", False)
)
except Exception:
pass
try:
feats["IV"] = bool(
getattr(strategy.entry, "iv_minus_rv_filter_enabled", False) # type: ignore[attr-defined]
)
except Exception:
pass
return feats
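# Hypothetical strategy.yaml fragment (key names mirror what
# `_detect_features` reads; the values are illustrative only):
#
#   structure:
#     short_strike:
#       delta_by_dvol: [{dvol_max: 60, delta: 0.12}]   # non-empty → A
#   exit:
#     vol_harvest_dvol_decrease: 5                     # > 0 → D
#   auto_pause:
#     enabled: true                                    # → F
#   entry:
#     iv_minus_rv_filter_enabled: true                 # → IV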
def _compute_pl(
caps: dict[str, float],
*,
capital: float,
spot: float,
win_rate: float,
trades_per_year: int,
eur_to_usd: float = 1.075,
features: dict[str, bool] | None = None,
) -> dict[str, float]:
"""Computes the P/L metrics for one sizing profile.
When ``features`` is populated, applies the estimated effects of the
FDAC PR improvements + IV-RV gate:
- ``IV`` (IV-richness gate, §2.9): +5 pp win-rate, −25% trades/year.
- ``A`` (dynamic delta, §3.2): +1.5 pp win-rate, sl_loss × 0.95.
- ``D`` (vol-harvest, §7-bis): 5% of the would-be losses become
harvest exits at +0.20 × credit.
- ``F`` (auto-pause, §7-bis): −8% trades/year (skip-week after a
streak); in the expected-drawdown math, streak_99 is capped at
lookback_trades=5.
Effects are **ex-ante estimates** from the systematic short-vol
literature; the point values must be calibrated on the accumulated dataset.
"""
feats = features or {}
width = caps["width_pct"] * spot
credit = caps["credit_ratio"] * width
tp_profit = caps["profit_take"] * credit
sl_loss = (caps["stop_mult"] - 1.0) * credit
# === Improvement effects ===========================================
win_rate_eff = win_rate
trades_eff = float(trades_per_year)
sl_loss_eff = sl_loss
extra_harvest_ev = 0.0
prob_harvest = 0.0
if feats.get("IV"):
# More aggressive skipping + better trade quality: +5 pp win, −25% trades.
win_rate_eff = min(0.95, win_rate_eff + 0.05)
trades_eff *= 0.75
if feats.get("A"):
# Better strike picking → +1.5 pp win-rate; loss tail
# reduced by 5% for the high-DVOL bands.
win_rate_eff = min(0.95, win_rate_eff + 0.015)
sl_loss_eff *= 0.95
if feats.get("D"):
# Vol-harvest: ~5% of entries intercepted before the stop
# with a small profit (+0.20×credit). Subtracts the same
# volume from prob_loss.
prob_harvest = 0.05
extra_harvest_ev = 0.20 * credit
# F (auto-pause) acts on streak_99 further down, and on trades_eff.
if feats.get("F"):
trades_eff *= 0.92
cap_pertrade_usd = caps["cap_pertrade_eur"] * eur_to_usd
risk_target = min(caps["kelly"] * capital, cap_pertrade_usd)
n_kelly = int(risk_target // width) if width > 0 else 0
n_per_trade = max(0, min(n_kelly, int(caps["max_n"])))
prob_time_stop = 0.07
prob_other_stop = 0.03
prob_loss = max(
0.0,
1.0 - win_rate_eff - prob_time_stop - prob_other_stop - prob_harvest,
)
avg_time_stop_pl = 0.10 * credit
e_trade_gross = (
win_rate_eff * tp_profit
- prob_loss * sl_loss_eff
+ prob_time_stop * avg_time_stop_pl
+ prob_harvest * extra_harvest_ev
)
fees = 0.0003 * spot * 2
slippage = 0.03 * credit
e_trade_net = e_trade_gross - fees - slippage
concurrency = max(1.0, caps["max_concurrent"])
annual_pl = trades_eff * n_per_trade * concurrency * e_trade_net
apr = (annual_pl / capital) if capital > 0 else 0.0
return {
"width": width,
"credit": credit,
"tp_profit": tp_profit,
"sl_loss": sl_loss_eff,
"risk_target": risk_target,
"n_per_trade": float(n_per_trade),
"concurrency": concurrency,
"e_trade_net": e_trade_net,
"annual_pl": annual_pl,
"apr": apr,
"fees": fees,
"slippage": slippage,
"win_rate_eff": win_rate_eff,
"trades_eff": trades_eff,
"prob_loss": prob_loss,
"prob_harvest": prob_harvest,
}
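# Worked example of the base formula above (no features), using the
# conservative defaults from `_profile_caps` with spot=3000, win_rate=0.75:
#   width     = 0.04 × 3000            = 120 USD
#   credit    = 0.30 × 120             = 36 USD
#   tp_profit = 0.50 × 36              = 18 USD
#   sl_loss   = (2.50 − 1.0) × 36      = 54 USD
#   prob_loss = 1 − 0.75 − 0.07 − 0.03 = 0.15
#   E[gross]  = 0.75×18 − 0.15×54 + 0.07×3.6 = 5.652 USD
#   E[net]    = 5.652 − fees(1.8) − slippage(1.08) ≈ 2.77 USD per contract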
def _render_profile_card(
label: str,
caps: dict[str, float],
metrics: dict[str, float],
badge: str,
features: dict[str, bool] | None = None,
metrics_base: dict[str, float] | None = None,
) -> None:
"""Renders one profile (conservative or aggressive) in a column."""
st.markdown(f"### {label} {badge}")
st.caption(
f"cap/trade {caps['cap_pertrade_eur']:.0f} EUR · "
f"cap aggreg. {caps['cap_aggregate_eur']:.0f} EUR · "
f"max {caps['max_n']:.0f} contratti × "
f"{caps['max_concurrent']:.0f} pos. concorrenti"
)
if features:
active = [k for k, v in features.items() if v]
if active:
st.caption(
"🟢 Miglioramenti attivi: "
+ " · ".join(
{
"IV": "**IV-RV gate**",
"A": "**A** delta dinamico",
"D": "**D** vol-harvest",
"F": "**F** auto-pause",
}.get(k, k)
for k in active
)
)
else:
st.caption("⚪ Nessun miglioramento attivo (formula base)")
cols = st.columns(2)
cols[0].metric("Contratti per trade", f"{metrics['n_per_trade']:.0f}")
cols[1].metric("Posizioni concorrenti", f"{metrics['concurrency']:.0f}")
cols = st.columns(2)
e_delta = (
f"{metrics['e_trade_net'] - metrics_base['e_trade_net']:+.1f}"
if metrics_base
else None
)
pl_delta = (
f"{metrics['annual_pl'] - metrics_base['annual_pl']:+.0f} USD vs base"
if metrics_base
else f"{metrics['apr']:+.1%} APR"
)
cols[0].metric(
"E[trade] netto",
f"{metrics['e_trade_net']:+.1f} USD",
delta=e_delta,
help=(
f"win_rate effettivo={metrics['win_rate_eff']:.0%}, "
f"prob_loss={metrics['prob_loss']:.0%}, "
f"trade/anno={metrics['trades_eff']:.0f}"
),
)
cols[1].metric(
"P/L annuo stimato",
f"{metrics['annual_pl']:+.0f} USD",
delta=f"{metrics['apr']:+.1%} APR" + (
f" ({metrics['annual_pl'] - metrics_base['annual_pl']:+.0f} vs base)"
if metrics_base
else ""
),
)
if metrics["n_per_trade"] == 0:
st.warning(
"Sizing 0 contratti: capitale insufficiente per i cap di "
"questo profilo."
)
def _render_pl_panel(
strategy_main: object | None,
strategy_conservativa: object | None,
strategy_aggressiva: object | None,
) -> None:
"""P/L panel: Conservativa vs Aggressiva compared on the same sliders."""
st.subheader("💰 P/L atteso — Conservativa vs Aggressiva")
st.caption(
"Stessi slider, due profili di sizing. **Conservativa** = la "
"golden config attuale (`strategy.yaml`). **Aggressiva** = "
"`strategy.aggressiva.yaml` con cap_per_trade 4×, max contratti "
"4×, 2 posizioni concorrenti. Le regole §2-§9 sono identiche; "
"cambiano SOLO le leve di sizing — quello che il P/L "
"conservativo lascia sul tavolo."
)
col_a, col_b, col_c, col_d = st.columns(4)
capital = col_a.slider(
"Capitale (USD)", 720, 50_000, value=10_000, step=100
)
spot = col_b.slider("Spot ETH (USD)", 1500, 6000, value=3000, step=100)
win_rate = col_c.slider(
"Win rate atteso", 0.50, 0.90, value=0.75, step=0.01,
help=(
"Senza filtri quant ≈ 0.65–0.70. CON filtri (dealer gamma>0, "
"no macro, IV−RV>0, liquidation_*_risk≠high) sale a 0.75–0.80."
),
)
trades_per_year = col_d.slider(
"Trade / anno (post-filtri)", 8, 30, value=18, step=1,
help="52 lunedì × probabilità di superare i filtri (30–50%).",
)
cons_caps = _profile_caps(strategy_conservativa or strategy_main)
aggr_caps = _profile_caps(strategy_aggressiva)
cons_feats = _detect_features(strategy_conservativa or strategy_main)
aggr_feats = _detect_features(strategy_aggressiva)
apply_features = st.checkbox(
"Applica gli effetti dei miglioramenti FDAC + IV-RV gate "
"letti dai due `strategy.*.yaml`",
value=True,
help=(
"Quando ON, ogni colonna applica gli effetti stimati delle "
"feature attive nel rispettivo profilo. OFF = formula base "
"(senza miglioramenti) per confronto pulito."
),
)
feats_cons = cons_feats if apply_features else {}
feats_aggr = aggr_feats if apply_features else {}
# "Base" runs (no features) for the delta shown in the cards.
cons_base = _compute_pl(
cons_caps,
capital=capital,
spot=spot,
win_rate=win_rate,
trades_per_year=trades_per_year,
)
aggr_base = _compute_pl(
aggr_caps,
capital=capital,
spot=spot,
win_rate=win_rate,
trades_per_year=trades_per_year,
)
cons = _compute_pl(
cons_caps,
capital=capital,
spot=spot,
win_rate=win_rate,
trades_per_year=trades_per_year,
features=feats_cons,
)
aggr = _compute_pl(
aggr_caps,
capital=capital,
spot=spot,
win_rate=win_rate,
trades_per_year=trades_per_year,
features=feats_aggr,
)
col_cons, col_aggr = st.columns(2)
with col_cons:
_render_profile_card(
"🛡️ Conservativa",
cons_caps,
cons,
"_(golden config v1.2.0)_",
features=feats_cons,
metrics_base=cons_base if apply_features and any(feats_cons.values()) else None,
)
with col_aggr:
_render_profile_card(
"🔥 Aggressiva",
aggr_caps,
aggr,
"_(deroga §11, richiede paper trading)_",
features=feats_aggr,
metrics_base=aggr_base if apply_features and any(feats_aggr.values()) else None,
)
if aggr["annual_pl"] > 0 and cons["annual_pl"] > 0:
ratio = aggr["annual_pl"] / cons["annual_pl"]
st.success(
f"Profilo aggressivo: P/L atteso ≈ **{ratio:.1f}× il "
f"conservativo** ({aggr['apr']:+.1%} vs {cons['apr']:+.1%} "
"APR). Drawdown atteso scala con lo stesso fattore."
)
if win_rate < 0.72:
st.error(
"**Win rate sotto 0.72: entrambi i profili perdono soldi.** "
"Selling vol nudo è strutturalmente neutro qui. L'edge della "
"strategia sono i FILTRI (dealer gamma>0, no macro, "
"liquidation≠high, bias chiaro) che alzano il win rate sopra "
"lo 0.75. Senza filtri attivi nessuno dei due profili è "
"viable."
)
# === Mini-table: marginal contribution of each feature =====
if apply_features and (any(feats_cons.values()) or any(feats_aggr.values())):
st.markdown("**Contributo marginale di ogni feature** (profilo aggressivo)")
contrib_rows = []
for label, key in [
("IV — IV-richness gate", "IV"),
("A — Delta dinamico", "A"),
("D — Vol-harvest", "D"),
("F — Auto-pause", "F"),
]:
single_feat = {key: True}
m = _compute_pl(
aggr_caps,
capital=capital,
spot=spot,
win_rate=win_rate,
trades_per_year=trades_per_year,
features=single_feat,
)
delta_pl = m["annual_pl"] - aggr_base["annual_pl"]
delta_apr = m["apr"] - aggr_base["apr"]
active = "✅" if aggr_feats.get(key) else "—"
contrib_rows.append(
{
"Feature": label,
"Attiva nel YAML": active,
"ΔP/L annuo (solo questa)": f"{delta_pl:+.0f} USD",
"ΔAPR": f"{delta_apr:+.1%}",
}
)
st.table(contrib_rows)
st.caption(
"Ogni riga mostra il contributo della SINGOLA feature (le altre "
"spente). Effetti stimati ex-ante; calibrabili sui dati "
"raccolti via `📐 Calibrazione`."
)
# Win-rate sensitivity for the aggressive profile (more informative)
st.markdown("**Sensibilità al win rate** (profilo aggressivo)")
sens_rows = []
for wr in (0.65, 0.70, 0.72, 0.75, 0.78, 0.80, 0.82):
m_a = _compute_pl(
aggr_caps,
capital=capital,
spot=spot,
win_rate=wr,
trades_per_year=trades_per_year,
features=feats_aggr,
)
m_c = _compute_pl(
cons_caps,
capital=capital,
spot=spot,
win_rate=wr,
trades_per_year=trades_per_year,
features=feats_cons,
)
sens_rows.append(
{
"Win rate": f"{wr:.0%}",
"Conservativa P/L": f"{m_c['annual_pl']:+.0f} USD",
"Conservativa APR": f"{m_c['apr']:+.1%}",
"Aggressiva P/L": f"{m_a['annual_pl']:+.0f} USD",
"Aggressiva APR": f"{m_a['apr']:+.1%}",
}
)
st.table(sens_rows)
st.caption(
"Costi: fee 0.03% notional × 2 leg, slippage 3% del credito "
"(combo limit GTC al mid). Distribuzione esiti: profit-take = "
"win_rate, time-stop ≈ 7%, altri-stop ≈ 3%, stop-loss = il resto. "
"**Multi-asset (ETH+BTC) non è incluso nei numeri**: richiede "
"modifiche di codice (single-asset attuale). Il moltiplicatore "
"2× citato nel doc è la stima ex-ante di cosa otterresti DOPO."
)
def render() -> None:
st.title("📚 Strategia")
st.caption(
"Documento operativo che lega ogni regola del rule engine al "
"dato osservabile da cui dipende. Il pannello live in alto mostra "
"l'ultimo tick di `market_snapshots` confrontato con le soglie di "
"`strategy.yaml`."
)
db_path = _resolve_db()
asset = st.selectbox("Asset", options=["ETH", "BTC"], index=0)
records = load_market_snapshots(asset=asset, db_path=db_path, limit=1)
def _try_load(name: str) -> object | None:
for base in (Path("/app"), Path.cwd(), Path(__file__).resolve().parents[4]):
path = base / name
if path.is_file():
try:
# `_profile_caps` legge `.sizing.*` direttamente sul
# `StrategyConfig`, non sul wrapper `LoadedConfig`.
return load_strategy(path).config
except Exception as exc:
st.warning(
f"`{name}`: {type(exc).__name__}: {exc}"
)
return None
return None
strategy = _try_load("strategy.yaml")
strategy_conservativa = _try_load("strategy.conservativa.yaml")
strategy_aggressiva = _try_load("strategy.aggressiva.yaml")
st.divider()
st.subheader("📡 Stato live dei gate di entry §2")
if not records:
st.info(
"Nessuno snapshot disponibile per "
f"`{asset}`. Il job `market_snapshot` (cron `*/15`) deve "
"girare almeno una volta. Engine attivo? Controlla la pagina "
"`📊 Status`."
)
else:
latest = records[0]
st.caption(
f"Ultimo tick: {humanize_dt(latest.timestamp)} · "
f"asset {latest.asset} · "
f"fetch_ok = {'' if latest.fetch_ok else '⚠️'}"
)
if strategy is None:
st.warning(
"Senza `strategy.yaml` non posso valutare i gate; mostro "
"solo i valori grezzi."
)
st.json(latest.model_dump(mode="json"))
else:
rows = _build_gates(latest, strategy)
_render_gates(rows)
st.divider()
_render_pl_panel(strategy, strategy_conservativa, strategy_aggressiva)
st.divider()
st.subheader("📖 Documento esteso")
doc = _load_doc()
if doc is None:
st.error(
"Documento `docs/13-strategia-spiegata.md` non trovato. In "
"locale verifica il path; in container assicurati che il "
"Dockerfile copi `docs/` in `/app/docs/`."
)
else:
st.markdown(doc, unsafe_allow_html=False)
render()
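The cost model stated in the sensitivity caption above (fee 0.03% of notional × 2 legs, slippage 3% of the credit, exits split between profit-take, time-stop, other stops and stop-loss) can be sketched as a stand-alone expectation. This is an illustrative reconstruction, not the real `_compute_pl`: the stop-loss multiple and the time-stop payoff are assumed values.

```python
def expected_trade_pl(
    credit: float,              # combo credit received per trade, USD
    notional: float,            # notional per leg, USD
    win_rate: float,            # probability of the profit-take exit
    sl_loss_mult: float = 2.0,  # assumed stop-loss = 2x credit (illustrative)
) -> float:
    """Expected P/L per trade under the caption's cost model (assumed payoffs)."""
    fees = 0.0003 * notional * 2      # fee: 0.03% of notional x 2 legs
    slippage = 0.03 * credit          # slippage: 3% of the credit (combo at mid)
    p_time_stop = 0.07                # time-stop share of exits (from the caption)
    p_other = 0.03                    # other-stop share of exits
    p_loss = max(0.0, 1.0 - win_rate - p_time_stop - p_other)  # stop-loss = the rest
    gross = (
        win_rate * credit             # profit-take keeps the full credit (simplified)
        + p_time_stop * 0.5 * credit  # time-stop keeps ~half the credit (assumption)
        + p_other * 0.0               # other stops assumed flat
        - p_loss * sl_loss_mult * credit
    )
    return gross - fees - slippage
```

With `credit=100`, `notional=10_000` and `win_rate=0.80` this yields ≈ +54.5 USD per trade; under these assumed payoffs the expectation is roughly linear in the win rate, which is why the panel's sensitivity table sweeps that single variable.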
+4 -1
View File
@@ -71,8 +71,11 @@ class AlertManager:
return
if severity == Severity.MEDIUM:
# The TelegramClient already prefixes [PRIORITY][tag] in the
# rendered text, so we pass the raw message and let the
# client compose the final form.
await self._telegram.notify(
f"[{source}] {message}", priority="high", tag=source
message, priority="high", tag=source
)
return
+175
View File
@@ -0,0 +1,175 @@
"""Auto-pause circuit breaker (§7-bis F).
Pure-function evaluation that consults `system_state.auto_pause_until`
and the rolling P/L of the last N closed positions to decide whether
the engine should skip an entry cycle.
Two responsibilities, both deterministic at call time:
* :func:`is_paused` — returns ``True`` when the persisted
``auto_pause_until`` is in the future. Independent from the kill
switch, which targets technical errors.
* :func:`evaluate_drawdown_breach` — given the last N closed P/Ls and
the current capital, returns whether the rolling drawdown breached
the configured ``max_drawdown_pct`` threshold. The orchestrator
layer is the one that flips the persisted state on breach (this
module stays I/O-free for testability).
The two are separated on purpose: ``is_paused`` is the cheap,
read-only gate consulted at the start of every entry cycle; the
breach evaluation runs once per cycle right after the entry
filtering, before the entry is actually placed.
"""
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timedelta
from decimal import Decimal
from cerbero_bite.config.schema import AutoPauseConfig
from cerbero_bite.state.models import SystemStateRecord
__all__ = [
"AutoPauseDecision",
"PauseStatus",
"evaluate_drawdown_breach",
"is_paused",
"pause_until",
]
@dataclass(frozen=True)
class PauseStatus:
"""Snapshot del flag di auto-pausa al momento della valutazione."""
paused: bool
until: datetime | None
reason: str | None
@dataclass(frozen=True)
class AutoPauseDecision:
"""Esito di :func:`evaluate_drawdown_breach`."""
should_pause: bool
cumulative_pnl_usd: Decimal
drawdown_pct: Decimal
threshold_pct: Decimal
reason: str | None
def is_paused(
state: SystemStateRecord | None, *, now: datetime
) -> PauseStatus:
"""Restituisce lo stato della pausa rispetto a ``now``.
``state == None`` o ``auto_pause_until == None`` o
``auto_pause_until <= now`` ⇒ engine attivo.
"""
if state is None or state.auto_pause_until is None:
return PauseStatus(paused=False, until=None, reason=None)
until = state.auto_pause_until
if until.tzinfo is not None and now.tzinfo is None:
# Consistency: if the persisted value is tz-aware, normalise ``now`` to match.
return PauseStatus(
paused=until > now.replace(tzinfo=until.tzinfo),
until=until,
reason=state.auto_pause_reason,
)
return PauseStatus(
paused=until > now,
until=until,
reason=state.auto_pause_reason,
)
def pause_until(now: datetime, weeks: int) -> datetime:
"""Calcola la scadenza della pausa (``now + weeks``).
Estratto in funzione separata per facilitare i test e per ricordare
che la pausa è espressa in **settimane** (la strategia ha cron
settimanale; pause più corte non avrebbero modo di evitare una
settimana di entry).
"""
return now + timedelta(weeks=max(1, weeks))
def evaluate_drawdown_breach(
*,
cfg: AutoPauseConfig,
recent_pnl_usd: list[Decimal],
capital_usd: Decimal,
) -> AutoPauseDecision:
"""Decide se la pausa va armata ora dato il rolling P/L.
Regola: se la somma dei P/L delle ultime ``cfg.lookback_trades``
posizioni chiuse è negativa e in valore assoluto eccede
``cfg.max_drawdown_pct × capital_usd``, ritorna
``should_pause=True``. Tutte le altre condizioni → False.
``cfg.enabled=False`` → ritorna sempre False (filtro disabilitato).
Lookback insufficiente → ritorna False (non scattiamo finché non
abbiamo abbastanza storia per giudicare).
"""
threshold_pct = cfg.max_drawdown_pct
cumulative = sum(recent_pnl_usd, start=Decimal("0"))
if not cfg.enabled:
return AutoPauseDecision(
should_pause=False,
cumulative_pnl_usd=cumulative,
drawdown_pct=Decimal("0"),
threshold_pct=threshold_pct,
reason=None,
)
if len(recent_pnl_usd) < cfg.lookback_trades:
return AutoPauseDecision(
should_pause=False,
cumulative_pnl_usd=cumulative,
drawdown_pct=Decimal("0"),
threshold_pct=threshold_pct,
reason=None,
)
if capital_usd <= 0:
return AutoPauseDecision(
should_pause=False,
cumulative_pnl_usd=cumulative,
drawdown_pct=Decimal("0"),
threshold_pct=threshold_pct,
reason=None,
)
# Only losses matter: cumulative wins never trigger the pause.
if cumulative >= 0:
return AutoPauseDecision(
should_pause=False,
cumulative_pnl_usd=cumulative,
drawdown_pct=cumulative / capital_usd,
threshold_pct=threshold_pct,
reason=None,
)
drawdown_pct = (-cumulative) / capital_usd
if drawdown_pct >= threshold_pct:
return AutoPauseDecision(
should_pause=True,
cumulative_pnl_usd=cumulative,
drawdown_pct=drawdown_pct,
threshold_pct=threshold_pct,
reason=(
f"rolling DD {drawdown_pct:.2%}{threshold_pct:.2%} "
f"(last {cfg.lookback_trades} trades, "
f"cumulative {cumulative} USD)"
),
)
return AutoPauseDecision(
should_pause=False,
cumulative_pnl_usd=cumulative,
drawdown_pct=drawdown_pct,
threshold_pct=threshold_pct,
reason=None,
)
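The decision rule above can be exercised in isolation. A minimal sketch, with a hypothetical `Cfg` standing in for `AutoPauseConfig`:

```python
from dataclasses import dataclass
from decimal import Decimal


@dataclass(frozen=True)
class Cfg:  # hypothetical stand-in for AutoPauseConfig
    enabled: bool = True
    lookback_trades: int = 5
    max_drawdown_pct: Decimal = Decimal("0.10")


def should_pause(cfg: Cfg, pnls: list[Decimal], capital: Decimal) -> bool:
    # Mirror of evaluate_drawdown_breach's decision rule, I/O-free.
    if not cfg.enabled or len(pnls) < cfg.lookback_trades or capital <= 0:
        return False
    cumulative = sum(pnls, start=Decimal("0"))
    if cumulative >= 0:
        return False
    return (-cumulative) / capital >= cfg.max_drawdown_pct


# Five closed trades at -300 USD on 10k capital -> 15% rolling DD >= 10%.
print(should_pause(Cfg(), [Decimal("-300")] * 5, Decimal("10000")))  # True
```

Note how the lookback guard keeps the breaker silent until `lookback_trades` positions have closed: four losing trades, however deep, return `False`.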
+24 -8
View File
@@ -16,13 +16,13 @@ from pathlib import Path
import httpx
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients._base import DEFAULT_BOT_TAG, HttpToolClient
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.clients.hyperliquid import HyperliquidClient
from cerbero_bite.clients.macro import MacroClient
from cerbero_bite.clients.portfolio import PortfolioClient
from cerbero_bite.clients.sentiment import SentimentClient
from cerbero_bite.clients.telegram import TelegramClient
from cerbero_bite.clients.telegram import TelegramClient, load_telegram_credentials
from cerbero_bite.config.mcp_endpoints import McpEndpoints
from cerbero_bite.config.schema import StrategyConfig
from cerbero_bite.runtime.alert_manager import AlertManager
@@ -78,6 +78,7 @@ def build_runtime(
token: str,
db_path: Path | str,
audit_path: Path | str,
bot_tag: str = DEFAULT_BOT_TAG,
timeout_s: float = 8.0,
retry_max: int = 3,
clock: Callable[[], datetime] | None = None,
@@ -140,16 +141,31 @@ def build_runtime(
service=service,
base_url=endpoints.for_service(service),
token=token,
bot_tag=bot_tag,
timeout_s=timeout_s,
retry_max=retry_max,
client=http_client,
)
telegram = TelegramClient(_client("telegram"))
bot_token, chat_id = load_telegram_credentials()
telegram = TelegramClient(
bot_token=bot_token,
chat_id=chat_id,
http_client=http_client,
timeout_s=timeout_s,
)
alert_manager = AlertManager(
telegram=telegram, audit_log=audit_log, kill_switch=kill_switch
)
deribit = DeribitClient(_client("deribit"))
macro = MacroClient(_client("macro"))
sentiment = SentimentClient(_client("sentiment"))
hyperliquid = HyperliquidClient(_client("hyperliquid"))
portfolio = PortfolioClient(
deribit=deribit, hyperliquid=hyperliquid, macro=macro
)
return RuntimeContext(
cfg=cfg,
db_path=db_path,
@@ -158,11 +174,11 @@ def build_runtime(
audit_log=audit_log,
kill_switch=kill_switch,
alert_manager=alert_manager,
deribit=DeribitClient(_client("deribit")),
macro=MacroClient(_client("macro")),
sentiment=SentimentClient(_client("sentiment")),
hyperliquid=HyperliquidClient(_client("hyperliquid")),
portfolio=PortfolioClient(_client("portfolio")),
deribit=deribit,
macro=macro,
sentiment=sentiment,
hyperliquid=hyperliquid,
portfolio=portfolio,
telegram=telegram,
http_client=http_client,
clock=clk,
+68 -1
View File
@@ -38,6 +38,7 @@ from cerbero_bite.core.entry_validator import (
from cerbero_bite.core.liquidity_gate import InstrumentSnapshot, check
from cerbero_bite.core.sizing_engine import SizingContext, compute_contracts
from cerbero_bite.core.types import OptionQuote
from cerbero_bite.runtime import auto_pause as auto_pause_module
from cerbero_bite.runtime.alert_manager import AlertManager
from cerbero_bite.runtime.dependencies import RuntimeContext
from cerbero_bite.state import (
@@ -64,6 +65,7 @@ _STATUS_NO_ENTRY = "no_entry"
_STATUS_BROKER_REJECT = "broker_reject"
_STATUS_KILL_SWITCH = "kill_switch_armed"
_STATUS_HAS_OPEN = "has_open_position"
_STATUS_AUTO_PAUSED = "auto_paused"
@dataclass(frozen=True)
@@ -322,6 +324,28 @@ async def run_entry_cycle(
)
return EntryCycleResult(status=_STATUS_KILL_SWITCH, reason="kill_switch")
# §7-bis (F): auto-pause circuit breaker. Read-only consultation
# of the persisted state — the breach evaluation runs later, after
# capital is known.
conn = connect_state(ctx.db_path)
try:
sys_state = ctx.repository.get_system_state(conn)
finally:
conn.close()
pause_status = auto_pause_module.is_paused(sys_state, now=when)
if pause_status.paused:
await alert.low(
source="entry_cycle",
message=(
f"auto-paused until {pause_status.until} "
f"({pause_status.reason or 'no reason'}) — skipping"
),
)
return EntryCycleResult(
status=_STATUS_AUTO_PAUSED,
reason=pause_status.reason or "auto_paused",
)
# Has open position?
conn = connect_state(ctx.db_path)
try:
@@ -344,6 +368,44 @@ async def run_entry_cycle(
)
capital_usd = snap.portfolio_eur * eur_to_usd_rate
# §7-bis (F): rolling drawdown breach evaluation. If the last N
# closed positions cumulated losses beyond the threshold, arm the
# pause and bail out immediately (this cycle's entry is skipped).
auto_cfg = cfg.auto_pause
if auto_cfg.enabled:
conn = connect_state(ctx.db_path)
try:
recent_pnls = ctx.repository.recent_closed_position_pnls_usd(
conn, limit=auto_cfg.lookback_trades
)
finally:
conn.close()
breach = auto_pause_module.evaluate_drawdown_breach(
cfg=auto_cfg,
recent_pnl_usd=recent_pnls,
capital_usd=capital_usd,
)
if breach.should_pause:
until = auto_pause_module.pause_until(when, auto_cfg.pause_weeks)
conn = connect_state(ctx.db_path)
try:
with transaction(conn):
ctx.repository.set_auto_pause(
conn, until=until, reason=breach.reason
)
finally:
conn.close()
await alert.high(
source="entry_cycle",
message=(
f"auto-pause armed: {breach.reason} — paused until {until}"
),
)
return EntryCycleResult(
status=_STATUS_AUTO_PAUSED,
reason=breach.reason or "auto_paused",
)
# 2. Entry filters
entry_ctx = EntryContext(
capital_usd=capital_usd,
@@ -436,7 +498,12 @@ async def run_entry_cycle(
)
quotes = await _build_quotes(ctx.deribit, chain_meta)
selection = select_strikes(
chain=quotes, bias=bias, spot=snap.spot_eth_usd, now=when, cfg=cfg
chain=quotes,
bias=bias,
spot=snap.spot_eth_usd,
now=when,
cfg=cfg,
dvol_now=snap.dvol,  # §3.2 (A) — strike picker depends on the DVOL regime
)
if selection is None:
await _record_decision(
-1
View File
@@ -66,7 +66,6 @@ class HealthCheck:
_probe("macro", self._ctx.macro.get_calendar(days=1)),
_probe("sentiment", self._probe_sentiment()),
_probe("hyperliquid", self._ctx.hyperliquid.funding_rate_annualized("ETH")),
_probe("portfolio", self._ctx.portfolio.total_equity_eur()),
)
# SQLite health: lightweight transaction.
@@ -0,0 +1,138 @@
"""Consumer of the ``manual_actions`` queue.
The GUI (and other out-of-band tooling) records operator intent in the
SQLite ``manual_actions`` table; this consumer pulls those rows and
dispatches them through the same primitives the engine uses internally
(``KillSwitch.arm`` / ``disarm``, ``Orchestrator.run_*``) so the audit
chain remains the single source of truth for state transitions.
Supported kinds:
* ``arm_kill`` — payload ``{"reason": str}``; arms the kill switch.
* ``disarm_kill`` — payload ``{"reason": str}``; disarms it.
* ``run_cycle`` — payload ``{"cycle": "entry"|"monitor"|"health"}``;
forces an immediate run of the named cycle. Only available when the
consumer is invoked with a ``cycle_runners`` mapping (the orchestrator
populates it at scheduler-install time).
Future kinds (``force_close``, ``approve_proposal``,
``reject_proposal``) are recognised by the ``ManualAction`` schema but
not yet wired up — the consumer marks them as
``result="not_supported"`` so they don't sit in the queue forever.
"""
from __future__ import annotations
import json
import logging
from collections.abc import Awaitable, Callable
from datetime import UTC, datetime
from typing import TYPE_CHECKING
from cerbero_bite.safety.kill_switch import KillSwitchError
from cerbero_bite.state import connect, transaction
if TYPE_CHECKING:
from cerbero_bite.runtime.dependencies import RuntimeContext
__all__ = ["CycleRunner", "consume_manual_actions"]
CycleRunner = Callable[[], Awaitable[object]]
_log = logging.getLogger("cerbero_bite.runtime.manual_actions")
_CONSUMER_ID = "engine"
def _parse_payload(raw: str | None) -> dict[str, object]:
if not raw:
return {}
try:
parsed = json.loads(raw)
except (TypeError, ValueError):
return {}
return parsed if isinstance(parsed, dict) else {}
async def consume_manual_actions(
ctx: RuntimeContext,
*,
cycle_runners: dict[str, CycleRunner] | None = None,
now: datetime | None = None,
) -> int:
"""Drain the queue. Return the number of actions processed.
The function is synchronous at heart (SQLite + KillSwitch), but kept
``async def`` so the orchestrator can register it as an APScheduler
coroutine without an extra wrapper. Each iteration fetches the next
unconsumed row and processes it; the loop terminates when the queue
is empty so a single tick can catch up after a long pause.
"""
reference = (now or datetime.now(UTC)).astimezone(UTC)
processed = 0
while True:
conn = connect(ctx.db_path)
try:
action = ctx.repository.next_unconsumed_action(conn)
finally:
conn.close()
if action is None:
break
if action.id is None:
_log.warning("manual_action without id, skipping")
break
payload = _parse_payload(action.payload_json)
result = "ok"
try:
if action.kind == "arm_kill":
reason = str(payload.get("reason", "manual via GUI"))
ctx.kill_switch.arm(reason=reason, source="manual_gui")
elif action.kind == "disarm_kill":
reason = str(payload.get("reason", "manual via GUI"))
ctx.kill_switch.disarm(reason=reason, source="manual_gui")
elif action.kind == "run_cycle":
cycle = str(payload.get("cycle", "")).strip().lower()
if cycle_runners is None:
result = "not_supported"
_log.warning(
"run_cycle dispatched without cycle_runners; "
"falling back to not_supported"
)
elif cycle not in cycle_runners:
result = f"error: unknown cycle '{cycle}'"
else:
await cycle_runners[cycle]()
result = f"ok: ran {cycle}"
else:
result = "not_supported"
_log.warning(
"manual_action kind=%s not supported yet", action.kind
)
except KillSwitchError as exc:
_log.exception("kill switch transition failed")
result = f"error: {type(exc).__name__}: {exc}"
except Exception as exc: # pragma: no cover — defensive
_log.exception("manual_action dispatch failed")
result = f"error: {type(exc).__name__}: {exc}"
conn = connect(ctx.db_path)
try:
with transaction(conn):
ctx.repository.mark_action_consumed(
conn,
action.id,
consumed_by=_CONSUMER_ID,
result=result,
now=reference,
)
finally:
conn.close()
processed += 1
if processed:
_log.info("processed %d manual_actions", processed)
return processed
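The defensive payload parsing the consumer relies on can be exercised stand-alone — any malformed or non-dict JSON degrades to `{}` instead of raising, so a bad GUI row can never wedge the queue:

```python
from __future__ import annotations

import json


def parse_payload(raw: str | None) -> dict[str, object]:
    # Same contract as _parse_payload: any bad input -> empty dict.
    if not raw:
        return {}
    try:
        parsed = json.loads(raw)
    except (TypeError, ValueError):
        return {}
    return parsed if isinstance(parsed, dict) else {}


print(parse_payload('{"reason": "ops"}'))  # {'reason': 'ops'}
print(parse_payload("not json"))           # {}
print(parse_payload("[1, 2]"))             # {}
```

`json.JSONDecodeError` is a subclass of `ValueError`, so the single `except (TypeError, ValueError)` clause covers both `None`-ish inputs and syntactically broken JSON.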
@@ -0,0 +1,192 @@
"""Periodic market-snapshot collector.
Drives the ``market_snapshots`` table populated by the scheduler job
``market_snapshot`` (cron */15 by default). For every traded asset the
collector calls the same MCP feeds the entry/monitor cycles consume,
but in **best-effort mode**: a single failure leaves the corresponding
column NULL and the row is still persisted, with an error map in
``fetch_errors_json`` for debugging. This keeps the time series
continuous even when one of the feeds is briefly down — the
distributions are what matters for threshold calibration, not the
real-time correctness of any single tick.
"""
from __future__ import annotations
import json
import logging
from collections.abc import Awaitable, Callable
from datetime import UTC, datetime
from decimal import Decimal
from typing import TYPE_CHECKING, Any
from cerbero_bite.clients._exceptions import McpError
from cerbero_bite.state import connect, transaction
from cerbero_bite.state.models import MarketSnapshotRecord
if TYPE_CHECKING:
from cerbero_bite.runtime.dependencies import RuntimeContext
__all__ = ["DEFAULT_ASSETS", "collect_market_snapshot"]
_log = logging.getLogger("cerbero_bite.runtime.market_snapshot")
DEFAULT_ASSETS: tuple[str, ...] = ("ETH", "BTC")
async def _safe_call(
label: str,
factory: Callable[[], Awaitable[Any]],
errors: dict[str, str],
) -> Any:
try:
return await factory()
except Exception as exc:  # pragma: no branch — best-effort (covers McpError)
errors[label] = f"{type(exc).__name__}: {exc}"
return None
def _decimal_or_none(value: Any) -> Decimal | None:
if value is None:
return None
if isinstance(value, Decimal):
return value
try:
return Decimal(str(value))
except (ValueError, ArithmeticError):
return None
async def _collect_one(
ctx: RuntimeContext, asset: str, *, when: datetime
) -> MarketSnapshotRecord:
errors: dict[str, str] = {}
asset_upper = asset.upper()
spot = await _safe_call(
"spot",
lambda: ctx.deribit.spot_perp_price(asset_upper),
errors,
)
dvol_value = await _safe_call(
"dvol",
lambda: ctx.deribit.latest_dvol(currency=asset_upper, now=when),
errors,
)
rv = await _safe_call(
"realized_vol",
lambda: ctx.deribit.realized_vol(asset_upper),
errors,
)
gamma = await _safe_call(
"dealer_gamma",
lambda: ctx.deribit.dealer_gamma_profile(asset_upper),
errors,
)
funding_perp = await _safe_call(
"funding_perp",
lambda: ctx.hyperliquid.funding_rate_annualized(asset_upper),
errors,
)
funding_cross = await _safe_call(
"funding_cross",
lambda: ctx.sentiment.funding_cross_median_annualized(asset_upper),
errors,
)
heatmap = await _safe_call(
"liquidation",
lambda: ctx.sentiment.liquidation_heatmap(asset_upper),
errors,
)
macro_days = await _safe_call(
"macro",
lambda: ctx.macro.next_high_severity_within(
days=ctx.cfg.structure.dte_target,
countries=list(ctx.cfg.entry.exclude_macro_countries),
now=when,
),
errors,
)
rv_30 = (rv or {}).get("rv_30d") if isinstance(rv, dict) else None
iv_minus_rv_30 = (
(rv or {}).get("iv_minus_rv_30d") if isinstance(rv, dict) else None
)
return MarketSnapshotRecord(
timestamp=when,
asset=asset_upper,
spot=_decimal_or_none(spot),
dvol=_decimal_or_none(dvol_value),
realized_vol_30d=_decimal_or_none(rv_30),
iv_minus_rv=_decimal_or_none(iv_minus_rv_30),
funding_perp_annualized=_decimal_or_none(funding_perp),
funding_cross_annualized=_decimal_or_none(funding_cross),
dealer_net_gamma=(
_decimal_or_none(gamma.total_net_dealer_gamma)
if gamma is not None
else None
),
gamma_flip_level=(
_decimal_or_none(gamma.gamma_flip_level)
if gamma is not None
else None
),
oi_delta_pct_4h=(
_decimal_or_none(heatmap.oi_delta_pct_4h)
if heatmap is not None
else None
),
liquidation_long_risk=(
heatmap.long_squeeze_risk if heatmap is not None else None
),
liquidation_short_risk=(
heatmap.short_squeeze_risk if heatmap is not None else None
),
macro_days_to_event=(
int(macro_days) if isinstance(macro_days, int) else None
),
fetch_ok=not errors,
fetch_errors_json=(json.dumps(errors) if errors else None),
)
async def collect_market_snapshot(
ctx: RuntimeContext,
*,
assets: tuple[str, ...] = DEFAULT_ASSETS,
now: datetime | None = None,
) -> int:
"""Collect + persist one snapshot per asset. Returns count persisted.
The function is sync at heart (sequential per asset to keep MCP
load light) but kept ``async def`` so APScheduler can schedule it
directly. A single asset failing does not abort the loop — the
other assets are still snapshotted.
"""
when = (now or datetime.now(UTC)).astimezone(UTC)
persisted = 0
for asset in assets:
try:
record = await _collect_one(ctx, asset, when=when)
except Exception: # pragma: no cover — defensive
_log.exception("snapshot for %s failed catastrophically", asset)
continue
try:
conn = connect(ctx.db_path)
try:
with transaction(conn):
ctx.repository.record_market_snapshot(conn, record)
finally:
conn.close()
persisted += 1
except Exception: # pragma: no cover — defensive
_log.exception("persist snapshot for %s failed", asset)
if persisted:
_log.info("market_snapshot persisted %d row(s)", persisted)
return persisted
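The best-effort contract of `_safe_call` — one failing feed NULLs its metric and files an entry in the error map while the row survives — can be demonstrated stand-alone with two toy feeds:

```python
import asyncio


async def safe_call(label, factory, errors):
    # Best-effort wrapper: record the failure under `label`, return None,
    # never let the exception abort the collection loop.
    try:
        return await factory()
    except Exception as exc:
        errors[label] = f"{type(exc).__name__}: {exc}"
        return None


async def collect():
    errors: dict[str, str] = {}

    async def spot():
        return 3100.5

    async def dvol():
        raise RuntimeError("feed down")

    values = (
        await safe_call("spot", spot, errors),
        await safe_call("dvol", dvol, errors),
    )
    return values, errors


values, errors = asyncio.run(collect())
print(values, errors)  # (3100.5, None) {'dvol': 'RuntimeError: feed down'}
```

The snapshot row would then persist `spot` and leave `dvol` NULL, with `fetch_ok=0` and the error map serialised into `fetch_errors_json`.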
+100 -13
View File
@@ -23,11 +23,17 @@ import structlog
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from cerbero_bite.config.mcp_endpoints import McpEndpoints
from cerbero_bite.config.runtime_flags import RuntimeFlags
from cerbero_bite.config.schema import StrategyConfig
from cerbero_bite.runtime.dependencies import RuntimeContext, build_runtime
from cerbero_bite.runtime.entry_cycle import EntryCycleResult, run_entry_cycle
from cerbero_bite.runtime.health_check import HealthCheck, HealthCheckResult
from cerbero_bite.runtime.lockfile import EngineLock
from cerbero_bite.runtime.manual_actions_consumer import consume_manual_actions
from cerbero_bite.runtime.market_snapshot_cycle import (
DEFAULT_ASSETS,
collect_market_snapshot,
)
from cerbero_bite.runtime.monitor_cycle import MonitorCycleResult, run_monitor_cycle
from cerbero_bite.runtime.recovery import recover_state
from cerbero_bite.runtime.scheduler import JobSpec, build_scheduler
@@ -45,6 +51,8 @@ _CRON_ENTRY = "0 14 * * MON"
_CRON_MONITOR = "0 2,14 * * *"
_CRON_HEALTH = "*/5 * * * *"
_CRON_BACKUP = "0 * * * *"
_CRON_MANUAL_ACTIONS = "*/1 * * * *"
_CRON_MARKET_SNAPSHOT = "*/15 * * * *"
_BACKUP_RETENTION_DAYS = 30
@@ -63,10 +71,12 @@ class Orchestrator:
*,
expected_environment: Environment,
eur_to_usd: Decimal,
flags: RuntimeFlags | None = None,
) -> None:
self._ctx = ctx
self._expected_env = expected_environment
self._eur_to_usd = eur_to_usd
self._flags = flags or RuntimeFlags()
self._health = HealthCheck(ctx, expected_environment=expected_environment)
self._scheduler: AsyncIOScheduler | None = None
@@ -78,6 +88,10 @@ class Orchestrator:
def expected_environment(self) -> Environment:
return self._expected_env
@property
def flags(self) -> RuntimeFlags:
return self._flags
# ------------------------------------------------------------------
# Boot
# ------------------------------------------------------------------
@@ -106,9 +120,18 @@ class Orchestrator:
"environment": info.environment,
"health": health.state,
"config_version": self._ctx.cfg.config_version,
"data_analysis_enabled": self._flags.data_analysis_enabled,
"strategy_enabled": self._flags.strategy_enabled,
},
now=when,
)
_log.info(
"engine started: env=%s health=%s data_analysis=%s strategy=%s",
info.environment,
health.state,
self._flags.data_analysis_enabled,
self._flags.strategy_enabled,
)
return _BootResult(environment=info.environment, health=health)
# ------------------------------------------------------------------
@@ -191,6 +214,9 @@ class Orchestrator:
monitor_cron: str = _CRON_MONITOR,
health_cron: str = _CRON_HEALTH,
backup_cron: str = _CRON_BACKUP,
manual_actions_cron: str = _CRON_MANUAL_ACTIONS,
market_snapshot_cron: str = _CRON_MARKET_SNAPSHOT,
market_snapshot_assets: tuple[str, ...] = DEFAULT_ASSETS,
backup_dir: Path | None = None,
backup_retention_days: int = _BACKUP_RETENTION_DAYS,
) -> AsyncIOScheduler:
@@ -229,14 +255,67 @@ class Orchestrator:
await _safe("backup", _do)
self._scheduler = build_scheduler(
[
JobSpec(name="entry", cron=entry_cron, coro_factory=_entry),
JobSpec(name="monitor", cron=monitor_cron, coro_factory=_monitor),
async def _run_market_snapshot_via_action() -> None:
await collect_market_snapshot(
self._ctx, assets=market_snapshot_assets
)
async def _manual_actions() -> None:
async def _do() -> None:
await consume_manual_actions(
self._ctx,
cycle_runners={
"entry": self.run_entry,
"monitor": self.run_monitor,
"health": self.run_health,
"market_snapshot": _run_market_snapshot_via_action,
},
)
await _safe("manual_actions", _do)
async def _market_snapshot() -> None:
async def _do() -> None:
await collect_market_snapshot(
self._ctx, assets=market_snapshot_assets
)
await _safe("market_snapshot", _do)
jobs: list[JobSpec] = [
JobSpec(name="health", cron=health_cron, coro_factory=_health),
JobSpec(name="backup", cron=backup_cron, coro_factory=_backup),
JobSpec(
name="manual_actions",
cron=manual_actions_cron,
coro_factory=_manual_actions,
),
]
if self._flags.strategy_enabled:
jobs.append(JobSpec(name="entry", cron=entry_cron, coro_factory=_entry))
jobs.append(
JobSpec(name="monitor", cron=monitor_cron, coro_factory=_monitor)
)
else:
_log.warning(
"strategy disabled (CERBERO_BITE_ENABLE_STRATEGY=false): "
"entry and monitor cycles are NOT scheduled"
)
if self._flags.data_analysis_enabled:
jobs.append(
JobSpec(
name="market_snapshot",
cron=market_snapshot_cron,
coro_factory=_market_snapshot,
)
)
else:
_log.warning(
"data analysis disabled (CERBERO_BITE_ENABLE_DATA_ANALYSIS="
"false): market_snapshot job is NOT scheduled"
)
self._scheduler = build_scheduler(jobs)
return self._scheduler
async def run_forever(self, *, lock_path: Path | None = None) -> None:
@@ -329,17 +408,25 @@ def make_orchestrator(
audit_path: Path,
expected_environment: Environment,
eur_to_usd: Decimal,
bot_tag: str | None = None,
flags: RuntimeFlags | None = None,
clock: Callable[[], datetime] | None = None,
) -> Orchestrator:
"""Build a fresh :class:`Orchestrator` ready for ``boot``/``run_*``."""
ctx = build_runtime(
cfg=cfg,
endpoints=endpoints,
token=token,
db_path=db_path,
audit_path=audit_path,
clock=clock or (lambda: datetime.now(UTC)),
)
build_kwargs: dict[str, object] = {
"cfg": cfg,
"endpoints": endpoints,
"token": token,
"db_path": db_path,
"audit_path": audit_path,
"clock": clock or (lambda: datetime.now(UTC)),
}
if bot_tag is not None:
build_kwargs["bot_tag"] = bot_tag
ctx = build_runtime(**build_kwargs) # type: ignore[arg-type]
return Orchestrator(
ctx, expected_environment=expected_environment, eur_to_usd=eur_to_usd
ctx,
expected_environment=expected_environment,
eur_to_usd=eur_to_usd,
flags=flags,
)
@@ -0,0 +1,38 @@
-- 0003_market_snapshots.sql — periodic market snapshot table.
--
-- Populated by the `market_snapshot` scheduler job (cron */15) for
-- every asset traded by the engine (ETH primary, BTC as benchmark).
-- The table backs the "Calibrazione" GUI page: histograms, percentiles
-- and "% of ticks the current threshold would have blocked" let the
-- operator pick filter thresholds from observed distributions instead
-- of guessing.
--
-- Every column except (timestamp, asset, fetch_ok) is NULL-able: a
-- single MCP call may fail and we still want to keep the row so the
-- time series stays continuous. fetch_errors_json carries the per-feed
-- error messages for offline debugging.
CREATE TABLE market_snapshots (
timestamp TEXT NOT NULL,
asset TEXT NOT NULL,
spot NUMERIC,
dvol NUMERIC,
realized_vol_30d NUMERIC,
iv_minus_rv NUMERIC,
funding_perp_annualized NUMERIC,
funding_cross_annualized NUMERIC,
dealer_net_gamma NUMERIC,
gamma_flip_level NUMERIC,
oi_delta_pct_4h NUMERIC,
liquidation_long_risk TEXT,
liquidation_short_risk TEXT,
macro_days_to_event INTEGER,
fetch_ok INTEGER NOT NULL,
fetch_errors_json TEXT,
PRIMARY KEY (timestamp, asset)
);
CREATE INDEX idx_market_snapshots_asset_ts
ON market_snapshots(asset, timestamp DESC);
PRAGMA user_version = 3;
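The composite primary key `(timestamp, asset)` is what makes the collector's `INSERT OR REPLACE` idempotent: re-collecting the same tick overwrites the row instead of duplicating it. A trimmed sketch of that semantics:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE market_snapshots ("
    " timestamp TEXT NOT NULL, asset TEXT NOT NULL,"
    " spot NUMERIC, fetch_ok INTEGER NOT NULL,"
    " PRIMARY KEY (timestamp, asset))"
)
conn.execute(
    "INSERT OR REPLACE INTO market_snapshots VALUES (?,?,?,?)",
    ("2026-05-01T20:15:00Z", "ETH", 3100.5, 1),
)
# Same (timestamp, asset) again, e.g. after a retried job tick:
# the row is replaced, never duplicated.
conn.execute(
    "INSERT OR REPLACE INTO market_snapshots VALUES (?,?,?,?)",
    ("2026-05-01T20:15:00Z", "ETH", 3101.0, 1),
)
count, spot = conn.execute(
    "SELECT COUNT(*), MAX(spot) FROM market_snapshots"
).fetchone()
print(count, spot)  # 1 3101.0
```

A plain `INSERT` would instead raise `sqlite3.IntegrityError` on the second write, which is exactly what the primary key is there to enforce.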
@@ -0,0 +1,14 @@
-- 0004_auto_pause.sql — rolling-drawdown circuit breaker (§7-bis F)
--
-- Adds to `system_state` the timestamp until which the engine is
-- auto-paused because of an over-threshold drawdown. NULL = engine
-- active. When the value is in the future, the rule engine skips the
-- entry cycle and logs the reason.
--
-- Independent from the kill_switch (which stays dedicated to technical
-- errors and explicit manual commands). The two safeguards coexist.
ALTER TABLE system_state ADD COLUMN auto_pause_until TEXT;
ALTER TABLE system_state ADD COLUMN auto_pause_reason TEXT;
PRAGMA user_version = 4;
+33
View File
@@ -21,6 +21,7 @@ __all__ = [
"DvolSnapshot",
"InstructionRecord",
"ManualAction",
"MarketSnapshotRecord",
"PositionRecord",
"PositionStatus",
"SystemStateRecord",
@@ -118,6 +119,35 @@ class DvolSnapshot(BaseModel):
eth_spot: Decimal
class MarketSnapshotRecord(BaseModel):
"""Row of the ``market_snapshots`` table.
Single point in time, single asset. Every numeric field is
optional because the ``market_snapshot`` collector is best-effort:
a single MCP failure NULLs the affected metric without dropping
the row.
"""
model_config = ConfigDict(extra="forbid")
timestamp: datetime
asset: str # "ETH", "BTC"
spot: Decimal | None = None
dvol: Decimal | None = None
realized_vol_30d: Decimal | None = None
iv_minus_rv: Decimal | None = None
funding_perp_annualized: Decimal | None = None
funding_cross_annualized: Decimal | None = None
dealer_net_gamma: Decimal | None = None
gamma_flip_level: Decimal | None = None
oi_delta_pct_4h: Decimal | None = None
liquidation_long_risk: str | None = None
liquidation_short_risk: str | None = None
macro_days_to_event: int | None = None
fetch_ok: bool
fetch_errors_json: str | None = None
class ManualAction(BaseModel):
"""Row of the ``manual_actions`` table."""
@@ -130,6 +160,7 @@ class ManualAction(BaseModel):
"force_close",
"arm_kill",
"disarm_kill",
"run_cycle",
]
proposal_id: UUID | None = None
payload_json: str | None = None
@@ -153,3 +184,5 @@ class SystemStateRecord(BaseModel):
config_version: str
started_at: datetime
last_audit_hash: str | None = None
auto_pause_until: datetime | None = None
auto_pause_reason: str | None = None
+133
@@ -23,6 +23,7 @@ from cerbero_bite.state.models import (
DvolSnapshot,
InstructionRecord,
ManualAction,
MarketSnapshotRecord,
PositionRecord,
PositionStatus,
SystemStateRecord,
@@ -346,6 +347,66 @@ class Repository:
),
)
# ------------------------------------------------------------------
# market_snapshots
# ------------------------------------------------------------------
def record_market_snapshot(
self, conn: sqlite3.Connection, snapshot: MarketSnapshotRecord
) -> None:
conn.execute(
"INSERT OR REPLACE INTO market_snapshots("
"timestamp, asset, spot, dvol, realized_vol_30d, iv_minus_rv, "
"funding_perp_annualized, funding_cross_annualized, "
"dealer_net_gamma, gamma_flip_level, oi_delta_pct_4h, "
"liquidation_long_risk, liquidation_short_risk, "
"macro_days_to_event, fetch_ok, fetch_errors_json) "
"VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",
(
_enc_dt(snapshot.timestamp),
snapshot.asset,
_enc_dec(snapshot.spot),
_enc_dec(snapshot.dvol),
_enc_dec(snapshot.realized_vol_30d),
_enc_dec(snapshot.iv_minus_rv),
_enc_dec(snapshot.funding_perp_annualized),
_enc_dec(snapshot.funding_cross_annualized),
_enc_dec(snapshot.dealer_net_gamma),
_enc_dec(snapshot.gamma_flip_level),
_enc_dec(snapshot.oi_delta_pct_4h),
snapshot.liquidation_long_risk,
snapshot.liquidation_short_risk,
snapshot.macro_days_to_event,
1 if snapshot.fetch_ok else 0,
snapshot.fetch_errors_json,
),
)
def list_market_snapshots(
self,
conn: sqlite3.Connection,
*,
asset: str,
start: datetime | None = None,
end: datetime | None = None,
limit: int = 5000,
) -> list[MarketSnapshotRecord]:
clauses: list[str] = ["asset = ?"]
params: list[Any] = [asset]
if start is not None:
clauses.append("timestamp >= ?")
params.append(_enc_dt(start))
if end is not None:
clauses.append("timestamp <= ?")
params.append(_enc_dt(end))
params.append(int(limit))
rows = conn.execute(
f"SELECT * FROM market_snapshots WHERE {' AND '.join(clauses)} "
f"ORDER BY timestamp DESC LIMIT ?",
params,
).fetchall()
return [_row_to_market_snapshot(r) for r in rows]
# ------------------------------------------------------------------
# manual_actions
# ------------------------------------------------------------------
@@ -427,6 +488,16 @@ class Repository:
last_audit_hash=(
row["last_audit_hash"] if "last_audit_hash" in keys else None
),
auto_pause_until=(
_dec_dt(row["auto_pause_until"])
if "auto_pause_until" in keys
else None
),
auto_pause_reason=(
row["auto_pause_reason"]
if "auto_pause_reason" in keys
else None
),
)
def set_last_audit_hash(
@@ -465,6 +536,43 @@ class Repository:
(_enc_dt(now),),
)
def set_auto_pause(
self,
conn: sqlite3.Connection,
*,
until: datetime | None,
reason: str | None,
) -> None:
        """Set or clear the automatic pause (§7-bis F).
        ``until = None`` clears the pause (the engine becomes active again).
        The setter is idempotent: calling it with an ``until`` already in
        the past is equivalent to a clear.
        """
conn.execute(
"UPDATE system_state SET auto_pause_until = ?, "
"auto_pause_reason = ? WHERE id = 1",
(_enc_dt(until) if until is not None else None, reason),
)
def recent_closed_position_pnls_usd(
self, conn: sqlite3.Connection, *, limit: int
) -> list[Decimal]:
        """Return the ``pnl_usd`` of the last ``limit`` closed positions,
        ordered from most recent to oldest. Positions whose ``pnl_usd``
        is ``NULL`` (e.g. emergency closes with no known P/L) are
        skipped. Used by the §7-bis F circuit breaker.
        """
if limit <= 0:
return []
rows = conn.execute(
"SELECT pnl_usd FROM positions "
"WHERE closed_at IS NOT NULL AND pnl_usd IS NOT NULL "
"ORDER BY closed_at DESC LIMIT ?",
(limit,),
).fetchall()
return [Decimal(row["pnl_usd"]) for row in rows]
# ---------------------------------------------------------------------------
# Row → model converters
@@ -559,6 +667,31 @@ def _row_to_manual(row: sqlite3.Row) -> ManualAction:
)
def _row_to_market_snapshot(row: sqlite3.Row) -> MarketSnapshotRecord:
return MarketSnapshotRecord(
timestamp=_dec_dt_required(row["timestamp"]),
asset=row["asset"],
spot=_dec_dec(row["spot"]),
dvol=_dec_dec(row["dvol"]),
realized_vol_30d=_dec_dec(row["realized_vol_30d"]),
iv_minus_rv=_dec_dec(row["iv_minus_rv"]),
funding_perp_annualized=_dec_dec(row["funding_perp_annualized"]),
funding_cross_annualized=_dec_dec(row["funding_cross_annualized"]),
dealer_net_gamma=_dec_dec(row["dealer_net_gamma"]),
gamma_flip_level=_dec_dec(row["gamma_flip_level"]),
oi_delta_pct_4h=_dec_dec(row["oi_delta_pct_4h"]),
liquidation_long_risk=row["liquidation_long_risk"],
liquidation_short_risk=row["liquidation_short_risk"],
macro_days_to_event=(
int(row["macro_days_to_event"])
if row["macro_days_to_event"] is not None
else None
),
fetch_ok=bool(int(row["fetch_ok"])),
fetch_errors_json=row["fetch_errors_json"],
)
def _dec_dec_required(value: Any) -> Decimal:
out = _dec_dec(value)
if out is None:
+204
@@ -0,0 +1,204 @@
# strategy.aggressiva.yaml — Cerbero Bite, AGGRESSIVE profile
#
# "Growth" profile: significant returns, double-digit drawdowns, higher
# complexity. Explicitly DEROGATES from section §11 "Threshold summary"
# of docs/01-strategy-rules.md (cap_per_trade_eur,
# cap_aggregate_open_eur, max_concurrent_positions). MUST NOT be
# deployed without:
# 1. a dedicated backtest on the collected data
# 2. at least 4 weeks of paper trading
# 3. explicit written authorization in the commit
#
# Expected operating characteristics (vs. the conservative profile):
# * Cap per trade 800 EUR (~860 USD) → 4× the size
# * Aggregate cap 3 200 EUR (~3 440 USD) → 4× the aggregate risk
# * Max 2 concurrent positions (was 1)
# * Max 16 contracts per trade (was 4)
# * Estimated P/L: +5% / +20% APR on deployed capital
# * Expected drawdown: 25-40% of deployed capital during a streak
# * Suited to: "growth" capital, not parked capital
#
# MULTI-ASSET CAVEAT. The current rule engine is single-asset
# (`asset.symbol`). Extending to ETH+BTC requires code changes in:
# * cerbero_bite/runtime/entry_cycle.py (loop over a list of assets)
# * cerbero_bite/state/repository.py (multi-position per asset)
# * cerbero_bite/runtime/orchestrator.py (scheduler one-asset → N)
# Until then this file stays single-asset ETH; the 2× multiplier via
# "ETH + BTC" shown in `📚 Strategia` is an **ex-ante estimate** of
# what you would get AFTER that code work.
config_version: "1.2.0-aggressiva"
config_hash: "e3a583cabfaa4781cd0ebcc8b62fc8f200648153738f93ab8726b062e46cacef"
last_review: "2026-04-26"
last_reviewer: "Adriano"
asset:
symbol: "ETH"
exchange: "deribit"
entry:
cron: "0 14 * * MON"
skip_holidays_country: "IT"
  capital_min_usd: "2880" # 4× the conservative minimum (720)
dvol_min: "35"
dvol_max: "90"
funding_perp_abs_max_annualized: "0.80"
eth_holdings_pct_max: "0.30"
  no_position_concurrent: false # allow N concurrent positions
exclude_macro_severity: ["high"]
exclude_macro_countries: ["US", "EU"]
trend_window_days: 30
trend_bull_threshold_pct: "0.05"
trend_bear_threshold_pct: "-0.05"
funding_bull_threshold_annualized: "0.20"
funding_bear_threshold_annualized: "-0.20"
iron_condor_dvol_min: "55"
iron_condor_adx_max: "20"
iron_condor_trend_neutral_band_pct: "0.05"
  # Quant filters unchanged: the strategy's edge IS here; loosening
  # them to "earn more" would not help, and would in fact be
  # counterproductive.
dealer_gamma_min: "0"
dealer_gamma_filter_enabled: true
liquidation_filter_enabled: true
structure:
dte_target: 18
dte_min: 14
dte_max: 21
short_strike:
delta_target: "0.12"
delta_min: "0.10"
delta_max: "0.15"
distance_otm_pct_min: "0.15"
distance_otm_pct_max: "0.25"
        # §3.2 (A): step-function delta target by DVOL regime.
        # Low DVOL (≤50) → more premium; high (>70) → more safety.
delta_by_dvol:
- {dvol_under: "50", delta_target: "0.15", delta_min: "0.13", delta_max: "0.17"}
- {dvol_under: "70", delta_target: "0.12", delta_min: "0.10", delta_max: "0.15"}
- {dvol_under: "90", delta_target: "0.10", delta_min: "0.08", delta_max: "0.12"}
spread_width:
target_pct_of_spot: "0.04"
min_pct_of_spot: "0.03"
max_pct_of_spot: "0.05"
credit_to_width_ratio_min: "0.30"
liquidity:
open_interest_min: 100
volume_24h_min: 20
bid_ask_spread_pct_max: "0.15"
book_depth_top3_min: 5
slippage_pct_of_credit_max: "0.08"
sizing:
  kelly_fraction: "0.13" # Kelly discipline unchanged
  # The three dominant levers:
  cap_per_trade_eur: "800" # was 200 → 4×
  cap_aggregate_open_eur: "3200" # was 1000 → 4× (proportional to 2 positions × cap_per_trade × 2 wheels)
  max_concurrent_positions: 2 # was 1
  max_contracts_per_trade: 16 # was 4 → 4×
dvol_adjustment:
- {dvol_under: "45", multiplier: "1.00"}
- {dvol_under: "60", multiplier: "0.85"}
- {dvol_under: "80", multiplier: "0.65"}
dvol_no_entry_threshold: "80"
exit:
profit_take_pct_of_credit: "0.50"
stop_loss_mark_x_credit: "2.50"
vol_stop_dvol_increase: "10"
time_stop_dte_remaining: 7
time_stop_skip_if_close_to_profit_pct: "0.70"
delta_breach_threshold: "0.30"
adverse_move_4h_pct: "0.05"
  # §7-bis (D): vol-harvest enabled at a 15-vol-point collapse.
  vol_harvest_dvol_decrease: "15"
  # §7.1bis (C): graduated profit-take ladder. Runtime pipeline not
  # yet active; kept empty until the partial-close pipeline is
  # merged.
  profit_take_partial_levels: []
monitor_cron: "0 2,14 * * *"
user_confirmation_timeout_min: 30
escalate_on_timeout:
- "CLOSE_STOP"
- "CLOSE_VOL"
- "CLOSE_DELTA"
  # §7-bis (F): circuit breaker enabled. Threshold 15% (more tolerant
  # than the conservative default because the aggressive size has
  # higher expected volatility).
auto_pause:
enabled: true
lookback_trades: 5
max_drawdown_pct: "0.15"
pause_weeks: 2
execution:
environment: "testnet"
eur_to_usd: "1.075"
combo_only: true
initial_limit: "mid"
reprice_step_ticks: 1
reprice_max_steps: 3
reprice_max_steps_urgent: 5
order_tif: "GTC"
order_expiry_min: 30
ack_timeout_s: 300
monitoring:
health_check_interval_s: 300
health_failures_before_kill: 3
health_failures_before_restart: 5
daily_digest_cron: "0 8 * * *"
monthly_report_cron: "0 12 1 * *"
storage:
sqlite_path: "data/state.sqlite"
log_path: "data/log/"
log_retention_days: 365
backup_path: "data/backups/"
backup_retention_days: 30
mcp:
config_file: "~/.config/cerbero-suite/mcp.json"
call_timeout_s: 8
retry_max: 3
retry_base_delay_s: 1
required_versions:
cerbero-deribit: "^2.0.0"
cerbero-hyperliquid: "^1.5.0"
cerbero-memory: "^4.0.0"
cerbero-portfolio: "^1.2.0"
cerbero-macro: "^1.0.0"
cerbero-sentiment: "^1.0.0"
cerbero-telegram: "^1.0.0"
cerbero-brain-bridge: "^1.0.0"
telegram:
parse_mode: "MarkdownV2"
confirmation_timeout_min: 60
exit_confirmation_timeout_min: 30
backup_channel_on_critical: true
kelly_recalibration:
lookback_days: 365
min_sample_low_confidence: 30
min_sample_high_confidence: 100
weight_when_medium_confidence: "0.50"
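The `delta_by_dvol` and `dvol_adjustment` step tables in this profile share first-match semantics. A minimal resolver sketch, assuming `dvol_under` means "strictly below" and that a DVOL above the last step falls through to a caller-chosen default (e.g. no entry above `dvol_no_entry_threshold`); `first_match` is an illustrative helper, not the engine's actual API:

```python
from decimal import Decimal

DELTA_BY_DVOL = [
    {"dvol_under": Decimal("50"), "delta_target": Decimal("0.15")},
    {"dvol_under": Decimal("70"), "delta_target": Decimal("0.12")},
    {"dvol_under": Decimal("90"), "delta_target": Decimal("0.10")},
]
SIZE_BY_DVOL = [
    {"dvol_under": Decimal("45"), "multiplier": Decimal("1.00")},
    {"dvol_under": Decimal("60"), "multiplier": Decimal("0.85")},
    {"dvol_under": Decimal("80"), "multiplier": Decimal("0.65")},
]

def first_match(table: list[dict], dvol: Decimal, key: str, default):
    """Return the value from the first row whose dvol_under exceeds dvol."""
    for row in table:
        if dvol < row["dvol_under"]:
            return row[key]
    return default  # above the last step: the caller decides

print(first_match(DELTA_BY_DVOL, Decimal("42"), "delta_target", None))      # 0.15
print(first_match(SIZE_BY_DVOL, Decimal("72"), "multiplier", Decimal("0")))  # 0.65
print(first_match(SIZE_BY_DVOL, Decimal("85"), "multiplier", Decimal("0")))  # 0
```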
+173
@@ -0,0 +1,173 @@
# strategy.conservativa.yaml — Cerbero Bite, CONSERVATIVE profile
#
# "Premium above T-bills, contained drawdown, minimal complexity"
# profile. Rule-for-rule identical to the golden config v1.0.0; it
# serves as the reference anchor for comparison with
# `strategy.aggressiva.yaml`.
#
# Expected operating characteristics:
# * Cap per trade 200 EUR (~215 USD)
# * Max 1 concurrent position
# * Estimated P/L: +1.5% / +5% APR on total capital
# * Expected drawdown: 10-20% of deployed capital during a streak
# * Suited to: parking capital, modest premium, no surprises
#
# Remember: after every edit, regenerate the config_hash with
#   cerbero-bite config hash --file strategy.conservativa.yaml
# and bump config_version.
config_version: "1.2.0-conservativa"
config_hash: "fa09dad9cfa40a8ab006ec85157635603e0c4b6381ecd5d721504e00c4119a1b"
last_review: "2026-04-26"
last_reviewer: "Adriano"
asset:
symbol: "ETH"
exchange: "deribit"
entry:
cron: "0 14 * * MON"
skip_holidays_country: "IT"
capital_min_usd: "720"
dvol_min: "35"
dvol_max: "90"
funding_perp_abs_max_annualized: "0.80"
eth_holdings_pct_max: "0.30"
no_position_concurrent: true
exclude_macro_severity: ["high"]
exclude_macro_countries: ["US", "EU"]
trend_window_days: 30
trend_bull_threshold_pct: "0.05"
trend_bear_threshold_pct: "-0.05"
funding_bull_threshold_annualized: "0.20"
funding_bear_threshold_annualized: "-0.20"
iron_condor_dvol_min: "55"
iron_condor_adx_max: "20"
iron_condor_trend_neutral_band_pct: "0.05"
dealer_gamma_min: "0"
dealer_gamma_filter_enabled: true
liquidation_filter_enabled: true
structure:
dte_target: 18
dte_min: 14
dte_max: 21
short_strike:
delta_target: "0.12"
delta_min: "0.10"
delta_max: "0.15"
distance_otm_pct_min: "0.15"
distance_otm_pct_max: "0.25"
spread_width:
target_pct_of_spot: "0.04"
min_pct_of_spot: "0.03"
max_pct_of_spot: "0.05"
credit_to_width_ratio_min: "0.30"
liquidity:
open_interest_min: 100
volume_24h_min: 20
bid_ask_spread_pct_max: "0.15"
book_depth_top3_min: 5
slippage_pct_of_credit_max: "0.08"
sizing:
kelly_fraction: "0.13"
cap_per_trade_eur: "200"
cap_aggregate_open_eur: "1000"
max_concurrent_positions: 1
max_contracts_per_trade: 4
dvol_adjustment:
- {dvol_under: "45", multiplier: "1.00"}
- {dvol_under: "60", multiplier: "0.85"}
- {dvol_under: "80", multiplier: "0.65"}
dvol_no_entry_threshold: "80"
exit:
profit_take_pct_of_credit: "0.50"
stop_loss_mark_x_credit: "2.50"
vol_stop_dvol_increase: "10"
time_stop_dte_remaining: 7
time_stop_skip_if_close_to_profit_pct: "0.70"
delta_breach_threshold: "0.30"
adverse_move_4h_pct: "0.05"
vol_harvest_dvol_decrease: "0"
profit_take_partial_levels: []
monitor_cron: "0 2,14 * * *"
user_confirmation_timeout_min: 30
escalate_on_timeout:
- "CLOSE_STOP"
- "CLOSE_VOL"
- "CLOSE_DELTA"
auto_pause:
enabled: false
lookback_trades: 5
max_drawdown_pct: "0.10"
pause_weeks: 2
execution:
environment: "testnet"
eur_to_usd: "1.075"
combo_only: true
initial_limit: "mid"
reprice_step_ticks: 1
reprice_max_steps: 3
reprice_max_steps_urgent: 5
order_tif: "GTC"
order_expiry_min: 30
ack_timeout_s: 300
monitoring:
health_check_interval_s: 300
health_failures_before_kill: 3
health_failures_before_restart: 5
daily_digest_cron: "0 8 * * *"
monthly_report_cron: "0 12 1 * *"
storage:
sqlite_path: "data/state.sqlite"
log_path: "data/log/"
log_retention_days: 365
backup_path: "data/backups/"
backup_retention_days: 30
mcp:
config_file: "~/.config/cerbero-suite/mcp.json"
call_timeout_s: 8
retry_max: 3
retry_base_delay_s: 1
required_versions:
cerbero-deribit: "^2.0.0"
cerbero-hyperliquid: "^1.5.0"
cerbero-memory: "^4.0.0"
cerbero-portfolio: "^1.2.0"
cerbero-macro: "^1.0.0"
cerbero-sentiment: "^1.0.0"
cerbero-telegram: "^1.0.0"
cerbero-brain-bridge: "^1.0.0"
telegram:
parse_mode: "MarkdownV2"
confirmation_timeout_min: 60
exit_confirmation_timeout_min: 30
backup_channel_on_critical: true
kelly_recalibration:
lookback_days: 365
min_sample_low_confidence: 30
min_sample_high_confidence: 100
weight_when_medium_confidence: "0.50"
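The config_hash regeneration reminder in this file's header can be illustrated with one plausible scheme: hash the file with its own `config_hash` line blanked so the hash does not depend on itself. This is an assumption; the actual `cerbero-bite config hash` canonicalization may differ, and `config_hash` below is an illustrative helper, not the CLI's implementation:

```python
import hashlib
import re

def config_hash(yaml_text: str) -> str:
    """sha256 over the file with its own config_hash line blanked
    (one plausible scheme; the real CLI may canonicalize differently)."""
    canonical = re.sub(r'(?m)^config_hash:.*$', 'config_hash: ""', yaml_text)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

doc = (
    'config_version: "1.2.0-conservativa"\n'
    'config_hash: "fa09..."\n'
    'asset:\n  symbol: "ETH"\n'
)
h = config_hash(doc)
print(len(h))  # 64 hex chars, the same shape as the committed hashes
```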
+17 -2
@@ -6,8 +6,8 @@
# config hash), and lands as a separate commit with the motivation in
# the commit message.
config_version: "1.0.0"
config_hash: "4c2be4c51c849ed58fa22ec2b302016c453894dd0964b6d05445ab1b723e2d10"
config_version: "1.2.0"
config_hash: "33263a313b26b24b41269f93f93783784451ac9b4b6460005b95c2fb3624fcdc"
last_review: "2026-04-26"
last_reviewer: "Adriano"
@@ -96,6 +96,13 @@ exit:
delta_breach_threshold: "0.30"
adverse_move_4h_pct: "0.05"
# §7-bis (D): vol-collapse harvest. 0 = disabled.
vol_harvest_dvol_decrease: "0"
# §7.1bis (C): graduated profit-take ladder. Empty = atomic close.
# Runtime pipeline not yet active (future hook).
profit_take_partial_levels: []
monitor_cron: "0 2,14 * * *"
user_confirmation_timeout_min: 30
@@ -104,6 +111,14 @@ exit:
- "CLOSE_VOL"
- "CLOSE_DELTA"
# §7-bis (F): circuit breaker on rolling drawdown. Disabled by
# default; enable it only after enough closed positions.
auto_pause:
enabled: false
lookback_trades: 5
max_drawdown_pct: "0.10"
pause_weeks: 2
execution:
environment: "testnet" # testnet|mainnet — kill switch on broker mismatch
eur_to_usd: "1.075" # default FX rate for sizing engine; override at boot
-11
@@ -71,11 +71,6 @@ def _wire_boot_dependencies(httpx_mock: HTTPXMock) -> None:
json={"asset": "ETH", "current_funding_rate": 0.0001},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
json={"total_value_eur": 1000.0},
is_reusable=True,
)
@pytest.mark.asyncio
@@ -115,11 +110,5 @@ async def test_boot_detects_audit_truncation(
orch = _build(tmp_path)
_wire_boot_dependencies(httpx_mock)
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_system_error",
json={"ok": True},
is_reusable=True,
)
await orch.boot()
assert orch.context.kill_switch.is_armed() is True
+32 -30
@@ -154,18 +154,39 @@ def _wire_market_snapshot(
json={"events": macro_events or []},
is_reusable=True,
)
# In-process portfolio aggregator: wire the underlying exchange and
# macro endpoints so total_equity_eur and asset_pct_of_portfolio
# produce the requested ``portfolio_eur`` and ``eth_pct``.
# FX rate fixed at 1.0 → EUR amount equals USD amount in tests.
portfolio_eur_f = float(portfolio_eur)
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_holdings",
url="http://mcp-macro:9013/tools/get_asset_price",
json={"ticker": "EURUSD", "price": 1.0},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-deribit:9011/tools/get_account_summary",
json={"equity": portfolio_eur_f, "currency": "USDC"},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-deribit:9011/tools/get_positions",
json=[
{"ticker": "AAPL", "current_value_eur": portfolio_eur_f * (1 - eth_pct)},
{"ticker": "ETH-USD", "current_value_eur": portfolio_eur_f * eth_pct},
{
"instrument_name": "ETH-15MAY26-2475-P",
"notional_usd": portfolio_eur_f * eth_pct,
}
],
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
json={"total_value_eur": portfolio_eur_f},
url="http://mcp-hyperliquid:9012/tools/get_account_summary",
json={"equity": 0.0},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-hyperliquid:9012/tools/get_positions",
json=[],
is_reusable=True,
)
@@ -262,11 +283,12 @@ def _wire_combo_order(
def _wire_telegram_notify_position_opened(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_position_opened",
json={"ok": True},
is_reusable=True,
)
"""No-op: Telegram is now an in-process client with disabled mode in tests.
Kept for call-site compatibility; the function used to register an MCP
notify mock but post-refactor there is no HTTP endpoint to mock when
the bot has no Telegram credentials configured.
"""
# ---------------------------------------------------------------------------
@@ -355,11 +377,6 @@ async def test_below_capital_minimum_returns_no_entry(
now: datetime,
httpx_mock: HTTPXMock,
) -> None:
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify",
json={"ok": True},
is_reusable=True,
)
    # 500 EUR × 1.075 = 537.5 USD < 720 cfg minimum
_wire_market_snapshot(httpx_mock, portfolio_eur=500.0)
ctx = _ctx(cfg, runtime_paths, now)
@@ -377,11 +394,6 @@ async def test_macro_event_within_dte_blocks_entry(
now: datetime,
httpx_mock: HTTPXMock,
) -> None:
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify",
json={"ok": True},
is_reusable=True,
)
macro_events = [
{
"name": "FOMC",
@@ -406,11 +418,6 @@ async def test_no_bias_returns_no_entry(
now: datetime,
httpx_mock: HTTPXMock,
) -> None:
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify",
json={"ok": True},
is_reusable=True,
)
# Funding cross neutral (=0) and DVOL 40 → no IC, no directional;
# entry validates clean otherwise.
_wire_market_snapshot(
@@ -507,11 +514,6 @@ async def test_broker_reject_marks_position_cancelled(
},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_alert",
json={"ok": True},
is_reusable=True,
)
bull_cfg = golden_config(
entry=type(cfg.entry)(
**{**cfg.entry.model_dump(), "trend_bull_threshold_pct": Decimal("0")}
-26
@@ -60,11 +60,6 @@ def _wire_all_ok(httpx_mock: HTTPXMock) -> None:
json={"asset": "ETH", "current_funding_rate": 0.0001},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
json={"total_value_eur": 1000.0},
is_reusable=True,
)
@pytest.mark.asyncio
@@ -112,11 +107,6 @@ async def test_environment_mismatch_counts_as_failure(
json={"asset": "ETH", "current_funding_rate": 0.0001},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
json={"total_value_eur": 1000.0},
is_reusable=True,
)
res = await hc.run()
assert res.state == "degraded"
assert any("environment mismatch" in r for _s, r in res.failures)
@@ -149,17 +139,6 @@ async def test_three_consecutive_failures_arm_kill_switch(
json={"asset": "ETH", "current_funding_rate": 0.0001},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
json={"total_value_eur": 1000.0},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_alert",
json={"ok": True},
is_reusable=True,
)
for _ in range(2):
await hc.run()
assert ctx.kill_switch.is_armed() is False
@@ -197,11 +176,6 @@ async def test_recovered_run_resets_counter(
json={"asset": "ETH", "current_funding_rate": 0.0001},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
json={"total_value_eur": 1000.0},
is_reusable=True,
)
res = await hc.run()
assert res.state == "degraded"
assert res.consecutive_failures == 1
-10
@@ -231,11 +231,6 @@ async def test_monitor_closes_position_on_profit_take(
},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_position_closed",
json={"ok": True},
is_reusable=True,
)
res = await run_monitor_cycle(ctx, now=now)
assert len(res.outcomes) == 1
@@ -296,11 +291,6 @@ async def test_monitor_uses_dvol_history_for_return_4h(
},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_position_closed",
json={"ok": True},
is_reusable=True,
)
res = await run_monitor_cycle(ctx, now=now)
assert res.outcomes[0].action == "CLOSE_AVERSE"
+55 -13
@@ -11,6 +11,7 @@ from pytest_httpx import HTTPXMock
from cerbero_bite.config import golden_config
from cerbero_bite.config.mcp_endpoints import load_endpoints
from cerbero_bite.config.runtime_flags import RuntimeFlags
from cerbero_bite.runtime import Orchestrator
from cerbero_bite.runtime.dependencies import build_runtime
@@ -56,14 +57,14 @@ def _wire_health_probes(httpx_mock: HTTPXMock) -> None:
json={"asset": "ETH", "current_funding_rate": 0.0001},
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
json={"total_value_eur": 1000.0},
is_reusable=True,
)
def _build_orch(tmp_path: Path, *, expected: str = "testnet") -> Orchestrator:
def _build_orch(
tmp_path: Path,
*,
expected: str = "testnet",
flags: RuntimeFlags | None = None,
) -> Orchestrator:
ctx = build_runtime(
cfg=golden_config(),
endpoints=load_endpoints(env={}),
@@ -77,6 +78,8 @@ def _build_orch(tmp_path: Path, *, expected: str = "testnet") -> Orchestrator:
ctx,
expected_environment=expected, # type: ignore[arg-type]
eur_to_usd=Decimal("1.075"),
flags=flags
or RuntimeFlags(data_analysis_enabled=True, strategy_enabled=True),
)
@@ -110,12 +113,6 @@ async def test_boot_arms_kill_switch_on_environment_mismatch(
json=[],
is_reusable=True,
)
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_system_error",
json={"ok": True},
is_reusable=True,
)
orch = _build_orch(tmp_path, expected="testnet")
await orch.boot()
assert orch.context.kill_switch.is_armed() is True
@@ -125,4 +122,49 @@ def test_install_scheduler_registers_canonical_jobs(tmp_path: Path) -> None:
orch = _build_orch(tmp_path)
sched = orch.install_scheduler()
job_ids = {j.id for j in sched.get_jobs()}
assert job_ids == {"entry", "monitor", "health", "backup"}
assert job_ids == {
"entry",
"monitor",
"health",
"backup",
"manual_actions",
"market_snapshot",
}
def test_install_scheduler_skips_strategy_jobs_when_disabled(tmp_path: Path) -> None:
orch = _build_orch(
tmp_path,
flags=RuntimeFlags(data_analysis_enabled=True, strategy_enabled=False),
)
sched = orch.install_scheduler()
job_ids = {j.id for j in sched.get_jobs()}
assert "entry" not in job_ids
assert "monitor" not in job_ids
# data analysis stays on, plus the always-on infra jobs.
assert {"health", "backup", "manual_actions", "market_snapshot"}.issubset(job_ids)
def test_install_scheduler_skips_market_snapshot_when_data_analysis_off(
tmp_path: Path,
) -> None:
orch = _build_orch(
tmp_path,
flags=RuntimeFlags(data_analysis_enabled=False, strategy_enabled=True),
)
sched = orch.install_scheduler()
job_ids = {j.id for j in sched.get_jobs()}
assert "market_snapshot" not in job_ids
assert {"entry", "monitor", "health", "backup", "manual_actions"}.issubset(
job_ids
)
def test_install_scheduler_analysis_only_default(tmp_path: Path) -> None:
"""The default RuntimeFlags profile (analysis only) drops entry/monitor."""
orch = _build_orch(tmp_path, flags=RuntimeFlags())
sched = orch.install_scheduler()
job_ids = {j.id for j in sched.get_jobs()}
assert "entry" not in job_ids
assert "monitor" not in job_ids
assert "market_snapshot" in job_ids
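The flag-gating behaviour these scheduler tests pin down can be summarized in a small sketch. `Flags`, `ALWAYS_ON` and `scheduled_job_ids` are illustrative names, not the real `RuntimeFlags` or orchestrator API; the defaults are inferred from the tests above (analysis-only by default):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flags:
    data_analysis_enabled: bool = True
    strategy_enabled: bool = False  # analysis-only default, per the tests

ALWAYS_ON = {"health", "backup", "manual_actions"}

def scheduled_job_ids(flags: Flags) -> set[str]:
    """Infra jobs always run; market_snapshot follows data_analysis,
    entry/monitor follow strategy."""
    jobs = set(ALWAYS_ON)
    if flags.data_analysis_enabled:
        jobs.add("market_snapshot")
    if flags.strategy_enabled:
        jobs |= {"entry", "monitor"}
    return jobs

print(sorted(scheduled_job_ids(Flags())))  # analysis-only: no entry/monitor
```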
-10
@@ -115,11 +115,6 @@ async def test_recovery_cancels_awaiting_fill_when_broker_lacks_legs(
url="http://mcp-deribit:9011/tools/get_positions",
json=[],
)
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_system_error",
json={"ok": True},
is_reusable=True,
)
await recover_state(ctx, now=_now())
@@ -154,11 +149,6 @@ async def test_recovery_alerts_on_open_position_missing_on_broker(
url="http://mcp-deribit:9011/tools/get_positions",
json=[],
)
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_system_error",
json={"ok": True},
is_reusable=True,
)
await recover_state(ctx, now=_now())
assert ctx.kill_switch.is_armed() is True
+14 -31
@@ -9,13 +9,14 @@ from pathlib import Path
import pytest
from pytest_httpx import HTTPXMock
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients.telegram import TelegramClient
from cerbero_bite.runtime.alert_manager import AlertManager, Severity
from cerbero_bite.safety import AuditLog, iter_entries
from cerbero_bite.safety.kill_switch import KillSwitch
from cerbero_bite.state import Repository, connect, run_migrations, transaction
SEND_URL = "https://api.telegram.org/botTOK/sendMessage"
def _make_alert_manager(tmp_path: Path) -> tuple[AlertManager, Path, Path, KillSwitch]:
db_path = tmp_path / "state.sqlite"
@@ -39,14 +40,7 @@ def _make_alert_manager(tmp_path: Path) -> tuple[AlertManager, Path, Path, KillS
audit_log=audit,
clock=lambda: next(times),
)
telegram = TelegramClient(
HttpToolClient(
service="telegram",
base_url="http://mcp-telegram:9017",
token="t",
retry_max=1,
)
)
telegram = TelegramClient(bot_token="TOK", chat_id="42")
return AlertManager(telegram=telegram, audit_log=audit, kill_switch=ks), audit_path, db_path, ks
@@ -65,17 +59,13 @@ async def test_low_emits_audit_only(tmp_path: Path, httpx_mock: HTTPXMock) -> No
@pytest.mark.asyncio
async def test_medium_calls_telegram_notify(tmp_path: Path, httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify", json={"ok": True}
)
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
am, audit_path, _, ks = _make_alert_manager(tmp_path)
await am.medium(source="entry_cycle", message="snapshot delayed")
requests = httpx_mock.get_requests()
assert len(requests) == 1
body = json.loads(requests[0].read())
assert body["message"] == "[entry_cycle] snapshot delayed"
assert body["priority"] == "high"
assert body["tag"] == "entry_cycle"
assert body["text"] == "[HIGH][entry_cycle] snapshot delayed"
assert ks.is_armed() is False
assert any(e.payload["severity"] == "medium" for e in iter_entries(audit_path))
@@ -84,17 +74,13 @@ async def test_medium_calls_telegram_notify(tmp_path: Path, httpx_mock: HTTPXMoc
async def test_high_arms_kill_switch_and_calls_notify_alert(
tmp_path: Path, httpx_mock: HTTPXMock
) -> None:
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_alert", json={"ok": True}
)
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
am, _, _, ks = _make_alert_manager(tmp_path)
await am.high(source="health", message="3 consecutive MCP failures")
body = json.loads(httpx_mock.get_request().read())
assert body == {
"source": "health",
"message": "3 consecutive MCP failures",
"priority": "high",
}
text = body["text"]
assert "ALERT [HIGH]" in text
assert "health" in text and "3 consecutive MCP failures" in text
assert ks.is_armed() is True
@@ -102,9 +88,7 @@ async def test_high_arms_kill_switch_and_calls_notify_alert(
async def test_critical_arms_kill_switch_and_calls_notify_system_error(
tmp_path: Path, httpx_mock: HTTPXMock
) -> None:
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_system_error", json={"ok": True}
)
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
am, _, _, ks = _make_alert_manager(tmp_path)
await am.critical(
source="audit_chain",
@@ -112,8 +96,9 @@ async def test_critical_arms_kill_switch_and_calls_notify_system_error(
component="safety.audit_log",
)
body = json.loads(httpx_mock.get_request().read())
assert body["component"] == "safety.audit_log"
assert body["priority"] == "critical"
text = body["text"]
assert "SYSTEM ERROR [CRITICAL]" in text
assert "safety.audit_log" in text
assert ks.is_armed() is True
@@ -121,9 +106,7 @@ async def test_critical_arms_kill_switch_and_calls_notify_system_error(
async def test_critical_when_already_armed_is_idempotent(
tmp_path: Path, httpx_mock: HTTPXMock
) -> None:
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_system_error", json={"ok": True}
)
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
am, _, _, ks = _make_alert_manager(tmp_path)
ks.arm(reason="prior", source="manual")
assert ks.is_armed() is True
+157
@@ -0,0 +1,157 @@
"""TDD for :mod:`cerbero_bite.runtime.auto_pause` (§7-bis F)."""
from __future__ import annotations
from datetime import UTC, datetime, timedelta
from decimal import Decimal
import pytest
from cerbero_bite.config.schema import AutoPauseConfig
from cerbero_bite.runtime.auto_pause import (
evaluate_drawdown_breach,
is_paused,
pause_until,
)
from cerbero_bite.state.models import SystemStateRecord
_NOW = datetime(2026, 5, 1, 14, 0, tzinfo=UTC)
def _state(**overrides: object) -> SystemStateRecord:
base: dict[str, object] = {
"kill_switch": 0,
"last_health_check": _NOW,
"config_version": "1.0.0",
"started_at": _NOW - timedelta(hours=1),
}
base.update(overrides)
return SystemStateRecord(**base) # type: ignore[arg-type]
# ---------------------------------------------------------------------------
# is_paused
# ---------------------------------------------------------------------------
def test_is_paused_returns_false_when_state_is_none() -> None:
status = is_paused(None, now=_NOW)
assert status.paused is False
def test_is_paused_returns_false_when_until_is_none() -> None:
status = is_paused(_state(), now=_NOW)
assert status.paused is False
def test_is_paused_returns_true_when_until_in_future() -> None:
status = is_paused(
_state(auto_pause_until=_NOW + timedelta(weeks=2),
auto_pause_reason="DD breach"),
now=_NOW,
)
assert status.paused is True
assert status.reason == "DD breach"
def test_is_paused_returns_false_when_until_in_past() -> None:
status = is_paused(
_state(auto_pause_until=_NOW - timedelta(seconds=1)),
now=_NOW,
)
assert status.paused is False
# ---------------------------------------------------------------------------
# pause_until
# ---------------------------------------------------------------------------
def test_pause_until_adds_weeks() -> None:
until = pause_until(_NOW, weeks=2)
assert until == _NOW + timedelta(weeks=2)
def test_pause_until_clamps_to_one_week_minimum() -> None:
# weeks <= 0 must still yield at least one week of pause, otherwise
# the weekly cron could fire anyway.
assert pause_until(_NOW, weeks=0) == _NOW + timedelta(weeks=1)
assert pause_until(_NOW, weeks=-3) == _NOW + timedelta(weeks=1)
# ---------------------------------------------------------------------------
# evaluate_drawdown_breach
# ---------------------------------------------------------------------------
def _cfg(**overrides: object) -> AutoPauseConfig:
base: dict[str, object] = {
"enabled": True,
"lookback_trades": 5,
"max_drawdown_pct": Decimal("0.10"),
"pause_weeks": 2,
}
base.update(overrides)
return AutoPauseConfig(**base) # type: ignore[arg-type]
def test_drawdown_breach_when_enabled_and_threshold_exceeded() -> None:
decision = evaluate_drawdown_breach(
cfg=_cfg(),
recent_pnl_usd=[Decimal("-50"), Decimal("-60"), Decimal("-40"),
Decimal("-30"), Decimal("-20")], # cum 200 USD
capital_usd=Decimal("1500"),
)
# |200| / 1500 = 0.133 > 0.10
assert decision.should_pause is True
assert decision.reason is not None
assert "rolling DD" in decision.reason
def test_no_breach_when_filter_disabled() -> None:
decision = evaluate_drawdown_breach(
cfg=_cfg(enabled=False),
recent_pnl_usd=[Decimal("-200")] * 5, # massacro
capital_usd=Decimal("1500"),
)
assert decision.should_pause is False
def test_no_breach_when_lookback_insufficient() -> None:
decision = evaluate_drawdown_breach(
cfg=_cfg(lookback_trades=5),
recent_pnl_usd=[Decimal("-100")] * 3, # solo 3 trade, serve 5
capital_usd=Decimal("1500"),
)
assert decision.should_pause is False
def test_no_breach_when_cumulative_positive() -> None:
# Even with many losses, if the cumulative sum is positive we do not trigger.
decision = evaluate_drawdown_breach(
cfg=_cfg(),
recent_pnl_usd=[Decimal("-100"), Decimal("-50"),
Decimal("300"), Decimal("-20"), Decimal("-10")],
capital_usd=Decimal("1500"),
)
assert decision.should_pause is False
def test_breach_when_exactly_at_threshold() -> None:
decision = evaluate_drawdown_breach(
cfg=_cfg(),
recent_pnl_usd=[Decimal("-30")] * 5, # cum -150 / 1500 = exactly 10%
capital_usd=Decimal("1500"),
)
# exactly at the threshold (>=) ⇒ pause is armed
assert decision.should_pause is True
def test_no_breach_when_capital_zero_or_negative() -> None:
decision = evaluate_drawdown_breach(
cfg=_cfg(),
recent_pnl_usd=[Decimal("-100")] * 5,
capital_usd=Decimal("0"),
)
assert decision.should_pause is False
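Taken together, these tests pin down a small decision function. A minimal sketch of that contract, with hypothetical names (`should_auto_pause` and its flat parameters are illustrative, not the real `evaluate_drawdown_breach` signature):

```python
from decimal import Decimal


def should_auto_pause(
    enabled: bool,
    lookback_trades: int,
    max_drawdown_pct: Decimal,
    recent_pnl_usd: list[Decimal],
    capital_usd: Decimal,
) -> bool:
    # Filter disabled: never pause.
    if not enabled:
        return False
    # Not enough trade history, or no valid capital base: never pause.
    if len(recent_pnl_usd) < lookback_trades or capital_usd <= 0:
        return False
    cum = sum(recent_pnl_usd, Decimal("0"))
    # Only a cumulative loss counts as a drawdown.
    if cum >= 0:
        return False
    # Inclusive threshold: exactly 10% on a 10% limit arms the pause.
    return (-cum) / capital_usd >= max_drawdown_pct
```

The inclusive `>=` mirrors the exact-threshold test above; a strict `>` would silently skip the boundary case.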
+15 -33
View File
@@ -7,25 +7,14 @@ contains the expected statuses.
from __future__ import annotations
from pathlib import Path
import pytest
from click.testing import CliRunner
from pytest_httpx import HTTPXMock
from cerbero_bite.cli import main as cli_main
def _seed_token(tmp_path: Path) -> Path:
target = tmp_path / "core_token"
target.write_text("super-secret\n", encoding="utf-8")
return target
def test_ping_reports_each_service(
tmp_path: Path, httpx_mock: HTTPXMock
) -> None:
token_file = _seed_token(tmp_path)
def test_ping_reports_each_service(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="http://mcp-deribit:9011/tools/environment_info",
json={
@@ -49,29 +38,24 @@ def test_ping_reports_each_service(
url="http://mcp-sentiment:9014/tools/get_cross_exchange_funding",
json={"snapshot": {"ETH": {"binance": 0.0001}}},
)
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
json={"total_value_eur": 5000.0},
)
result = CliRunner().invoke(
cli_main, ["ping", "--token-file", str(token_file), "--timeout", "1.0"]
cli_main, ["ping", "--token", "super-secret", "--timeout", "1.0"]
)
assert result.exit_code == 0, result.output
assert "deribit" in result.output
assert "hyperliquid" in result.output
assert "macro" in result.output
assert "sentiment" in result.output
assert "portfolio" in result.output
assert "telegram" in result.output # listed even if skipped
# at least 5 OK statuses
assert result.output.count("OK") >= 5
# Telegram and Portfolio are no longer MCP services and are not
# listed by the ping command.
assert "portfolio" not in result.output
assert "OK" in result.output
def test_ping_reports_failure_when_service_unreachable(
tmp_path: Path, httpx_mock: HTTPXMock
httpx_mock: HTTPXMock,
) -> None:
token_file = _seed_token(tmp_path)
httpx_mock.add_response(
url="http://mcp-deribit:9011/tools/environment_info",
status_code=500,
@@ -90,21 +74,19 @@ def test_ping_reports_failure_when_service_unreachable(
url="http://mcp-sentiment:9014/tools/get_cross_exchange_funding",
json={"snapshot": {"ETH": {"binance": 0.0001}}},
)
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
json={"total_value_eur": 0.0},
)
result = CliRunner().invoke(
cli_main, ["ping", "--token-file", str(token_file), "--timeout", "1.0"]
cli_main, ["ping", "--token", "super-secret", "--timeout", "1.0"]
)
assert result.exit_code == 0
assert "FAIL" in result.output
def test_ping_token_missing_exits_nonzero(tmp_path: Path) -> None:
result = CliRunner().invoke(
cli_main, ["ping", "--token-file", str(tmp_path / "nope")]
)
def test_ping_token_missing_exits_nonzero(
monkeypatch: pytest.MonkeyPatch,
) -> None:
# Ensure no env var leaks into the CLI invocation.
monkeypatch.delenv("CERBERO_BITE_MCP_TOKEN", raising=False)
result = CliRunner().invoke(cli_main, ["ping"])
assert result.exit_code == 1
assert "token error" in result.output
+22
View File
@@ -47,6 +47,28 @@ async def test_call_attaches_bearer_token(httpx_mock: HTTPXMock) -> None:
assert request is not None
assert request.headers["Authorization"] == "Bearer abc123"
assert request.headers["Content-Type"] == "application/json"
# Default bot tag is sent on every request.
assert request.headers["X-Bot-Tag"] == "BOT__CERBERO_BITE"
@pytest.mark.asyncio
async def test_call_attaches_custom_bot_tag(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={"ok": True})
client = _make_client(bot_tag="BOT__SHADOW")
await client.call("any")
request = httpx_mock.get_request()
assert request is not None
assert request.headers["X-Bot-Tag"] == "BOT__SHADOW"
def test_init_rejects_blank_bot_tag() -> None:
with pytest.raises(ValueError, match="non-empty"):
_make_client(bot_tag=" ")
def test_init_rejects_too_long_bot_tag() -> None:
with pytest.raises(ValueError, match="64"):
_make_client(bot_tag="x" * 65)
@pytest.mark.asyncio
+203 -58
View File
@@ -1,95 +1,240 @@
"""Tests for PortfolioClient."""
"""Tests for in-process PortfolioClient (composes deribit + hyperliquid + macro)."""
from __future__ import annotations
from decimal import Decimal
from typing import Any
import pytest
from pytest_httpx import HTTPXMock
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients._exceptions import McpDataAnomalyError
from cerbero_bite.clients.portfolio import PortfolioClient
# ---------------------------------------------------------------------------
# Test doubles
# ---------------------------------------------------------------------------
def _client() -> PortfolioClient:
http = HttpToolClient(
service="portfolio",
base_url="http://mcp-portfolio:9018",
token="t",
retry_max=1,
class _FakeDeribit:
SERVICE = "deribit"
def __init__(
self,
*,
equity_usd: Decimal | float = Decimal("0"),
positions: list[dict[str, Any]] | None = None,
) -> None:
self._equity = Decimal(str(equity_usd))
self._positions = positions or []
async def get_account_summary(self, currency: str = "USDC") -> dict[str, Any]:
assert currency == "USDC"
return {"equity": float(self._equity), "currency": "USDC"}
async def get_positions(self, currency: str = "USDC") -> list[dict[str, Any]]:
assert currency == "USDC"
return list(self._positions)
class _FakeHyperliquid:
SERVICE = "hyperliquid"
def __init__(
self,
*,
equity_usd: Decimal | float = Decimal("0"),
positions: list[dict[str, Any]] | None = None,
) -> None:
self._equity = Decimal(str(equity_usd))
self._positions = positions or []
async def get_account_summary(self) -> dict[str, Any]:
return {"equity": float(self._equity)}
async def get_positions(self) -> list[dict[str, Any]]:
return list(self._positions)
class _FakeMacro:
SERVICE = "macro"
def __init__(self, *, eur_usd: Decimal | float | None = Decimal("1.10")) -> None:
self._eur_usd = eur_usd
async def eur_usd_rate(self) -> Decimal:
if self._eur_usd is None:
raise McpDataAnomalyError(
"missing", service="macro", tool="get_asset_price"
)
return PortfolioClient(http)
return Decimal(str(self._eur_usd))
@pytest.mark.asyncio
async def test_total_equity_eur(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
json={"total_value_eur": 12345.67},
def _make(
*,
deribit_eq: Decimal | float = 0,
hl_eq: Decimal | float = 0,
deribit_pos: list[dict[str, Any]] | None = None,
hl_pos: list[dict[str, Any]] | None = None,
eur_usd: Decimal | float | None = Decimal("1.10"),
) -> PortfolioClient:
return PortfolioClient(
deribit=_FakeDeribit(equity_usd=deribit_eq, positions=deribit_pos),
hyperliquid=_FakeHyperliquid(equity_usd=hl_eq, positions=hl_pos),
macro=_FakeMacro(eur_usd=eur_usd),
)
out = await _client().total_equity_eur()
assert out == Decimal("12345.67")
# ---------------------------------------------------------------------------
# total_equity_usd / total_equity_eur
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_total_equity_anomaly_when_missing(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={})
with pytest.raises(McpDataAnomalyError, match="total_value_eur"):
await _client().total_equity_eur()
async def test_total_equity_usd_sums_both_exchanges() -> None:
p = _make(deribit_eq="1500.50", hl_eq="982.50")
assert await p.total_equity_usd() == Decimal("2483.00")
@pytest.mark.asyncio
async def test_total_equity_anomaly_on_unexpected_shape(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json=[1, 2, 3])
with pytest.raises(McpDataAnomalyError, match="unexpected shape"):
await _client().total_equity_eur()
async def test_total_equity_eur_converts_with_fx() -> None:
p = _make(deribit_eq="1100", hl_eq="0", eur_usd="1.10")
# 1100 USD / 1.10 = 1000 EUR
assert await p.total_equity_eur() == Decimal("1000")
@pytest.mark.asyncio
async def test_asset_pct_aggregates_matching_tickers(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="http://mcp-portfolio:9018/tools/get_holdings",
json=[
{"ticker": "ETH-USD", "current_value_eur": 3000.0},
{"ticker": "ETHE", "current_value_eur": 1000.0}, # ETH ticker variant
{"ticker": "AAPL", "current_value_eur": 6000.0},
async def test_total_equity_eur_zero_when_no_balance() -> None:
p = _make(deribit_eq=0, hl_eq=0, eur_usd="1.20")
assert await p.total_equity_eur() == Decimal("0")
@pytest.mark.asyncio
async def test_total_equity_eur_raises_on_non_positive_fx() -> None:
p = _make(deribit_eq="100", hl_eq="0", eur_usd="0")
with pytest.raises(McpDataAnomalyError, match="non-positive EURUSD"):
await p.total_equity_eur()
@pytest.mark.asyncio
async def test_total_equity_eur_propagates_macro_anomaly() -> None:
p = _make(deribit_eq="100", hl_eq="0", eur_usd=None)
with pytest.raises(McpDataAnomalyError):
await p.total_equity_eur()
# ---------------------------------------------------------------------------
# asset_pct_of_portfolio
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_asset_pct_aggregates_eth_across_both_exchanges() -> None:
p = _make(
deribit_eq="5000",
hl_eq="5000",
deribit_pos=[
{
"instrument_name": "ETH-15MAY26-2475-P",
"size": 10,
"mark_price": 100,
},
# BTC position should be ignored when asking for ETH
{
"instrument_name": "BTC-PERPETUAL",
"size": 1,
"mark_price": 75000,
},
],
hl_pos=[
{"coin": "ETH", "notional_usd": 1000},
],
)
pct = await _client().asset_pct_of_portfolio("ETH")
# 4000 / 10000 = 0.4
assert pct == Decimal("0.4")
# ETH exposure: 10×100 (deribit) + 1000 (hl) = 2000
# total equity: 10000
pct = await p.asset_pct_of_portfolio("ETH")
assert pct == Decimal("0.2")
@pytest.mark.asyncio
async def test_asset_pct_returns_zero_for_empty_portfolio(
httpx_mock: HTTPXMock,
) -> None:
httpx_mock.add_response(json=[])
assert await _client().asset_pct_of_portfolio("ETH") == Decimal("0")
async def test_asset_pct_returns_zero_when_no_positions() -> None:
p = _make(deribit_eq="1000", hl_eq="0")
assert await p.asset_pct_of_portfolio("ETH") == Decimal("0")
@pytest.mark.asyncio
async def test_asset_pct_skips_entries_without_value(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
json=[
{"ticker": "ETH", "current_value_eur": None},
{"ticker": "AAPL", "current_value_eur": 1000.0},
]
async def test_asset_pct_returns_zero_when_no_equity() -> None:
p = _make(
deribit_eq=0,
hl_eq=0,
deribit_pos=[
{"instrument_name": "ETH-PERP", "notional_usd": 100},
],
)
assert await _client().asset_pct_of_portfolio("ETH") == Decimal("0")
assert await p.asset_pct_of_portfolio("ETH") == Decimal("0")
@pytest.mark.asyncio
async def test_asset_pct_anomaly_when_response_not_list(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={"holdings": []})
with pytest.raises(McpDataAnomalyError, match="unexpected shape"):
await _client().asset_pct_of_portfolio("ETH")
def test_portfolio_client_rejects_wrong_service() -> None:
bad = HttpToolClient(
service="macro", base_url="http://x:1", token="t", retry_max=1
async def test_asset_pct_uses_explicit_notional_when_present() -> None:
p = _make(
deribit_eq="1000",
hl_eq=0,
deribit_pos=[
# explicit notional_usd takes precedence over size×mark
{
"instrument_name": "ETH-XYZ",
"notional_usd": 250,
"size": 999,
"mark_price": 999,
},
],
)
with pytest.raises(ValueError, match="requires service 'portfolio'"):
PortfolioClient(bad)
assert await p.asset_pct_of_portfolio("ETH") == Decimal("0.25")
@pytest.mark.asyncio
async def test_asset_pct_falls_back_to_size_times_mark() -> None:
p = _make(
deribit_eq="1000",
hl_eq=0,
deribit_pos=[
{"instrument_name": "ETH-XYZ", "size": 5, "mark_price": 40},
],
)
# 5×40 / 1000 = 0.2
assert await p.asset_pct_of_portfolio("ETH") == Decimal("0.2")
@pytest.mark.asyncio
async def test_asset_pct_takes_absolute_value_for_short_positions() -> None:
p = _make(
deribit_eq="1000",
hl_eq=0,
hl_pos=[{"coin": "ETH", "size": -10, "mark_price": 50}],
)
# |-10×50| / 1000 = 0.5
assert await p.asset_pct_of_portfolio("ETH") == Decimal("0.5")
@pytest.mark.asyncio
async def test_asset_pct_case_insensitive_match() -> None:
p = _make(
deribit_eq="1000",
hl_eq=0,
deribit_pos=[
{"instrument_name": "eth-perpetual", "notional_usd": 300},
],
)
assert await p.asset_pct_of_portfolio("eth") == Decimal("0.3")
@pytest.mark.asyncio
async def test_asset_pct_skips_non_dict_entries() -> None:
p = _make(
deribit_eq="1000",
hl_eq=0,
deribit_pos=[
"not a dict", # type: ignore[list-item]
{"instrument_name": "ETH", "notional_usd": 100},
],
)
assert await p.asset_pct_of_portfolio("ETH") == Decimal("0.1")
+171 -57
View File
@@ -1,25 +1,27 @@
"""Tests for TelegramClient (notify-only mode)."""
"""Tests for in-process TelegramClient (Bot API, notify-only)."""
from __future__ import annotations
import json
from decimal import Decimal
import httpx
import pytest
from pytest_httpx import HTTPXMock
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients.telegram import TelegramClient
def _client() -> TelegramClient:
http = HttpToolClient(
service="telegram",
base_url="http://mcp-telegram:9017",
token="t",
retry_max=1,
from cerbero_bite.clients.telegram import (
TelegramClient,
TelegramError,
load_telegram_credentials,
)
return TelegramClient(http)
SEND_URL = "https://api.telegram.org/botTOK/sendMessage"
def _client(**kw) -> TelegramClient:
defaults = {"bot_token": "TOK", "chat_id": "42"}
defaults.update(kw)
return TelegramClient(**defaults)
def _request_body(httpx_mock: HTTPXMock) -> dict:
@@ -28,34 +30,66 @@ def _request_body(httpx_mock: HTTPXMock) -> dict:
return json.loads(request.read())
# ---------------------------------------------------------------------------
# enabled / disabled
# ---------------------------------------------------------------------------
def test_enabled_when_both_token_and_chat_id_present() -> None:
assert _client().enabled is True
def test_disabled_when_token_missing() -> None:
c = TelegramClient(bot_token=None, chat_id="42")
assert c.enabled is False
def test_disabled_when_chat_id_missing() -> None:
c = TelegramClient(bot_token="TOK", chat_id=None)
assert c.enabled is False
def test_disabled_when_token_blank() -> None:
c = TelegramClient(bot_token=" ", chat_id="42")
assert c.enabled is False
@pytest.mark.asyncio
async def test_notify_sends_message_with_priority(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify",
json={"ok": True},
)
async def test_disabled_notify_is_noop(httpx_mock: HTTPXMock) -> None:
c = TelegramClient(bot_token=None, chat_id=None)
await c.notify("hello")
assert httpx_mock.get_requests() == []
# ---------------------------------------------------------------------------
# notify formatting
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_notify_sends_with_priority_and_tag(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url=SEND_URL, json={"ok": True, "result": {}})
await _client().notify("hello", priority="high", tag="entry")
body = _request_body(httpx_mock)
assert body == {"message": "hello", "priority": "high", "tag": "entry"}
assert body["chat_id"] == "42"
assert body["parse_mode"] == "HTML"
assert body["text"] == "[HIGH][entry] hello"
assert body["disable_web_page_preview"] is True
@pytest.mark.asyncio
async def test_notify_default_priority_normal(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={"ok": True})
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify("plain")
body = _request_body(httpx_mock)
assert body["priority"] == "normal"
assert "tag" not in body
assert body["text"] == "[NORMAL] plain"
@pytest.mark.asyncio
async def test_notify_position_opened_serialises_decimals(
async def test_notify_position_opened_formats_decimals(
httpx_mock: HTTPXMock,
) -> None:
httpx_mock.add_response(
url="http://mcp-telegram:9017/tools/notify_position_opened",
json={"ok": True},
)
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_position_opened(
instrument="ETH-15MAY26-2475-P",
side="SELL",
@@ -64,59 +98,139 @@ async def test_notify_position_opened_serialises_decimals(
greeks={"delta": Decimal("-0.04"), "vega": Decimal("0.20")},
expected_pnl_usd=Decimal("45.00"),
)
body = _request_body(httpx_mock)
assert body["instrument"] == "ETH-15MAY26-2475-P"
assert body["greeks"] == {"delta": -0.04, "vega": 0.20}
assert body["expected_pnl"] == 45.0
assert body["size"] == 2.0
text = _request_body(httpx_mock)["text"]
assert "POSITION OPENED" in text
assert "ETH-15MAY26-2475-P" in text
assert "SELL" in text and "size: 2" in text and "bull_put" in text
assert "delta=-0.0400" in text and "vega=+0.2000" in text
assert "$+45.00" in text
@pytest.mark.asyncio
async def test_notify_position_opened_without_greeks(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_position_opened(
instrument="BTC-PERPETUAL", side="BUY", size=1, strategy="hedge"
)
text = _request_body(httpx_mock)["text"]
assert "greeks" not in text
assert "expected pnl" not in text
@pytest.mark.asyncio
async def test_notify_position_closed(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={"ok": True})
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_position_closed(
instrument="ETH-15MAY26-2475-P_2350-P",
realized_pnl_usd=Decimal("32.50"),
reason="CLOSE_PROFIT",
)
body = _request_body(httpx_mock)
assert body == {
"instrument": "ETH-15MAY26-2475-P_2350-P",
"realized_pnl": 32.5,
"reason": "CLOSE_PROFIT",
}
text = _request_body(httpx_mock)["text"]
assert "POSITION CLOSED" in text
assert "ETH-15MAY26-2475-P_2350-P" in text
assert "$+32.50" in text
assert "CLOSE_PROFIT" in text
@pytest.mark.asyncio
async def test_notify_position_closed_negative_pnl(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_position_closed(
instrument="X", realized_pnl_usd=Decimal("-12.5"), reason="STOP"
)
text = _request_body(httpx_mock)["text"]
assert "$-12.50" in text
@pytest.mark.asyncio
async def test_notify_alert(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={"ok": True})
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_alert(
source="kill_switch", message="armed manually", priority="critical"
)
body = _request_body(httpx_mock)
assert body == {
"source": "kill_switch",
"message": "armed manually",
"priority": "critical",
}
text = _request_body(httpx_mock)["text"]
assert "ALERT [CRITICAL]" in text
assert "kill_switch" in text and "armed manually" in text
@pytest.mark.asyncio
async def test_notify_system_error(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={"ok": True})
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_system_error(
message="deribit feed anomaly",
component="clients.deribit",
message="deribit feed anomaly", component="clients.deribit"
)
body = _request_body(httpx_mock)
assert body["message"] == "deribit feed anomaly"
assert body["component"] == "clients.deribit"
assert body["priority"] == "critical"
text = _request_body(httpx_mock)["text"]
assert "SYSTEM ERROR [CRITICAL]" in text
assert "deribit feed anomaly" in text
assert "clients.deribit" in text
def test_telegram_client_rejects_wrong_service() -> None:
bad = HttpToolClient(
service="macro", base_url="http://x:1", token="t", retry_max=1
@pytest.mark.asyncio
async def test_notify_system_error_without_component(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_system_error(message="boom")
text = _request_body(httpx_mock)["text"]
assert "component" not in text
# ---------------------------------------------------------------------------
# error paths
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_http_non_200_raises(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url=SEND_URL, status_code=500, text="upstream")
with pytest.raises(TelegramError, match="HTTP 500"):
await _client().notify("x")
@pytest.mark.asyncio
async def test_api_ok_false_raises(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url=SEND_URL, json={"ok": False, "description": "chat not found"}
)
with pytest.raises(ValueError, match="requires service 'telegram'"):
TelegramClient(bad)
with pytest.raises(TelegramError, match="chat not found"):
await _client().notify("x")
# ---------------------------------------------------------------------------
# shared httpx client
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_uses_shared_http_client(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
shared = httpx.AsyncClient()
try:
c = _client(http_client=shared)
await c.notify("x")
finally:
await shared.aclose()
assert len(httpx_mock.get_requests()) == 1
# ---------------------------------------------------------------------------
# env-var loader
# ---------------------------------------------------------------------------
def test_load_credentials_returns_none_when_unset() -> None:
assert load_telegram_credentials(env={}) == (None, None)
def test_load_credentials_strips_whitespace() -> None:
env = {
"CERBERO_BITE_TELEGRAM_BOT_TOKEN": " abc ",
"CERBERO_BITE_TELEGRAM_CHAT_ID": " -100 ",
}
assert load_telegram_credentials(env=env) == ("abc", "-100")
def test_load_credentials_treats_empty_as_none() -> None:
env = {
"CERBERO_BITE_TELEGRAM_BOT_TOKEN": "",
"CERBERO_BITE_TELEGRAM_CHAT_ID": " ",
}
assert load_telegram_credentials(env=env) == (None, None)
+143
View File
@@ -329,3 +329,146 @@ def test_build_bear_call_breakeven_above_short_strike(
# breakeven = 3525 + 15 = 3540
assert proposal.breakeven == Decimal("3540")
assert proposal.spread_type == "bear_call"
# ---------------------------------------------------------------------------
# §3.2 (A): dynamic delta target by DVOL regime
# ---------------------------------------------------------------------------
def _cfg_with_delta_bands(cfg: StrategyConfig) -> StrategyConfig:
"""Profilo con step-function delta su DVOL.
Vol bassa (50) delta 0.15 (più premio), vol media (70)
0.12 (default), vol alta (90) 0.10 (più safety distance).
"""
from cerbero_bite.config.schema import (
DeltaByDvolBand,
ShortStrikeSpec,
StructureConfig,
)
bands = [
DeltaByDvolBand(
dvol_under=Decimal("50"),
delta_target=Decimal("0.15"),
delta_min=Decimal("0.13"),
delta_max=Decimal("0.17"),
),
DeltaByDvolBand(
dvol_under=Decimal("70"),
delta_target=Decimal("0.12"),
delta_min=Decimal("0.10"),
delta_max=Decimal("0.15"),
),
DeltaByDvolBand(
dvol_under=Decimal("90"),
delta_target=Decimal("0.10"),
delta_min=Decimal("0.08"),
delta_max=Decimal("0.12"),
),
]
new_short = ShortStrikeSpec(
**{**cfg.structure.short_strike.model_dump(), "delta_by_dvol": bands}
)
return cfg.model_copy(
update={
"structure": StructureConfig(
**{**cfg.structure.model_dump(exclude={"short_strike"}),
"short_strike": new_short}
)
}
)
def _bull_put_chain_wide(now_dt: datetime) -> list[OptionQuote]:
"""Chain con shorts e longs per delta 0.10, 0.12, 0.15.
I mid sono tarati per superare il credit/width 30% per ogni
accoppiamento shortlong testato (vedi commento §3.4).
"""
return [
# Shorts at delta 0.10 / 0.12 / 0.15 in the [15-25%] OTM range.
_quote(strike="2535", delta="-0.15", mid="0.026", now_dt=now_dt),
_quote(strike="2475", delta="-0.12", mid="0.020", now_dt=now_dt),
_quote(strike="2400", delta="-0.10", mid="0.015", now_dt=now_dt),
# Long candidates ~4% below each short.
_quote(strike="2415", delta="-0.10", mid="0.012", now_dt=now_dt),
_quote(strike="2355", delta="-0.08", mid="0.006", now_dt=now_dt),
_quote(strike="2280", delta="-0.06", mid="0.002", now_dt=now_dt),
]
def test_dynamic_delta_low_dvol_picks_higher_delta(
cfg: StrategyConfig, now: datetime
) -> None:
"""DVOL=40 → banda con delta_target=0.15."""
cfg_dyn = _cfg_with_delta_bands(cfg)
chain = _bull_put_chain_wide(now)
res = select_strikes(
chain=chain,
bias="bull_put",
spot=Decimal("3000"),
now=now,
cfg=cfg_dyn,
dvol_now=Decimal("40"),
)
assert res is not None
short, _ = res
assert short.delta == Decimal("-0.15")
def test_dynamic_delta_mid_dvol_picks_default_delta(
cfg: StrategyConfig, now: datetime
) -> None:
"""DVOL=60 → banda con delta_target=0.12."""
cfg_dyn = _cfg_with_delta_bands(cfg)
chain = _bull_put_chain_wide(now)
res = select_strikes(
chain=chain,
bias="bull_put",
spot=Decimal("3000"),
now=now,
cfg=cfg_dyn,
dvol_now=Decimal("60"),
)
assert res is not None
short, _ = res
assert short.delta == Decimal("-0.12")
def test_dynamic_delta_high_dvol_picks_lower_delta(
cfg: StrategyConfig, now: datetime
) -> None:
"""DVOL=85 → banda con delta_target=0.10 (più safety distance)."""
cfg_dyn = _cfg_with_delta_bands(cfg)
chain = _bull_put_chain_wide(now)
res = select_strikes(
chain=chain,
bias="bull_put",
spot=Decimal("3000"),
now=now,
cfg=cfg_dyn,
dvol_now=Decimal("85"),
)
assert res is not None
short, _ = res
assert short.delta == Decimal("-0.10")
def test_dynamic_delta_disabled_default_uses_static_delta(
cfg: StrategyConfig, now: datetime
) -> None:
"""delta_by_dvol vuoto (default) → comportamento invariato."""
chain = _bull_put_chain_wide(now)
res = select_strikes(
chain=chain,
bias="bull_put",
spot=Decimal("3000"),
now=now,
cfg=cfg, # golden config: delta_by_dvol=[]
dvol_now=Decimal("40"),
)
assert res is not None
short, _ = res
# Static delta target = 0.12, so the -0.12 strike is returned.
assert short.delta == Decimal("-0.12")
+1 -1
View File
@@ -68,7 +68,7 @@ def test_compute_hash_is_independent_of_recorded_hash_value(tmp_path: Path) -> N
def test_load_repo_strategy_yaml(tmp_path: Path) -> None:
"""The committed strategy.yaml validates with the recorded hash."""
result = load_strategy(REPO_ROOT / "strategy.yaml")
assert result.config.config_version == "1.0.0"
assert result.config.config_version == "1.2.0"
assert result.config.sizing.kelly_fraction == Decimal("0.13")
assert result.computed_hash == result.config.config_hash
+88
View File
@@ -271,3 +271,91 @@ def test_iron_condor_adverse_move_either_direction(cfg: StrategyConfig) -> None:
)
res = evaluate(snap, cfg)
assert res.action == "CLOSE_AVERSE"
# ---------------------------------------------------------------------------
# §7-bis (D): vol-collapse harvest
# ---------------------------------------------------------------------------
def _harvest_cfg(
cfg: StrategyConfig, *, threshold: str = "15"
) -> StrategyConfig:
"""Clona la golden config con la soglia di vol-harvest abilitata."""
from cerbero_bite.config import ExitConfig
return cfg.model_copy(
update={
"exit": ExitConfig(
**{
**cfg.exit.model_dump(),
"vol_harvest_dvol_decrease": Decimal(threshold),
}
)
}
)
def test_vol_harvest_disabled_by_default_does_not_fire(cfg: StrategyConfig) -> None:
# Default: vol_harvest_dvol_decrease = 0 ⇒ filter disabled.
snap = _snapshot(
credit_received_eth="0.030",
mark_combo_now_eth="0.022", # in profit (debit < credit)
dvol_at_entry="60",
dvol_now="40", # crollato di 20 punti
)
res = evaluate(snap, cfg)
assert res.action == "HOLD"
def test_vol_harvest_fires_when_dvol_collapsed_in_profit(
cfg: StrategyConfig,
) -> None:
harvest = _harvest_cfg(cfg, threshold="15")
snap = _snapshot(
credit_received_eth="0.030",
mark_combo_now_eth="0.022", # in profit ma sopra profit_take 50%
dvol_at_entry="60",
dvol_now="42", # 18, supera la soglia 15
)
res = evaluate(snap, harvest)
assert res.action == "CLOSE_VOL_HARVEST"
assert "harvest" in res.reason
def test_vol_harvest_does_not_fire_when_in_loss(cfg: StrategyConfig) -> None:
# Even if DVOL collapses, we do not want to harvest while in a loss:
# this is a "leave with the profit in hand" feature, not a panic exit.
harvest = _harvest_cfg(cfg, threshold="15")
snap = _snapshot(
credit_received_eth="0.030",
mark_combo_now_eth="0.040", # debit > credit ⇒ in perdita
dvol_at_entry="60",
dvol_now="42",
)
res = evaluate(snap, harvest)
assert res.action != "CLOSE_VOL_HARVEST"
def test_vol_harvest_does_not_fire_below_threshold(cfg: StrategyConfig) -> None:
harvest = _harvest_cfg(cfg, threshold="15")
snap = _snapshot(
credit_received_eth="0.030",
mark_combo_now_eth="0.022",
dvol_at_entry="60",
dvol_now="50", # 10, sotto la soglia 15
)
res = evaluate(snap, harvest)
assert res.action == "HOLD"
def test_profit_take_wins_over_vol_harvest(cfg: StrategyConfig) -> None:
# When profit-take is already hit, we do not go through vol-harvest.
harvest = _harvest_cfg(cfg, threshold="15")
snap = _snapshot(
credit_received_eth="0.030",
mark_combo_now_eth="0.014", # ≤ 50% credit ⇒ profit-take
dvol_at_entry="60",
dvol_now="42",
)
res = evaluate(snap, harvest)
assert res.action == "CLOSE_PROFIT"
+99
View File
@@ -0,0 +1,99 @@
"""Tests for the GUI live-balances fetcher (soft-error handling)."""
from __future__ import annotations
from decimal import Decimal
from typing import Any
import pytest
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.gui.live_data import _fetch_deribit_currency
class _FakeDeribit:
def __init__(self, payload: dict[str, Any] | Exception) -> None:
self._payload = payload
async def get_account_summary(self, currency: str) -> dict[str, Any]:
del currency # not used by the fake; kept for signature parity
if isinstance(self._payload, Exception):
raise self._payload
return self._payload
@pytest.mark.asyncio
async def test_soft_error_payload_becomes_row_error() -> None:
"""MCP V2 returns 200 + ``error`` field when upstream auth fails."""
fake = _FakeDeribit(
{
"equity": 0,
"balance": 0,
"available_funds": 0,
"unrealized_pnl": 0,
"error": "Deribit auth failed (code=13004): invalid_credentials",
}
)
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.exchange == "deribit"
assert row.currency == "USDC"
assert row.equity is None
assert row.available is None
assert row.unrealized_pnl is None
assert row.error is not None
assert "invalid_credentials" in row.error
@pytest.mark.asyncio
async def test_clean_payload_populates_balance_fields() -> None:
fake = _FakeDeribit(
{
"equity": "12.5",
"available_funds": "10.0",
"unrealized_pnl": "-0.25",
}
)
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.error is None
assert row.equity == Decimal("12.5")
assert row.available == Decimal("10.0")
assert row.unrealized_pnl == Decimal("-0.25")
@pytest.mark.asyncio
async def test_exception_becomes_row_error() -> None:
fake = _FakeDeribit(RuntimeError("boom"))
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.equity is None
assert row.error is not None
assert "RuntimeError" in row.error
assert "boom" in row.error
@pytest.mark.asyncio
async def test_blank_error_field_is_ignored() -> None:
"""An ``error`` field that is empty/None must not trigger the soft-error path."""
fake = _FakeDeribit(
{"equity": "1.0", "available_funds": "1.0", "unrealized_pnl": "0.0", "error": None}
)
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.error is None
assert row.equity == Decimal("1.0")
# Sanity-check: the production class signature is what we expect to be drop-in
# replaceable by ``_FakeDeribit``.
def test_fake_matches_production_signature() -> None:
assert hasattr(DeribitClient, "get_account_summary")
+205
@@ -0,0 +1,205 @@
"""Tests for runtime.manual_actions_consumer."""
from __future__ import annotations
import json
from datetime import UTC, datetime
from pathlib import Path
from unittest.mock import MagicMock
import pytest
from cerbero_bite.runtime.manual_actions_consumer import consume_manual_actions
from cerbero_bite.safety.audit_log import AuditLog
from cerbero_bite.safety.kill_switch import KillSwitch, KillSwitchError
from cerbero_bite.state import Repository, connect, run_migrations, transaction
from cerbero_bite.state.models import ManualAction
def _now() -> datetime:
return datetime(2026, 4, 30, 12, 0, tzinfo=UTC)
def _ctx(tmp_path: Path):
db_path = tmp_path / "state.sqlite"
audit_path = tmp_path / "audit.log"
repo = Repository()
conn = connect(db_path)
run_migrations(conn)
with transaction(conn):
repo.init_system_state(conn, config_version="1.0.0", now=_now())
conn.close()
audit = AuditLog(audit_path)
ks = KillSwitch(
connection_factory=lambda: connect(db_path),
repository=repo,
audit_log=audit,
clock=_now,
)
ctx = MagicMock()
ctx.db_path = db_path
ctx.repository = repo
ctx.kill_switch = ks
ctx.audit_log = audit
return ctx
def _enqueue(ctx, kind: str, payload: dict[str, object]) -> int:
conn = connect(ctx.db_path)
try:
with transaction(conn):
return ctx.repository.enqueue_manual_action(
conn,
ManualAction(
kind=kind, # type: ignore[arg-type]
payload_json=json.dumps(payload),
created_at=_now(),
),
)
finally:
conn.close()
def _fetch_action(ctx, action_id: int):
conn = connect(ctx.db_path)
try:
row = conn.execute(
"SELECT consumed_at, consumed_by, result FROM manual_actions WHERE id = ?",
(action_id,),
).fetchone()
finally:
conn.close()
return row
@pytest.mark.asyncio
async def test_arm_kill_arms_kill_switch(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
aid = _enqueue(ctx, "arm_kill", {"reason": "GUI typed yes"})
assert ctx.kill_switch.is_armed() is False
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
assert ctx.kill_switch.is_armed() is True
row = _fetch_action(ctx, aid)
assert row["consumed_by"] == "engine"
assert row["result"] == "ok"
assert row["consumed_at"] is not None
@pytest.mark.asyncio
async def test_disarm_kill_disarms_kill_switch(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
ctx.kill_switch.arm(reason="prior", source="manual")
assert ctx.kill_switch.is_armed() is True
aid = _enqueue(ctx, "disarm_kill", {"reason": "operator override"})
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
assert ctx.kill_switch.is_armed() is False
row = _fetch_action(ctx, aid)
assert row["result"] == "ok"
@pytest.mark.asyncio
async def test_consumer_drains_queue(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
_enqueue(ctx, "arm_kill", {"reason": "first"})
_enqueue(ctx, "disarm_kill", {"reason": "second"})
_enqueue(ctx, "arm_kill", {"reason": "third"})
n = await consume_manual_actions(ctx, now=_now())
assert n == 3
assert ctx.kill_switch.is_armed() is True
@pytest.mark.asyncio
async def test_unsupported_kind_marked_not_supported(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
aid = _enqueue(ctx, "force_close", {"proposal_id": "abc"})
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
row = _fetch_action(ctx, aid)
assert row["result"] == "not_supported"
@pytest.mark.asyncio
async def test_missing_payload_uses_default_reason(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
_enqueue(ctx, "arm_kill", {})
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
assert ctx.kill_switch.is_armed() is True
@pytest.mark.asyncio
async def test_kill_switch_error_caught_and_recorded(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
# Replace the kill switch with one whose arm raises.
bad_ks = MagicMock()
bad_ks.arm.side_effect = KillSwitchError("simulated")
bad_ks.is_armed.return_value = False
ctx.kill_switch = bad_ks
aid = _enqueue(ctx, "arm_kill", {"reason": "x"})
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
row = _fetch_action(ctx, aid)
assert "KillSwitchError" in (row["result"] or "")
@pytest.mark.asyncio
async def test_empty_queue_returns_zero(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
n = await consume_manual_actions(ctx, now=_now())
assert n == 0
@pytest.mark.asyncio
async def test_run_cycle_dispatches_to_runner(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
calls: list[str] = []
async def _entry() -> None:
calls.append("entry")
aid = _enqueue(ctx, "run_cycle", {"cycle": "entry"})
n = await consume_manual_actions(
ctx, cycle_runners={"entry": _entry}, now=_now()
)
assert n == 1
assert calls == ["entry"]
row = _fetch_action(ctx, aid)
assert row["result"] == "ok: ran entry"
@pytest.mark.asyncio
async def test_run_cycle_unknown_marked_error(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
async def _entry() -> None:
raise AssertionError("should not run")
aid = _enqueue(ctx, "run_cycle", {"cycle": "monitor"})
n = await consume_manual_actions(
ctx, cycle_runners={"entry": _entry}, now=_now()
)
assert n == 1
row = _fetch_action(ctx, aid)
assert "unknown cycle" in (row["result"] or "")
@pytest.mark.asyncio
async def test_run_cycle_without_runners_marks_not_supported(
tmp_path: Path,
) -> None:
ctx = _ctx(tmp_path)
aid = _enqueue(ctx, "run_cycle", {"cycle": "entry"})
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
row = _fetch_action(ctx, aid)
assert row["result"] == "not_supported"
+166
@@ -0,0 +1,166 @@
"""Tests for runtime.market_snapshot_cycle (best-effort collector)."""
from __future__ import annotations
import json
from datetime import UTC, datetime
from decimal import Decimal
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock
import pytest
from cerbero_bite.clients._exceptions import McpDataAnomalyError
from cerbero_bite.clients.deribit import DealerGammaSnapshot
from cerbero_bite.clients.sentiment import LiquidationHeatmap
from cerbero_bite.config import golden_config
from cerbero_bite.runtime.market_snapshot_cycle import collect_market_snapshot
from cerbero_bite.state import Repository, connect, run_migrations, transaction
def _now() -> datetime:
return datetime(2026, 4, 30, 12, 0, tzinfo=UTC)
def _ctx(tmp_path: Path) -> MagicMock:
db_path = tmp_path / "state.sqlite"
repo = Repository()
conn = connect(db_path)
run_migrations(conn)
with transaction(conn):
repo.init_system_state(conn, config_version="1.0.0", now=_now())
conn.close()
ctx = MagicMock()
ctx.db_path = db_path
ctx.repository = repo
ctx.cfg = golden_config()
# Default: every feed succeeds with sane mock values.
ctx.deribit = MagicMock()
ctx.deribit.spot_perp_price = AsyncMock(return_value=Decimal("3000"))
ctx.deribit.latest_dvol = AsyncMock(return_value=Decimal("55"))
ctx.deribit.realized_vol = AsyncMock(
return_value={
"rv_14d": Decimal("28"),
"rv_30d": Decimal("35"),
"iv_minus_rv_30d": Decimal("20"),
}
)
ctx.deribit.dealer_gamma_profile = AsyncMock(
return_value=DealerGammaSnapshot(
spot_price=Decimal("3000"),
total_net_dealer_gamma=Decimal("-66000000"),
gamma_flip_level=Decimal("2900"),
strikes_analyzed=42,
)
)
ctx.hyperliquid = MagicMock()
ctx.hyperliquid.funding_rate_annualized = AsyncMock(
return_value=Decimal("0.45")
)
ctx.sentiment = MagicMock()
ctx.sentiment.funding_cross_median_annualized = AsyncMock(
return_value=Decimal("0.30")
)
ctx.sentiment.liquidation_heatmap = AsyncMock(
return_value=LiquidationHeatmap(
asset="ETH",
avg_funding_rate=Decimal("0.0003"),
oi_delta_pct_4h=Decimal("1.2"),
oi_delta_pct_24h=None,
long_squeeze_risk="low",
short_squeeze_risk="low",
)
)
ctx.macro = MagicMock()
ctx.macro.next_high_severity_within = AsyncMock(return_value=3)
return ctx
def _read_snapshots(ctx: MagicMock, asset: str) -> list[dict]:
import sqlite3
conn = connect(ctx.db_path)
conn.row_factory = sqlite3.Row
try:
rows = conn.execute(
"SELECT * FROM market_snapshots WHERE asset = ? ORDER BY timestamp",
(asset,),
).fetchall()
finally:
conn.close()
return [dict(r) for r in rows]
@pytest.mark.asyncio
async def test_happy_path_persists_one_row_per_asset(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
n = await collect_market_snapshot(ctx, assets=("ETH", "BTC"), now=_now())
assert n == 2
eth_rows = _read_snapshots(ctx, "ETH")
btc_rows = _read_snapshots(ctx, "BTC")
assert len(eth_rows) == 1
assert len(btc_rows) == 1
eth = eth_rows[0]
assert eth["fetch_ok"] == 1
assert eth["fetch_errors_json"] is None
assert Decimal(str(eth["spot"])) == Decimal("3000")
assert Decimal(str(eth["dealer_net_gamma"])) == Decimal("-66000000")
assert eth["macro_days_to_event"] == 3
@pytest.mark.asyncio
async def test_failure_in_one_metric_keeps_row_with_error(
tmp_path: Path,
) -> None:
ctx = _ctx(tmp_path)
ctx.deribit.dealer_gamma_profile = AsyncMock(
side_effect=McpDataAnomalyError(
"boom", service="deribit", tool="get_dealer_gamma_profile"
)
)
n = await collect_market_snapshot(ctx, assets=("ETH",), now=_now())
assert n == 1
rows = _read_snapshots(ctx, "ETH")
assert len(rows) == 1
assert rows[0]["fetch_ok"] == 0
errors = json.loads(rows[0]["fetch_errors_json"])
assert "dealer_gamma" in errors
assert rows[0]["dealer_net_gamma"] is None
# Other metrics still populated.
assert Decimal(str(rows[0]["spot"])) == Decimal("3000")
@pytest.mark.asyncio
async def test_btc_uses_btc_in_calls(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
await collect_market_snapshot(ctx, assets=("BTC",), now=_now())
ctx.deribit.spot_perp_price.assert_awaited_with("BTC")
ctx.hyperliquid.funding_rate_annualized.assert_awaited_with("BTC")
ctx.sentiment.liquidation_heatmap.assert_awaited_with("BTC")
@pytest.mark.asyncio
async def test_macro_failure_only_nulls_macro(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
ctx.macro.next_high_severity_within = AsyncMock(
side_effect=RuntimeError("calendar down")
)
await collect_market_snapshot(ctx, assets=("ETH",), now=_now())
rows = _read_snapshots(ctx, "ETH")
assert rows[0]["macro_days_to_event"] is None
assert rows[0]["fetch_ok"] == 0
errors = json.loads(rows[0]["fetch_errors_json"])
assert "macro" in errors
@pytest.mark.asyncio
async def test_returns_zero_for_empty_assets(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
n = await collect_market_snapshot(ctx, assets=(), now=_now())
assert n == 0
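The best-effort pattern the collector tests rely on — each metric fetched independently, a single failure nulling only its own column while being recorded in the error map — can be sketched as a plain dict pipeline. `collect_best_effort` and its `fetchers` mapping are illustrative; the real collector awaits MCP clients and writes a `market_snapshots` row.

```python
def collect_best_effort(fetchers: dict):
    """Run each fetcher; soft-fail per metric instead of aborting the row."""
    row: dict = {}
    errors: dict[str, str] = {}
    for name, fn in fetchers.items():
        try:
            row[name] = fn()
        except Exception as exc:  # any feed failure is soft
            row[name] = None
            errors[name] = f"{type(exc).__name__}: {exc}"
    row["fetch_ok"] = not errors          # True only when every feed succeeded
    row["fetch_errors"] = errors or None  # None mirrors fetch_errors_json
    return row
```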
+59 -20
@@ -1,14 +1,14 @@
"""Tests for the MCP endpoint and token resolver."""
"""Tests for the MCP endpoint, token and bot-tag resolver."""
from __future__ import annotations
from pathlib import Path
import pytest
from cerbero_bite.config.mcp_endpoints import (
DEFAULT_BOT_TAG,
DEFAULT_ENDPOINTS,
MCP_SERVICES,
load_bot_tag,
load_endpoints,
load_token,
)
@@ -16,7 +16,7 @@ from cerbero_bite.config.mcp_endpoints import (
def test_defaults_match_known_docker_dns() -> None:
assert DEFAULT_ENDPOINTS["deribit"] == "http://mcp-deribit:9011"
assert DEFAULT_ENDPOINTS["telegram"] == "http://mcp-telegram:9017"
assert DEFAULT_ENDPOINTS["sentiment"] == "http://mcp-sentiment:9014"
def test_load_endpoints_uses_defaults_when_env_empty() -> None:
@@ -46,31 +46,70 @@ def test_for_service_unknown_raises_key_error() -> None:
endpoints.for_service("nope")
def test_load_token_uses_explicit_path(tmp_path: Path) -> None:
target = tmp_path / "core.token"
target.write_text("abcdef\n", encoding="utf-8")
assert load_token(path=target) == "abcdef"
def test_load_token_uses_explicit_value() -> None:
assert load_token(value="abcdef") == "abcdef"
def test_load_token_uses_env_var(tmp_path: Path) -> None:
target = tmp_path / "core.token"
target.write_text("xyz", encoding="utf-8")
token = load_token(env={"CERBERO_BITE_CORE_TOKEN_FILE": str(target)})
def test_load_token_strips_whitespace_in_explicit_value() -> None:
assert load_token(value=" abcdef\n") == "abcdef"
def test_load_token_uses_env_var() -> None:
token = load_token(env={"CERBERO_BITE_MCP_TOKEN": "xyz"})
assert token == "xyz"
def test_load_token_raises_when_file_missing(tmp_path: Path) -> None:
with pytest.raises(FileNotFoundError):
load_token(path=tmp_path / "missing")
def test_load_token_strips_whitespace_in_env_var() -> None:
token = load_token(env={"CERBERO_BITE_MCP_TOKEN": " xyz\n"})
assert token == "xyz"
def test_load_token_raises_when_file_empty(tmp_path: Path) -> None:
target = tmp_path / "empty"
target.write_text("", encoding="utf-8")
def test_load_token_raises_when_missing() -> None:
with pytest.raises(ValueError, match="CERBERO_BITE_MCP_TOKEN"):
load_token(env={})
def test_load_token_raises_when_empty() -> None:
with pytest.raises(ValueError, match="CERBERO_BITE_MCP_TOKEN"):
load_token(env={"CERBERO_BITE_MCP_TOKEN": " "})
def test_load_token_raises_when_explicit_value_blank() -> None:
with pytest.raises(ValueError, match="empty"):
load_token(path=target)
load_token(value=" ")
def test_load_bot_tag_default_when_unset() -> None:
assert load_bot_tag(env={}) == DEFAULT_BOT_TAG
def test_load_bot_tag_explicit_value_overrides_env() -> None:
tag = load_bot_tag(value="BOT__CUSTOM", env={"CERBERO_BITE_MCP_BOT_TAG": "x"})
assert tag == "BOT__CUSTOM"
def test_load_bot_tag_uses_env_when_set() -> None:
tag = load_bot_tag(env={"CERBERO_BITE_MCP_BOT_TAG": "BOT__SHADOW"})
assert tag == "BOT__SHADOW"
def test_load_bot_tag_strips_whitespace() -> None:
tag = load_bot_tag(env={"CERBERO_BITE_MCP_BOT_TAG": " BOT__X\n"})
assert tag == "BOT__X"
def test_load_bot_tag_falls_back_to_default_when_blank_env() -> None:
tag = load_bot_tag(env={"CERBERO_BITE_MCP_BOT_TAG": " "})
assert tag == DEFAULT_BOT_TAG
def test_load_bot_tag_rejects_too_long() -> None:
with pytest.raises(ValueError, match="exceeds 64"):
load_bot_tag(value="x" * 65)
def test_mcp_services_table_is_complete() -> None:
expected = {"deribit", "hyperliquid", "macro", "sentiment", "telegram", "portfolio"}
# Telegram and Portfolio are now in-process and must NOT be listed
# as shared MCP services.
expected = {"deribit", "hyperliquid", "macro", "sentiment"}
assert set(MCP_SERVICES) == expected
+6 -2
@@ -5,6 +5,7 @@ from __future__ import annotations

 from datetime import UTC, datetime
 from pathlib import Path

+from cerbero_bite.clients.portfolio import PortfolioClient
 from cerbero_bite.config import golden_config
 from cerbero_bite.config.mcp_endpoints import load_endpoints
 from cerbero_bite.runtime import build_runtime
@@ -51,5 +52,8 @@ def test_build_runtime_clients_pinned_to_endpoints(tmp_path: Path) -> None:
     assert ctx.macro.SERVICE == "macro"
     assert ctx.sentiment.SERVICE == "sentiment"
     assert ctx.hyperliquid.SERVICE == "hyperliquid"
-    assert ctx.portfolio.SERVICE == "portfolio"
-    assert ctx.telegram.SERVICE == "telegram"
+    # Portfolio is now an in-process aggregator over deribit/hyperliquid/macro;
+    # it has no SERVICE attribute. Telegram is also in-process and disabled
+    # when env vars are unset.
+    assert isinstance(ctx.portfolio, PortfolioClient)
+    assert ctx.telegram.enabled is False
+63
@@ -0,0 +1,63 @@
"""Tests for the runtime flag loader."""
from __future__ import annotations
import pytest
from cerbero_bite.config.runtime_flags import (
DATA_ANALYSIS_ENV,
STRATEGY_ENV,
RuntimeFlags,
load_runtime_flags,
)
def test_default_profile_is_analysis_only() -> None:
flags = load_runtime_flags(env={})
assert flags == RuntimeFlags(
data_analysis_enabled=True, strategy_enabled=False
)
def test_strategy_can_be_explicitly_enabled() -> None:
flags = load_runtime_flags(env={STRATEGY_ENV: "true"})
assert flags.strategy_enabled is True
assert flags.data_analysis_enabled is True
def test_data_analysis_can_be_disabled() -> None:
flags = load_runtime_flags(env={DATA_ANALYSIS_ENV: "false"})
assert flags.data_analysis_enabled is False
assert flags.strategy_enabled is False
@pytest.mark.parametrize(
"raw,expected",
[
("1", True),
("0", False),
("yes", True),
("no", False),
("on", True),
("OFF", False),
("ENABLED", True),
("Disabled", False),
("True", True),
("False", False),
(" true ", True),
],
)
def test_parses_common_truthy_falsy_tokens(raw: str, expected: bool) -> None:
flags = load_runtime_flags(env={STRATEGY_ENV: raw})
assert flags.strategy_enabled is expected
def test_blank_value_falls_back_to_default() -> None:
flags = load_runtime_flags(env={DATA_ANALYSIS_ENV: " ", STRATEGY_ENV: ""})
assert flags.data_analysis_enabled is True
assert flags.strategy_enabled is False
def test_unknown_token_raises() -> None:
with pytest.raises(ValueError, match=DATA_ANALYSIS_ENV):
load_runtime_flags(env={DATA_ANALYSIS_ENV: "maybe"})
+2
@@ -111,6 +111,7 @@ dependencies = [
     { name = "pydantic" },
     { name = "pydantic-settings" },
     { name = "python-dateutil" },
+    { name = "python-dotenv" },
     { name = "pyyaml" },
     { name = "rich" },
     { name = "sqlalchemy" },
@@ -161,6 +162,7 @@ requires-dist = [
     { name = "pydantic", specifier = ">=2.9" },
     { name = "pydantic-settings", specifier = ">=2.5" },
     { name = "python-dateutil", specifier = ">=2.9" },
+    { name = "python-dotenv", specifier = ">=1.2.2" },
     { name = "pyyaml", specifier = ">=6.0" },
     { name = "rich", specifier = ">=13.9" },
     { name = "scipy", marker = "extra == 'backtest'", specifier = ">=1.14" },