9 Commits

Author SHA1 Message Date
Adriano ce158a92dd feat(mcp+runtime): align with Cerbero MCP V2 and operational flags
Adapts Cerbero Bite to the new 2.0.0 version of the unified MCP server
(per-token testnet/mainnet routing, mandatory X-Bot-Tag header) and
introduces two independent operational switches that separate data
collection from strategy execution.

Auth and MCP connection
- Bearer token read from the new CERBERO_BITE_MCP_TOKEN variable; its
  value selects the upstream environment (testnet vs mainnet) on the
  server. Removed the file-based loading (`secrets/core.token`,
  CERBERO_BITE_CORE_TOKEN_FILE, Docker secret /run/secrets/core_token).
- Added the X-Bot-Tag header (default `BOT__CERBERO_BITE`, override via
  CERBERO_BITE_MCP_BOT_TAG) on every MCP call, with client-side
  validation (non-blank, ≤ 64 characters); a validation sketch follows
  this list.
- `secrets/` folder removed, `.gitignore` cleaned up, Dockerfile and
  docker-compose.yml updated with env passthrough and a fail-fast when
  the token is missing.
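
The client-side X-Bot-Tag validation mentioned above is small enough to
sketch. A minimal illustration, assuming a standalone helper
(`validate_bot_tag` and `MAX_BOT_TAG_LEN` are hypothetical names; in the
real code the check lives inside `HttpToolClient`):

```python
MAX_BOT_TAG_LEN = 64  # hypothetical constant for the documented limit

def validate_bot_tag(bot_tag: str) -> str:
    """Reject blank or over-long X-Bot-Tag values, as described above."""
    tag = bot_tag.strip()
    if not tag:
        raise ValueError("X-Bot-Tag must not be blank")
    if len(tag) > MAX_BOT_TAG_LEN:
        raise ValueError(f"X-Bot-Tag exceeds {MAX_BOT_TAG_LEN} characters")
    return tag
```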

Operating mode (RuntimeFlags)
- New module `config/runtime_flags.py` with `RuntimeFlags(
  data_analysis_enabled, strategy_enabled)` and a loader that parses
  CERBERO_BITE_ENABLE_DATA_ANALYSIS and CERBERO_BITE_ENABLE_STRATEGY
  (true/false/yes/no/on/off/enabled/disabled, case-insensitive); a
  loader sketch follows this list.
- The orchestrator exposes the flags, audits and logs the mode at boot
  (`engine started: env=… data_analysis=… strategy=…`), and in
  `install_scheduler` excludes the `entry`/`monitor` jobs when strategy
  is off and the `market_snapshot` job when data analysis is off. The
  infrastructure jobs (health, backup, manual_actions) always stay
  active.
- Default profile = "data analysis only" (data_analysis=true,
  strategy=false), intended for the post-deploy soak window.
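
A minimal sketch of how such a loader can parse the two booleans
(assumed shape: `_parse_bool` and the exact token sets are illustrative;
the real module is `config/runtime_flags.py`):

```python
import os
from dataclasses import dataclass

_TRUTHY = {"1", "true", "yes", "on", "enabled"}
_FALSY = {"0", "false", "no", "off", "disabled"}

@dataclass(frozen=True)
class RuntimeFlags:
    data_analysis_enabled: bool
    strategy_enabled: bool

def _parse_bool(name: str, raw: str | None, default: bool) -> bool:
    # Unset or blank falls back to the default; anything else must be
    # one of the accepted case-insensitive tokens.
    if raw is None or not raw.strip():
        return default
    value = raw.strip().lower()
    if value in _TRUTHY:
        return True
    if value in _FALSY:
        return False
    raise ValueError(f"{name}: invalid boolean value {raw!r}")

def load_runtime_flags() -> RuntimeFlags:
    return RuntimeFlags(
        data_analysis_enabled=_parse_bool(
            "CERBERO_BITE_ENABLE_DATA_ANALYSIS",
            os.getenv("CERBERO_BITE_ENABLE_DATA_ANALYSIS"),
            default=True,
        ),
        strategy_enabled=_parse_bool(
            "CERBERO_BITE_ENABLE_STRATEGY",
            os.getenv("CERBERO_BITE_ENABLE_STRATEGY"),
            default=False,
        ),
    )
```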

Balances GUI
- `gui/live_data.py::_fetch_deribit_currency` recognizes the soft
  `error` field in the V2 payload (HTTP 200 with `error` filled in by
  the server when Deribit auth fails) and propagates it as
  `BalanceRow.error`, avoiding a misleading equity of 0.00.
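
The check amounts to something like this (a sketch under assumptions:
the `BalanceRow` fields and payload keys are inferred from the
description, not copied from `gui/live_data.py`):

```python
from dataclasses import dataclass

@dataclass
class BalanceRow:  # hypothetical minimal shape, for illustration only
    currency: str
    equity: float | None
    error: str | None

def interpret_balance_payload(payload: dict) -> BalanceRow:
    # V2 "soft error": HTTP 200 but the server filled in "error"
    # (e.g. Deribit auth failure). Surface it instead of equity 0.00.
    error = payload.get("error")
    if error:
        return BalanceRow(payload.get("currency", "?"), None, str(error))
    return BalanceRow(payload["currency"], float(payload["equity"]), None)
```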

CLI
- Replaced the `--token-file` option with `--token` (string) on the
  start/dry-run/ping commands; the default comes from the env. Calls to
  the orchestrator builder now also pass `bot_tag` and `flags`.

Documentation
- `docs/04-mcp-integration.md`: description of the new V2 auth flow
  (token = environment, X-Bot-Tag in the audit) and unified routers.
- `docs/06-operational-flow.md`: new "Operating mode" section with the
  three canonical profiles and a gating table for every job; added
  `market_snapshot` to the cron summary.
- `docs/10-config-spec.md`: new tabular "Environment variables" section
  with all the env vars, including the operational-flag booleans.
- `docs/02-architecture.md`: updated repo layout (`secrets/` removed,
  `runtime_flags.py` added), extended description of `config/`.

Tests
- 5 new tests on `_fetch_deribit_currency` (soft error, clean payload,
  exception, blank error, signature parity).
- 7 new tests on `load_runtime_flags` (defaults, overrides, truthy/falsy
  parsing, blank fallback, invalid value).
- 4 new tests on `HttpToolClient` (default and custom X-Bot-Tag, blank
  and over-long rejected).
- 3 new integration tests on the orchestrator (job gating driven by
  the flags).
- Existing token/CLI ping/orchestrator tests updated to the new scheme.
  Full suite: 404 passed, 1 skipped (sqlite3 CLI missing on the dev
  host).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 17:14:40 +02:00
Adriano d9454fc996 feat(state+runtime+gui): market_snapshots — threshold calibration from data
A dedicated data-collection system for choosing the filter thresholds
from real percentiles instead of gut-feel values.

New components:

* state/migrations/0003_market_snapshots.sql — table + index, composite
  PK (timestamp, asset). Every numeric column is NULL-able to preserve
  the continuity of the series when a single MCP fails.
* state/models.py — MarketSnapshotRecord Pydantic model.
* state/repository.py — record_market_snapshot, list_market_snapshots,
  _row_to_market_snapshot.
* runtime/market_snapshot_cycle.py — best-effort collector that calls
  spot/dvol/realized_vol/dealer_gamma/funding_perp/funding_cross/
  liquidation_heatmap/macro for each asset; it gathers the errors in
  fetch_errors_json and marks fetch_ok=false but persists the row
  anyway (a sketch of the pattern follows this list).
* clients/deribit.py — generalized dealer_gamma_profile(currency),
  realized_vol(currency), spot_perp_price(asset). dealer_gamma_profile_eth
  remains as an alias for the entry-cycle call.
* runtime/orchestrator.py — new APScheduler job `market_snapshot` on a
  */15 cron with configurable assets (default ETH+BTC); the
  manual_actions consumer now also dispatches kind=run_cycle
  cycle=market_snapshot for the GUI.
* gui/data_layer.py — load_market_snapshots; enqueue_run_cycle accepts
  market_snapshot; MarketSnapshotRecord type exposed.
* gui/pages/6_📐_Calibrazione.py — asset + window selection, fetch_ok
  count, and for each metric: histogram, threshold from strategy.yaml
  as a red vline, P5/P10/P25/P50/P75/P90/P95 percentiles, % of ticks
  the threshold would have filtered.
* gui/pages/1_📊_Status.py — "📐 Forza snapshot" button (4th in the
  "Forza ciclo" panel) to populate the table without waiting for the
  cron.
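
A best-effort collector of this kind wraps each fetch so that one
failure cannot abort the whole row. A minimal sketch (the `fetchers`
mapping and row keys are illustrative; the real implementation is
`runtime/market_snapshot_cycle.py`):

```python
import json
from typing import Any, Callable

def collect_snapshot(asset: str, fetchers: dict[str, Callable[[str], Any]]) -> dict:
    """Run every metric fetcher; record errors instead of failing the row."""
    row: dict[str, Any] = {"asset": asset}
    errors: dict[str, str] = {}
    for name, fetch in fetchers.items():
        try:
            row[name] = fetch(asset)
        except Exception as exc:  # best-effort: a failed MCP leaves a NULL
            row[name] = None
            errors[name] = f"{type(exc).__name__}: {exc}"
    row["fetch_ok"] = not errors
    row["fetch_errors_json"] = json.dumps(errors) if errors else None
    return row  # persisted even when fetch_ok is False
```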

5 new tests on the collector (happy path, fault tolerance, asset
switch, macro failure, empty assets); the test_orchestrator job set
updated. 368/368 tests pass; ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 14:39:09 +02:00
Adriano 63d1aa4262 feat(gui): Italian localization, Cerbero logo, live balances and "Forza ciclo" panel
* Italian localization of every page (Stato, Audit, Equity, Storico,
  Posizione) and of the home; relative dates ("5s fa", "12m fa").
* Cerbero logo (three-headed dog) in src/cerbero_bite/gui/assets/
  cerbero_logo.png — replaces the 🐺 emoji (a wolf, semantically
  wrong) both as favicon (`page_icon`) and in the sidebar and header.
* Automatic loading of `.env` from the CWD at CLI startup (skipped
  under pytest via PYTEST_CURRENT_TEST), so the 4 MCP URLs no longer
  have to be exported by hand; a sketch of the guard follows this list.
  Added python-dotenv as a dependency, `.env.example` committed as a
  template, `.env` stays git-ignored.
* Status page: new "Saldi exchange" panel that fetches live balances
  via the MCP gateway (Deribit USDC + USDT, Hyperliquid USDC + optional
  USDT spot) with a 60s TTL cache and a refresh button; summary tiles
  for total USD / EUR / FX rate.
* Status page: new "Forza ciclo" panel with three buttons
  (entry/monitor/health) that enqueue `run_cycle` actions into the
  manual_actions table; the engine's consumer — when running —
  dispatches to the matching `Orchestrator.run_*`.
* manual_actions: new `kind="run_cycle"` in the ManualAction schema;
  the consumer accepts a dict of cycle_runners that the orchestrator
  populates in install_scheduler. 3 new tests (entry dispatch, unknown
  cycle, fallback without a runner).
* gui/live_data.py — dedicated module for MCP fetches from the GUI
  (a controlled relaxation of the "no MCP from GUI" rule, for balances
  only, not for trading data).
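
The pytest guard can be as small as the following sketch (logic
inferred from the description above; `maybe_load_dotenv` is a
hypothetical name for the hook in the CLI entry point):

```python
import os
from dotenv import load_dotenv  # python-dotenv

def maybe_load_dotenv() -> None:
    # pytest sets PYTEST_CURRENT_TEST for the duration of each test;
    # skip loading so tests see a clean environment.
    if "PYTEST_CURRENT_TEST" in os.environ:
        return
    load_dotenv()  # reads .env from the current working directory
```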

363/363 tests pass; ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 14:11:40 +02:00
Adriano da88e7f746 docs: align 05/06/09/11 with implemented GUI Phases A–D
* docs/11-gui-streamlit.md — replaces the original spec with what was
  actually built: implementation status table, real page filenames
  (1_Status, 2_Audit, 3_Equity, 4_History, 5_Position), per-page
  inventory of implemented vs deferred sections, GUI ↔ engine table
  showing arm_kill/disarm_kill via manual_actions and the
  not_supported markers for force_close + approve/reject_proposal,
  consumer signature with cron */1, lock model clarified (no GUI
  lockfile), DoD updated with current state.
* docs/05-data-model.md — manual_actions is no longer "planned":
  populated by gui/data_layer.py, drained by the manual_actions job;
  per-kind status table (arm/disarm OK, others not_supported).
* docs/09-development-roadmap.md — Phase 4.5 marked implemented with
  per-task ✅/⏳ markers for the deferred items (auto-refresh,
  AppTest, force-close hook).
* docs/06-operational-flow.md — adds Flusso 5b describing the
  manual_actions consumer pattern (enqueue → KillSwitch transition →
  audit log linkage).

360/360 tests still pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 13:31:25 +02:00
Adriano e8345a29c8 feat(gui+runtime): Phase D — kill-switch arm/disarm from the dashboard
Wires the GUI's first write path through the manual_actions queue:

* runtime/manual_actions_consumer.py — drains the queue and
  dispatches arm_kill / disarm_kill via KillSwitch (preserving the
  audit chain); a sketch of the drain loop follows this list.
  Unsupported kinds (force_close, approve/reject_proposal)
  are marked result="not_supported" so they don't sit forever.
* runtime/orchestrator.py — adds a `manual_actions` job at */1 cron
  to the canonical scheduler manifest.
* gui/data_layer.py — write helpers enqueue_arm_kill /
  enqueue_disarm_kill (the only write path the GUI uses) plus
  load_pending_manual_actions for the pending strip.
* gui/pages/1_📊_Status.py — kill-switch arm/disarm panel with typed
  confirmation ("yes I am sure") + reason field; pending-actions table
  rendered when the queue is non-empty.
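
The drain-and-dispatch loop can be sketched as follows (assumed shapes:
`next_unconsumed_action`, `mark_action_consumed` and the payload access
are paraphrased from this message and docs/06, not copied from the
module):

```python
def consume_manual_actions(repo, kill_switch) -> None:
    """Drain the queue oldest-first and dispatch the supported kinds."""
    while (action := repo.next_unconsumed_action()) is not None:
        reason = (action.payload or {}).get("reason", "manual action")
        if action.kind == "arm_kill":
            kill_switch.arm(reason, source="manual_gui")
            result = "ok"
        elif action.kind == "disarm_kill":
            kill_switch.disarm(reason, source="manual_gui")
            result = "ok"
        else:
            # force_close / approve_proposal / reject_proposal are not
            # wired yet; mark them so they don't sit in the queue forever.
            result = "not_supported"
        repo.mark_action_consumed(action.id, consumed_by="engine", result=result)
```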

End-to-end smoke against the testnet state.sqlite:
  GUI enqueue → consumer dispatch → KillSwitch transition → audit
  chain hash linkage holds, "source":"manual_gui" recorded.

7 new unit tests for the consumer (arm, disarm, drain, unsupported,
default-reason, KillSwitchError handling, empty queue); 360/360 pass.
ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 12:33:58 +02:00
Adriano 6f6dd4c8dd feat(gui): Phase C — Position drilldown with payoff diagram
* gui/data_layer.py — adds load_position_by_id, load_decisions_for_position,
  compute_payoff_curve (pure math: bull_put / bear_call piecewise-linear
  P&L at expiry, with breakeven; a sketch follows the validation note
  below), compute_distance_metrics (OTM%, days-to-expiry, days-held,
  width%).
* gui/pages/5_💼_Position.py — selector across open + 10 most-recent
  closed positions (with deep-link support via ?proposal_id=…), header
  metrics, distance summary, leg snapshot table (entry-time only —
  the GUI never calls MCP), plotly payoff diagram with strike/breakeven/
  entry-spot annotations and max profit/max loss tiles, decision
  history table from the decisions table.

Live greeks/mid are deliberately not pulled: per docs/11-gui-streamlit.md
the GUI reads SQLite + audit log only and lets the engine refresh data.

Validated math against a synthetic bull_put 2475/2350 × 2 contracts:
breakeven 2452.50, max profit $45, max loss $-160 — all matching the
expected formulas (credit, width × n − credit).
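
To illustrate the piecewise-linear shape, a self-contained sketch with
hypothetical numbers (different from the synthetic case above;
`bull_put_pnl_at_expiry` is an invented name, the real function is
`compute_payoff_curve` in `gui/data_layer.py`):

```python
def bull_put_pnl_at_expiry(spot: float, short_k: float, long_k: float,
                           credit_total: float, contracts: int) -> float:
    """P&L at expiry of a bull put spread: short put at short_k, long put
    at long_k (< short_k), credit_total received for `contracts` spreads."""
    short_leg = -contracts * max(0.0, short_k - spot)  # sold put loses below short_k
    long_leg = contracts * max(0.0, long_k - spot)     # bought put caps the loss
    return credit_total + short_leg + long_leg

# Hypothetical spread: short 2400 put / long 2300 put, 1 contract, $30 credit.
assert bull_put_pnl_at_expiry(2500, 2400, 2300, 30.0, 1) == 30.0   # max profit = credit
assert bull_put_pnl_at_expiry(2370, 2400, 2300, 30.0, 1) == 0.0    # breakeven = short_k - credit/n
assert bull_put_pnl_at_expiry(2200, 2400, 2300, 30.0, 1) == -70.0  # max loss = width*n - credit
```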

353/353 tests still pass; ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 12:28:26 +02:00
Adriano db888ce0e8 feat(gui): Phase B — Equity + History pages
Adds the analytics surface of the dashboard:

* gui/data_layer.py — extended with load_closed_positions (windowed
  filter on closed_at) and three pure-function aggregators:
  compute_equity_curve, compute_kpis, compute_monthly_stats. Drawdown
  is measured against the running peak of cumulative realised P&L;
  a drawdown sketch follows this list.
* gui/pages/3_📈_Equity.py — KPI strip, plotly cumulative-PnL line,
  drawdown area below, P&L histogram by close_reason, per-month table
  with win-rate.
* gui/pages/4_📜_History.py — windowed table of closed trades with
  multiselect close-reason and winners/losers radio filters, six-tile
  KPI strip, CSV export button.
* pyproject.toml — relax mypy on plotly + pandas (no shipped stubs).
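
Measuring drawdown against the running peak of cumulative realised P&L
is a one-pass fold. A sketch under assumptions (standalone function
over bare floats; the real aggregators in `gui/data_layer.py` operate
on position records):

```python
def max_drawdown(trade_pnls: list[float]) -> float:
    """Largest drop of cumulative realised P&L below its running peak (>= 0)."""
    cumulative = peak = worst = 0.0
    for pnl in trade_pnls:
        cumulative += pnl
        peak = max(peak, cumulative)
        worst = max(worst, peak - cumulative)
    return worst

# A hypothetical sequence consistent with the synthetic validation below
# (3 trades, 67% win rate, $50 total, max drawdown $30):
assert max_drawdown([40.0, -30.0, 40.0]) == 30.0
```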

Validated with synthetic data: 3 trades, 67% win rate, $50 total,
max drawdown $30 — all matching expected math. GUI launches, HTTP 200
on / and /_stcore/health.

353/353 tests still pass; ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 12:11:02 +02:00
Adriano 1af983aff1 feat(gui): Phase A — read-only Streamlit dashboard (Status + Audit)
Implements the foundation of the local observation dashboard described
in docs/11-gui-streamlit.md:

* gui/data_layer.py — read-only wrappers over Repository (system_state,
  open positions) and audit_log (tail iteration, chain verify). The GUI
  never imports runtime/ nor calls MCP services.
* gui/main.py — Streamlit entry point with sidebar (engine health
  badge, kill switch banner, last health check age), home overview.
* gui/pages/1_📊_Status.py — engine status with colored health banner,
  kill switch detail, audit anchor, open positions table.
* gui/pages/2_🔍_Audit.py — live audit log stream (newest-first),
  event filters, hash-chain integrity verify button (a generic
  verification sketch follows this list).
* cli.py gui — replaces the placeholder with os.execvpe to
  `python -m streamlit run` bound to 127.0.0.1, --browser.gatherUsageStats
  false; --db / --audit paths exported via env to the GUI process.
* pyproject.toml — N999 ignore for src/cerbero_bite/gui/pages/* (Streamlit
  auto-discovers pages whose filename contains numbers and emoji icons).
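
The chain verify follows the usual append-only pattern. A generic
sketch only (the project's actual entry format and hash recipe are not
shown in this commit; `sha256` over `prev_hash + canonical JSON` is an
assumption for illustration):

```python
import hashlib
import json

def verify_chain(entries: list[dict]) -> int:
    """Return the number of verified entries; raise on the first mismatch."""
    prev_hash = ""
    for i, entry in enumerate(entries):
        payload = json.dumps(entry["payload"], sort_keys=True,
                             separators=(",", ":"))  # canonical form
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["hash"] != expected:
            raise ValueError(f"tampering detected at entry {i}")
        prev_hash = expected
    return len(entries)
```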

Smoke test: GUI launches, HTTP 200 on / and /_stcore/health, data layer
correctly reflects the current testnet state (engine=running, kill_switch
disarmed, 0 open positions, audit chain intact with 7 entries).

353/353 tests still pass; ruff clean; mypy strict src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 12:07:23 +02:00
Adriano abf5a140e2 refactor: telegram + portfolio in-process (drop shared MCP)
Each bot now manages its own notification + portfolio aggregation:

* TelegramClient calls the public Bot API directly via httpx, reading
  CERBERO_BITE_TELEGRAM_BOT_TOKEN / CERBERO_BITE_TELEGRAM_CHAT_ID from
  env. No credentials → silent disabled mode (sketched after this list).
* PortfolioClient composes DeribitClient + HyperliquidClient + the new
  MacroClient.get_asset_price/eur_usd_rate to expose equity (EUR) and
  per-asset exposure as the bot's own slice (no cross-bot view).
* mcp-telegram and mcp-portfolio removed from MCP_SERVICES / McpEndpoints
  and the cerbero-bite ping CLI; health_check no longer probes portfolio.

Docs (02/04/06/07) and docker-compose updated to reflect the new
architecture.

353/353 tests pass; ruff clean; mypy src clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 00:31:20 +02:00
61 changed files with 5347 additions and 762 deletions
.env.example (+43)
@@ -0,0 +1,43 @@
+# Template for `.env` (this file is committed; `.env` is not).
+#
+# Copy it: `cp .env.example .env` and fill in the actual values.
+
+# --- MCP endpoints ---
+# Docker-network defaults (internal to the Cerbero_mcp V2 suite):
+# CERBERO_BITE_MCP_DERIBIT_URL=http://cerbero-mcp:9000/mcp-deribit
+# ...
+# Public gateway (host outside the Docker network):
+CERBERO_BITE_MCP_DERIBIT_URL=https://cerbero-mcp.tielogic.xyz/mcp-deribit
+CERBERO_BITE_MCP_HYPERLIQUID_URL=https://cerbero-mcp.tielogic.xyz/mcp-hyperliquid
+CERBERO_BITE_MCP_MACRO_URL=https://cerbero-mcp.tielogic.xyz/mcp-macro
+CERBERO_BITE_MCP_SENTIMENT_URL=https://cerbero-mcp.tielogic.xyz/mcp-sentiment
+
+# --- MCP bearer token ---
+# Cerbero MCP V2 picks the upstream environment (testnet vs mainnet)
+# from the token presented in the Authorization header. To switch to
+# mainnet, replace the value with the MAINNET_TOKEN issued by the
+# Cerbero_mcp cluster and restart the bot. The token is NEVER logged.
+CERBERO_BITE_MCP_TOKEN=
+
+# --- Bot tag (X-Bot-Tag header) ---
+# Identifies the bot in the MCP server's audit log. Project-wide
+# default: `BOT__CERBERO_BITE`. Redefine it only for alternative
+# environments (e.g. shadow run, replay).
+CERBERO_BITE_MCP_BOT_TAG=BOT__CERBERO_BITE
+
+# --- Operating mode ---
+# Two independent switches that decide what the bot does on each
+# pass of the decision loop:
+# * ENABLE_DATA_ANALYSIS=true → MCP data collection, market
+#   snapshots, indicator computation, logging and audit ACTIVE
+# * ENABLE_STRATEGY=true      → evaluation of rules §2-§9 and
+#   proposal/execution of entries/exits ACTIVE
+# Initial period ("data analysis only"): keep
+# ENABLE_DATA_ANALYSIS=true and ENABLE_STRATEGY=false.
+CERBERO_BITE_ENABLE_DATA_ANALYSIS=true
+CERBERO_BITE_ENABLE_STRATEGY=false
+
+# --- Telegram (notify-only) ---
+# Leave commented out for disabled mode (no notifications).
+# CERBERO_BITE_TELEGRAM_BOT_TOKEN=123456:ABC-DEF...
+# CERBERO_BITE_TELEGRAM_CHAT_ID=-1001234567890
.gitignore (-3)
@@ -43,6 +43,3 @@ data/
 .env
 .env.*
 !.env.example
-secrets/*
-!secrets/.gitkeep
-!secrets/README.md
Dockerfile (+1 -2)
@@ -34,8 +34,7 @@ WORKDIR /app
 ENV PATH=/opt/venv/bin:$PATH \
     PYTHONDONTWRITEBYTECODE=1 \
-    PYTHONUNBUFFERED=1 \
-    CERBERO_BITE_CORE_TOKEN_FILE=/run/secrets/core_token
+    PYTHONUNBUFFERED=1
 
 COPY --from=builder /opt/venv /opt/venv
 COPY --from=builder /app/src /app/src
docker-compose.yml (+27 -20)
@@ -1,24 +1,24 @@
 # docker-compose.yml — Cerbero Bite
 #
 # Bite runs in its own Compose project but joins the same Docker
-# network used by Cerbero_mcp so it can resolve `mcp-deribit`,
-# `mcp-macro` and friends by their service name (see the gateway
-# Caddyfile in Cerbero_mcp).
+# network used by Cerbero MCP V2 so it can resolve the in-cluster
+# service name when running co-located, and otherwise reaches the
+# public gateway (`https://cerbero-mcp.tielogic.xyz`) over the host
+# network.
 #
 # The shared network is declared as external here. Create it once on
 # the host with `docker network create cerbero-suite` (or rename the
 # Cerbero_mcp network to `cerbero-suite` and mark it external).
 #
-# Secrets are read from ./secrets/, which is .gitignore'd.
+# Authentication: a single bearer token is passed through from the
+# host `.env` file via `CERBERO_BITE_MCP_TOKEN`. The Cerbero MCP V2
+# server uses the token to decide whether the upstream environment
+# is testnet or mainnet; switching environment = switching token.
 
 networks:
   cerbero-suite:
     external: true
 
-secrets:
-  core_token:
-    file: ./secrets/core.token
-
 volumes:
   bite-data:
@@ -33,19 +33,26 @@ services:
     cap_drop: [ALL]
     security_opt:
       - no-new-privileges:true
-    secrets:
-      - core_token
     environment:
-      CERBERO_BITE_CORE_TOKEN_FILE: /run/secrets/core_token
-      # Service URLs — the defaults below match the cerbero-suite
-      # network DNS. Override per service if you need to point at a
-      # different host (dev only).
-      CERBERO_BITE_MCP_DERIBIT_URL: http://mcp-deribit:9011
-      CERBERO_BITE_MCP_HYPERLIQUID_URL: http://mcp-hyperliquid:9012
-      CERBERO_BITE_MCP_MACRO_URL: http://mcp-macro:9013
-      CERBERO_BITE_MCP_SENTIMENT_URL: http://mcp-sentiment:9014
-      CERBERO_BITE_MCP_TELEGRAM_URL: http://mcp-telegram:9017
-      CERBERO_BITE_MCP_PORTFOLIO_URL: http://mcp-portfolio:9018
+      # MCP auth — token is sourced from the host .env (compose
+      # interpolation). The `X-Bot-Tag` value below is the audit
+      # identifier the MCP server logs for every write call.
+      CERBERO_BITE_MCP_TOKEN: ${CERBERO_BITE_MCP_TOKEN:?missing CERBERO_BITE_MCP_TOKEN}
+      CERBERO_BITE_MCP_BOT_TAG: ${CERBERO_BITE_MCP_BOT_TAG:-BOT__CERBERO_BITE}
+      # Service URLs — defaults below match the in-cluster cerbero-suite
+      # network DNS (V2 unified image listening on port 9000). Override
+      # any of them to point at the public gateway, a custom host, or
+      # localhost for dev work.
+      CERBERO_BITE_MCP_DERIBIT_URL: ${CERBERO_BITE_MCP_DERIBIT_URL:-http://cerbero-mcp:9000/mcp-deribit}
+      CERBERO_BITE_MCP_HYPERLIQUID_URL: ${CERBERO_BITE_MCP_HYPERLIQUID_URL:-http://cerbero-mcp:9000/mcp-hyperliquid}
+      CERBERO_BITE_MCP_MACRO_URL: ${CERBERO_BITE_MCP_MACRO_URL:-http://cerbero-mcp:9000/mcp-macro}
+      CERBERO_BITE_MCP_SENTIMENT_URL: ${CERBERO_BITE_MCP_SENTIMENT_URL:-http://cerbero-mcp:9000/mcp-sentiment}
+      # Telegram and Portfolio are no longer shared MCP services. The
+      # bot now calls the Telegram Bot API directly and aggregates
+      # portfolio in-process from Deribit + Hyperliquid + Macro.
+      # Set the two env vars below to enable Telegram notifications.
+      # CERBERO_BITE_TELEGRAM_BOT_TOKEN: ...
+      # CERBERO_BITE_TELEGRAM_CHAT_ID: ...
     volumes:
       - bite-data:/app/data
     healthcheck:
docs/02-architecture.md (+8 -6)
@@ -75,7 +75,7 @@ the post-fact events (entry placed, exit filled, alert).
 | Format/lint | `ruff` | Project standard |
 | Dependency manager | `uv` | Consistent with `Cerbero_mcp` |
 | MCP client | long-lived `httpx.AsyncClient` (pooling) + `tenacity` for retries | Direct HTTP REST, not the `mcp` SDK |
-| Notifications | MCP `cerbero-telegram` (notify-only) | Reuses the existing channel |
+| Notifications | In-process Telegram Bot API (notify-only) | Token and chat-id from env, no-op when unconfigured |
 | GUI | `streamlit` ≥ 1.40 + `plotly` (Phase 4.5) | Local dashboard, separate process |
 
 ## Folder layout
@@ -88,9 +88,9 @@ Cerbero_Bite/
 ├── strategy.yaml               # golden config + execution.environment
 ├── strategy.local.yaml.example # local override (gitignored)
 ├── Dockerfile                  # runtime image + HEALTHCHECK
-├── docker-compose.yml          # external cerbero-suite network + secrets
+├── docker-compose.yml          # external cerbero-suite network, env passthrough
+├── .env.example                # variable template (MCP token, bot tag, mode)
 ├── docs/                       # this documentation
-├── secrets/                    # gitignored (only .gitkeep + README)
 ├── src/cerbero_bite/
 │   ├── __init__.py
 │   ├── __main__.py             # CLI entry point
@@ -135,7 +135,8 @@ Cerbero_Bite/
 │   ├── config/                 # yaml loading and validation
 │   │   ├── schema.py
 │   │   ├── loader.py
-│   │   └── mcp_endpoints.py    # URL + token loader
+│   │   ├── mcp_endpoints.py    # URL + token + bot tag (from .env)
+│   │   └── runtime_flags.py    # ENABLE_DATA_ANALYSIS / ENABLE_STRATEGY
 │   ├── reporting/              # human-readable reports (Phase 5)
 │   ├── gui/                    # Streamlit dashboard (Phase 4.5)
 │   └── safety/                 # kill switch, dead man, audit
@@ -170,8 +171,9 @@ Cerbero_Bite/
   side effects. Exposes `Orchestrator` as a façade for the CLI.
 - **`state/`** persistence. Never business logic. CRUD only.
 - **`config/`** loads `strategy.yaml`, validates it, and exposes the
-  parameters immutably. Resolves the MCP URLs and reads the bearer
-  token at boot.
+  parameters immutably. Resolves the MCP URLs, reads the bearer token +
+  bot tag at boot, and exposes the two operational switches
+  `RuntimeFlags(data_analysis_enabled, strategy_enabled)`.
 - **`safety/`** cross-cutting controls (see `07-risk-controls.md`).
 - **`reporting/`** generation of Telegram strings. No trading logic,
   formatting only.
docs/04-mcp-integration.md (+81 -28)
@@ -1,10 +1,22 @@
 # 04 — MCP Integration
 
-Cerbero Bite consumes six MCP HTTP services of the suite
-(`Cerbero_mcp`). It does not use the Python `mcp` SDK: each server
-exposes the REST endpoints `POST <base_url>/tools/<tool_name>` with
-Bearer authentication, and Cerbero Bite connects to them through a
-long-lived `httpx.AsyncClient` (`clients/_base.py`).
+Cerbero Bite consumes four MCP HTTP routers of the Cerbero MCP V2
+suite (`Cerbero_mcp`): `mcp-deribit`, `mcp-hyperliquid`, `mcp-macro`,
+`mcp-sentiment`. Since V2 the four routers live in the same FastAPI
+process behind the same host (in-cluster default
+`http://cerbero-mcp:9000/mcp-{exchange}`, public gateway
+`https://cerbero-mcp.tielogic.xyz/mcp-{exchange}`). Cerbero Bite does
+not use the Python `mcp` SDK: each router exposes the REST endpoints
+`POST <base_url>/tools/<tool_name>` with Bearer authentication and an
+`X-Bot-Tag` header, and Cerbero Bite connects to them through a
+long-lived `httpx.AsyncClient` (`clients/_base.py`).
+
+Telegram and Portfolio, previously exposed as shared MCP services,
+have been removed from the MCP layer and are handled **in-process** by
+each bot of the suite: the Telegram client calls the public Bot API
+directly, and the portfolio aggregator composes equity and exposures
+from the exchange clients (Deribit + Hyperliquid), converting to EUR
+through `cerbero-macro.get_asset_price("EURUSD")`.
 
 ## Connection configuration
@@ -14,26 +26,50 @@ with defaults that match the Docker network DNS
 etc.). Each service can be overridden with a dedicated environment
 variable, useful in development:
 
-| Service | Environment variable | Default Docker DNS |
+| Service | Environment variable | Legacy Docker DNS default |
 |---|---|---|
 | Deribit | `CERBERO_BITE_MCP_DERIBIT_URL` | `http://mcp-deribit:9011` |
 | Hyperliquid | `CERBERO_BITE_MCP_HYPERLIQUID_URL` | `http://mcp-hyperliquid:9012` |
 | Macro | `CERBERO_BITE_MCP_MACRO_URL` | `http://mcp-macro:9013` |
 | Sentiment | `CERBERO_BITE_MCP_SENTIMENT_URL` | `http://mcp-sentiment:9014` |
-| Telegram | `CERBERO_BITE_MCP_TELEGRAM_URL` | `http://mcp-telegram:9017` |
-| Portfolio | `CERBERO_BITE_MCP_PORTFOLIO_URL` | `http://mcp-portfolio:9018` |
 
-The bearer token for the calls is the `core`-capability token read
-from `secrets/core.token` (path configurable via
-`CERBERO_BITE_CORE_TOKEN_FILE`, default `/run/secrets/core_token` in
-the container). It is not logged.
+The defaults shown above are the legacy of the V1 topology (one
+container per service). On the unified V2 every URL must include the
+router prefix, e.g. `http://cerbero-mcp:9000/mcp-deribit` or
+`https://cerbero-mcp.tielogic.xyz/mcp-deribit`. The effective URLs are
+configured in `.env`.
+
+Telegram (notify-only) is configured directly through two environment
+variables, read at boot by the in-process client:
+
+| Variable | Use |
+|---|---|
+| `CERBERO_BITE_TELEGRAM_BOT_TOKEN` | Bot token issued by BotFather |
+| `CERBERO_BITE_TELEGRAM_CHAT_ID` | Identifier of the destination chat or group |
+
+When either one is missing, the Telegram client enters **disabled**
+mode and every `notify_*` becomes a DEBUG-level no-op.
+
+The bearer token for the calls is read from the environment variable
+`CERBERO_BITE_MCP_TOKEN` (see `.env`). On V2 the token's value decides
+which upstream environment serves the request: the same MCP server
+fronts testnet and mainnet at once, and you move from one to the other
+simply by replacing the variable's value and restarting the bot. The
+token is never logged.
+
+On every call Cerbero Bite also adds the `X-Bot-Tag` header, with
+default value `BOT__CERBERO_BITE` (override via
+`CERBERO_BITE_MCP_BOT_TAG`). The MCP server writes the value into the
+audit record of every write operation, so each write stays
+attributable to its originating bot.
 
 ```python
 # clients/_base.py — synthesis
 class HttpToolClient:
     service: str          # "deribit", "macro", ...
-    base_url: str         # "http://mcp-deribit:9011"
-    token: str            # bearer
+    base_url: str         # "https://cerbero-mcp.tielogic.xyz/mcp-deribit"
+    token: str            # bearer (testnet or mainnet, chosen by env)
+    bot_tag: str = "BOT__CERBERO_BITE"  # X-Bot-Tag header
     timeout_s: float = 8.0
     retry_max: int = 3    # exponential 1s/5s/30s
     client: httpx.AsyncClient | None  # shared by the RuntimeContext
@@ -100,22 +136,35 @@ Cerbero Bite is deterministic and does not interpret free text.
 | Tool | Use |
 |---|---|
 | `get_macro_calendar(days, country_filter, importance_min)` | Entry filter §2.5: zero `high` events in `country_filter` (default `["US","EU"]`) within the DTE window |
+| `get_asset_price(ticker="EURUSD")` | EUR/USD rate used by the portfolio aggregator to convert the exchanges' USD equity into EUR |
 
-### `cerbero-portfolio`
+## In-process components
 
-| Tool | Use |
-|---|---|
-| `get_total_portfolio_value(currency="EUR")` | Base capital for the sizing engine, after conversion to USD |
-| `get_holdings()` | Manual aggregation of `current_value_eur` for tickers containing `"ETH"`, used by filter §2.7 (`eth_holdings_pct_max`) |
+### Portfolio aggregator (`clients/portfolio.py`)
+
+The `PortfolioClient` no longer calls a dedicated MCP service; it
+composes the data of the two exchanges the bot uses and applies the
+EUR/USD rate read from `cerbero-macro`.
+
+| Method | Behaviour |
+|---|---|
+| `total_equity_eur()` | Sums the USD `equity` of Deribit (USDC) and Hyperliquid and divides by `EURUSD` to obtain the EUR capital consumed by the sizing engine |
+| `asset_pct_of_portfolio(ticker)` | Sums the absolute USD notional of the open positions on both exchanges whose `instrument`/`coin` contains `ticker`, divided by total USD equity. Used by filter §2.7 (`eth_holdings_pct_max`) |
 
-### `cerbero-telegram`
+**Scope note**: the view is the single bot's *slice*. Holdings on
+external exchanges, in cold storage, or managed by other bots of the
+suite are not counted. Filter §2.7 is therefore a per-bot cap, not a
+suite-wide cap.
 
-Cerbero Bite uses Telegram in **notify-only** mode: no manual
-confirmation, no callbacks. The engine opens and closes positions
-automatically when the rules are met; Telegram is informed post-fact.
+### Telegram client (`clients/telegram.py`)
 
-| Tool | Use |
+Cerbero Bite uses Telegram in **notify-only** mode: no manual
+confirmation, no callbacks. The engine opens and closes positions
+automatically when the rules are met; the client delivers the message
+to the configured `chat_id` by calling
+`https://api.telegram.org/bot<TOKEN>/sendMessage` directly.
+
+| Method | Use |
 |---|---|
 | `notify(message, priority, tag)` | MEDIUM alerts or informational messages |
 | `notify_position_opened(instrument, side, size, strategy, greeks, expected_pnl)` | Entry-placed notification |
@@ -123,16 +172,20 @@ automatically when the rules are met.
 | `notify_alert(source, message, priority)` | HIGH alerts (kill switch) |
 | `notify_system_error(message, component, priority)` | CRITICAL alerts |
 
+When the env credentials are not configured, the client is in disabled
+mode and every send becomes a silent no-op: the decision loop is never
+blocked.
+
 ## Errors and degradation
 
-| Server down | Behaviour |
+| Component down | Behaviour |
 |---|---|
 | `cerbero-deribit` | **Hard fail**: without market data and an execution channel the cycle is skipped; in monitor, existing positions stay in their current state, HIGH alert and kill switch |
 | `cerbero-hyperliquid` | Skip funding filter §2.6 with a warning; the cycle proceeds if the other conditions hold |
 | `cerbero-sentiment` | Bias §3.1 falls back to `no_entry` by default (without cross funding the bias cannot fix the direction) |
-| `cerbero-macro` | Hard fail for filter §2.5; without the calendar no entry is opened |
-| `cerbero-portfolio` | Skip filters §2.7 with a warning; sizing uses the last known capital from SQLite |
-| `cerbero-telegram` | Skip post-fact notifications; the decision loop is not blocked (the engine does not wait for replies) |
+| `cerbero-macro` | Hard fail for filter §2.5 and for the portfolio aggregator's EUR/USD conversion; without calendar/FX no entry is opened |
+| Portfolio aggregator (deribit or hyperliquid down) | `PortfolioClient` methods propagate the underlying exchange's exception; the sizing engine behaves as for a lower-level MCP failure |
+| Telegram client | HTTP error or `ok=false` from the Bot API → `TelegramError` propagated to the caller. In disabled mode (missing env) all `notify_*` are silent no-ops and the decision loop proceeds |
 
 HIGH and CRITICAL triggers arm the kill switch and propagate an alert
 into the audit chain.
docs/05-data-model.md (+23 -10)
@@ -152,27 +152,40 @@ CREATE TABLE dvol_history (
 ### `manual_actions`
 
 Queue of manual actions generated by the Streamlit GUI (see
-`11-gui-streamlit.md`). Schema planned ahead of Phase 4.5; for now the
-GUI is not implemented and the table stays empty.
+`11-gui-streamlit.md`). The table is populated by the
+`gui/data_layer.py` layer (`enqueue_arm_kill`, `enqueue_disarm_kill`)
+and drained by the `manual_actions` APScheduler job
+(`runtime/manual_actions_consumer.consume_manual_actions`, cron
+`*/1 * * * *`).
 
 ```sql
 CREATE TABLE manual_actions (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
-    kind TEXT NOT NULL,        -- approve_proposal, reject_proposal,
-                               -- force_close, arm_kill, disarm_kill
+    kind TEXT NOT NULL,        -- arm_kill, disarm_kill,
+                               -- force_close, approve_proposal, reject_proposal
     proposal_id TEXT,          -- NULL if the action is not tied to a proposal
-    payload_json TEXT,         -- JSON with motive, typed confirmation, etc.
+    payload_json TEXT,         -- JSON with reason, typed confirmation, etc.
     created_at TEXT NOT NULL,
     consumed_at TEXT,          -- NULL = not yet processed
-    consumed_by TEXT,
-    result TEXT
+    consumed_by TEXT,          -- "engine" when applied by the consumer
+    result TEXT                -- "ok" / "not_supported" / "error: ..."
 );
 CREATE INDEX idx_manual_actions_unconsumed ON manual_actions(consumed_at);
 ```
 
-The `manual_actions` do not bypass the risk controls: the consumer
-(once it exists) will apply the same checks as
-`safety.system_healthy()` before executing.
+Implementation status per `kind`:
+
+| `kind` | Implemented | Effect |
+|---|---|---|
+| `arm_kill` | ✅ | `KillSwitch.arm(reason, source="manual_gui")` |
+| `disarm_kill` | ✅ | `KillSwitch.disarm(reason, source="manual_gui")` |
+| `force_close` | ⏳ | Marked `result="not_supported"` until the orchestrator exposes `handle_force_close` |
+| `approve_proposal` / `reject_proposal` | ⏳ | Same |
+
+`manual_actions` do **not** bypass the risk controls: every kill-switch
+action goes through the `KillSwitch` class, which validates the state
+and appends the corresponding event to the audit chain. The GUI-side
+typed confirmation gates before the enqueue.
 
 ### `system_state`
docs/06-operational-flow.md (+75 -1)
@@ -140,7 +140,7 @@ Trigger: every 5 minutes.
 - macro.get_macro_calendar(days=1)
 - sentiment.get_cross_exchange_funding (no asset filter)
 - hyperliquid.get_funding_rate("ETH")
-- portfolio.get_total_portfolio_value
+- portfolio: skip (in-process component, covered indirectly by the deribit/hyperliquid/macro probes)
 - telegram: skip (notify-only, no non-invasive probe)
 2. SQLite read-write probe (dummy transaction)
 3. Lock file still valid
@@ -154,6 +154,26 @@ Trigger: every 5 minutes.
 The dead-man (`scripts/dead_man.sh`) watches that `HEALTH_OK` gets
 written: silence > 15 min → kill switch via SQLite and alert.
 
+## Flow 5b — Manual actions consumer
+
+Trigger: cron `*/1 * * * *` (APScheduler job `manual_actions`).
+
+```
+1. While the queue has unconsumed rows:
+   - read `next_unconsumed_action` (oldest-first)
+   - dispatch by kind:
+       arm_kill    → KillSwitch.arm(reason, source="manual_gui")
+       disarm_kill → KillSwitch.disarm(reason, source="manual_gui")
+       force_close / approve_proposal / reject_proposal → result="not_supported"
+   - mark_action_consumed with consumed_by="engine" and the result
+2. Typical end-to-end latency (GUI enqueue → effect): ≤ 60 sec.
+```
+
+The consumer is the **single** write channel from the GUI to the
+runtime: every kill-switch transition goes through the `KillSwitch`
+class to keep SQLite and the audit chain in lock-step. See
+`runtime/manual_actions_consumer.py` and `docs/11-gui-streamlit.md`.
+
 ## Flow 6 — Recovery after crash
 
 At startup or after a container restart:
@@ -203,7 +223,61 @@ proposed
 | `0 2,14 * * *` | Position monitoring | 2× day |
 | `0 12 1 * *` | Kelly recalibration | Monthly |
 | `*/5 * * * *` | Health check | 5 min |
+| `*/15 * * * *` | Market snapshot (threshold calibration) | 15 min |
 | `0 0 * * *` | SQLite backup + log rotation | Daily |
 | `0 8 * * *` | Daily Telegram digest | Daily |
 
 All times are UTC.
+
+## Operating mode (`RuntimeFlags` switches)
+
+The bot recognises two independent switches, read from `.env` at boot
+through `cerbero_bite.config.runtime_flags.load_runtime_flags()`:
+
+| Environment variable | Default | What it enables |
+|---|---|---|
+| `CERBERO_BITE_ENABLE_DATA_ANALYSIS` | `true` | `market_snapshot` job every 15 min: MCP data collection, writes to the `market_snapshots` table, threshold calibration. |
+| `CERBERO_BITE_ENABLE_STRATEGY` | `false` | `entry` job (Monday 14:00 UTC) and `monitor` job (2× day): evaluation of rules §2-§9 of `01-strategy-rules.md` and order proposal/execution. |
+
+The infrastructure jobs (`health`, `backup`, `manual_actions`) are
+**always active**, regardless of the flags, because they keep the kill
+switch and the persistence alive.
+
+### "Data analysis only" profile (default)
+
+Standard configuration for the post-deploy soak period:
+
+```env
+CERBERO_BITE_ENABLE_DATA_ANALYSIS=true
+CERBERO_BITE_ENABLE_STRATEGY=false
+```
+
+Effect: the bot collects market snapshots and feeds `market_snapshots`,
+but does **not** submit entries nor close positions on its own. The
+`run_entry`/`run_monitor` methods remain callable manually from the CLI
+(`cerbero-bite dry-run --cycle entry|monitor`) and through
+`manual_actions` for testing and validation.
+
+### "Active trading" profile
+
+```env
+CERBERO_BITE_ENABLE_DATA_ANALYSIS=true
+CERBERO_BITE_ENABLE_STRATEGY=true
+```
+
+Effect: all the canonical jobs are installed in the scheduler. The
+switch should be made only after the quality of the collected data has
+been validated and Adriano gives explicit consent to the transition.
+
+### Fully disabling data analysis
+
+Exceptional case (maintenance, MCP problem):
+
+```env
+CERBERO_BITE_ENABLE_DATA_ANALYSIS=false
+CERBERO_BITE_ENABLE_STRATEGY=false
+```
+
+The bot stays alive for health checks and manual-action intake, but it
+does not query MCP for market data and does not trade. The kill switch
+remains operational.
docs/07-risk-controls.md (+1 -1)
@@ -34,7 +34,7 @@ infrastructure failures or misplaced human decisions.
 | Cause | Auto-arm | Implemented | Notes |
 |---|---|---|---|
 | MCP `cerbero-deribit` unresponsive for 3 consecutive health checks | Yes | `runtime/health_check.py` | Severity HIGH |
-| MCP `cerbero-macro` / `cerbero-portfolio` / `cerbero-hyperliquid` / `cerbero-sentiment` unresponsive for 3 consecutive health checks | Yes | `runtime/health_check.py` | Severity HIGH |
+| MCP `cerbero-macro` / `cerbero-hyperliquid` / `cerbero-sentiment` unresponsive for 3 consecutive health checks | Yes | `runtime/health_check.py` | Severity HIGH |
 | `mcp-deribit.environment_info.environment` ≠ `strategy.execution.environment` | Yes | `runtime/orchestrator.boot` + health check | Severity CRITICAL at boot, HIGH at runtime |
 | Mismatch between the tail of `data/audit.log` and `system_state.last_audit_hash` (truncation or tampering) | Yes | `runtime/orchestrator._verify_audit_anchor` | Severity CRITICAL at boot |
 | SQLite state inconsistent with the broker (unresolved recovery) | Yes | `runtime/recovery.py` | Severity CRITICAL at boot |
docs/09-development-roadmap.md (+25 -16)
@@ -126,29 +126,38 @@ Definition of Done:
 - Engine can run in `--dry-run` for 24h without errors
 - Logs are readable and complete
 
-## Phase 4.5 — Streamlit GUI (4 days)
+## Phase 4.5 — Streamlit GUI (4 days) ✅ implemented
 
 **Goal:** local dashboard for observation and manual actions. Detailed
 spec in `11-gui-streamlit.md`.
 
-Tasks:
+Implemented in four rounds (A–D):
 
-1. Setup `gui/main.py` + sidebar nav + auto-refresh
-2. Status page (engine, capital, MCP health, kill switch panel)
-3. Equity page (curves, drawdown, monthly stats)
-4. Position page (legs, plotly payoff, decision history, force-close)
-5. History page (filters, KPIs, CSV export)
-6. Audit page (live log, verify chain, search)
-7. `manual_actions` table + APScheduler consumer job in the engine
-8. Integration tests with `streamlit.testing.v1.AppTest`
+1. ✅ `gui/main.py` + sidebar nav (active auto-refresh not wired; the
+   Streamlit re-render is sufficient at the typical frequency)
+2. ✅ Status page (engine state, kill switch panel with typed
+   confirmation, audit anchor, open positions)
+3. ✅ Equity page (cumulative P&L, drawdown, P&L distribution per
+   close reason, per-month stats)
+4. ✅ Position page (legs from the entry snapshot, plotly payoff for
+   bull_put/bear_call with annotations, decision history) — live
+   greeks and force-close deferred
+5. ✅ History page (window/reason/winners-losers filters, KPI strip,
+   CSV export)
+6. ✅ Audit page (live log stream, chain verify, event filter)
+7. ✅ `runtime/manual_actions_consumer.py` consumer with a `*/1`
+   APScheduler job for arm/disarm (force_close = `not_supported` for now)
+8. ⏳ Integration tests with `streamlit.testing.v1.AppTest`
 
-Definition of Done:
+Definition of Done — status:
 
-- `cerbero-bite gui` launches the dashboard on `127.0.0.1:8765`
-- All 5 pages reachable and populated
-- Disarm from the GUI logged in the audit chain and effective within 30 sec
-- Force-close from the GUI consumed by the engine within 30 sec
-- Integration tests on every page passing
+- ✅ `cerbero-bite gui` launches the dashboard on `127.0.0.1:8765`
+- ✅ All 5 pages reachable and populated
+- ✅ Disarm from the GUI logged in the audit chain (`source="manual_gui"`)
+  and effective within ~1 minute
+- ⏳ Force-close from the GUI: the enqueue works but the orchestrator
+  still has to expose `handle_force_close`
+- ⏳ AppTest integration tests: not written
 
 ## Phase 5 — Reporting and UX (3-5 days)
docs/10-config-spec.md (+28)
@@ -307,3 +307,31 @@ It is not permitted to parameterize:
   levels above, not further relaxable).
 - The **scheduler** for tighter intervals (an optimization that is not
   done via config).
+
+## Environment variables
+
+`strategy.yaml` defines **what** the bot does when it is on. The
+environment variables in `.env` define **how** it connects to the
+outside world and **which operational switches** are active.
+
+These live outside `strategy.yaml` because they change per environment
+(testnet vs mainnet, soak vs trading) but not per strategy rule.
+
+| Variable | Type | Default | Use |
+|---|---|---|---|
+| `CERBERO_BITE_MCP_TOKEN` | string (required) | — | Bearer token presented to Cerbero MCP V2. The value decides the upstream environment (testnet or mainnet). Change the value = change the environment. |
+| `CERBERO_BITE_MCP_BOT_TAG` | string ≤ 64 chars | `BOT__CERBERO_BITE` | `X-Bot-Tag` header recorded in the MCP server's audit log for every write. |
+| `CERBERO_BITE_MCP_DERIBIT_URL` | URL | public gateway | Override for the Deribit router URL. |
+| `CERBERO_BITE_MCP_HYPERLIQUID_URL` | URL | public gateway | Override for the Hyperliquid router URL. |
+| `CERBERO_BITE_MCP_MACRO_URL` | URL | public gateway | Override for the Macro router URL. |
+| `CERBERO_BITE_MCP_SENTIMENT_URL` | URL | public gateway | Override for the Sentiment router URL. |
+| `CERBERO_BITE_ENABLE_DATA_ANALYSIS` | bool (`true`/`false`) | `true` | Enables the `market_snapshot` job (MCP data collection every 15 min). |
+| `CERBERO_BITE_ENABLE_STRATEGY` | bool (`true`/`false`) | `false` | Enables the `entry` and `monitor` jobs (execution of rules §2-§9). |
+| `CERBERO_BITE_TELEGRAM_BOT_TOKEN` | string | — | Telegram bot token (notify-only). Without it, the client is in disabled mode. |
+| `CERBERO_BITE_TELEGRAM_CHAT_ID` | string | — | Chat ID receiving the Telegram notifications. |
+
+Boolean values accept `1`/`0`, `true`/`false`, `yes`/`no`, `on`/`off`,
+`enabled`/`disabled` (case-insensitive). Any other value fails the boot
+with a `ValueError`.
+
+See `06-operational-flow.md` §"Operating mode" for the canonical
+profiles of `ENABLE_DATA_ANALYSIS` and `ENABLE_STRATEGY`.
docs/11-gui-streamlit.md (+183 -134)
@@ -46,129 +46,152 @@ uv run streamlit run src/cerbero_bite/gui/main.py \
--browser.gatherUsageStats false --browser.gatherUsageStats false
``` ```
## Stato implementativo
La dashboard è stata costruita in quattro fasi incrementali:
| Fase | Contenuto | Stato |
|---|---|---|
| A | Status + Audit (osservazione di base) | ✅ |
| B | Equity + History (analitica + export CSV) | ✅ |
| C | Position drilldown con payoff plotly + decision history | ✅ |
| D | Kill-switch arm/disarm dalla dashboard via coda `manual_actions` | ✅ |
Per scelta di scope, restano fuori dalla prima iterazione: force-close
dalla GUI (richiede un hook `handle_force_close` nell'orchestrator),
approve/reject di una proposta (il bot decide autonomamente, non c'è un
flusso di proposta in attesa) e auto-refresh attivo via
`st_autorefresh`. Il consumer di `manual_actions` riconosce già i
`kind` corrispondenti e li archivia con `result="not_supported"`
finché i flussi non saranno cablati.
## Layout cartelle ## Layout cartelle
``` ```
src/cerbero_bite/gui/ src/cerbero_bite/gui/
├── __init__.py ├── __init__.py
├── main.py # entry point streamlit, sidebar nav ├── main.py # entry Streamlit, sidebar, home
├── pages/ ├── data_layer.py # wrapper read-only + write helpers
│ ├── 1_📊_status.py └── pages/
├── 2_📈_equity.py ├── 1_📊_Status.py # health, kill switch, audit anchor
├── 3_💼_position.py ├── 2_🔍_Audit.py # log stream + chain integrity
├── 4_📜_history.py ├── 3_📈_Equity.py # cumulative P&L + drawdown
── 5_🔍_audit.py ── 4_📜_History.py # closed trades + KPI + CSV
├── components/ └── 5_💼_Position.py # drilldown + payoff plotly
│ ├── kill_switch_panel.py
│ ├── mcp_health_grid.py
│ ├── pending_proposal_card.py
│ ├── payoff_chart.py
│ └── greeks_panel.py
└── data_layer.py # wrapper read-only verso state.repository
``` ```
I componenti riutilizzabili descritti nello spec originale
(`kill_switch_panel`, `payoff_chart`, ecc.) non sono stati estratti in
file separati: ogni pagina è autonoma e tiene la propria UI inline,
così l'evoluzione resta locale al singolo file. La promozione a
componenti separati è giustificata solo se più pagine condividono lo
stesso widget — al momento non è il caso.
## Pagine ## Pagine
### 1. 📊 Status (home) ### 1. 📊 Status (home)
Vista a colpo d'occhio dello stato corrente. Stato corrente e controlli sul kill switch.
Sezioni: Sezioni implementate:
- **Engine status**: badge verde/giallo/rosso (running/degraded/killed), - **Engine status banner** colorato in base alla health derivata dalla
uptime, ultimo health check, kill_switch state, kill_reason se armato. combinazione `system_state.kill_switch` + età di `last_health_check`
- **Capitale**: equity corrente da `cerbero-portfolio` (cache ultimo (`running`/`degraded`/`stopped`/`killed`/`unknown`).
valore noto + timestamp), variazione % vs giorno prima, vs settimana, - **Top metric tiles**: posizioni aperte, età ultimo health check,
vs mese. `started_at`, `config_version`.
- **Posizione attiva**: card con riepilogo (proposal_id, expiry, credit, - **Kill switch controls**: form arm/disarm con typed confirmation
P&L unrealized stimato, days_to_expiry) o "nessuna posizione aperta". (`"yes I am sure"`) + reason obbligatoria. La submission scrive
- **MCP health grid**: 8 box, uno per server, con latenza ms e semaforo. un'azione in `manual_actions`; il consumer la applica entro un minuto.
- **Pending action**: se l'engine ha una proposta in attesa di conferma - **Pending manual actions**: tabella delle azioni in coda non ancora
e il timeout Telegram è scaduto, qui appare una card con `Approve`/`Reject`. consumate (visibile solo se la coda è non vuota).
Effetto: la decisione viene scritta in coda e il decision orchestrator - **Audit anchor**: hash chain head persistito in `system_state`.
la legge al prossimo health-check. - **Open positions table**: spread type, contracts, credit, max loss,
- **Big buttons**: `🟢 Disarm` / `🔴 Arm Kill Switch` (con conferma strikes, status, opened/expiry.
typed `"yes I am sure"`).
Auto-refresh: 5 secondi. Sezioni non ancora implementate rispetto allo spec originale: capitale
con variazioni %, MCP health grid (i probe sono fatti dall'engine e
visibili in audit), pending-proposal card. Il refresh automatico è
manuale (la pagina si aggiorna alla navigazione o al re-render
spontaneo di Streamlit).
### 2. 📈 Equity ### 2. 🔍 Audit
Grafico storia capitale e analitica. Live log stream + verifica integrità della hash chain.
Sezioni: Sezioni implementate:
- **Equity curve** (line chart): capitale nel tempo dall'inizio del - **Chain integrity verify**: bottone che richiama `verify_chain` e
tracking. Risoluzione giornaliera. Sovrapposizione opzionale: riporta numero di entries verificate o l'errore di mismatch.
- banda Monte Carlo P5/P50/P95 (statica, dal documento) - **Filtri**: limit (10500) + event filter (auto-popolato dagli event
- DVOL nel tempo (asse Y secondario) effettivamente presenti nella tail).
- **Event-count strip**: `Counter` of the event types in the window.
- **Tail table**: timestamp, event, canonical JSON payload, abbreviated
  hash, newest-first.

### 3. 📈 Equity

Cumulative P&L curve and closed-trade analytics.

Implemented sections:
- **KPI strip**: closed trades, win rate, total P&L, edge per trade,
  max drawdown (USD + %).
- **Cumulative P&L** (Plotly): filled to zero, with a zero reference line.
- **Drawdown** (Plotly area chart, inverted axis).
- **P&L distribution by close reason**: overlaid Plotly histograms with
  per-reason trade counts in metric tiles.
- **Per-month stats**: UTC-aggregated table (month, n trades, winners,
  win rate, total P&L, average P&L).

Window picker: All time, last 30/90 days, year-to-date. The Monte Carlo
band, the DVOL overlay and the macro-event lines are not implemented yet.

### 4. 📜 History

Closed-trade history with filters and export.

Implemented sections:
- **Window picker**: All time, last 7/30/90 days, year-to-date.
- **Detail filters**: multiselect on `close_reason`, winners/losers/all
  radio.
- **Six-tile KPI strip**: trades, win rate, total P&L, avg win,
  avg loss, edge per trade.
- **Closed-trades table**: proposal_id (short), spread type, asset,
  contracts, strikes, credit/max_loss, P&L, close_reason, days_held,
  opened/closed/expiry.
- **CSV export**: direct download via `st.download_button`.

The side-by-side Monte Carlo comparison is not implemented yet.

### 5. 💼 Position

Drill-down on a specific position (open, or one of the last 10 closed).

Implemented sections:
- **Position selector** with label `proposal_id · spread_type ·
  short/long · status`. Supports deep links via the query string
  `?proposal_id=…`.
- **Header tiles**: status, spread, contracts, credit USD; caption with
  the full proposal_id plus opened/expiry.
- **Distance metrics**: short strike OTM%, days-to-expiry, days-held,
  delta at entry, width as % of spot.
- **Legs table** (snapshot taken at entry, not live): leg, instrument,
  strike, side, size, delta. A caption notes that live mids and greeks
  are not fetched by the GUI.
- **Payoff at expiry** (Plotly): P&L curve with annotations for the
  short strike, long strike, breakeven and entry spot. Summary tiles
  for max profit, max loss, breakeven. Implemented for `bull_put` and
  `bear_call`; iron condors fall back to a flat placeholder curve.
- **Decision history**: table of the `decisions` rows tied to the
  `proposal_id`, newest-first, with canonical JSON outputs.

Live greeks/mids and manual force-close require the engine to expose,
respectively, a persisted snapshot and the `handle_force_close` hook;
both are out of scope for the first iteration.
## GUI ↔ Engine communication
@@ -177,53 +200,70 @@ MCP. Tutto passa via:
| GUI action | Effect |
|---|---|
| State display | Read from `state/repository.py` (SQLite) via `gui/data_layer.py` |
| Equity / history | Read from SQLite (`positions` with `status='closed'`) + audit log |
| MCP health | Indirect read of `system_state.last_health_check` (the engine performs the probe) |
| **Disarm kill switch** | `enqueue_disarm_kill(reason)` → row in `manual_actions` with `kind="disarm_kill"`; the consumer calls `KillSwitch.disarm` (audit `KILL_SWITCH_DISARMED`, `source="manual_gui"`) |
| **Arm kill switch** | `enqueue_arm_kill(reason)` → row with `kind="arm_kill"`; the consumer calls `KillSwitch.arm` |
| Force close | Planned: `kind="force_close"`. Today the consumer marks it `result="not_supported"`; it requires the `Orchestrator.handle_force_close` hook |
| Approve / reject pending proposal | Planned: `kind="approve_proposal"` / `"reject_proposal"`. Same status (not implemented on the orchestrator side) |

The GUI does **not** write directly to `system_state`: every kill-switch
transition goes through the consumer and the `KillSwitch` class, so
SQLite and the audit chain stay in sync exactly as they do for
automatic transitions.

**SQLite schema** (see `05-data-model.md`):
```sql
CREATE TABLE manual_actions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    kind TEXT NOT NULL,
    proposal_id TEXT,
    payload_json TEXT,
    created_at TEXT NOT NULL,
    consumed_at TEXT,            -- NULL = not yet consumed
    consumed_by TEXT,
    result TEXT
);
CREATE INDEX idx_manual_actions_unconsumed ON manual_actions(consumed_at);
```
**Consumer**: `runtime/manual_actions_consumer.consume_manual_actions`.
Registered as the APScheduler job `manual_actions` with cron
`*/1 * * * *` (latency ≤ 1 minute, enough for the kill switch). The
consumer drains the whole queue on every tick and, for each action,
sets `consumed_at`, `consumed_by="engine"` and `result` (`"ok"`,
`"not_supported"` or `"error: …"`).
```python
# src/cerbero_bite/runtime/manual_actions_consumer.py — summary
async def consume_manual_actions(ctx, *, now=None):
    while (action := ctx.repository.next_unconsumed_action(...)) is not None:
        payload = json.loads(action.payload_json or "{}")
        result = "ok"
        if action.kind == "arm_kill":
            ctx.kill_switch.arm(reason=payload.get("reason"), source="manual_gui")
        elif action.kind == "disarm_kill":
            ctx.kill_switch.disarm(reason=payload.get("reason"), source="manual_gui")
        else:
            result = "not_supported"
        ctx.repository.mark_action_consumed(...)
```
Write actions do **not bypass** the risk controls: the transition always
goes through `KillSwitch.arm/disarm`, which validates the state and logs
to the audit chain. The typed confirmation (`"yes I am sure"`) is gated
on the GUI side before the enqueue.
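
For illustration, the GUI-side write path is a single `INSERT` into the
queue table. A minimal sketch assuming a plain `sqlite3` connection; the
real helpers (`enqueue_arm_kill`, `enqueue_disarm_kill`) live in
`gui/data_layer.py` and go through the repository layer:

```python
# Hypothetical enqueue helper: one INSERT into manual_actions, no direct
# writes to system_state. Column names follow the schema above.
import json
import sqlite3
from datetime import UTC, datetime


def enqueue_disarm_kill(conn: sqlite3.Connection, reason: str) -> None:
    conn.execute(
        "INSERT INTO manual_actions (kind, payload_json, created_at) "
        "VALUES (?, ?, ?)",
        ("disarm_kill", json.dumps({"reason": reason}), datetime.now(UTC).isoformat()),
    )
    conn.commit()
```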
## Locking and concurrency

- The engine holds an exclusive `data/.lockfile` via
  `runtime/lockfile.py`.
- The GUI does **not** acquire a dedicated lock; multiple concurrent
  Streamlit tabs are possible (discouraged but not prevented). The
  single-writer constraint on SQLite is preserved because every write
  goes through a `manual_actions` row (auto-increment) and the engine's
  consumer.
- Both can read SQLite (connections are short-lived: opened per call
  and closed immediately).
- `manual_actions` is the shared **write channel**, with an
  auto-increment primary key and a `consumed_at` flag for idempotent
  consumption.
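
The short-lived read pattern, in miniature; this mirrors what
`gui/data_layer.py` does further down (`Repository`, `connect` and
`list_open_positions` are the names that module uses, the helper itself
is illustrative):

```python
# Open per call, read, close immediately: no long-lived GUI connection.
from pathlib import Path

from cerbero_bite.state import Repository, connect


def count_open_positions(db_path: Path) -> int:
    repo = Repository()
    conn = connect(db_path)
    try:
        return len(repo.list_open_positions(conn))
    finally:
        conn.close()
```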
@@ -251,26 +291,35 @@ Per chiarezza:
Telegram remains the primary channel; the GUI is the **fallback**
channel for when Adriano is at the laptop rather than on the phone.
## Effort estimate (historical)

Phase 4.5 was implemented in four rounds (AD). The original spec
estimated ~4 days and delivery was in line with that estimate, with the
caveats that `streamlit.testing.v1.AppTest` is not wired up yet (the
pages are validated manually via HTTP smoke tests) and that force-close
plus approve/reject remain out of scope.
| Task | Estimated days | Status |
|---|---|---|
| Setup `gui/main.py` + sidebar nav + autorefresh | 0.5 | ✅ (autorefresh not active) |
| Status page + kill_switch panel | 0.5 | ✅ (MCP health grid not implemented) |
| Equity page + drawdown + monthly plots | 0.5 | ✅ |
| Position page + Plotly payoff + decision history | 1.0 | ✅ (live greeks and force-close deferred) |
| History page + filters + CSV export | 0.5 | ✅ |
| Audit page + chain verify | 0.5 | ✅ (search and gz export deferred) |
| `manual_actions` consumer + APScheduler | 0.5 | ✅ (arm/disarm; force_close = `not_supported`) |
| Integration tests (Streamlit AppTest) | 0.5 | ⏳ |
| **Estimated total** | **~4 days** | |
Definition of Done, current status:
- ✅ `cerbero-bite gui` launches the dashboard on `127.0.0.1:8765`
- ✅ All 5 pages reachable and populated from runtime data
- ✅ Disarm from the GUI is logged in the audit chain
  (`source="manual_gui"`) and effective within ~1 minute (cron `*/1`)
- ⏳ Force-close from the GUI: the enqueue is possible, but the
  orchestrator has no `handle_force_close` yet; the consumer marks it
  `result="not_supported"`
- ⏳ Integration tests with `streamlit.testing.v1.AppTest`: not written

The open items are isolated follow-ups and do not block day-to-day use
of the dashboard as an observation board.
+5 -1
View File (`pyproject.toml`)
@@ -20,6 +20,7 @@ dependencies = [
"httpx>=0.27", "httpx>=0.27",
"tenacity>=9.0", "tenacity>=9.0",
"python-dateutil>=2.9", "python-dateutil>=2.9",
"python-dotenv>=1.2.2",
] ]
[project.optional-dependencies] [project.optional-dependencies]
@@ -96,6 +97,9 @@ ignore = [
[tool.ruff.lint.per-file-ignores] [tool.ruff.lint.per-file-ignores]
"tests/**" = ["PLR2004", "ARG", "S101", "ERA001", "B017"] "tests/**" = ["PLR2004", "ARG", "S101", "ERA001", "B017"]
# Streamlit auto-discovers pages whose file names start with a number and
# may contain icons; the convention conflicts with N999.
"src/cerbero_bite/gui/pages/*" = ["N999"]
[tool.ruff.format] [tool.ruff.format]
quote-style = "double" quote-style = "double"
@@ -113,7 +117,7 @@ no_implicit_reexport = true
files = ["src/cerbero_bite"] files = ["src/cerbero_bite"]
[[tool.mypy.overrides]] [[tool.mypy.overrides]]
module = ["apscheduler.*"] module = ["apscheduler.*", "plotly.*", "pandas.*"]
ignore_missing_imports = true ignore_missing_imports = true
[tool.pytest.ini_options] [tool.pytest.ini_options]
-28
View File (`secrets/README.md`, deleted)
@@ -1,28 +0,0 @@
# `secrets/`

Runtime folder for sensitive credentials. Every file in this directory
is `.gitignore`d except this README and `.gitkeep`.

## Expected content

| File | Origin | Use |
|---|---|---|
| `core.token` | copy of `Cerbero_mcp/secrets/core.token` | bearer token with the `core` capability used to call the MCP tools. Read once at container boot. |

## Setup

```bash
cp /path/to/Cerbero_mcp/secrets/core.token secrets/core.token
chmod 600 secrets/core.token
```

Cerbero Bite's `docker-compose.yml` mounts `secrets/core.token` as a
Docker secret at `/run/secrets/core_token` inside the container, and the
`CERBERO_BITE_CORE_TOKEN_FILE` environment variable points there by
default.

## Rotation

When the core token is rotated on the Cerbero_mcp cluster, replace the
local copy as well. The container must be restarted because the token is
read only at startup.
+101 -26
View File (`src/cerbero_bite/cli.py`)
@@ -10,6 +10,7 @@ without changing the surface.
from __future__ import annotations

import asyncio
import os
import sys
from collections.abc import Callable
from datetime import UTC, datetime
@@ -26,14 +27,15 @@ from cerbero_bite.clients import HttpToolClient, McpError
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.clients.hyperliquid import HyperliquidClient
from cerbero_bite.clients.macro import MacroClient
-from cerbero_bite.clients.portfolio import PortfolioClient
from cerbero_bite.clients.sentiment import SentimentClient
from cerbero_bite.config.loader import compute_config_hash, load_strategy
from cerbero_bite.config.mcp_endpoints import (
    DEFAULT_ENDPOINTS,
    load_bot_tag,
    load_endpoints,
    load_token,
)
from cerbero_bite.config.runtime_flags import load_runtime_flags
from cerbero_bite.logging import configure as configure_logging
from cerbero_bite.logging import get_logger
from cerbero_bite.runtime.orchestrator import Orchestrator, make_orchestrator
@@ -74,6 +76,14 @@ def _phase0_notice(action: str) -> None:
@click.pass_context
def main(ctx: click.Context, log_dir: Path, log_level: str) -> None:
    """Cerbero Bite — rule-based ETH credit spread engine."""
    # Load `.env` once at CLI entry, unless we are running under
    # pytest (which sets ``PYTEST_CURRENT_TEST`` for the duration of
    # the test). Existing env vars win over the file (override=False).
    if "PYTEST_CURRENT_TEST" not in os.environ:
        from dotenv import load_dotenv  # noqa: PLC0415

        load_dotenv(Path.cwd() / ".env", override=False)
    configure_logging(log_dir=log_dir, level=log_level.upper())
    ctx.ensure_object(dict)
    ctx.obj["log_dir"] = log_dir
@@ -197,9 +207,14 @@ def _engine_options(func: Callable[..., Any]) -> Callable[..., Any]:
        show_default=True,
    ),
    click.option(
        "--token",
        type=str,
        default=None,
        help=(
            "MCP bearer token (overrides CERBERO_BITE_MCP_TOKEN). "
            "The server uses the token to choose between testnet "
            "and mainnet upstream environments."
        ),
    ),
    click.option(
        "--db",
@@ -235,7 +250,7 @@ def _engine_options(func: Callable[..., Any]) -> Callable[..., Any]:
def _build_orchestrator(
    *,
    strategy_path: Path,
    token: str | None,
    db: Path,
    audit: Path,
    environment: str,
@@ -243,7 +258,7 @@ def _build_orchestrator(
    enforce_hash: bool = True,
) -> Orchestrator:
    loaded = load_strategy(strategy_path, enforce_hash=enforce_hash)
    resolved_token = load_token(value=token)
    # Strategy file values win over the CLI defaults; explicit overrides
    # via env-style values (CLI flags) still apply when the user provides
    # them — Click signals "default" via Click's resilient_parsing flag,
@@ -262,11 +277,13 @@ def _build_orchestrator(
    return make_orchestrator(
        cfg=loaded.config,
        endpoints=load_endpoints(),
        token=resolved_token,
        db_path=db,
        audit_path=audit,
        expected_environment=chosen_env,  # type: ignore[arg-type]
        eur_to_usd=chosen_fx,
        bot_tag=load_bot_tag(),
        flags=load_runtime_flags(),
    )
@@ -274,7 +291,7 @@ def _build_orchestrator(
@_engine_options
def start(
    strategy_path: Path,
    token: str | None,
    db: Path,
    audit: Path,
    environment: str,
@@ -284,7 +301,7 @@ def start(
    try:
        orch = _build_orchestrator(
            strategy_path=strategy_path,
            token=token,
            db=db,
            audit=audit,
            environment=environment,
@@ -314,7 +331,7 @@ def start(
)
def dry_run(
    strategy_path: Path,
    token: str | None,
    db: Path,
    audit: Path,
    environment: str,
@@ -324,7 +341,7 @@ def dry_run(
"""Execute one cycle without starting the scheduler.""" """Execute one cycle without starting the scheduler."""
orch = _build_orchestrator( orch = _build_orchestrator(
strategy_path=strategy_path, strategy_path=strategy_path,
token_file=token_file, token=token,
db=db, db=db,
audit=audit, audit=audit,
environment=environment, environment=environment,
@@ -498,10 +515,13 @@ def kill_switch_status(db: Path) -> None:
@main.command()
@click.option(
    "--token",
    type=str,
    default=None,
    help=(
        "MCP bearer token (overrides CERBERO_BITE_MCP_TOKEN). The "
        "server uses the token to choose between testnet and mainnet."
    ),
)
@click.option(
    "--timeout",
@@ -510,16 +530,16 @@ def kill_switch_status(db: Path) -> None:
    show_default=True,
    help="Per-service timeout in seconds for the ping call.",
)
def ping(token: str | None, timeout: float) -> None:
    """Print health status for every MCP service Cerbero Bite uses."""
    try:
        resolved_token = load_token(value=token)
    except ValueError as exc:
        console.print(f"[red]token error[/red]: {exc}")
        sys.exit(1)
    endpoints = load_endpoints()
    rows = asyncio.run(_ping_all(endpoints, token=resolved_token, timeout=timeout))
    table = Table(title="MCP services")
    table.add_column("service")
@@ -560,12 +580,6 @@ async def _ping_one(
if service == "hyperliquid": if service == "hyperliquid":
await HyperliquidClient(http).funding_rate_annualized("ETH") await HyperliquidClient(http).funding_rate_annualized("ETH")
return "ok", "ETH-PERP reachable" return "ok", "ETH-PERP reachable"
if service == "portfolio":
await PortfolioClient(http).total_equity_eur()
return "ok", "portfolio reachable"
if service == "telegram":
# Notify-only: no read tool. Skip without hitting the bot.
return "skipped", "notify-only client (no health probe)"
return "skipped", "no probe defined" # pragma: no cover return "skipped", "no probe defined" # pragma: no cover
except McpError as exc: except McpError as exc:
return "fail", f"{type(exc).__name__}: {exc}" return "fail", f"{type(exc).__name__}: {exc}"
@@ -587,9 +601,70 @@ async def _ping_all(
@main.command()
-def gui() -> None:
-    """Launch the Streamlit dashboard."""
-    _phase0_notice("gui command not yet implemented (will run streamlit on 127.0.0.1:8765).")
@click.option(
    "--db",
    type=click.Path(path_type=Path),
    default=_DEFAULT_DB_PATH,
    show_default=True,
    help="SQLite state file the dashboard reads.",
)
@click.option(
    "--audit",
    type=click.Path(path_type=Path),
    default=_DEFAULT_AUDIT_PATH,
    show_default=True,
    help="Audit log file the dashboard streams.",
)
@click.option(
    "--port",
    type=int,
    default=8765,
    show_default=True,
    help="Local port to bind (always 127.0.0.1).",
)
@click.option(
    "--headless/--no-headless",
    default=True,
    show_default=True,
    help="When true, do not auto-open the browser.",
)
def gui(db: Path, audit: Path, port: int, headless: bool) -> None:
    """Launch the Streamlit dashboard (read-only, localhost only)."""
    try:
        import streamlit  # noqa: F401, PLC0415
    except ImportError:
        click.echo(
            "streamlit not installed. Run `uv sync --extra gui` first.",
            err=True,
        )
        sys.exit(1)
    main_path = Path(__file__).parent / "gui" / "main.py"
    if not main_path.is_file():
        click.echo(f"GUI entry point not found: {main_path}", err=True)
        sys.exit(1)
    env = os.environ.copy()
    env["CERBERO_BITE_GUI_DB"] = str(db.resolve())
    env["CERBERO_BITE_GUI_AUDIT"] = str(audit.resolve())
    cmd = [
        sys.executable,
        "-m",
        "streamlit",
        "run",
        str(main_path),
        "--server.address",
        "127.0.0.1",
        "--server.port",
        str(port),
        "--server.headless",
        "true" if headless else "false",
        "--browser.gatherUsageStats",
        "false",
    ]
    click.echo(f"Launching GUI on http://127.0.0.1:{port}")
    os.execvpe(cmd[0], cmd, env)
@main.command()
+31 -5
View File (`src/cerbero_bite/clients/_base.py`)
@@ -1,10 +1,13 @@
"""HTTP tool client common to every MCP wrapper. """HTTP tool client common to every MCP wrapper.
Each MCP service exposes ``POST <base_url>/tools/<tool_name>`` with a Each MCP service exposes ``POST <base_url>/tools/<tool_name>`` with a
JSON body and a ``Bearer <core_token>`` header. ``HttpToolClient`` is a JSON body, a ``Bearer <token>`` header (the token decides the upstream
thin wrapper around :class:`httpx.AsyncClient` that: environment, testnet or mainnet, on the Cerbero MCP V2 server), and an
``X-Bot-Tag`` header that identifies the calling bot in the audit log.
``HttpToolClient`` is a thin wrapper around :class:`httpx.AsyncClient`
that:
* Adds the auth header. * Adds the auth and bot-tag headers.
* Applies the project-wide timeout (default 8 s, see * Applies the project-wide timeout (default 8 s, see
``docs/10-config-spec.md`` ``mcp.call_timeout_s``). ``docs/10-config-spec.md`` ``mcp.call_timeout_s``).
* Retries the call on transient failures with exponential backoff * Retries the call on transient failures with exponential backoff
@@ -44,7 +47,7 @@ from cerbero_bite.clients._exceptions import (
    McpToolError,
)

__all__ = ["DEFAULT_BOT_TAG", "HttpToolClient"]

_log = logging.getLogger("cerbero_bite.clients")
@@ -53,6 +56,12 @@ _RETRYABLE: tuple[type[BaseException], ...] = (
    McpServerError,
)
# Bot identifier sent on every MCP call via the ``X-Bot-Tag`` header.
# The Cerbero MCP V2 server logs this value in the audit record so each
# write operation can be traced back to the originating bot.
DEFAULT_BOT_TAG = "BOT__CERBERO_BITE"
_BOT_TAG_MAX_LEN = 64
class HttpToolClient:
    """Async client for ``POST <base>/tools/<tool>`` style MCP services.
@@ -61,7 +70,14 @@ class HttpToolClient:
        service: short service identifier (``"deribit"``, ``"macro"`` …).
        base_url: e.g. ``"http://mcp-deribit:9011"``. Trailing slash
            is stripped.
        token: bearer token for the ``Authorization`` header. On
            Cerbero MCP V2 the value of the token decides whether the
            upstream environment is testnet or mainnet; the bot does
            not need to know which is which.
        bot_tag: value of the ``X-Bot-Tag`` header. Defaults to
            :data:`DEFAULT_BOT_TAG` (``"BOT__CERBERO_BITE"``). The
            server rejects requests with a missing/empty/over-long
            value with HTTP 400.
        timeout_s: per-request timeout, default 8 seconds.
        retry_max: max number of attempts (1 = no retry).
        retry_base_delay: base delay for exponential backoff.
@@ -74,15 +90,24 @@ class HttpToolClient:
        service: str,
        base_url: str,
        token: str,
        bot_tag: str = DEFAULT_BOT_TAG,
        timeout_s: float = 8.0,
        retry_max: int = 3,
        retry_base_delay: float = 1.0,
        sleep: Callable[[int | float], Awaitable[None] | None] | None = None,
        client: httpx.AsyncClient | None = None,
    ) -> None:
        cleaned_tag = bot_tag.strip()
        if not cleaned_tag:
            raise ValueError("bot_tag must be a non-empty string")
        if len(cleaned_tag) > _BOT_TAG_MAX_LEN:
            raise ValueError(
                f"bot_tag exceeds {_BOT_TAG_MAX_LEN} characters: {cleaned_tag!r}"
            )
        self._service = service
        self._base_url = base_url.rstrip("/")
        self._token = token
        self._bot_tag = cleaned_tag
        self._timeout = httpx.Timeout(timeout_s)
        self._retry_max = max(1, retry_max)
        self._retry_base_delay = retry_base_delay
@@ -114,6 +139,7 @@ class HttpToolClient:
        headers = {
            "Authorization": f"Bearer {self._token}",
            "Content-Type": "application/json",
            "X-Bot-Tag": self._bot_tag,
        }
        payload = body or {}
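
A usage sketch of the constructor contract above (values illustrative):

```python
# Valid: the tag is stripped and sent as X-Bot-Tag on every call.
client = HttpToolClient(
    service="deribit",
    base_url="http://mcp-deribit:9011/",  # trailing slash is stripped
    token="s3cret",
    bot_tag=" BOT__CERBERO_BITE ",
)

# Rejected client-side, mirroring the server's HTTP 400:
HttpToolClient(service="macro", base_url="http://x", token="t", bot_tag="   ")
# ValueError: bot_tag must be a non-empty string
```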
+66 -3
View File (`src/cerbero_bite/clients/deribit.py`)
@@ -303,14 +303,15 @@ class DeribitClient:
            return Decimal(str(entry["close"]))
        return None

    async def dealer_gamma_profile(
        self,
        currency: str,
        *,
        expiry_from: datetime | None = None,
        expiry_to: datetime | None = None,
        top_n_strikes: int = 50,
    ) -> DealerGammaSnapshot:
        """Return the aggregated dealer net gamma snapshot for ``currency``.

        Long-gamma regime (``total_net_dealer_gamma > 0``) is associated
        with vol-suppressing dealer hedging — the entry filter §2.8 uses
@@ -318,7 +319,7 @@ class DeribitClient:
        (vol-amplifying dealer flow).
        """
        body: dict[str, Any] = {
            "currency": currency.upper(),
            "top_n_strikes": top_n_strikes,
        }
        if expiry_from is not None:
@@ -347,6 +348,68 @@ class DeribitClient:
            strikes_analyzed=int(raw.get("strikes_analyzed") or 0),
        )
    async def dealer_gamma_profile_eth(
        self,
        *,
        expiry_from: datetime | None = None,
        expiry_to: datetime | None = None,
        top_n_strikes: int = 50,
    ) -> DealerGammaSnapshot:
        """Backwards-compatible alias of :py:meth:`dealer_gamma_profile`."""
        return await self.dealer_gamma_profile(
            "ETH",
            expiry_from=expiry_from,
            expiry_to=expiry_to,
            top_n_strikes=top_n_strikes,
        )

    async def realized_vol(
        self,
        currency: str,
        *,
        windows: tuple[int, ...] = (14, 30),
    ) -> dict[str, Decimal | None]:
        """Annualised realised vol for ``currency`` plus IV-RV spread.

        Returns ``{"rv_14d", "rv_30d", "iv_minus_rv_30d", "iv_current"}``
        (``None`` for any missing field). Pure read-only — no side
        effects on the engine.
        """
        raw = await self._http.call(
            "get_realized_vol",
            {"currency": currency.upper(), "windows": list(windows)},
        )
        if not isinstance(raw, dict):
            return {}
        rv = raw.get("realized_vol_pct") or {}
        spread = raw.get("iv_minus_rv_pct") or {}
        return {
            "rv_14d": _to_decimal(rv.get("14d")),
            "rv_30d": _to_decimal(rv.get("30d")),
            "iv_current": _to_decimal(raw.get("iv_current_pct")),
            "iv_minus_rv_30d": _to_decimal(spread.get("30d")),
            "iv_minus_rv_14d": _to_decimal(spread.get("14d")),
        }

    async def spot_perp_price(self, asset: str) -> Decimal:
        """Mark price of ``<ASSET>-PERPETUAL`` (cheap proxy for spot)."""
        instrument = f"{asset.upper()}-PERPETUAL"
        raw = await self._http.call("get_ticker", {"instrument": instrument})
        if not isinstance(raw, dict):
            raise McpDataAnomalyError(
                f"get_ticker: unexpected shape for {instrument}",
                service=self.SERVICE,
                tool="get_ticker",
            )
        mark = raw.get("mark_price") or raw.get("last_price")
        if mark is None:
            raise McpDataAnomalyError(
                f"get_ticker: missing mark_price for {instrument}",
                service=self.SERVICE,
                tool="get_ticker",
            )
        return Decimal(str(mark))
    async def adx_14(
        self,
        *,
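
Call-shape sketch for the read methods above (async context, a wired
`deribit` client assumed):

```python
snap = await deribit.dealer_gamma_profile("btc")     # currency is upper-cased
snap_eth = await deribit.dealer_gamma_profile_eth()  # legacy alias, ETH only

vols = await deribit.realized_vol("ETH")     # keys: rv_14d, rv_30d, iv_current, …
spot = await deribit.spot_perp_price("eth")  # Decimal mark of ETH-PERPETUAL
```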
+23 -3
View File (`src/cerbero_bite/clients/hyperliquid.py`)
@@ -1,13 +1,17 @@
"""Wrapper around ``mcp-hyperliquid``. """Wrapper around ``mcp-hyperliquid``.
Cerbero Bite consumes a single tool: ``get_funding_rate`` for ETH-PERP, Cerbero Bite consumes:
used by entry filter §2.6 of ``docs/01-strategy-rules.md`` (cap on the
absolute annualised funding rate). * ``get_funding_rate`` — entry filter §2.6 cap on absolute annualised
funding rate (``docs/01-strategy-rules.md``).
* ``get_account_summary`` and ``get_positions`` — feed the in-process
portfolio aggregator (equity + ETH/BTC exposure on the perp side).
""" """
from __future__ import annotations

from decimal import Decimal
from typing import Any
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients._exceptions import McpDataAnomalyError
@@ -47,3 +51,19 @@ class HyperliquidClient:
tool="get_funding_rate", tool="get_funding_rate",
) )
return Decimal(str(rate)) * Decimal(HOURLY_FUNDING_PERIODS_PER_YEAR) return Decimal(str(rate)) * Decimal(HOURLY_FUNDING_PERIODS_PER_YEAR)
    async def get_account_summary(self) -> dict[str, Any]:
        """Account equity and balances (USD)."""
        raw: Any = await self._http.call("get_account_summary", {})
        return raw if isinstance(raw, dict) else {}

    async def get_positions(self) -> list[dict[str, Any]]:
        """Open perp positions (list of dicts)."""
        raw: Any = await self._http.call("get_positions", {})
        if isinstance(raw, list):
            return raw
        if isinstance(raw, dict):
            inner = raw.get("positions")
            if isinstance(inner, list):
                return inner
        return []
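
The shape tolerance of `get_positions`, restated as a standalone check
(hypothetical payloads):

```python
def _normalize_positions(raw: object) -> list:
    # Mirrors HyperliquidClient.get_positions: a bare list,
    # {"positions": [...]}, or [] for anything else.
    if isinstance(raw, list):
        return raw
    if isinstance(raw, dict) and isinstance(raw.get("positions"), list):
        return raw["positions"]
    return []


assert _normalize_positions([{"coin": "ETH"}]) == [{"coin": "ETH"}]
assert _normalize_positions({"positions": []}) == []
assert _normalize_positions("garbage") == []
```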
+30
View File (`src/cerbero_bite/clients/macro.py`)
@@ -9,11 +9,13 @@ the requested window. The orchestrator feeds the result straight into
from __future__ import annotations

from datetime import UTC, datetime
from decimal import Decimal
from typing import Any

from pydantic import BaseModel, ConfigDict

from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients._exceptions import McpDataAnomalyError
__all__ = ["MacroClient", "MacroEvent"] __all__ = ["MacroClient", "MacroEvent"]
@@ -71,6 +73,34 @@ class MacroClient:
        )
        return out
    async def get_asset_price(self, ticker: str) -> Decimal:
        """Return the latest cross-asset price for ``ticker`` (e.g. ``EURUSD``)."""
        raw = await self._http.call("get_asset_price", {"ticker": ticker})
        if not isinstance(raw, dict):
            raise McpDataAnomalyError(
                f"macro get_asset_price unexpected shape: {type(raw).__name__}",
                service=self.SERVICE,
                tool="get_asset_price",
            )
        if raw.get("error"):
            raise McpDataAnomalyError(
                f"macro get_asset_price error for {ticker}: {raw['error']}",
                service=self.SERVICE,
                tool="get_asset_price",
            )
        price = raw.get("price")
        if price is None:
            raise McpDataAnomalyError(
                f"macro get_asset_price missing 'price' for {ticker}",
                service=self.SERVICE,
                tool="get_asset_price",
            )
        return Decimal(str(price))

    async def eur_usd_rate(self) -> Decimal:
        """Return EUR→USD spot rate (i.e. ``EURUSD`` price)."""
        return await self.get_asset_price("EURUSD")
    async def next_high_severity_within(
        self,
        *,
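
Usage sketch (async context, a wired `macro` client assumed; `XAUUSD` is
a hypothetical ticker, `EURUSD` is the one the portfolio aggregator
actually uses):

```python
fx = await macro.eur_usd_rate()               # Decimal, the EURUSD price
gold = await macro.get_asset_price("XAUUSD")  # raises McpDataAnomalyError on
                                              # error payloads or missing price
```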
+130 -65
View File (`src/cerbero_bite/clients/portfolio.py`)
@@ -1,92 +1,157 @@
"""Wrapper around ``mcp-portfolio``. """In-process portfolio aggregator.
Cerbero Bite uses two pieces of information from this service: Each Cerbero Suite bot now manages its own portfolio view: instead of
calling a shared ``mcp-portfolio`` service, this client composes the
account summaries and open positions from the exchanges the bot
actually uses (Deribit options + Hyperliquid perps) and converts them
to EUR via the macro service.
* total portfolio value (EUR) — fed to the sizing engine after FX Two values are exposed:
conversion to USD;
* exposure of a specific asset as percentage of the total portfolio —
used by entry filter §2.7 (``eth_holdings_pct_max``).
The portfolio service stores everything in EUR. The orchestrator is * :py:meth:`total_equity_eur` — sum of USDC equity on Deribit and USD
responsible for the EUR→USD conversion using a live FX rate. equity on Hyperliquid, converted to EUR using the live ``EURUSD``
rate from ``mcp-macro``.
* :py:meth:`asset_pct_of_portfolio` — fraction (0..1) of total USD
equity exposed to a specific ticker via open positions on the two
exchanges. Used by entry filter §2.7 (``eth_holdings_pct_max``).
**Scope note**: this is the bot's own slice. Holdings on other
exchanges, in cold storage, or held by other bots in the suite are
*not* counted. The §2.7 limit is therefore a per-bot cap, not a
suite-wide one.
""" """
from __future__ import annotations from __future__ import annotations
import asyncio
from collections.abc import Iterable
from decimal import Decimal from decimal import Decimal
from typing import Any from typing import Any, cast
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients._exceptions import McpDataAnomalyError from cerbero_bite.clients._exceptions import McpDataAnomalyError
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.clients.hyperliquid import HyperliquidClient
from cerbero_bite.clients.macro import MacroClient
__all__ = ["PortfolioClient"] __all__ = ["PortfolioClient"]
class PortfolioClient: def _decimal_or_zero(value: Any) -> Decimal:
SERVICE = "portfolio" if value is None:
return Decimal(0)
try:
return Decimal(str(value))
except (ValueError, ArithmeticError):
return Decimal(0)
def __init__(self, http: HttpToolClient) -> None:
if http.service != self.SERVICE: def _position_notional_usd(pos: dict[str, Any]) -> Decimal:
raise ValueError( """Best-effort USD notional of an open position.
f"PortfolioClient requires service '{self.SERVICE}', got '{http.service}'"
Prefers an explicit ``notional_usd`` / ``size_usd`` / ``value_usd``
field. Falls back to ``|size × mark_price|`` (or ``index_price`` if
mark is missing). Returns 0 on malformed entries.
"""
for key in ("notional_usd", "size_usd", "value_usd", "position_value"):
v = pos.get(key)
if v is not None:
return abs(_decimal_or_zero(v))
size = _decimal_or_zero(pos.get("size") or pos.get("szi"))
mark = _decimal_or_zero(
pos.get("mark_price")
or pos.get("entry_price")
or pos.get("index_price")
) )
self._http = http return abs(size * mark)
def _instrument_label(pos: dict[str, Any]) -> str:
for key in ("instrument_name", "instrument", "symbol", "coin", "asset"):
v = pos.get(key)
if v is not None:
return str(v).upper()
return ""
class PortfolioClient:
"""Aggregates equity + asset exposure across the bot's exchange accounts."""
def __init__(
self,
*,
deribit: DeribitClient,
hyperliquid: HyperliquidClient,
macro: MacroClient,
) -> None:
self._deribit = deribit
self._hyperliquid = hyperliquid
self._macro = macro
async def _equity_usd_components(self) -> tuple[Decimal, Decimal]:
"""Concurrent fetch of (deribit_equity_usd, hyperliquid_equity_usd)."""
deribit_summary, hl_summary = await asyncio.gather(
self._deribit.get_account_summary(currency="USDC"),
self._hyperliquid.get_account_summary(),
)
deribit_eq = _decimal_or_zero(deribit_summary.get("equity"))
hl_eq = _decimal_or_zero(hl_summary.get("equity"))
return deribit_eq, hl_eq
async def total_equity_usd(self) -> Decimal:
"""Sum equity USD across the bot's exchange accounts."""
deribit_eq, hl_eq = await self._equity_usd_components()
return deribit_eq + hl_eq
async def total_equity_eur(self) -> Decimal: async def total_equity_eur(self) -> Decimal:
"""Return the aggregate portfolio value in EUR.""" """Return aggregate bot equity in EUR.
raw = await self._http.call(
"get_total_portfolio_value", {"currency": "EUR"} Concurrent: account summaries × FX. Raises
) :class:`McpDataAnomalyError` if the FX rate is non-positive.
if not isinstance(raw, dict): """
components_t = asyncio.create_task(self._equity_usd_components())
fx_t = asyncio.create_task(self._macro.eur_usd_rate())
await asyncio.gather(components_t, fx_t)
deribit_eq, hl_eq = components_t.result()
fx = fx_t.result()
if fx <= 0:
raise McpDataAnomalyError( raise McpDataAnomalyError(
f"portfolio total_value_eur unexpected shape: {type(raw).__name__}", f"non-positive EURUSD rate: {fx}",
service=self.SERVICE, service="macro",
tool="get_total_portfolio_value", tool="get_asset_price",
) )
value = raw.get("total_value_eur") usd_total = deribit_eq + hl_eq
if value is None: return usd_total / fx
raise McpDataAnomalyError(
"portfolio response missing 'total_value_eur'",
service=self.SERVICE,
tool="get_total_portfolio_value",
)
return Decimal(str(value))
async def asset_pct_of_portfolio(self, ticker: str) -> Decimal: async def asset_pct_of_portfolio(self, ticker: str) -> Decimal:
"""Return the fraction (0..1) of the portfolio held in ``ticker``. """Fraction of bot equity (USD) exposed to ``ticker``.
Iterates the holdings list and aggregates ``current_value_eur`` Sums absolute USD notional of open positions whose instrument
for any holding whose ticker contains ``ticker`` (case-insensitive). label contains ``ticker`` (case-insensitive) on Deribit and
Empty portfolio → 0. Hyperliquid, divided by the bot's total USD equity. Returns 0
when there is no equity or no exposure.
""" """
holdings = await self._http.call("get_holdings", {"min_value_eur": 0})
if not isinstance(holdings, list):
raise McpDataAnomalyError(
f"portfolio get_holdings unexpected shape: {type(holdings).__name__}",
service=self.SERVICE,
tool="get_holdings",
)
target = ticker.upper() target = ticker.upper()
matching_value = Decimal("0") deribit_pos_t = asyncio.create_task(
total_value = Decimal("0") self._deribit.get_positions(currency="USDC")
for entry in holdings: )
if not isinstance(entry, dict): hl_pos_t = asyncio.create_task(self._hyperliquid.get_positions())
continue equity_t = asyncio.create_task(self._equity_usd_components())
value = entry.get("current_value_eur") await asyncio.gather(deribit_pos_t, hl_pos_t, equity_t)
if value is None:
continue
value_dec = Decimal(str(value))
total_value += value_dec
entry_ticker = str(entry.get("ticker") or "").upper()
if target in entry_ticker:
matching_value += value_dec
if total_value == 0: exposure_usd = Decimal(0)
return Decimal("0") for raw_pos in cast(Iterable[Any], deribit_pos_t.result()):
return matching_value / total_value if not isinstance(raw_pos, dict):
continue
if target in _instrument_label(raw_pos):
exposure_usd += _position_notional_usd(raw_pos)
for raw_pos in cast(Iterable[Any], hl_pos_t.result()):
if not isinstance(raw_pos, dict):
continue
if target in _instrument_label(raw_pos):
exposure_usd += _position_notional_usd(raw_pos)
async def health(self) -> dict[str, Any]: deribit_eq, hl_eq = equity_t.result()
"""Lightweight call used by ``cerbero-bite ping``.""" total_eq = deribit_eq + hl_eq
result: Any = await self._http.call("get_last_update_info", {}) if total_eq <= 0:
return result if isinstance(result, dict) else {} return Decimal(0)
return exposure_usd / total_eq
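
Wiring sketch: the aggregator composes the three clients the bot already
owns (variable names illustrative):

```python
portfolio = PortfolioClient(deribit=deribit, hyperliquid=hyperliquid, macro=macro)

equity_eur = await portfolio.total_equity_eur()          # (Deribit + HL USD) / EURUSD
eth_pct = await portfolio.asset_pct_of_portfolio("ETH")  # 0..1 of USD equity
```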
+126 -49
View File (`src/cerbero_bite/clients/telegram.py`)
@@ -1,41 +1,115 @@
"""Wrapper around ``mcp-telegram`` (notify-only mode). """Direct Telegram Bot API client (notify-only).
Cerbero Bite during the testnet phase (and through the soft launch) is Cerbero Bite is fully autonomous: Telegram is used solely to *notify*
fully autonomous: Telegram is used purely to *notify* Adriano of what the operator of what the engine has done — there is no inbound queue
the engine has done, never to gate execution. As a consequence: and no confirmation logic.
* No ``send_with_buttons`` and no callback queue. Credentials are read from the environment:
* Confirmation timeouts are handled inside the orchestrator's own
state machine, not by waiting on Telegram replies. * ``CERBERO_BITE_TELEGRAM_BOT_TOKEN`` — bot token from BotFather.
* All notifications go through one of the typed endpoints * ``CERBERO_BITE_TELEGRAM_CHAT_ID`` — destination chat id.
(``notify``, ``notify_position_opened``, ``notify_position_closed``,
``notify_alert``, ``notify_system_error``) — the formatting lives If either is missing the client runs in **disabled** mode: every
on the server side. ``notify_*`` becomes a no-op logged at DEBUG. This keeps unconfigured
deployments and the test environment harmless.
""" """
from __future__ import annotations from __future__ import annotations
import logging
import os
from decimal import Decimal from decimal import Decimal
from typing import Any from typing import Any
from cerbero_bite.clients._base import HttpToolClient import httpx
__all__ = ["TelegramClient"] __all__ = [
"TELEGRAM_BOT_TOKEN_ENV",
"TELEGRAM_CHAT_ID_ENV",
"TelegramClient",
"TelegramError",
"load_telegram_credentials",
]
def _to_float(value: Decimal | float) -> float: TELEGRAM_BOT_TOKEN_ENV = "CERBERO_BITE_TELEGRAM_BOT_TOKEN"
return float(value) if isinstance(value, Decimal) else value TELEGRAM_CHAT_ID_ENV = "CERBERO_BITE_TELEGRAM_CHAT_ID"
_log = logging.getLogger("cerbero_bite.clients.telegram")
class TelegramError(RuntimeError):
"""Raised when the Telegram Bot API rejects a sendMessage call."""
def _to_float(value: Decimal | float | int) -> float:
return float(value)
def load_telegram_credentials(
env: dict[str, str] | None = None,
) -> tuple[str | None, str | None]:
"""Return ``(bot_token, chat_id)`` from env. Empty strings → ``None``."""
e = env if env is not None else os.environ
token = (e.get(TELEGRAM_BOT_TOKEN_ENV) or "").strip() or None
chat = (e.get(TELEGRAM_CHAT_ID_ENV) or "").strip() or None
return token, chat
class TelegramClient: class TelegramClient:
SERVICE = "telegram" """Notify-only client over the public Telegram Bot API."""
def __init__(self, http: HttpToolClient) -> None: BASE_URL = "https://api.telegram.org"
if http.service != self.SERVICE:
raise ValueError( def __init__(
f"TelegramClient requires service '{self.SERVICE}', got '{http.service}'" self,
*,
bot_token: str | None,
chat_id: str | None,
http_client: httpx.AsyncClient | None = None,
timeout_s: float = 5.0,
parse_mode: str = "HTML",
) -> None:
self._token = (bot_token or "").strip() or None
self._chat_id = (str(chat_id).strip() if chat_id is not None else "") or None
self._client = http_client
self._timeout = timeout_s
self._parse_mode = parse_mode
@property
def enabled(self) -> bool:
return self._token is not None and self._chat_id is not None
async def _send(self, text: str) -> None:
if not self.enabled:
_log.debug("telegram disabled, dropping message: %s", text[:120])
return
url = f"{self.BASE_URL}/bot{self._token}/sendMessage"
payload: dict[str, Any] = {
"chat_id": self._chat_id,
"text": text,
"parse_mode": self._parse_mode,
"disable_web_page_preview": True,
}
client = self._client
owns = client is None
if client is None:
client = httpx.AsyncClient(timeout=self._timeout)
try:
resp = await client.post(url, json=payload, timeout=self._timeout)
finally:
if owns:
await client.aclose()
if resp.status_code != 200:
raise TelegramError(
f"telegram HTTP {resp.status_code}: {resp.text[:200]}"
) )
self._http = http data = resp.json()
if not isinstance(data, dict) or not data.get("ok", False):
desc = (
data.get("description", "?") if isinstance(data, dict) else str(data)
)
raise TelegramError(f"telegram api error: {desc}")
    async def notify(
        self,
@@ -44,10 +118,10 @@ class TelegramClient:
        priority: str = "normal",
        tag: str | None = None,
    ) -> None:
        prefix = f"[{priority.upper()}]"
        if tag:
            prefix = f"{prefix}[{tag}]"
        await self._send(f"{prefix} {message}")
    async def notify_position_opened(
        self,
@@ -59,17 +133,19 @@ class TelegramClient:
        greeks: dict[str, Decimal | float] | None = None,
        expected_pnl_usd: Decimal | float | None = None,
    ) -> None:
        lines = [
            "<b>POSITION OPENED</b>",
            f"instrument: <code>{instrument}</code>",
            f"side: {side} | size: {size} | strategy: {strategy}",
        ]
        if greeks:
            joined = ", ".join(
                f"{k}={_to_float(v):+.4f}" for k, v in greeks.items()
            )
            lines.append(f"greeks: {joined}")
        if expected_pnl_usd is not None:
            lines.append(f"expected pnl: ${_to_float(expected_pnl_usd):+.2f}")
        await self._send("\n".join(lines))
    async def notify_position_closed(
        self,
@@ -78,13 +154,12 @@ class TelegramClient:
        realized_pnl_usd: Decimal | float,
        reason: str,
    ) -> None:
        pnl = _to_float(realized_pnl_usd)
        await self._send(
            "<b>POSITION CLOSED</b>\n"
            f"instrument: <code>{instrument}</code>\n"
            f"realized pnl: ${pnl:+.2f}\n"
            f"reason: {reason}"
        )
    async def notify_alert(
@@ -94,9 +169,10 @@ class TelegramClient:
        message: str,
        priority: str = "high",
    ) -> None:
        await self._send(
            f"<b>ALERT [{priority.upper()}]</b>\n"
            f"source: {source}\n"
            f"{message}"
        )
    async def notify_system_error(
@@ -106,7 +182,8 @@ class TelegramClient:
        component: str | None = None,
        priority: str = "critical",
    ) -> None:
        text = f"<b>SYSTEM ERROR [{priority.upper()}]</b>\n"
        if component:
            text += f"component: {component}\n"
        text += message
        await self._send(text)
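
Construction sketch: credentials come from the environment, and an
unconfigured environment yields a disabled no-op client:

```python
token, chat = load_telegram_credentials()
tg = TelegramClient(bot_token=token, chat_id=chat)
if tg.enabled:
    await tg.notify("soak window started", priority="low", tag="ops")
# disabled → _send() logs at DEBUG and returns; nothing is posted
```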
+71 -34
View File (`src/cerbero_bite/config/mcp_endpoints.py`)
@@ -1,43 +1,55 @@
"""Resolve MCP service URLs and the bearer token. """Resolve MCP service URLs, the bearer token and the bot tag.
Cerbero Bite runs in its own Docker container that joins the Cerbero MCP V2 (a single FastAPI image fronting Deribit, Hyperliquid,
``cerbero-suite`` network: every MCP service is reachable by the Macro, Sentiment and friends) is deployed on a dedicated VPS and reached
container DNS name plus its internal port (``mcp-deribit:9011`` etc.). through the public gateway at ``https://cerbero-mcp.tielogic.xyz``. The
server decides the upstream environment (testnet vs mainnet) entirely
from the bearer token attached to each request — Cerbero Bite does not
have to be told which is which: swapping the token in ``.env`` is enough
to switch environments.
The resolver supports two layers of override: The resolver supports the following layers of override:
1. Per-service environment variables (``CERBERO_BITE_MCP_DERIBIT_URL``, 1. Per-service URL env vars (``CERBERO_BITE_MCP_DERIBIT_URL``,
``CERBERO_BITE_MCP_MACRO_URL``…). Useful for dev when running ``CERBERO_BITE_MCP_HYPERLIQUID_URL``, ``CERBERO_BITE_MCP_MACRO_URL``,
outside Docker — point at ``http://localhost:9011`` etc. ``CERBERO_BITE_MCP_SENTIMENT_URL``). Useful for local dev when the
2. ``CERBERO_BITE_CORE_TOKEN_FILE`` env var: path to the file that bot must talk to a same-host MCP server (``http://localhost:9000``)
stores the bearer token (default instead of the public gateway.
``/run/secrets/core_token``). The file is read at boot, the 2. ``CERBERO_BITE_MCP_TOKEN`` env var: the bearer token used on every
trailing whitespace is stripped, and the value is *not* logged. request. The token's value is *never* logged.
3. ``CERBERO_BITE_MCP_BOT_TAG`` env var: identifier sent on the
``X-Bot-Tag`` header (default ``BOT__CERBERO_BITE``). Must be a
non-empty string of at most 64 characters.
""" """
from __future__ import annotations

import os
from dataclasses import dataclass
-from pathlib import Path
from cerbero_bite.clients._base import DEFAULT_BOT_TAG
__all__ = [
    "DEFAULT_BOT_TAG",
    "DEFAULT_ENDPOINTS",
    "MCP_SERVICES",
    "McpEndpoints",
    "load_bot_tag",
    "load_endpoints",
    "load_token",
]
# Service identifier → (default Docker DNS host, default port, env var name)
#
# Telegram and Portfolio used to be shared MCP services; both are now
# in-process per bot (Telegram → public Bot API, Portfolio → aggregator
# over Deribit + Hyperliquid + Macro). They are no longer listed here.
MCP_SERVICES: dict[str, tuple[str, int, str]] = {
    "deribit": ("mcp-deribit", 9011, "CERBERO_BITE_MCP_DERIBIT_URL"),
    "hyperliquid": ("mcp-hyperliquid", 9012, "CERBERO_BITE_MCP_HYPERLIQUID_URL"),
    "macro": ("mcp-macro", 9013, "CERBERO_BITE_MCP_MACRO_URL"),
    "sentiment": ("mcp-sentiment", 9014, "CERBERO_BITE_MCP_SENTIMENT_URL"),
-   "telegram": ("mcp-telegram", 9017, "CERBERO_BITE_MCP_TELEGRAM_URL"),
-   "portfolio": ("mcp-portfolio", 9018, "CERBERO_BITE_MCP_PORTFOLIO_URL"),
}
@@ -58,8 +70,6 @@ class McpEndpoints:
    hyperliquid: str
    macro: str
    sentiment: str
-   telegram: str
-   portfolio: str
    def for_service(self, name: str) -> str:
        try:
@@ -78,31 +88,58 @@ def load_endpoints(env: dict[str, str] | None = None) -> McpEndpoints:
    return McpEndpoints(**resolved)


_TOKEN_ENV = "CERBERO_BITE_MCP_TOKEN"
_BOT_TAG_ENV = "CERBERO_BITE_MCP_BOT_TAG"
_BOT_TAG_MAX_LEN = 64


def load_token(
    *,
    value: str | None = None,
    env: dict[str, str] | None = None,
) -> str:
    """Return the MCP bearer token, stripped of surrounding whitespace.

    Resolution order:

    1. explicit ``value`` argument (e.g. from a CLI flag);
    2. ``CERBERO_BITE_MCP_TOKEN`` env var.
    """
    if value is not None:
        token = value.strip()
        if not token:
            raise ValueError("explicit MCP token is empty")
        return token
    e = env if env is not None else os.environ
    raw = e.get(_TOKEN_ENV, "")
    token = raw.strip()
    if not token:
        raise ValueError(
            f"{_TOKEN_ENV} is unset or empty; set it in .env to the testnet or "
            "mainnet bearer issued by Cerbero MCP"
        )
    return token
def load_bot_tag(
    *,
    value: str | None = None,
    env: dict[str, str] | None = None,
) -> str:
    """Return the ``X-Bot-Tag`` value, with the project default as fallback.

    Resolution order:

    1. explicit ``value`` argument;
    2. ``CERBERO_BITE_MCP_BOT_TAG`` env var;
    3. :data:`DEFAULT_BOT_TAG` (``"BOT__CERBERO_BITE"``).
    """
    raw = value if value is not None else (env if env is not None else os.environ).get(
        _BOT_TAG_ENV, ""
    )
    cleaned = raw.strip() if raw else ""
    if not cleaned:
        return DEFAULT_BOT_TAG
    if len(cleaned) > _BOT_TAG_MAX_LEN:
        raise ValueError(
            f"{_BOT_TAG_ENV} exceeds {_BOT_TAG_MAX_LEN} characters: {cleaned!r}"
        )
    return cleaned
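
Resolution behaviour of the two loaders, as doctest-style assertions:

```python
assert load_token(value="  abc  ") == "abc"                    # explicit value wins
assert load_token(env={"CERBERO_BITE_MCP_TOKEN": "t"}) == "t"
# load_token(env={})           → ValueError (unset or empty)

assert load_bot_tag(env={}) == "BOT__CERBERO_BITE"             # default fallback
# load_bot_tag(value="x" * 65) → ValueError (over 64 characters)
```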
+78
View File (`src/cerbero_bite/config/runtime_flags.py`)
@@ -0,0 +1,78 @@
"""Operational mode flags read from the environment.
Cerbero Bite supports two independent runtime switches:
* ``CERBERO_BITE_ENABLE_DATA_ANALYSIS`` — when ``true``, the periodic
market-snapshot job is scheduled and writes 15-minute snapshots to
``market_snapshots``; when ``false``, the bot still pings MCP for
health and reconciliation but does not record any market dataset.
* ``CERBERO_BITE_ENABLE_STRATEGY`` — when ``true``, the entry and
monitor cycles are scheduled and may propose/execute trades; when
``false``, no entry or monitor logic runs autonomously (the methods
remain callable from the CLI ``dry-run`` and via manual actions, so
the operator can still test code paths on demand).
The default profile is "analysis only": data analysis on, strategy off.
This is the mode used during the post-deploy soak window where the
team observes data quality before opening any position.
"""
from __future__ import annotations
import os
from dataclasses import dataclass
__all__ = [
"DATA_ANALYSIS_ENV",
"STRATEGY_ENV",
"RuntimeFlags",
"load_runtime_flags",
]
DATA_ANALYSIS_ENV = "CERBERO_BITE_ENABLE_DATA_ANALYSIS"
STRATEGY_ENV = "CERBERO_BITE_ENABLE_STRATEGY"
_TRUE_TOKENS = frozenset({"1", "true", "yes", "on", "enabled"})
_FALSE_TOKENS = frozenset({"0", "false", "no", "off", "disabled"})
@dataclass(frozen=True)
class RuntimeFlags:
"""Boolean switches that gate optional cycles.
Both fields default to the canonical "analysis only" profile.
"""
data_analysis_enabled: bool = True
strategy_enabled: bool = False
def _parse_bool(raw: str, *, var: str, default: bool) -> bool:
cleaned = raw.strip().lower()
if not cleaned:
return default
if cleaned in _TRUE_TOKENS:
return True
if cleaned in _FALSE_TOKENS:
return False
raise ValueError(
f"{var}: expected one of "
f"{sorted(_TRUE_TOKENS | _FALSE_TOKENS)}, got {raw!r}"
)
def load_runtime_flags(env: dict[str, str] | None = None) -> RuntimeFlags:
"""Build a :class:`RuntimeFlags` from environment variables."""
e = env if env is not None else os.environ
return RuntimeFlags(
data_analysis_enabled=_parse_bool(
e.get(DATA_ANALYSIS_ENV, ""),
var=DATA_ANALYSIS_ENV,
default=True,
),
strategy_enabled=_parse_bool(
e.get(STRATEGY_ENV, ""),
var=STRATEGY_ENV,
default=False,
),
)
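
The operating profiles expressed as env inputs, a sketch using the
loader above:

```python
flags = load_runtime_flags({})                      # default: analysis only
assert flags.data_analysis_enabled and not flags.strategy_enabled

full = load_runtime_flags({"CERBERO_BITE_ENABLE_STRATEGY": "on"})
assert full.strategy_enabled                        # data analysis stays on

infra = load_runtime_flags({
    "CERBERO_BITE_ENABLE_DATA_ANALYSIS": "off",
    "CERBERO_BITE_ENABLE_STRATEGY": "no",
})
assert not (infra.data_analysis_enabled or infra.strategy_enabled)
```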
(binary file added: image, 668 KiB; not shown)
+750
View File (`src/cerbero_bite/gui/data_layer.py`)
@@ -0,0 +1,750 @@
"""Read-only data access for the Streamlit GUI.
The GUI MUST NOT import ``runtime/`` modules nor make MCP calls. Every
piece of information shown on screen is derived from:
* SQLite (``data/state.sqlite``) via :class:`Repository`.
* The audit log (``data/audit.log``) via the parsing helpers in
:mod:`cerbero_bite.safety.audit_log`.
The module exposes small frozen dataclasses purpose-built for rendering
so each Streamlit page can grab a snapshot in one call instead of
poking at the repository directly.
"""
from __future__ import annotations
import json
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from decimal import Decimal
from pathlib import Path
from typing import Literal
from uuid import UUID
from cerbero_bite.safety.audit_log import (
AuditChainError,
AuditEntry,
iter_entries,
verify_chain,
)
from cerbero_bite.state import Repository, connect, transaction
from cerbero_bite.state.models import (
DecisionRecord,
ManualAction,
MarketSnapshotRecord,
PositionRecord,
SystemStateRecord,
)
from cerbero_bite.state.repository import _row_to_manual
__all__ = [
"DEFAULT_AUDIT_PATH",
"DEFAULT_DB_PATH",
"AuditChainStatus",
"EngineHealth",
"EngineSnapshot",
"EquityPoint",
"MonthlyStats",
"PayoffCurve",
"PortfolioKpis",
"PositionDistanceMetrics",
"compute_distance_metrics",
"compute_equity_curve",
"compute_kpis",
"compute_monthly_stats",
"compute_payoff_curve",
"enqueue_arm_kill",
"enqueue_disarm_kill",
"enqueue_run_cycle",
"load_audit_chain_status",
"load_audit_tail",
"load_closed_positions",
"load_decisions_for_position",
"load_engine_snapshot",
"load_market_snapshots",
"load_open_positions",
"load_pending_manual_actions",
"load_position_by_id",
]
DEFAULT_DB_PATH = Path("data/state.sqlite")
DEFAULT_AUDIT_PATH = Path("data/audit.log")
EngineHealth = Literal["running", "degraded", "killed", "stopped", "unknown"]
@dataclass(frozen=True)
class EngineSnapshot:
"""One-shot snapshot used by the Status page."""
health: EngineHealth
kill_switch_armed: bool
kill_reason: str | None
kill_at: datetime | None
last_health_check: datetime | None
last_health_check_age_s: float | None
started_at: datetime | None
config_version: str | None
last_audit_hash: str | None
open_positions: int
@property
def health_label(self) -> str:
return {
"running": "ATTIVO",
"degraded": "DEGRADATO",
"killed": "KILL SWITCH ARMATO",
"stopped": "FERMO",
"unknown": "SCONOSCIUTO",
}[self.health]
@dataclass(frozen=True)
class AuditChainStatus:
"""Result of calling ``verify_chain`` on the audit log."""
ok: bool
entries_verified: int
error: str | None
def load_engine_snapshot(
*,
db_path: Path | str = DEFAULT_DB_PATH,
now: datetime | None = None,
stale_after_s: float = 600.0,
) -> EngineSnapshot:
"""Read system_state + open positions count and derive engine health.
Health rules:
* kill switch armed → ``killed``
* no system_state row → ``unknown`` (engine never started)
* last health check older than ``stale_after_s`` → ``stopped``
* last health check older than 2× cycle (10 min) but younger than
``stale_after_s`` → ``degraded`` (this band is empty unless callers
pass ``stale_after_s`` greater than 600 s)
* fresh health check → ``running``
"""
db_path = Path(db_path)
if not db_path.exists():
return EngineSnapshot(
health="unknown",
kill_switch_armed=False,
kill_reason=None,
kill_at=None,
last_health_check=None,
last_health_check_age_s=None,
started_at=None,
config_version=None,
last_audit_hash=None,
open_positions=0,
)
repo = Repository()
conn = connect(db_path)
try:
state: SystemStateRecord | None = repo.get_system_state(conn)
open_pos = len(repo.list_open_positions(conn))
finally:
conn.close()
if state is None:
return EngineSnapshot(
health="unknown",
kill_switch_armed=False,
kill_reason=None,
kill_at=None,
last_health_check=None,
last_health_check_age_s=None,
started_at=None,
config_version=None,
last_audit_hash=None,
open_positions=open_pos,
)
reference = (now or datetime.now(UTC)).astimezone(UTC)
last_check = state.last_health_check
age = (reference - last_check).total_seconds() if last_check else None
if state.kill_switch:
health: EngineHealth = "killed"
elif age is None:
health = "unknown"
elif age > stale_after_s:
health = "stopped"
elif age > 600: # over 10 minutes since last health probe
health = "degraded"
else:
health = "running"
return EngineSnapshot(
health=health,
kill_switch_armed=bool(state.kill_switch),
kill_reason=state.kill_reason,
kill_at=state.kill_at,
last_health_check=last_check,
last_health_check_age_s=age,
started_at=state.started_at,
config_version=state.config_version,
last_audit_hash=state.last_audit_hash,
open_positions=open_pos,
)
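# Worked sketch of the rules above with the default stale_after_s=600:
# an age of 90 s reads "running", 700 s reads "stopped" (the stale test
# runs first), and "degraded" only appears for callers that pass a
# larger stale_after_s, e.g. stale_after_s=900 with age=700.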
def load_open_positions(
*, db_path: Path | str = DEFAULT_DB_PATH
) -> list[PositionRecord]:
db_path = Path(db_path)
if not db_path.exists():
return []
repo = Repository()
conn = connect(db_path)
try:
return repo.list_open_positions(conn)
finally:
conn.close()
def load_closed_positions(
*,
db_path: Path | str = DEFAULT_DB_PATH,
start: datetime | None = None,
end: datetime | None = None,
) -> list[PositionRecord]:
"""Return positions with status ``closed`` (sorted oldest → newest).
The optional ``start`` / ``end`` window filters by ``closed_at``.
Positions still in flight (open / awaiting_fill / closing /
cancelled) are excluded. ``cancelled`` positions are also excluded
since they never had P&L impact.
"""
db_path = Path(db_path)
if not db_path.exists():
return []
repo = Repository()
conn = connect(db_path)
try:
rows = repo.list_positions(conn, status="closed")
finally:
conn.close()
out: list[PositionRecord] = []
for r in rows:
if r.closed_at is None:
continue
if start is not None and r.closed_at < start:
continue
if end is not None and r.closed_at > end:
continue
out.append(r)
out.sort(key=lambda p: p.closed_at) # type: ignore[arg-type, return-value]
return out
# ---------------------------------------------------------------------------
# Analytics
# ---------------------------------------------------------------------------
@dataclass(frozen=True)
class EquityPoint:
"""One point on the cumulative-PnL curve."""
timestamp: datetime
realized_pnl_usd: Decimal
cumulative_pnl_usd: Decimal
drawdown_usd: Decimal
drawdown_pct: float
@dataclass(frozen=True)
class MonthlyStats:
"""Aggregated stats for a calendar month."""
year_month: str # "2026-04"
n_trades: int
n_wins: int
win_rate: float
pnl_usd: Decimal
avg_pnl_usd: Decimal
@dataclass(frozen=True)
class PortfolioKpis:
"""High-level KPI strip for the History/Equity pages."""
n_trades: int
n_wins: int
win_rate: float
total_pnl_usd: Decimal
avg_win_usd: Decimal
avg_loss_usd: Decimal
edge_per_trade_usd: Decimal
max_drawdown_usd: Decimal
max_drawdown_pct: float
def compute_equity_curve(positions: list[PositionRecord]) -> list[EquityPoint]:
"""Build a cumulative PnL series from closed positions.
Drawdown is measured against the running peak of cumulative PnL
(so it accounts for past wins). ``drawdown_pct`` is expressed
relative to the peak — undefined when peak ≤ 0 (returns 0.0).
"""
if not positions:
return []
points: list[EquityPoint] = []
cumulative = Decimal(0)
peak = Decimal(0)
for pos in positions:
if pos.pnl_usd is None or pos.closed_at is None:
continue
cumulative += pos.pnl_usd
peak = max(peak, cumulative)
dd_usd = peak - cumulative
dd_pct = float(dd_usd / peak) if peak > 0 else 0.0
points.append(
EquityPoint(
timestamp=pos.closed_at,
realized_pnl_usd=pos.pnl_usd,
cumulative_pnl_usd=cumulative,
drawdown_usd=dd_usd,
drawdown_pct=dd_pct,
)
)
return points
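# Worked sketch with hypothetical closes of +50, +30 and -60 USD:
# cumulative PnL runs 50, 80, 20 and the running peak 50, 80, 80, so
# the last point carries drawdown_usd=60 and drawdown_pct=60/80=0.75.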
def compute_kpis(positions: list[PositionRecord]) -> PortfolioKpis:
"""Aggregate KPI strip across the supplied closed positions."""
pnls = [p.pnl_usd for p in positions if p.pnl_usd is not None]
n = len(pnls)
if n == 0:
zero = Decimal(0)
return PortfolioKpis(
n_trades=0,
n_wins=0,
win_rate=0.0,
total_pnl_usd=zero,
avg_win_usd=zero,
avg_loss_usd=zero,
edge_per_trade_usd=zero,
max_drawdown_usd=zero,
max_drawdown_pct=0.0,
)
wins = [p for p in pnls if p > 0]
losses = [p for p in pnls if p < 0]
total = sum(pnls, Decimal(0))
avg_win = sum(wins, Decimal(0)) / Decimal(len(wins)) if wins else Decimal(0)
avg_loss = sum(losses, Decimal(0)) / Decimal(len(losses)) if losses else Decimal(0)
curve = compute_equity_curve(positions)
if curve:
max_dd = max((p.drawdown_usd for p in curve), default=Decimal(0))
max_dd_pct = max((p.drawdown_pct for p in curve), default=0.0)
else: # pragma: no cover — defensive, curve is empty iff pnls empty
max_dd = Decimal(0)
max_dd_pct = 0.0
return PortfolioKpis(
n_trades=n,
n_wins=len(wins),
win_rate=len(wins) / n,
total_pnl_usd=total,
avg_win_usd=avg_win,
avg_loss_usd=avg_loss,
edge_per_trade_usd=total / Decimal(n),
max_drawdown_usd=max_dd,
max_drawdown_pct=max_dd_pct,
)
def compute_monthly_stats(positions: list[PositionRecord]) -> list[MonthlyStats]:
"""Aggregate per calendar month (UTC), oldest → newest."""
buckets: dict[str, list[Decimal]] = {}
for pos in positions:
if pos.pnl_usd is None or pos.closed_at is None:
continue
key = pos.closed_at.astimezone(UTC).strftime("%Y-%m")
buckets.setdefault(key, []).append(pos.pnl_usd)
out: list[MonthlyStats] = []
for key in sorted(buckets):
pnls = buckets[key]
n = len(pnls)
wins = sum(1 for p in pnls if p > 0)
total = sum(pnls, Decimal(0))
out.append(
MonthlyStats(
year_month=key,
n_trades=n,
n_wins=wins,
win_rate=wins / n if n else 0.0,
pnl_usd=total,
avg_pnl_usd=total / Decimal(n) if n else Decimal(0),
)
)
return out
def load_position_by_id(
proposal_id: UUID,
*,
db_path: Path | str = DEFAULT_DB_PATH,
) -> PositionRecord | None:
db_path = Path(db_path)
if not db_path.exists():
return None
repo = Repository()
conn = connect(db_path)
try:
return repo.get_position(conn, proposal_id)
finally:
conn.close()
def load_decisions_for_position(
proposal_id: UUID,
*,
db_path: Path | str = DEFAULT_DB_PATH,
limit: int = 200,
) -> list[DecisionRecord]:
"""Decisions for ``proposal_id`` newest-first."""
db_path = Path(db_path)
if not db_path.exists():
return []
repo = Repository()
conn = connect(db_path)
try:
return repo.list_decisions(conn, proposal_id=proposal_id, limit=limit)
finally:
conn.close()
# ---------------------------------------------------------------------------
# Payoff math (pure, no live data)
# ---------------------------------------------------------------------------
@dataclass(frozen=True)
class PayoffCurve:
"""At-expiry P&L curve for a credit spread."""
spreads_type: str # "bull_put" / "bear_call" / "iron_condor"
spot_grid: list[float]
pnl_grid_usd: list[float]
breakeven: float | None
max_profit_usd: float
max_loss_usd: float
short_strike: float
long_strike: float
spot_at_entry: float
def compute_payoff_curve(
position: PositionRecord,
*,
grid_points: int = 60,
margin_pct: float = 0.15,
) -> PayoffCurve:
"""Build the at-expiry payoff for a credit spread.
Supported spreads (Cerbero Bite scope):
* ``bull_put``: short put @ ``short_strike``, long put @
``long_strike`` (lower). Max profit = credit. Max loss = width −
credit. Breakeven = short_strike − credit_per_contract.
* ``bear_call``: short call @ ``short_strike``, long call @
``long_strike`` (higher). Symmetric to bull_put around the strikes.
* Other types fall back to a flat zero curve to avoid breaking the
page if/when iron condors are implemented later.
"""
short = float(position.short_strike)
long_ = float(position.long_strike)
n = position.n_contracts
width_usd = float(position.spread_width_usd)
credit_total_usd = float(position.credit_usd)
credit_per_contract = credit_total_usd / n if n > 0 else 0.0
spot = float(position.eth_price_at_entry)
lo = min(short, long_, spot) * (1 - margin_pct)
hi = max(short, long_, spot) * (1 + margin_pct)
step = (hi - lo) / max(grid_points - 1, 1)
grid = [lo + i * step for i in range(grid_points)]
if position.spread_type == "bull_put":
# short put at higher strike, long put at lower strike
max_profit = credit_total_usd
max_loss = -(width_usd - credit_total_usd) * n # signed (negative)
breakeven = short - credit_per_contract
pnl = []
for s in grid:
if s >= short:
pnl.append(max_profit)
elif s <= long_:
pnl.append(max_loss)
else:
frac = (s - long_) / (short - long_)
pnl.append(max_loss + frac * (max_profit - max_loss))
elif position.spread_type == "bear_call":
# short call at lower strike, long call at higher strike
max_profit = credit_total_usd
max_loss = -(width_usd - credit_total_usd) * n
breakeven = short + credit_per_contract
pnl = []
for s in grid:
if s <= short:
pnl.append(max_profit)
elif s >= long_:
pnl.append(max_loss)
else:
frac = (s - short) / (long_ - short)
pnl.append(max_profit + frac * (max_loss - max_profit))
else:
max_profit = credit_total_usd
max_loss = -(width_usd - credit_total_usd) * n
breakeven = None
pnl = [0.0 for _ in grid]
return PayoffCurve(
spreads_type=position.spread_type,
spot_grid=grid,
pnl_grid_usd=pnl,
breakeven=breakeven,
max_profit_usd=max_profit,
max_loss_usd=max_loss,
short_strike=short,
long_strike=long_,
spot_at_entry=spot,
)
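# Worked sketch (hypothetical bull_put, n=1): short=3000, long=2900,
# spread_width_usd=100, credit_usd=20 gives max_profit=+20,
# max_loss=-(100-20)*1=-80 and breakeven=3000-20=2980. Between the
# strikes the curve is linear: at s=2950, frac=0.5 and
# pnl=-80+0.5*(20-(-80))=-30.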
@dataclass(frozen=True)
class PositionDistanceMetrics:
"""Quick distance summary for the position drilldown."""
short_strike_otm_pct: float | None
days_to_expiry: int | None
days_held: int | None
delta_at_entry: float
width_pct_of_spot: float
def compute_distance_metrics(
position: PositionRecord,
*,
now: datetime | None = None,
) -> PositionDistanceMetrics:
spot = float(position.spot_at_entry)
short = float(position.short_strike)
if spot > 0:
if position.spread_type == "bull_put":
otm_pct = (spot - short) / spot
elif position.spread_type == "bear_call":
otm_pct = (short - spot) / spot
else:
otm_pct = None
else:
otm_pct = None
reference = (now or datetime.now(UTC)).astimezone(UTC)
days_to_expiry = (
(position.expiry - reference).days if position.expiry else None
)
days_held = (
(reference - position.opened_at).days if position.opened_at else None
)
return PositionDistanceMetrics(
short_strike_otm_pct=otm_pct,
days_to_expiry=days_to_expiry,
days_held=days_held,
delta_at_entry=float(position.delta_at_entry),
width_pct_of_spot=float(position.spread_width_pct),
)
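# Distance sketch (hypothetical bull_put): spot_at_entry=3000 with
# short_strike=2700 yields short_strike_otm_pct=(3000-2700)/3000=0.10,
# i.e. the short leg sat 10% out of the money at entry.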
# ---------------------------------------------------------------------------
# Manual actions queue (the GUI's only write path)
# ---------------------------------------------------------------------------
def _enqueue_action(
*,
db_path: Path | str,
kind: str,
payload: dict[str, object],
proposal_id: UUID | None = None,
) -> int:
"""Insert a row in ``manual_actions``. The engine consumer applies it."""
db_path = Path(db_path)
repo = Repository()
now = datetime.now(UTC)
conn = connect(db_path)
try:
with transaction(conn):
return repo.enqueue_manual_action(
conn,
ManualAction(
kind=kind, # type: ignore[arg-type]
proposal_id=proposal_id,
payload_json=json.dumps(payload),
created_at=now,
),
)
finally:
conn.close()
def enqueue_arm_kill(
*, reason: str, db_path: Path | str = DEFAULT_DB_PATH
) -> int:
"""Queue an ``arm_kill`` action for the engine consumer."""
if not reason or not reason.strip():
raise ValueError("reason is required")
return _enqueue_action(
db_path=db_path,
kind="arm_kill",
payload={"reason": reason.strip()},
)
def enqueue_disarm_kill(
*, reason: str, db_path: Path | str = DEFAULT_DB_PATH
) -> int:
"""Queue a ``disarm_kill`` action for the engine consumer."""
if not reason or not reason.strip():
raise ValueError("reason is required")
return _enqueue_action(
db_path=db_path,
kind="disarm_kill",
payload={"reason": reason.strip()},
)
def enqueue_run_cycle(
*, cycle: str, db_path: Path | str = DEFAULT_DB_PATH
) -> int:
"""Queue a ``run_cycle`` action — engine must be running.
``cycle`` must be one of ``entry``, ``monitor``, ``health`` or
``market_snapshot``. The
engine consumer dispatches the corresponding ``Orchestrator.run_*``
method on the next minute tick.
"""
cycle_norm = cycle.strip().lower()
if cycle_norm not in {"entry", "monitor", "health", "market_snapshot"}:
raise ValueError(
f"cycle must be entry|monitor|health|market_snapshot, "
f"got '{cycle}'"
)
return _enqueue_action(
db_path=db_path,
kind="run_cycle",
payload={"cycle": cycle_norm},
)
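# Usage sketch from a Streamlit page (repo-default paths assumed):
#     aid = enqueue_run_cycle(cycle="monitor")
# inserts a row with kind="run_cycle" and payload_json='{"cycle": "monitor"}'
# into manual_actions; the engine's manual_actions job drains it on the
# next minute tick.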
def load_market_snapshots(
*,
asset: str,
db_path: Path | str = DEFAULT_DB_PATH,
start: datetime | None = None,
end: datetime | None = None,
limit: int = 5000,
) -> list[MarketSnapshotRecord]:
"""Return market_snapshots rows for the asset, newest-first."""
db_path = Path(db_path)
if not db_path.exists():
return []
repo = Repository()
conn = connect(db_path)
try:
return repo.list_market_snapshots(
conn, asset=asset, start=start, end=end, limit=limit
)
finally:
conn.close()
def load_pending_manual_actions(
*, db_path: Path | str = DEFAULT_DB_PATH
) -> list[ManualAction]:
"""All unconsumed actions, oldest first (used for the pending strip)."""
db_path = Path(db_path)
if not db_path.exists():
return []
conn = connect(db_path)
try:
rows = conn.execute(
"SELECT * FROM manual_actions WHERE consumed_at IS NULL "
"ORDER BY created_at ASC"
).fetchall()
finally:
conn.close()
return [_row_to_manual(row) for row in rows]
def load_audit_tail(
*,
audit_path: Path | str = DEFAULT_AUDIT_PATH,
limit: int = 100,
event_filter: str | None = None,
) -> list[AuditEntry]:
"""Return the most recent audit entries (newest first).
For the GUI we walk the entire file and keep the last ``limit``
entries (the audit log is append-only and bounded by daily rotation,
so a full scan stays cheap). The optional ``event_filter`` matches by
exact event name.
"""
audit_path = Path(audit_path)
entries: list[AuditEntry] = []
if not audit_path.exists():
return entries
for entry in iter_entries(audit_path):
if event_filter and entry.event != event_filter:
continue
entries.append(entry)
entries.reverse() # newest first
return entries[:limit]
def load_audit_chain_status(
*, audit_path: Path | str = DEFAULT_AUDIT_PATH
) -> AuditChainStatus:
audit_path = Path(audit_path)
try:
n = verify_chain(audit_path)
except AuditChainError as exc:
return AuditChainStatus(ok=False, entries_verified=0, error=str(exc))
except Exception as exc: # pragma: no cover — surface unexpected IO errors
return AuditChainStatus(ok=False, entries_verified=0, error=str(exc))
return AuditChainStatus(ok=True, entries_verified=n, error=None)
def humanize_age(seconds: float | None) -> str:
if seconds is None:
return ""
if seconds < 60:
return f"{int(seconds)}s fa"
if seconds < 3600:
return f"{int(seconds / 60)}m fa"
if seconds < 86400:
return f"{seconds / 3600:.1f}h fa"
return f"{seconds / 86400:.1f}g fa"
def humanize_dt(value: datetime | None) -> str:
if value is None:
return ""
return value.astimezone(UTC).strftime("%Y-%m-%d %H:%M:%S UTC")
def humanize_timedelta(value: timedelta | None) -> str: # pragma: no cover
if value is None:
return ""
return f"{value.total_seconds() / 3600:.1f}h"
+230
@@ -0,0 +1,230 @@
"""Live MCP fetch for the GUI (saldi exchange, FX rate).
The original architecture forbade the GUI from calling MCP services
(`docs/11-gui-streamlit.md`). For the "Saldi exchange" panel that
constraint is relaxed: the dashboard fetches balances on demand,
caches the result with Streamlit's TTL cache, and never holds the
async client open between renders. Every fetch is a one-shot:
* read endpoints + token from env (same path used by the CLI),
* spin up a short-lived ``httpx.AsyncClient``,
* query Deribit `get_account_summary` for both ``USDC`` and ``USDT``,
* query Hyperliquid `get_account_summary` (returns ``spot_usdc``,
``perps_equity`` etc.),
* query Macro `get_asset_price("EURUSD")` for FX,
* close the client and return a frozen dataclass to the page.
If a single exchange call fails the row is filled with ``error=...``
and the others are still rendered.
"""
from __future__ import annotations
import asyncio
from dataclasses import dataclass
from datetime import UTC, datetime
from decimal import Decimal
from typing import Any
import httpx
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.clients.hyperliquid import HyperliquidClient
from cerbero_bite.clients.macro import MacroClient
from cerbero_bite.config.mcp_endpoints import load_endpoints, load_token
__all__ = [
"BalanceRow",
"BalancesSnapshot",
"fetch_balances_sync",
]
_DERIBIT_CURRENCIES = ("USDC", "USDT")
@dataclass(frozen=True)
class BalanceRow:
"""One row of the balances table."""
exchange: str
currency: str
equity: Decimal | None
available: Decimal | None
unrealized_pnl: Decimal | None
error: str | None = None
@dataclass(frozen=True)
class BalancesSnapshot:
"""Result of one fetch_balances call (rows + meta)."""
rows: list[BalanceRow]
eur_usd_rate: Decimal | None
fetched_at: datetime
fx_error: str | None = None
def total_usd(self) -> Decimal:
total = Decimal(0)
for r in self.rows:
if r.equity is not None:
total += r.equity
return total
def total_eur(self) -> Decimal | None:
if self.eur_usd_rate is None or self.eur_usd_rate <= 0:
return None
return self.total_usd() / self.eur_usd_rate
def _decimal_or_none(value: Any) -> Decimal | None:
if value is None:
return None
try:
return Decimal(str(value))
except (ValueError, ArithmeticError):
return None
def _resolve_token() -> str:
"""Read the MCP bearer token from the environment.
The token is sourced from ``CERBERO_BITE_MCP_TOKEN``; on Cerbero MCP
V2 the same single token decides whether the upstream environment
is testnet or mainnet.
"""
return load_token()
async def _fetch_deribit_currency(
deribit: DeribitClient, currency: str
) -> BalanceRow:
try:
summary = await deribit.get_account_summary(currency=currency)
except Exception as exc:
return BalanceRow(
exchange="deribit",
currency=currency,
equity=None,
available=None,
unrealized_pnl=None,
error=f"{type(exc).__name__}: {exc}",
)
# Cerbero MCP V2 returns HTTP 200 with a soft ``error`` field when
# the upstream Deribit call failed (e.g. invalid credentials). Treat
# that as a row-level failure so the dashboard surfaces the cause
# instead of showing a misleading equity=0.
soft_error = summary.get("error")
if soft_error:
return BalanceRow(
exchange="deribit",
currency=currency,
equity=None,
available=None,
unrealized_pnl=None,
error=str(soft_error),
)
return BalanceRow(
exchange="deribit",
currency=currency,
equity=_decimal_or_none(summary.get("equity")),
available=_decimal_or_none(summary.get("available_funds")),
unrealized_pnl=_decimal_or_none(summary.get("unrealized_pnl")),
)
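# Shape sketch of the two payloads handled above; only the keys this
# module reads are shown, the upstream schema may carry more:
#     {"equity": "1021.37", "available_funds": "998.00", "unrealized_pnl": "-3.2"}
#     {"error": "invalid_credentials"}   # HTTP 200 with a V2 soft error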
async def _fetch_hyperliquid(hl: HyperliquidClient) -> list[BalanceRow]:
try:
summary = await hl.get_account_summary()
except Exception as exc:
return [
BalanceRow(
exchange="hyperliquid",
currency="USDC",
equity=None,
available=None,
unrealized_pnl=None,
error=f"{type(exc).__name__}: {exc}",
)
]
rows: list[BalanceRow] = [
BalanceRow(
exchange="hyperliquid",
currency="USDC",
equity=_decimal_or_none(summary.get("equity")),
available=_decimal_or_none(summary.get("available_balance")),
unrealized_pnl=_decimal_or_none(summary.get("unrealized_pnl")),
)
]
# Hyperliquid spot may also hold USDT; the MCP server exposes it
# under spot_usdt when present. Add a row only if the field is there
# so we don't render a confusing "0.00" against an asset the account
# never held.
spot_usdt = summary.get("spot_usdt")
if spot_usdt is not None:
rows.append(
BalanceRow(
exchange="hyperliquid",
currency="USDT",
equity=_decimal_or_none(spot_usdt),
available=_decimal_or_none(spot_usdt),
unrealized_pnl=Decimal(0),
)
)
return rows
async def _fetch_balances_async(*, timeout_s: float = 8.0) -> BalancesSnapshot:
endpoints = load_endpoints()
token = _resolve_token()
async with httpx.AsyncClient(timeout=timeout_s) as http_client:
def _client(service: str) -> HttpToolClient:
return HttpToolClient(
service=service,
base_url=endpoints.for_service(service),
token=token,
timeout_s=timeout_s,
retry_max=1,
client=http_client,
)
deribit = DeribitClient(_client("deribit"))
hl = HyperliquidClient(_client("hyperliquid"))
macro = MacroClient(_client("macro"))
deribit_results, hl_rows, (fx_value, fx_error) = await asyncio.gather(
asyncio.gather(
*(
_fetch_deribit_currency(deribit, cur)
for cur in _DERIBIT_CURRENCIES
)
),
_fetch_hyperliquid(hl),
_fetch_eur_usd(macro),
)
deribit_rows = list(deribit_results)
return BalancesSnapshot(
rows=[*deribit_rows, *hl_rows],
eur_usd_rate=fx_value,
fetched_at=datetime.now(UTC),
fx_error=fx_error,
)
async def _fetch_eur_usd(
macro: MacroClient,
) -> tuple[Decimal | None, str | None]:
try:
rate = await macro.eur_usd_rate()
except Exception as exc:
return None, f"{type(exc).__name__}: {exc}"
return rate, None
def fetch_balances_sync(*, timeout_s: float = 8.0) -> BalancesSnapshot:
"""Sync wrapper for Streamlit pages (which run in a sync context)."""
return asyncio.run(_fetch_balances_async(timeout_s=timeout_s))
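# Usage sketch (sync context such as a Streamlit page):
#     snap = fetch_balances_sync(timeout_s=10.0)
#     usd, eur = snap.total_usd(), snap.total_eur()
# Per-exchange failures land in BalanceRow.error; a missing FX rate
# only sets fx_error, so total_eur() returns None while total_usd()
# still sums whatever equity rows succeeded.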
+148
@@ -0,0 +1,148 @@
"""Streamlit entry point for the Cerbero Bite dashboard.
Launch with::
cerbero-bite gui
or directly::
uv run streamlit run src/cerbero_bite/gui/main.py \
--server.address 127.0.0.1 \
--server.port 8765 \
--server.headless true
The dashboard is **read-mostly**: it reads SQLite + the audit log and
never imports ``runtime/`` modules. Each Streamlit page is in
``gui/pages/`` and Streamlit auto-discovers them.
"""
from __future__ import annotations
import os
from pathlib import Path
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_AUDIT_PATH,
DEFAULT_DB_PATH,
humanize_age,
humanize_dt,
load_engine_snapshot,
)
PAGE_TITLE = "Cerbero Bite — Cruscotto"
PAGE_ICON = str(Path(__file__).parent / "assets" / "cerbero_logo.png")
# ---------------------------------------------------------------------------
# Path resolution
# ---------------------------------------------------------------------------
def _resolve_paths() -> tuple[Path, Path]:
"""Read DB / audit paths from env (settable by ``cerbero-bite gui``)."""
db_path = Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
audit_path = Path(os.environ.get("CERBERO_BITE_GUI_AUDIT", DEFAULT_AUDIT_PATH))
return db_path, audit_path
# ---------------------------------------------------------------------------
# Sidebar
# ---------------------------------------------------------------------------
_HEALTH_BADGES: dict[str, tuple[str, str]] = {
"running": ("🟢", "ATTIVO"),
"degraded": ("🟡", "DEGRADATO"),
"killed": ("🔴", "KILL SWITCH"),
"stopped": ("", "FERMO"),
"unknown": ("", "SCONOSCIUTO"),
}
def _render_sidebar(db_path: Path, audit_path: Path) -> None:
snap = load_engine_snapshot(db_path=db_path)
icon, label = _HEALTH_BADGES.get(snap.health, ("", "SCONOSCIUTO"))
logo_path = Path(__file__).parent / "assets" / "cerbero_logo.png"
if logo_path.is_file():
st.sidebar.image(str(logo_path), use_container_width=True)
st.sidebar.markdown(f"### {icon} {label}")
if snap.kill_switch_armed:
st.sidebar.error(
f"**Kill switch armato**\n\n"
f"motivo: {snap.kill_reason or ''}\n\n"
f"da: {humanize_dt(snap.kill_at)}"
)
st.sidebar.metric(
"Ultimo health check",
humanize_age(snap.last_health_check_age_s),
)
st.sidebar.metric("Posizioni aperte", snap.open_positions)
st.sidebar.caption(f"config: `{snap.config_version or ''}`")
st.sidebar.divider()
st.sidebar.caption("Sola lettura • solo localhost")
st.sidebar.caption(f"db: `{db_path}`")
st.sidebar.caption(f"audit: `{audit_path}`")
# ---------------------------------------------------------------------------
# Home page
# ---------------------------------------------------------------------------
def main() -> None:
st.set_page_config(
page_title=PAGE_TITLE,
page_icon=PAGE_ICON,
layout="wide",
initial_sidebar_state="expanded",
)
db_path, audit_path = _resolve_paths()
_render_sidebar(db_path, audit_path)
logo_path = Path(__file__).parent / "assets" / "cerbero_logo.png"
header_cols = st.columns([1, 6])
if logo_path.is_file():
header_cols[0].image(str(logo_path), use_container_width=True)
header_cols[1].title("Cerbero Bite")
st.caption(
"Motore rule-based per credit spread su ETH — cruscotto in sola lettura"
)
st.markdown(
"""
Usa la barra laterale per navigare:
- **Stato** — salute del motore, kill switch, posizioni aperte, ancora audit
- **Audit** — streaming del registro audit + verifica integrità della catena
- **Equity** — P&L cumulato, drawdown, distribuzione per chiusura, statistiche mensili
- **Storico** — trade chiusi con filtri, KPI, esportazione CSV
- **Posizione** — drilldown sulla singola posizione con grafico payoff
- **Calibrazione** — distribuzioni storiche dei segnali per tarare le soglie
Il cruscotto legge `data/state.sqlite` e `data/audit.log` direttamente
e non interroga mai il broker; l'unica lettura MCP è il pannello
"Saldi exchange" (on-demand, con cache TTL). L'unico canale di
scrittura è la coda `manual_actions` per arm/disarm del kill switch.
"""
)
snap = load_engine_snapshot(db_path=db_path)
cols = st.columns(4)
cols[0].metric("Salute motore", _HEALTH_BADGES[snap.health][1])
cols[1].metric(
"Kill switch",
"ARMATO" if snap.kill_switch_armed else "DISARMATO",
)
cols[2].metric("Posizioni aperte", snap.open_positions)
cols[3].metric(
"Ultimo health check",
humanize_age(snap.last_health_check_age_s),
)
if __name__ == "__main__":
main()
+347
@@ -0,0 +1,347 @@
"""Status page — engine health at a glance."""
from __future__ import annotations
import os
from pathlib import Path
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_AUDIT_PATH,
DEFAULT_DB_PATH,
EngineSnapshot,
enqueue_arm_kill,
enqueue_disarm_kill,
enqueue_run_cycle,
humanize_age,
humanize_dt,
load_engine_snapshot,
load_open_positions,
load_pending_manual_actions,
)
from cerbero_bite.gui.live_data import BalancesSnapshot, fetch_balances_sync
def _resolve_paths() -> tuple[Path, Path]:
db_path = Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
audit_path = Path(os.environ.get("CERBERO_BITE_GUI_AUDIT", DEFAULT_AUDIT_PATH))
return db_path, audit_path
_HEALTH_COLORS = {
"running": ("🟢", "success"),
"degraded": ("🟡", "warning"),
"killed": ("🔴", "error"),
"stopped": ("", "warning"),
"unknown": ("", "info"),
}
_TYPED_PHRASE = "confermo"
def _render_force_cycle_panel(db_path: Path) -> None:
st.subheader("Forza ciclo")
st.caption(
"Accoda una richiesta di esecuzione immediata di un ciclo. Funziona "
"solo se il motore è in esecuzione (`cerbero-bite start`); il job "
"`manual_actions` consuma la coda ogni minuto."
)
cols = st.columns(4)
if cols[0].button(
"▶ Forza entry",
use_container_width=True,
help="Esegue subito una valutazione del ciclo entry.",
):
aid = enqueue_run_cycle(cycle="entry", db_path=db_path)
st.success(
f"✅ ciclo entry accodato (id #{aid}). "
"Il motore lo eseguirà entro ~1 minuto."
)
if cols[1].button(
"🔍 Forza monitor",
use_container_width=True,
help="Esegue subito un giro del monitor sulle posizioni aperte.",
):
aid = enqueue_run_cycle(cycle="monitor", db_path=db_path)
st.success(f"✅ ciclo monitor accodato (id #{aid}).")
if cols[2].button(
"💓 Forza health",
use_container_width=True,
help="Esegue subito un health check completo.",
):
aid = enqueue_run_cycle(cycle="health", db_path=db_path)
st.success(f"✅ ciclo health accodato (id #{aid}).")
if cols[3].button(
"📐 Forza snapshot",
use_container_width=True,
help="Esegue subito una raccolta market_snapshot (alimenta Calibrazione).",
):
aid = enqueue_run_cycle(cycle="market_snapshot", db_path=db_path)
st.success(f"✅ snapshot accodato (id #{aid}).")
@st.cache_data(ttl=60, show_spinner=False)
def _cached_balances() -> BalancesSnapshot:
"""Fetch balances at most once per minute per Streamlit session."""
return fetch_balances_sync(timeout_s=10.0)
def _render_balances_panel() -> None:
st.subheader("Saldi exchange")
refresh = st.button("🔄 Aggiorna saldi", help="Forza un nuovo fetch dagli MCP.")
if refresh:
_cached_balances.clear()
try:
snap = _cached_balances()
except Exception as exc:
st.error(
f"Impossibile leggere i saldi: {type(exc).__name__}: {exc}"
)
return
rows = []
for r in snap.rows:
rows.append(
{
"exchange": r.exchange,
"valuta": r.currency,
"equity": (
f"{float(r.equity):,.2f}"
if r.equity is not None
else ""
),
"disponibile": (
f"{float(r.available):,.2f}"
if r.available is not None
else ""
),
"P&L non realizzato": (
f"{float(r.unrealized_pnl):+.2f}"
if r.unrealized_pnl is not None
else ""
),
"errore": r.error or "",
}
)
st.dataframe(rows, use_container_width=True, hide_index=True)
cols = st.columns(3)
cols[0].metric("Totale USD", f"${float(snap.total_usd()):,.2f}")
eur = snap.total_eur()
cols[1].metric(
"Totale EUR",
f"{float(eur):,.2f}" if eur is not None else "",
)
cols[2].metric(
"Cambio EUR/USD",
f"{float(snap.eur_usd_rate):.4f}"
if snap.eur_usd_rate is not None
else "",
)
if snap.fx_error:
st.warning(f"FX non disponibile: {snap.fx_error}")
age = (
f" · letti {humanize_dt(snap.fetched_at)}"
if snap.fetched_at is not None
else ""
)
st.caption(
f"Cache TTL 60s · saldi letti dal gateway MCP{age}"
)
def _render_kill_switch_panel(db_path: Path, snap: EngineSnapshot) -> None:
st.subheader("Comandi kill switch")
if snap.kill_switch_armed:
st.warning(
"Kill switch **armato**. Disarmandolo viene accodata una "
"azione `disarm_kill`; il consumer del motore la applica al "
"prossimo tick di un minuto e la transizione viene registrata "
"nella catena audit."
)
with st.form("kill_disarm_form", clear_on_submit=True):
reason = st.text_input(
"Motivo (obbligatorio)",
placeholder="es. finestra macro superata",
)
confirm = st.text_input(
f"Scrivi `{_TYPED_PHRASE}` per confermare",
placeholder=_TYPED_PHRASE,
)
submitted = st.form_submit_button(
"🟢 Accoda disarmo",
type="primary",
use_container_width=True,
)
if submitted:
if confirm.strip() != _TYPED_PHRASE:
st.error(
f"Scrivi esattamente `{_TYPED_PHRASE}` per confermare."
)
elif not reason.strip():
st.error("Il motivo è obbligatorio.")
else:
aid = enqueue_disarm_kill(reason=reason, db_path=db_path)
st.success(
f"✅ disarmo accodato (id #{aid}). "
"Il motore lo applicherà entro ~1 minuto."
)
else:
st.info(
"Kill switch **disarmato**. Armandolo viene accodata una "
"azione `arm_kill`; il consumer del motore la applica al "
"prossimo tick di un minuto."
)
with st.form("kill_arm_form", clear_on_submit=True):
reason = st.text_input(
"Motivo (obbligatorio)",
placeholder="es. shock macro — sospendi trading",
)
confirm = st.text_input(
f"Scrivi `{_TYPED_PHRASE}` per confermare",
placeholder=_TYPED_PHRASE,
)
submitted = st.form_submit_button(
"🔴 Accoda armamento",
type="secondary",
use_container_width=True,
)
if submitted:
if confirm.strip() != _TYPED_PHRASE:
st.error(
f"Scrivi esattamente `{_TYPED_PHRASE}` per confermare."
)
elif not reason.strip():
st.error("Il motivo è obbligatorio.")
else:
aid = enqueue_arm_kill(reason=reason, db_path=db_path)
st.success(
f"✅ armamento accodato (id #{aid}). "
"Il motore lo applicherà entro ~1 minuto."
)
def render() -> None:
st.title("📊 Stato")
st.caption(
"Salute del motore, kill switch, posizioni aperte e ancora audit."
)
db_path, _ = _resolve_paths()
snap = load_engine_snapshot(db_path=db_path)
icon, level = _HEALTH_COLORS.get(snap.health, ("", "info"))
banner = f"{icon} **{snap.health_label}**"
if level == "success":
st.success(banner)
elif level == "warning":
st.warning(banner)
elif level == "error":
st.error(banner)
else:
st.info(banner)
if snap.kill_switch_armed:
st.error(
f"**Kill switch armato** — il motore rifiuterà nuove entrate.\n\n"
f"- motivo: `{snap.kill_reason or ''}`\n"
f"- da: `{humanize_dt(snap.kill_at)}`"
)
# Top metrics
cols = st.columns(4)
cols[0].metric("Posizioni aperte", snap.open_positions)
cols[1].metric(
"Ultimo health check", humanize_age(snap.last_health_check_age_s)
)
cols[2].metric("Avviato il", humanize_dt(snap.started_at))
cols[3].metric("Versione config", snap.config_version or "")
st.divider()
# Exchange balances (live MCP fetch, TTL 60s)
_render_balances_panel()
st.divider()
# Force cycle
_render_force_cycle_panel(db_path)
st.divider()
# Kill switch controls
_render_kill_switch_panel(db_path, snap)
st.divider()
# Pending manual actions
pending = load_pending_manual_actions(db_path=db_path)
if pending:
st.subheader("Azioni manuali pendenti")
st.caption(
"Accodate da questo cruscotto, non ancora consumate. Il motore "
"drena la coda ogni minuto tramite il job `manual_actions`."
)
rows_pending = [
{
"id": a.id,
"tipo": a.kind,
"payload": a.payload_json or "",
"creata il": humanize_dt(a.created_at),
}
for a in pending
]
st.dataframe(rows_pending, use_container_width=True, hide_index=True)
st.divider()
# Audit anchor
st.subheader("Ancora audit")
if snap.last_audit_hash is None:
st.info("Nessuna ancora registrata.")
else:
short = (
f"{snap.last_audit_hash[:12]}{snap.last_audit_hash[-12:]}"
if len(snap.last_audit_hash) > 24
else snap.last_audit_hash
)
st.code(short, language="text")
st.caption(
"Ultima testa della catena hash persistita in "
"`system_state.last_audit_hash`. All'avvio l'orchestrator la "
"confronta con la coda del file audit; un mismatch arma il "
"kill switch (CRITICAL)."
)
st.divider()
# Open positions table
st.subheader("Posizioni aperte")
positions = load_open_positions(db_path=db_path)
if not positions:
st.info("Nessuna posizione aperta.")
else:
rows = [
{
"proposal_id": str(p.proposal_id)[:8],
"spread": p.spread_type,
"asset": p.asset,
"n. contratti": p.n_contracts,
"credito (USD)": f"{p.credit_usd:.2f}",
"max perdita (USD)": f"{p.max_loss_usd:.2f}",
"strike short": f"{p.short_strike}",
"strike long": f"{p.long_strike}",
"stato": p.status,
"aperta il": humanize_dt(p.opened_at),
"scadenza": humanize_dt(p.expiry),
}
for p in positions
]
st.dataframe(rows, use_container_width=True)
render()
+122
@@ -0,0 +1,122 @@
"""Audit page — live audit log stream + chain integrity verification."""
from __future__ import annotations
import json
import os
from collections import Counter
from pathlib import Path
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_AUDIT_PATH,
DEFAULT_DB_PATH,
humanize_dt,
load_audit_chain_status,
load_audit_tail,
)
def _resolve_paths() -> tuple[Path, Path]:
db_path = Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
audit_path = Path(os.environ.get("CERBERO_BITE_GUI_AUDIT", DEFAULT_AUDIT_PATH))
return db_path, audit_path
def render() -> None:
st.title("🔍 Audit")
st.caption(
"Registro audit append-only con hash chain "
"(`data/audit.log`). La lettura non modifica nulla."
)
_, audit_path = _resolve_paths()
col_l, col_r = st.columns([1, 2])
with col_l:
st.subheader("Integrità catena")
if st.button("Verifica catena", type="primary"):
with st.spinner("Sto percorrendo la catena…"):
status = load_audit_chain_status(audit_path=audit_path)
if status.ok:
st.success(
f"✅ catena integra fino a {status.entries_verified} eventi"
)
else:
st.error(
f"❌ tampering rilevato\n\n```\n{status.error}\n```"
)
else:
st.caption(
"Premi per ricalcolare l'hash di ogni riga e verificare il "
"collegamento prev-hash. Mismatch → alert CRITICAL in "
"produzione."
)
with col_r:
st.subheader("Filtri")
limit = st.slider(
"Ultimi N eventi",
min_value=10,
max_value=500,
value=100,
step=10,
)
# Build event list from the available tail
all_recent = load_audit_tail(audit_path=audit_path, limit=limit)
events_present = sorted({e.event for e in all_recent})
event_filter = st.selectbox(
"Filtro per evento",
options=["(tutti)", *events_present],
index=0,
)
st.divider()
# Statistics strip
counter: Counter[str] = Counter(e.event for e in all_recent)
if counter:
cols = st.columns(min(len(counter), 6))
for col, (event, count) in zip(cols, counter.most_common(6), strict=False):
col.metric(event, count)
st.divider()
# Filtered tail
filtered = (
all_recent
if event_filter == "(tutti)"
else [e for e in all_recent if e.event == event_filter]
)
st.subheader(f"Ultimi eventi ({len(filtered)} mostrati)")
if not filtered:
st.info("Nessun evento corrisponde ai filtri.")
return
rows = []
for entry in filtered:
try:
payload_pretty = json.dumps(
entry.payload, ensure_ascii=False, sort_keys=True
)
except (TypeError, ValueError):
payload_pretty = str(entry.payload)
rows.append(
{
"timestamp": humanize_dt(entry.timestamp),
"evento": entry.event,
"payload": payload_pretty,
"hash": (
f"{entry.hash[:8]}{entry.hash[-8:]}"
if len(entry.hash) > 16
else entry.hash
),
}
)
st.dataframe(rows, use_container_width=True, hide_index=True)
render()
+178
@@ -0,0 +1,178 @@
"""Equity page — cumulative PnL, drawdown, distributions."""
from __future__ import annotations
import os
from collections import Counter
from datetime import UTC, datetime, timedelta
from pathlib import Path
import pandas as pd
import plotly.graph_objects as go
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_DB_PATH,
compute_equity_curve,
compute_kpis,
compute_monthly_stats,
load_closed_positions,
)
def _resolve_db() -> Path:
return Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
def _date_window(label: str) -> tuple[datetime | None, datetime | None]:
"""Selettore della finestra temporale per l'analitica."""
options = {
"Tutto lo storico": (None, None),
"Ultimi 30 giorni": (datetime.now(UTC) - timedelta(days=30), None),
"Ultimi 90 giorni": (datetime.now(UTC) - timedelta(days=90), None),
"Da inizio anno": (
datetime(datetime.now(UTC).year, 1, 1, tzinfo=UTC),
None,
),
}
pick = st.selectbox(label, list(options.keys()), index=0)
return options[pick]
def render() -> None:
st.title("📈 Equity")
st.caption(
"P&L realizzato cumulato, drawdown e distribuzione per trade. "
"Calcolato dalle posizioni chiuse in `data/state.sqlite`."
)
start, end = _date_window("Finestra")
db_path = _resolve_db()
positions = load_closed_positions(db_path=db_path, start=start, end=end)
if not positions:
st.info(
"Nessuna posizione chiusa nella finestra selezionata. "
"La curva equity si popolerà non appena il motore chiuderà "
"il primo trade."
)
return
# KPI strip
kpis = compute_kpis(positions)
cols = st.columns(5)
cols[0].metric("Trade chiusi", kpis.n_trades)
cols[1].metric("Win rate", f"{kpis.win_rate:.0%}")
cols[2].metric("P&L totale", f"${float(kpis.total_pnl_usd):+.2f}")
cols[3].metric("Edge / trade", f"${float(kpis.edge_per_trade_usd):+.2f}")
cols[4].metric(
"Max drawdown",
f"${float(kpis.max_drawdown_usd):.2f}",
delta=f"{kpis.max_drawdown_pct:.1%}",
delta_color="inverse",
)
st.divider()
# Equity curve + drawdown
curve = compute_equity_curve(positions)
df = pd.DataFrame(
{
"timestamp": [p.timestamp for p in curve],
"cumulative_pnl_usd": [float(p.cumulative_pnl_usd) for p in curve],
"drawdown_usd": [float(p.drawdown_usd) for p in curve],
"realized_pnl_usd": [float(p.realized_pnl_usd) for p in curve],
}
)
st.subheader("P&L cumulato (USD)")
fig = go.Figure()
fig.add_trace(
go.Scatter(
x=df["timestamp"],
y=df["cumulative_pnl_usd"],
mode="lines+markers",
name="P&L cumulato",
line={"color": "#2ecc71", "width": 2},
)
)
fig.add_hline(y=0, line_dash="dot", line_color="grey", opacity=0.5)
fig.update_layout(
height=320,
margin={"l": 10, "r": 10, "t": 30, "b": 10},
xaxis_title=None,
yaxis_title="USD",
)
st.plotly_chart(fig, use_container_width=True)
st.subheader("Drawdown (USD)")
dd_fig = go.Figure()
dd_fig.add_trace(
go.Scatter(
x=df["timestamp"],
y=-df["drawdown_usd"],
mode="lines",
fill="tozeroy",
name="drawdown",
line={"color": "#e74c3c", "width": 1.5},
)
)
dd_fig.update_layout(
height=220,
margin={"l": 10, "r": 10, "t": 30, "b": 10},
xaxis_title=None,
yaxis_title="USD",
)
st.plotly_chart(dd_fig, use_container_width=True)
# P&L distribution
st.subheader("Distribuzione P&L per motivo di chiusura")
by_reason: dict[str, list[float]] = {}
for pos in positions:
if pos.pnl_usd is None:
continue
by_reason.setdefault(pos.close_reason or "(sconosciuto)", []).append(
float(pos.pnl_usd)
)
counts = Counter(
(pos.close_reason or "(sconosciuto)") for pos in positions
)
cols = st.columns(min(len(counts), 6) or 1)
for col, (reason, count) in zip(cols, counts.most_common(6), strict=False):
col.metric(reason, count)
hist_fig = go.Figure()
for reason, pnls in by_reason.items():
hist_fig.add_trace(
go.Histogram(x=pnls, name=reason, opacity=0.6, nbinsx=30)
)
hist_fig.update_layout(
barmode="overlay",
height=320,
margin={"l": 10, "r": 10, "t": 30, "b": 10},
xaxis_title="P&L (USD)",
yaxis_title="numero trade",
legend={"orientation": "h", "y": 1.1},
)
st.plotly_chart(hist_fig, use_container_width=True)
# Monthly table
st.subheader("Statistiche mensili")
months = compute_monthly_stats(positions)
rows = [
{
"mese": m.year_month,
"trade": m.n_trades,
"vittorie": m.n_wins,
"win rate": f"{m.win_rate:.0%}",
"P&L (USD)": f"{float(m.pnl_usd):+.2f}",
"media / trade": f"{float(m.avg_pnl_usd):+.2f}",
}
for m in months
]
st.dataframe(rows, use_container_width=True, hide_index=True)
render()
@@ -0,0 +1,135 @@
"""History page — closed-trade table with filters and CSV export."""
from __future__ import annotations
import io
import os
from datetime import UTC, datetime, timedelta
from pathlib import Path
import pandas as pd
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_DB_PATH,
compute_kpis,
humanize_dt,
load_closed_positions,
)
def _resolve_db() -> Path:
return Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
def _date_window() -> tuple[datetime | None, datetime | None]:
presets = {
"Tutto lo storico": (None, None),
"Ultimi 7 giorni": (datetime.now(UTC) - timedelta(days=7), None),
"Ultimi 30 giorni": (datetime.now(UTC) - timedelta(days=30), None),
"Ultimi 90 giorni": (datetime.now(UTC) - timedelta(days=90), None),
"Da inizio anno": (
datetime(datetime.now(UTC).year, 1, 1, tzinfo=UTC),
None,
),
}
pick = st.selectbox("Finestra", list(presets.keys()), index=0)
return presets[pick]
def render() -> None:
st.title("📜 Storico")
st.caption(
"Trade chiusi con filtri, striscia KPI ed esportazione CSV."
)
db_path = _resolve_db()
start, end = _date_window()
positions = load_closed_positions(db_path=db_path, start=start, end=end)
# Sub-filters by close reason and P&L sign
reason_options = sorted(
{p.close_reason or "(sconosciuto)" for p in positions}
)
chosen_reasons = st.multiselect(
"Motivi di chiusura",
options=reason_options,
default=reason_options,
)
pnl_filter = st.radio(
"Filtro P&L",
options=["tutti", "vincenti", "perdenti"],
horizontal=True,
index=0,
)
filtered = []
for p in positions:
reason = p.close_reason or "(sconosciuto)"
if reason not in chosen_reasons:
continue
if pnl_filter == "vincenti" and (p.pnl_usd is None or p.pnl_usd <= 0):
continue
if pnl_filter == "perdenti" and (p.pnl_usd is None or p.pnl_usd >= 0):
continue
filtered.append(p)
# KPI strip
kpis = compute_kpis(filtered)
cols = st.columns(6)
cols[0].metric("Trade", kpis.n_trades)
cols[1].metric("Win rate", f"{kpis.win_rate:.0%}")
cols[2].metric("P&L totale", f"${float(kpis.total_pnl_usd):+.2f}")
cols[3].metric("Vittoria media", f"${float(kpis.avg_win_usd):+.2f}")
cols[4].metric("Perdita media", f"${float(kpis.avg_loss_usd):+.2f}")
cols[5].metric("Edge / trade", f"${float(kpis.edge_per_trade_usd):+.2f}")
st.divider()
if not filtered:
st.info("Nessun trade corrisponde ai filtri correnti.")
return
# DataFrame for display + export
rows = []
for p in filtered:
days_held = (
(p.closed_at - p.opened_at).days
if p.opened_at and p.closed_at
else None
)
rows.append(
{
"proposal_id": str(p.proposal_id)[:8],
"spread": p.spread_type,
"asset": p.asset,
"n. contratti": p.n_contracts,
"strike short": float(p.short_strike),
"strike long": float(p.long_strike),
"credito (USD)": float(p.credit_usd),
"max perdita (USD)": float(p.max_loss_usd),
"P&L (USD)": (
float(p.pnl_usd) if p.pnl_usd is not None else None
),
"motivo chiusura": p.close_reason or "(sconosciuto)",
"giorni tenuta": days_held,
"aperta il": humanize_dt(p.opened_at),
"chiusa il": humanize_dt(p.closed_at),
"scadenza": humanize_dt(p.expiry),
}
)
df = pd.DataFrame(rows)
st.dataframe(df, use_container_width=True, hide_index=True)
# CSV export
buf = io.StringIO()
df.to_csv(buf, index=False)
st.download_button(
"⬇ Scarica CSV",
data=buf.getvalue(),
file_name=f"cerbero_bite_storico_{datetime.now(UTC).date()}.csv",
mime="text/csv",
)
render()
@@ -0,0 +1,244 @@
"""Position page — drilldown on a single open or recently-closed trade."""
from __future__ import annotations
import json
import os
from pathlib import Path
from uuid import UUID
import plotly.graph_objects as go
import streamlit as st
from cerbero_bite.gui.data_layer import (
DEFAULT_DB_PATH,
compute_distance_metrics,
compute_payoff_curve,
humanize_dt,
load_closed_positions,
load_decisions_for_position,
load_open_positions,
load_position_by_id,
)
from cerbero_bite.state.models import PositionRecord
def _resolve_db() -> Path:
return Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
def _position_label(p: PositionRecord) -> str:
short = (
f"{int(p.short_strike)}/{int(p.long_strike)}"
if p.short_strike and p.long_strike
else ""
)
return f"{str(p.proposal_id)[:8]} · {p.spread_type} · {short} · {p.status}"
def _render_header(position: PositionRecord) -> None:
cols = st.columns(4)
cols[0].metric("stato", position.status)
cols[1].metric("spread", position.spread_type)
cols[2].metric("contratti", position.n_contracts)
cols[3].metric("credito (USD)", f"${float(position.credit_usd):+.2f}")
st.caption(
f"`{position.proposal_id}` · aperta il "
f"{humanize_dt(position.opened_at)} · scadenza "
f"{humanize_dt(position.expiry)}"
)
def _render_legs(position: PositionRecord) -> None:
st.subheader("Gambe (snapshot all'entrata)")
rows = [
{
"gamba": "short",
"strumento": position.short_instrument,
"strike": float(position.short_strike),
"lato": "VENDI",
"size": position.n_contracts,
"delta all'entrata": float(position.delta_at_entry),
},
{
"gamba": "long",
"strumento": position.long_instrument,
"strike": float(position.long_strike),
"lato": "COMPRA",
"size": position.n_contracts,
"delta all'entrata": "",
},
]
st.dataframe(rows, use_container_width=True, hide_index=True)
st.caption(
"Mid e greche live non vengono richiesti agli MCP dal cruscotto. "
"Il refresh è demandato al motore: visibile nella pagina Audit."
)
def _render_distance(position: PositionRecord) -> None:
metrics = compute_distance_metrics(position)
cols = st.columns(5)
cols[0].metric(
"Short OTM %",
f"{metrics.short_strike_otm_pct:.1%}"
if metrics.short_strike_otm_pct is not None
else "",
)
cols[1].metric(
"Giorni a scadenza",
metrics.days_to_expiry if metrics.days_to_expiry is not None else "",
)
cols[2].metric(
"Giorni in tenuta",
metrics.days_held if metrics.days_held is not None else "",
)
cols[3].metric("Δ all'entrata", f"{metrics.delta_at_entry:+.3f}")
cols[4].metric("Larghezza % spot", f"{metrics.width_pct_of_spot:.1%}")
def _render_payoff(position: PositionRecord) -> None:
st.subheader("Payoff a scadenza")
curve = compute_payoff_curve(position)
fig = go.Figure()
fig.add_trace(
go.Scatter(
x=curve.spot_grid,
y=curve.pnl_grid_usd,
mode="lines",
line={"color": "#3498db", "width": 2.5},
name="P&L a scadenza",
fill="tozeroy",
fillcolor="rgba(52,152,219,0.10)",
)
)
fig.add_hline(y=0, line_dash="dot", line_color="grey", opacity=0.5)
fig.add_vline(
x=curve.short_strike,
line_dash="dash",
line_color="#27ae60",
opacity=0.7,
annotation_text=f"short {curve.short_strike:.0f}",
annotation_position="top",
)
fig.add_vline(
x=curve.long_strike,
line_dash="dash",
line_color="#c0392b",
opacity=0.7,
annotation_text=f"long {curve.long_strike:.0f}",
annotation_position="top",
)
if curve.breakeven is not None:
fig.add_vline(
x=curve.breakeven,
line_dash="dot",
line_color="orange",
opacity=0.7,
annotation_text=f"BE {curve.breakeven:.2f}",
annotation_position="bottom",
)
fig.add_vline(
x=curve.spot_at_entry,
line_dash="solid",
line_color="#7f8c8d",
opacity=0.4,
annotation_text=f"spot all'entrata {curve.spot_at_entry:.0f}",
annotation_position="bottom",
)
fig.update_layout(
height=380,
margin={"l": 10, "r": 10, "t": 30, "b": 10},
xaxis_title="ETH spot a scadenza (USD)",
yaxis_title="P&L (USD)",
legend={"orientation": "h", "y": 1.1},
)
st.plotly_chart(fig, use_container_width=True)
cols = st.columns(3)
cols[0].metric("Profitto massimo", f"${curve.max_profit_usd:+.2f}")
cols[1].metric("Perdita massima", f"${curve.max_loss_usd:+.2f}")
cols[2].metric(
"Breakeven",
f"{curve.breakeven:.2f}" if curve.breakeven is not None else "",
)
def _render_decisions(position: PositionRecord) -> None:
st.subheader("Storico decisioni")
decisions = load_decisions_for_position(position.proposal_id)
if not decisions:
st.info("Nessuna decisione registrata per questa posizione.")
return
rows = []
for d in decisions:
try:
outputs = json.loads(d.outputs_json)
except (TypeError, ValueError):
outputs = {}
rows.append(
{
"timestamp": humanize_dt(d.timestamp),
"tipo decisione": d.decision_type,
"azione": d.action_taken or "",
"note": d.notes or "",
"output": json.dumps(outputs, sort_keys=True) if outputs else "",
}
)
st.dataframe(rows, use_container_width=True, hide_index=True)
def render() -> None:
st.title("💼 Posizione")
st.caption(
"Drilldown sul trade: gambe, payoff a scadenza, storico decisioni. "
"Tutti i dati arrivano da SQLite — nessuna chiamata MCP live."
)
db_path = _resolve_db()
open_pos = load_open_positions(db_path=db_path)
closed_recent = load_closed_positions(db_path=db_path)[-10:]
candidates: list[PositionRecord] = list(open_pos) + list(reversed(closed_recent))
if not candidates:
st.info(
"Nessuna posizione da mostrare. La pagina si popolerà non "
"appena il motore aprirà il primo trade."
)
return
labels = {_position_label(p): p for p in candidates}
pick = st.selectbox(
"Posizione",
options=list(labels.keys()),
index=0,
)
position = labels[pick]
# Deep-link via ?proposal_id=…
qp = st.query_params.get("proposal_id")
if qp:
try:
qp_uuid = UUID(qp)
override = load_position_by_id(qp_uuid, db_path=db_path)
if override is not None:
position = override
except ValueError:
st.warning(f"Parametro proposal_id non valido: {qp}")
st.divider()
_render_header(position)
st.divider()
_render_distance(position)
st.divider()
_render_legs(position)
st.divider()
_render_payoff(position)
st.divider()
_render_decisions(position)
render()
@@ -0,0 +1,309 @@
"""Calibrazione page — distribuzioni storiche dei segnali per tarare le soglie.
Legge dalla tabella ``market_snapshots`` (popolata dal job dedicato cron
``*/15``). Per ogni metrica osservabile mostra:
* istogramma + linea verticale della soglia attuale di config,
* percentili P5/P10/P25/P50/P75/P90/P95,
* percentuale di tick che la soglia attuale avrebbe filtrato.
L'idea è scegliere le soglie sui percentili reali del proprio
ambiente (testnet o mainnet), invece di valori fissati a istinto.
"""
from __future__ import annotations
import os
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from pathlib import Path
import pandas as pd
import plotly.graph_objects as go
import streamlit as st
from cerbero_bite.config.loader import load_strategy
from cerbero_bite.gui.data_layer import (
DEFAULT_DB_PATH,
humanize_dt,
load_market_snapshots,
)
from cerbero_bite.state.models import MarketSnapshotRecord
def _resolve_db() -> Path:
return Path(os.environ.get("CERBERO_BITE_GUI_DB", DEFAULT_DB_PATH))
@dataclass(frozen=True)
class MetricSpec:
"""Descrittore della metrica da plottare."""
field: str
title: str
unit: str
threshold_label: str | None
threshold_value: float | None
threshold_direction: str # "below" o "above" (filtra se valore è X soglia)
def _metric_specs(strategy: object | None) -> list[MetricSpec]:
"""Costruisce gli spec leggendo le soglie correnti da strategy.yaml."""
funding_max: float | None = None
dealer_min: float | None = None
dvol_min: float | None = None
if strategy is not None:
try:
funding_max = float(strategy.entry.funding_max_abs_annualized) # type: ignore[attr-defined]
except Exception:
funding_max = None
try:
dealer_min = float(strategy.entry.dealer_gamma_min) # type: ignore[attr-defined]
except Exception:
dealer_min = None
try:
dvol_min = float(strategy.entry.dvol_min) # type: ignore[attr-defined]
except Exception:
dvol_min = None
specs: list[MetricSpec] = [
MetricSpec(
field="dvol",
title="DVOL",
unit="%",
threshold_label=(
f"DVOL min={dvol_min:.0f}" if dvol_min is not None else None
),
threshold_value=dvol_min,
threshold_direction="below",
),
MetricSpec(
field="realized_vol_30d",
title="Realized vol 30d",
unit="%",
threshold_label=None,
threshold_value=None,
threshold_direction="below",
),
MetricSpec(
field="iv_minus_rv",
title="IV RV (30d)",
unit="%",
threshold_label=None,
threshold_value=None,
threshold_direction="below",
),
MetricSpec(
field="funding_perp_annualized",
title="Funding perp annualized",
unit="frazione",
threshold_label=(
f"|funding| max={funding_max:.2f}"
if funding_max is not None
else None
),
threshold_value=funding_max,
threshold_direction="above_abs",
),
MetricSpec(
field="funding_cross_annualized",
title="Funding cross median annualized",
unit="frazione",
threshold_label=None,
threshold_value=None,
threshold_direction="above_abs",
),
MetricSpec(
field="dealer_net_gamma",
title="Dealer net gamma",
unit="USD",
threshold_label=(
f"min={dealer_min:.0f}"
if dealer_min is not None
else None
),
threshold_value=dealer_min,
threshold_direction="below",
),
MetricSpec(
field="oi_delta_pct_4h",
title="OI delta % (4h)",
unit="%",
threshold_label=None,
threshold_value=None,
threshold_direction="below",
),
]
return specs
def _series(records: list[MarketSnapshotRecord], field: str) -> pd.Series:
values: list[float] = []
for r in records:
v = getattr(r, field, None)
if v is None:
continue
try:
values.append(float(v))
except (TypeError, ValueError):
continue
return pd.Series(values, dtype="float64")
def _percent_blocked(s: pd.Series, spec: MetricSpec) -> float | None:
if spec.threshold_value is None or s.empty:
return None
if spec.threshold_direction == "below":
return float((s < spec.threshold_value).mean())
if spec.threshold_direction == "above_abs":
return float((s.abs() > spec.threshold_value).mean())
if spec.threshold_direction == "above":
return float((s > spec.threshold_value).mean())
return None
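# Worked sketch: with threshold_value=0.30 and direction "above_abs",
# a series of [0.1, -0.5, 0.2, 0.45] blocks 2 of 4 ticks, so the
# function returns 0.5; with direction "below" and threshold 40 over
# [35, 50, 60] it returns 1/3.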
def _percentiles_strip(s: pd.Series) -> None:
if s.empty:
st.caption("(nessun dato)")
return
quantiles = [0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95]
cols = st.columns(len(quantiles))
for col, q in zip(cols, quantiles, strict=False):
col.metric(f"P{int(q * 100)}", f"{s.quantile(q):.4g}")
def _render_metric(spec: MetricSpec, records: list[MarketSnapshotRecord]) -> None:
s = _series(records, spec.field)
if s.empty:
st.subheader(f"{spec.title}")
st.info(
f"Nessun valore disponibile per `{spec.field}`. "
"Avvia il job `market_snapshot` (engine attivo, cron */15) per "
"popolare la tabella."
)
return
st.subheader(f"{spec.title} ({spec.unit})")
pct_blocked = _percent_blocked(s, spec)
cols = st.columns(4)
cols[0].metric("Tick raccolti", len(s))
cols[1].metric("Min", f"{s.min():.4g}")
cols[2].metric("Max", f"{s.max():.4g}")
cols[3].metric(
"% bloccato dalla soglia",
f"{pct_blocked:.0%}" if pct_blocked is not None else "",
help=(
"Frazione di tick che la soglia di config avrebbe filtrato"
f" se applicata a questa serie ({spec.threshold_direction})."
),
)
fig = go.Figure()
fig.add_trace(go.Histogram(x=s, nbinsx=40, opacity=0.85, name="distrib."))
if spec.threshold_value is not None:
fig.add_vline(
x=spec.threshold_value,
line_dash="dash",
line_color="red",
line_width=2,
            annotation_text=spec.threshold_label or f"threshold {spec.threshold_value}",
annotation_position="top",
)
if spec.threshold_direction == "above_abs":
        # Also draw the negative bound for symmetric filters.
fig.add_vline(
x=-spec.threshold_value,
line_dash="dash",
line_color="red",
line_width=2,
annotation_text=None,
)
fig.update_layout(
height=280,
margin={"l": 10, "r": 10, "t": 30, "b": 10},
xaxis_title=spec.unit,
yaxis_title="numero tick",
)
st.plotly_chart(fig, use_container_width=True)
_percentiles_strip(s)
def render() -> None:
st.title("📐 Calibrazione")
st.caption(
"Distribuzioni storiche dei segnali raccolti dal job "
"`market_snapshot` (cron */15). Usa i percentili reali per "
"tarare le soglie in `strategy.yaml` invece di valori a istinto."
)
db_path = _resolve_db()
col_a, col_b = st.columns(2)
asset = col_a.selectbox("Asset", options=["ETH", "BTC"], index=0)
    window = col_b.selectbox(
        "Window",
        options=[
            "Full history",
            "Last 24h",
            "Last 7 days",
            "Last 30 days",
        ],
        index=0,
    )
    now = datetime.now(UTC)
    start: datetime | None = None
    if window == "Last 24h":
        start = now - timedelta(hours=24)
    elif window == "Last 7 days":
        start = now - timedelta(days=7)
    elif window == "Last 30 days":
        start = now - timedelta(days=30)
records = load_market_snapshots(
asset=asset, db_path=db_path, start=start, limit=5000
)
if not records:
st.info(
"Nessun snapshot disponibile in questa finestra per "
f"`{asset}`. Avvia l'engine (`cerbero-bite start`) e attendi "
"almeno un tick del job `market_snapshot` (cron */15)."
)
return
st.caption(
f"{len(records)} snapshot · primo {humanize_dt(records[-1].timestamp)} "
f"· ultimo {humanize_dt(records[0].timestamp)}"
)
    # Count fetch_ok rows to gauge series quality.
n_ok = sum(1 for r in records if r.fetch_ok)
cols = st.columns(3)
cols[0].metric("Snapshot totali", len(records))
cols[1].metric("fetch_ok = true", n_ok)
cols[2].metric(
"Tasso ok",
f"{n_ok / len(records):.0%}" if records else "",
)
st.divider()
    # Load strategy.yaml to read the current thresholds.
try:
strategy = load_strategy(Path("strategy.yaml"))
except Exception as exc:
st.warning(
f"Impossibile leggere `strategy.yaml`: {type(exc).__name__}: {exc}"
)
strategy = None
specs = _metric_specs(strategy)
for spec in specs:
_render_metric(spec, records)
st.divider()
render()
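
As a worked example of the "% blocked by threshold" metric above, here is a minimal sketch with made-up values (pandas only; dvol_min stands in for the strategy.yaml threshold):

    import pandas as pd

    s = pd.Series([30.0, 42.0, 55.0, 61.0])  # four collected DVOL ticks
    dvol_min = 45.0  # hypothetical threshold from strategy.yaml
    # Direction "below": ticks under the threshold would have been filtered.
    pct_blocked = float((s < dvol_min).mean())
    print(f"{pct_blocked:.0%}")  # -> 50%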
+4 -1
@@ -71,8 +71,11 @@ class AlertManager:
             return
         if severity == Severity.MEDIUM:
+            # The TelegramClient already prefixes [PRIORITY][tag] in the
+            # rendered text, so we pass the raw message and let the
+            # client compose the final form.
             await self._telegram.notify(
-                f"[{source}] {message}", priority="high", tag=source
+                message, priority="high", tag=source
             )
             return
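
The exact rendered text is pinned down by the updated alert-manager tests later in this commit; as a sketch of the composition inferred from those tests (not from the TelegramClient source):

    message, priority, tag = "snapshot delayed", "high", "entry_cycle"
    rendered = f"[{priority.upper()}][{tag}] {message}"
    assert rendered == "[HIGH][entry_cycle] snapshot delayed"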
+24 -8
@@ -16,13 +16,13 @@ from pathlib import Path
 import httpx

-from cerbero_bite.clients._base import HttpToolClient
+from cerbero_bite.clients._base import DEFAULT_BOT_TAG, HttpToolClient
 from cerbero_bite.clients.deribit import DeribitClient
 from cerbero_bite.clients.hyperliquid import HyperliquidClient
 from cerbero_bite.clients.macro import MacroClient
 from cerbero_bite.clients.portfolio import PortfolioClient
 from cerbero_bite.clients.sentiment import SentimentClient
-from cerbero_bite.clients.telegram import TelegramClient
+from cerbero_bite.clients.telegram import TelegramClient, load_telegram_credentials
 from cerbero_bite.config.mcp_endpoints import McpEndpoints
 from cerbero_bite.config.schema import StrategyConfig
 from cerbero_bite.runtime.alert_manager import AlertManager
@@ -78,6 +78,7 @@ def build_runtime(
     token: str,
     db_path: Path | str,
     audit_path: Path | str,
+    bot_tag: str = DEFAULT_BOT_TAG,
     timeout_s: float = 8.0,
     retry_max: int = 3,
     clock: Callable[[], datetime] | None = None,
@@ -140,16 +141,31 @@ def build_runtime(
            service=service,
            base_url=endpoints.for_service(service),
            token=token,
+           bot_tag=bot_tag,
            timeout_s=timeout_s,
            retry_max=retry_max,
            client=http_client,
        )

-    telegram = TelegramClient(_client("telegram"))
+    bot_token, chat_id = load_telegram_credentials()
+    telegram = TelegramClient(
+        bot_token=bot_token,
+        chat_id=chat_id,
+        http_client=http_client,
+        timeout_s=timeout_s,
+    )
     alert_manager = AlertManager(
         telegram=telegram, audit_log=audit_log, kill_switch=kill_switch
     )
+    deribit = DeribitClient(_client("deribit"))
+    macro = MacroClient(_client("macro"))
+    sentiment = SentimentClient(_client("sentiment"))
+    hyperliquid = HyperliquidClient(_client("hyperliquid"))
+    portfolio = PortfolioClient(
+        deribit=deribit, hyperliquid=hyperliquid, macro=macro
+    )
     return RuntimeContext(
         cfg=cfg,
         db_path=db_path,
@@ -158,11 +174,11 @@ def build_runtime(
         audit_log=audit_log,
         kill_switch=kill_switch,
         alert_manager=alert_manager,
-        deribit=DeribitClient(_client("deribit")),
-        macro=MacroClient(_client("macro")),
-        sentiment=SentimentClient(_client("sentiment")),
-        hyperliquid=HyperliquidClient(_client("hyperliquid")),
-        portfolio=PortfolioClient(_client("portfolio")),
+        deribit=deribit,
+        macro=macro,
+        sentiment=sentiment,
+        hyperliquid=hyperliquid,
+        portfolio=portfolio,
         telegram=telegram,
         http_client=http_client,
         clock=clk,
-1
@@ -66,7 +66,6 @@ class HealthCheck:
             _probe("macro", self._ctx.macro.get_calendar(days=1)),
             _probe("sentiment", self._probe_sentiment()),
             _probe("hyperliquid", self._ctx.hyperliquid.funding_rate_annualized("ETH")),
-            _probe("portfolio", self._ctx.portfolio.total_equity_eur()),
         )
         # SQLite health: lightweight transaction.
@@ -0,0 +1,138 @@
"""Consumer of the ``manual_actions`` queue.
The GUI (and other out-of-band tooling) records operator intent in the
SQLite ``manual_actions`` table; this consumer pulls those rows and
dispatches them through the same primitives the engine uses internally
(``KillSwitch.arm`` / ``disarm``, ``Orchestrator.run_*``) so the audit
chain remains the single source of truth for state transitions.
Supported kinds:
* ``arm_kill`` — payload ``{"reason": str}``; arms the kill switch.
* ``disarm_kill`` — payload ``{"reason": str}``; disarms it.
* ``run_cycle`` — payload ``{"cycle": "entry"|"monitor"|"health"}``;
forces an immediate run of the named cycle. Only available when the
consumer is invoked with a ``cycle_runners`` mapping (the orchestrator
populates it at scheduler-install time).
Future kinds (``force_close``, ``approve_proposal``,
``reject_proposal``) are recognised by the ``ManualAction`` schema but
not yet wired up — the consumer marks them as
``result="not_supported"`` so they don't sit in the queue forever.
"""
from __future__ import annotations
import json
import logging
from collections.abc import Awaitable, Callable
from datetime import UTC, datetime
from typing import TYPE_CHECKING
from cerbero_bite.safety.kill_switch import KillSwitchError
from cerbero_bite.state import connect, transaction
if TYPE_CHECKING:
from cerbero_bite.runtime.dependencies import RuntimeContext
__all__ = ["CycleRunner", "consume_manual_actions"]
CycleRunner = Callable[[], Awaitable[object]]
_log = logging.getLogger("cerbero_bite.runtime.manual_actions")
_CONSUMER_ID = "engine"
def _parse_payload(raw: str | None) -> dict[str, object]:
if not raw:
return {}
try:
parsed = json.loads(raw)
except (TypeError, ValueError):
return {}
return parsed if isinstance(parsed, dict) else {}
async def consume_manual_actions(
ctx: RuntimeContext,
*,
cycle_runners: dict[str, CycleRunner] | None = None,
now: datetime | None = None,
) -> int:
"""Drain the queue. Return the number of actions processed.
The function is synchronous at heart (SQLite + KillSwitch), but kept
``async def`` so the orchestrator can register it as an APScheduler
coroutine without an extra wrapper. Each iteration fetches the next
unconsumed row and processes it; the loop terminates when the queue
is empty so a single tick can catch up after a long pause.
"""
reference = (now or datetime.now(UTC)).astimezone(UTC)
processed = 0
while True:
conn = connect(ctx.db_path)
try:
action = ctx.repository.next_unconsumed_action(conn)
finally:
conn.close()
if action is None:
break
if action.id is None:
_log.warning("manual_action without id, skipping")
break
payload = _parse_payload(action.payload_json)
result = "ok"
try:
if action.kind == "arm_kill":
reason = str(payload.get("reason", "manual via GUI"))
ctx.kill_switch.arm(reason=reason, source="manual_gui")
elif action.kind == "disarm_kill":
reason = str(payload.get("reason", "manual via GUI"))
ctx.kill_switch.disarm(reason=reason, source="manual_gui")
elif action.kind == "run_cycle":
cycle = str(payload.get("cycle", "")).strip().lower()
if cycle_runners is None:
result = "not_supported"
_log.warning(
"run_cycle dispatched without cycle_runners; "
"falling back to not_supported"
)
elif cycle not in cycle_runners:
result = f"error: unknown cycle '{cycle}'"
else:
await cycle_runners[cycle]()
result = f"ok: ran {cycle}"
else:
result = "not_supported"
_log.warning(
"manual_action kind=%s not supported yet", action.kind
)
except KillSwitchError as exc:
_log.exception("kill switch transition failed")
result = f"error: {type(exc).__name__}: {exc}"
except Exception as exc: # pragma: no cover — defensive
_log.exception("manual_action dispatch failed")
result = f"error: {type(exc).__name__}: {exc}"
conn = connect(ctx.db_path)
try:
with transaction(conn):
ctx.repository.mark_action_consumed(
conn,
action.id,
consumed_by=_CONSUMER_ID,
result=result,
now=reference,
)
finally:
conn.close()
processed += 1
if processed:
_log.info("processed %d manual_actions", processed)
return processed
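
For reference, the payload shape the consumer expects for a ``run_cycle`` action, as a sketch (how the GUI builds and inserts the row is outside this file):

    import json

    # What ends up in manual_actions.payload_json for a forced health run.
    payload_json = json.dumps({"cycle": "health"})
    # The consumer strips and lower-cases the value, so "Health " resolves
    # to the same runner; an unknown name is recorded as
    # result="error: unknown cycle '...'" instead of raising.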
@@ -0,0 +1,192 @@
"""Periodic market-snapshot collector.
Drives the ``market_snapshots`` table populated by the scheduler job
``market_snapshot`` (cron */15 by default). For every traded asset the
collector calls the same MCP feeds the entry/monitor cycles consume,
but in **best-effort mode**: a single failure leaves the corresponding
column NULL and the row is still persisted, with an error map in
``fetch_errors_json`` for debugging. This keeps the time series
continuous even when one of the feeds is briefly down — the
distributions are what matters for threshold calibration, not the
real-time correctness of any single tick.
"""
from __future__ import annotations
import json
import logging
from collections.abc import Awaitable, Callable
from datetime import UTC, datetime
from decimal import Decimal
from typing import TYPE_CHECKING, Any
from cerbero_bite.clients._exceptions import McpError
from cerbero_bite.state import connect, transaction
from cerbero_bite.state.models import MarketSnapshotRecord
if TYPE_CHECKING:
from cerbero_bite.runtime.dependencies import RuntimeContext
__all__ = ["DEFAULT_ASSETS", "collect_market_snapshot"]
_log = logging.getLogger("cerbero_bite.runtime.market_snapshot")
DEFAULT_ASSETS: tuple[str, ...] = ("ETH", "BTC")
async def _safe_call(
label: str,
factory: Callable[[], Awaitable[Any]],
errors: dict[str, str],
) -> Any:
try:
return await factory()
except (McpError, Exception) as exc: # pragma: no branch — best-effort
errors[label] = f"{type(exc).__name__}: {exc}"
return None
def _decimal_or_none(value: Any) -> Decimal | None:
if value is None:
return None
if isinstance(value, Decimal):
return value
try:
return Decimal(str(value))
except (ValueError, ArithmeticError):
return None
async def _collect_one(
ctx: RuntimeContext, asset: str, *, when: datetime
) -> MarketSnapshotRecord:
errors: dict[str, str] = {}
asset_upper = asset.upper()
spot = await _safe_call(
"spot",
lambda: ctx.deribit.spot_perp_price(asset_upper),
errors,
)
dvol_value = await _safe_call(
"dvol",
lambda: ctx.deribit.latest_dvol(currency=asset_upper, now=when),
errors,
)
rv = await _safe_call(
"realized_vol",
lambda: ctx.deribit.realized_vol(asset_upper),
errors,
)
gamma = await _safe_call(
"dealer_gamma",
lambda: ctx.deribit.dealer_gamma_profile(asset_upper),
errors,
)
funding_perp = await _safe_call(
"funding_perp",
lambda: ctx.hyperliquid.funding_rate_annualized(asset_upper),
errors,
)
funding_cross = await _safe_call(
"funding_cross",
lambda: ctx.sentiment.funding_cross_median_annualized(asset_upper),
errors,
)
heatmap = await _safe_call(
"liquidation",
lambda: ctx.sentiment.liquidation_heatmap(asset_upper),
errors,
)
macro_days = await _safe_call(
"macro",
lambda: ctx.macro.next_high_severity_within(
days=ctx.cfg.structure.dte_target,
countries=list(ctx.cfg.entry.exclude_macro_countries),
now=when,
),
errors,
)
rv_30 = (rv or {}).get("rv_30d") if isinstance(rv, dict) else None
iv_minus_rv_30 = (
(rv or {}).get("iv_minus_rv_30d") if isinstance(rv, dict) else None
)
return MarketSnapshotRecord(
timestamp=when,
asset=asset_upper,
spot=_decimal_or_none(spot),
dvol=_decimal_or_none(dvol_value),
realized_vol_30d=_decimal_or_none(rv_30),
iv_minus_rv=_decimal_or_none(iv_minus_rv_30),
funding_perp_annualized=_decimal_or_none(funding_perp),
funding_cross_annualized=_decimal_or_none(funding_cross),
dealer_net_gamma=(
_decimal_or_none(gamma.total_net_dealer_gamma)
if gamma is not None
else None
),
gamma_flip_level=(
_decimal_or_none(gamma.gamma_flip_level)
if gamma is not None
else None
),
oi_delta_pct_4h=(
_decimal_or_none(heatmap.oi_delta_pct_4h)
if heatmap is not None
else None
),
liquidation_long_risk=(
heatmap.long_squeeze_risk if heatmap is not None else None
),
liquidation_short_risk=(
heatmap.short_squeeze_risk if heatmap is not None else None
),
macro_days_to_event=(
int(macro_days) if isinstance(macro_days, int) else None
),
fetch_ok=not errors,
fetch_errors_json=(json.dumps(errors) if errors else None),
)
async def collect_market_snapshot(
ctx: RuntimeContext,
*,
assets: tuple[str, ...] = DEFAULT_ASSETS,
now: datetime | None = None,
) -> int:
"""Collect + persist one snapshot per asset. Returns count persisted.
The function is sync at heart (sequential per asset to keep MCP
load light) but kept ``async def`` so APScheduler can schedule it
directly. A single asset failing does not abort the loop — the
other assets are still snapshotted.
"""
when = (now or datetime.now(UTC)).astimezone(UTC)
persisted = 0
for asset in assets:
try:
record = await _collect_one(ctx, asset, when=when)
except Exception: # pragma: no cover — defensive
_log.exception("snapshot for %s failed catastrophically", asset)
continue
try:
conn = connect(ctx.db_path)
try:
with transaction(conn):
ctx.repository.record_market_snapshot(conn, record)
finally:
conn.close()
persisted += 1
except Exception: # pragma: no cover — defensive
_log.exception("persist snapshot for %s failed", asset)
if persisted:
_log.info("market_snapshot persisted %d row(s)", persisted)
return persisted
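
A one-off invocation outside the scheduler could look like this (a sketch; ctx is assumed to be a RuntimeContext built elsewhere, e.g. by build_runtime):

    import asyncio
    from datetime import UTC, datetime

    from cerbero_bite.runtime.market_snapshot_cycle import collect_market_snapshot

    async def one_tick(ctx) -> None:
        # Best-effort: a failing feed NULLs its column, the row is still persisted.
        n = await collect_market_snapshot(ctx, assets=("ETH",), now=datetime.now(UTC))
        print(f"persisted {n} row(s)")

    # asyncio.run(one_tick(ctx))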
+100 -13
@@ -23,11 +23,17 @@ import structlog
 from apscheduler.schedulers.asyncio import AsyncIOScheduler

 from cerbero_bite.config.mcp_endpoints import McpEndpoints
+from cerbero_bite.config.runtime_flags import RuntimeFlags
 from cerbero_bite.config.schema import StrategyConfig
 from cerbero_bite.runtime.dependencies import RuntimeContext, build_runtime
 from cerbero_bite.runtime.entry_cycle import EntryCycleResult, run_entry_cycle
 from cerbero_bite.runtime.health_check import HealthCheck, HealthCheckResult
 from cerbero_bite.runtime.lockfile import EngineLock
+from cerbero_bite.runtime.manual_actions_consumer import consume_manual_actions
+from cerbero_bite.runtime.market_snapshot_cycle import (
+    DEFAULT_ASSETS,
+    collect_market_snapshot,
+)
 from cerbero_bite.runtime.monitor_cycle import MonitorCycleResult, run_monitor_cycle
 from cerbero_bite.runtime.recovery import recover_state
 from cerbero_bite.runtime.scheduler import JobSpec, build_scheduler
@@ -45,6 +51,8 @@ _CRON_ENTRY = "0 14 * * MON"
 _CRON_MONITOR = "0 2,14 * * *"
 _CRON_HEALTH = "*/5 * * * *"
 _CRON_BACKUP = "0 * * * *"
+_CRON_MANUAL_ACTIONS = "*/1 * * * *"
+_CRON_MARKET_SNAPSHOT = "*/15 * * * *"
 _BACKUP_RETENTION_DAYS = 30
@@ -63,10 +71,12 @@ class Orchestrator:
         *,
         expected_environment: Environment,
         eur_to_usd: Decimal,
+        flags: RuntimeFlags | None = None,
     ) -> None:
         self._ctx = ctx
         self._expected_env = expected_environment
         self._eur_to_usd = eur_to_usd
+        self._flags = flags or RuntimeFlags()
         self._health = HealthCheck(ctx, expected_environment=expected_environment)
         self._scheduler: AsyncIOScheduler | None = None
@@ -78,6 +88,10 @@ class Orchestrator:
     def expected_environment(self) -> Environment:
         return self._expected_env

+    @property
+    def flags(self) -> RuntimeFlags:
+        return self._flags
+
     # ------------------------------------------------------------------
     # Boot
     # ------------------------------------------------------------------
@@ -106,9 +120,18 @@
                 "environment": info.environment,
                 "health": health.state,
                 "config_version": self._ctx.cfg.config_version,
+                "data_analysis_enabled": self._flags.data_analysis_enabled,
+                "strategy_enabled": self._flags.strategy_enabled,
             },
             now=when,
         )
+        _log.info(
+            "engine started: env=%s health=%s data_analysis=%s strategy=%s",
+            info.environment,
+            health.state,
+            self._flags.data_analysis_enabled,
+            self._flags.strategy_enabled,
+        )
         return _BootResult(environment=info.environment, health=health)

     # ------------------------------------------------------------------
@@ -191,6 +214,9 @@
         monitor_cron: str = _CRON_MONITOR,
         health_cron: str = _CRON_HEALTH,
         backup_cron: str = _CRON_BACKUP,
+        manual_actions_cron: str = _CRON_MANUAL_ACTIONS,
+        market_snapshot_cron: str = _CRON_MARKET_SNAPSHOT,
+        market_snapshot_assets: tuple[str, ...] = DEFAULT_ASSETS,
         backup_dir: Path | None = None,
         backup_retention_days: int = _BACKUP_RETENTION_DAYS,
     ) -> AsyncIOScheduler:
@@ -229,14 +255,67 @@
             await _safe("backup", _do)

-        self._scheduler = build_scheduler(
-            [
-                JobSpec(name="entry", cron=entry_cron, coro_factory=_entry),
-                JobSpec(name="monitor", cron=monitor_cron, coro_factory=_monitor),
+        async def _run_market_snapshot_via_action() -> None:
+            await collect_market_snapshot(
+                self._ctx, assets=market_snapshot_assets
+            )
+
+        async def _manual_actions() -> None:
+            async def _do() -> None:
+                await consume_manual_actions(
+                    self._ctx,
+                    cycle_runners={
+                        "entry": self.run_entry,
+                        "monitor": self.run_monitor,
+                        "health": self.run_health,
+                        "market_snapshot": _run_market_snapshot_via_action,
+                    },
+                )
+
+            await _safe("manual_actions", _do)
+
+        async def _market_snapshot() -> None:
+            async def _do() -> None:
+                await collect_market_snapshot(
+                    self._ctx, assets=market_snapshot_assets
+                )
+
+            await _safe("market_snapshot", _do)
+
+        jobs: list[JobSpec] = [
             JobSpec(name="health", cron=health_cron, coro_factory=_health),
             JobSpec(name="backup", cron=backup_cron, coro_factory=_backup),
+            JobSpec(
+                name="manual_actions",
+                cron=manual_actions_cron,
+                coro_factory=_manual_actions,
+            ),
         ]
-        )
+        if self._flags.strategy_enabled:
+            jobs.append(JobSpec(name="entry", cron=entry_cron, coro_factory=_entry))
+            jobs.append(
+                JobSpec(name="monitor", cron=monitor_cron, coro_factory=_monitor)
+            )
+        else:
+            _log.warning(
+                "strategy disabled (CERBERO_BITE_ENABLE_STRATEGY=false): "
+                "entry and monitor cycles are NOT scheduled"
+            )
+        if self._flags.data_analysis_enabled:
+            jobs.append(
+                JobSpec(
+                    name="market_snapshot",
+                    cron=market_snapshot_cron,
+                    coro_factory=_market_snapshot,
+                )
+            )
+        else:
+            _log.warning(
+                "data analysis disabled (CERBERO_BITE_ENABLE_DATA_ANALYSIS="
+                "false): market_snapshot job is NOT scheduled"
+            )
+        self._scheduler = build_scheduler(jobs)
         return self._scheduler

     async def run_forever(self, *, lock_path: Path | None = None) -> None:
@@ -329,17 +408,25 @@ def make_orchestrator(
     audit_path: Path,
     expected_environment: Environment,
     eur_to_usd: Decimal,
+    bot_tag: str | None = None,
+    flags: RuntimeFlags | None = None,
     clock: Callable[[], datetime] | None = None,
 ) -> Orchestrator:
     """Build a fresh :class:`Orchestrator` ready for ``boot``/``run_*``."""
-    ctx = build_runtime(
-        cfg=cfg,
-        endpoints=endpoints,
-        token=token,
-        db_path=db_path,
-        audit_path=audit_path,
-        clock=clock or (lambda: datetime.now(UTC)),
-    )
+    build_kwargs: dict[str, object] = {
+        "cfg": cfg,
+        "endpoints": endpoints,
+        "token": token,
+        "db_path": db_path,
+        "audit_path": audit_path,
+        "clock": clock or (lambda: datetime.now(UTC)),
+    }
+    if bot_tag is not None:
+        build_kwargs["bot_tag"] = bot_tag
+    ctx = build_runtime(**build_kwargs)  # type: ignore[arg-type]
     return Orchestrator(
-        ctx, expected_environment=expected_environment, eur_to_usd=eur_to_usd
+        ctx,
+        expected_environment=expected_environment,
+        eur_to_usd=eur_to_usd,
+        flags=flags,
     )
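
The flag gating can be exercised directly, mirroring the integration tests at the end of this commit (a sketch; the remaining make_orchestrator arguments are elided):

    from cerbero_bite.config.runtime_flags import RuntimeFlags

    # Analysis-only profile: market_snapshot is scheduled, entry/monitor are not.
    flags = RuntimeFlags(data_analysis_enabled=True, strategy_enabled=False)
    # orch = make_orchestrator(..., flags=flags)
    # sched = orch.install_scheduler()
    # {j.id for j in sched.get_jobs()}
    # -> {"health", "backup", "manual_actions", "market_snapshot"}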
@@ -0,0 +1,38 @@
-- 0003_market_snapshots.sql — periodic market snapshot table.
--
-- Populated by the `market_snapshot` scheduler job (cron */15) for
-- every asset traded by the engine (ETH primary, BTC as benchmark).
-- The table backs the "Calibrazione" GUI page: histograms, percentiles
-- and "% of ticks the current threshold would have blocked" let the
-- operator pick filter thresholds from observed distributions instead
-- of guessing.
--
-- Every column except (timestamp, asset, fetch_ok) is NULL-able: a
-- single MCP call may fail and we still want to keep the row so the
-- time series stays continuous. fetch_errors_json carries the per-feed
-- error messages for offline debugging.
CREATE TABLE market_snapshots (
timestamp TEXT NOT NULL,
asset TEXT NOT NULL,
spot NUMERIC,
dvol NUMERIC,
realized_vol_30d NUMERIC,
iv_minus_rv NUMERIC,
funding_perp_annualized NUMERIC,
funding_cross_annualized NUMERIC,
dealer_net_gamma NUMERIC,
gamma_flip_level NUMERIC,
oi_delta_pct_4h NUMERIC,
liquidation_long_risk TEXT,
liquidation_short_risk TEXT,
macro_days_to_event INTEGER,
fetch_ok INTEGER NOT NULL,
fetch_errors_json TEXT,
PRIMARY KEY (timestamp, asset)
);
CREATE INDEX idx_market_snapshots_asset_ts
ON market_snapshots(asset, timestamp DESC);
PRAGMA user_version = 3;
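
A sketch of pulling one metric's distribution straight from this table, along the lines of what the Calibrazione page does (the database path here is an assumption):

    import sqlite3

    import pandas as pd

    conn = sqlite3.connect("state.sqlite")  # hypothetical path
    df = pd.read_sql_query(
        "SELECT dvol FROM market_snapshots "
        "WHERE asset = ? AND dvol IS NOT NULL ORDER BY timestamp",
        conn,
        params=("ETH",),
    )
    print(df["dvol"].quantile([0.05, 0.25, 0.50, 0.75, 0.95]))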
+31
@@ -21,6 +21,7 @@ __all__ = [
     "DvolSnapshot",
     "InstructionRecord",
     "ManualAction",
+    "MarketSnapshotRecord",
     "PositionRecord",
     "PositionStatus",
     "SystemStateRecord",
@@ -118,6 +119,35 @@ class DvolSnapshot(BaseModel):
     eth_spot: Decimal

+
+class MarketSnapshotRecord(BaseModel):
+    """Row of the ``market_snapshots`` table.
+
+    Single point in time, single asset. Every numeric field is
+    optional because the ``market_snapshot`` collector is best-effort:
+    a single MCP failure NULLs the affected metric without dropping
+    the row.
+    """
+
+    model_config = ConfigDict(extra="forbid")
+
+    timestamp: datetime
+    asset: str  # "ETH", "BTC"
+    spot: Decimal | None = None
+    dvol: Decimal | None = None
+    realized_vol_30d: Decimal | None = None
+    iv_minus_rv: Decimal | None = None
+    funding_perp_annualized: Decimal | None = None
+    funding_cross_annualized: Decimal | None = None
+    dealer_net_gamma: Decimal | None = None
+    gamma_flip_level: Decimal | None = None
+    oi_delta_pct_4h: Decimal | None = None
+    liquidation_long_risk: str | None = None
+    liquidation_short_risk: str | None = None
+    macro_days_to_event: int | None = None
+    fetch_ok: bool
+    fetch_errors_json: str | None = None
+
+
 class ManualAction(BaseModel):
     """Row of the ``manual_actions`` table."""
@@ -130,6 +160,7 @@ class ManualAction(BaseModel):
         "force_close",
         "arm_kill",
         "disarm_kill",
+        "run_cycle",
     ]
     proposal_id: UUID | None = None
     payload_json: str | None = None
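
Because every metric is nullable, the smallest valid row carries only the mandatory fields plus an error map, e.g. (a sketch; the error text is invented):

    from datetime import UTC, datetime

    from cerbero_bite.state.models import MarketSnapshotRecord

    row = MarketSnapshotRecord(
        timestamp=datetime.now(UTC),
        asset="ETH",
        fetch_ok=False,
        fetch_errors_json='{"dvol": "McpError: upstream 502"}',
    )
    # extra="forbid" means a mistyped field name fails validation loudly.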
+86
@@ -23,6 +23,7 @@ from cerbero_bite.state.models import (
     DvolSnapshot,
     InstructionRecord,
     ManualAction,
+    MarketSnapshotRecord,
     PositionRecord,
     PositionStatus,
     SystemStateRecord,
@@ -346,6 +347,66 @@ class Repository:
            ),
        )
# ------------------------------------------------------------------
# market_snapshots
# ------------------------------------------------------------------
def record_market_snapshot(
self, conn: sqlite3.Connection, snapshot: MarketSnapshotRecord
) -> None:
conn.execute(
"INSERT OR REPLACE INTO market_snapshots("
"timestamp, asset, spot, dvol, realized_vol_30d, iv_minus_rv, "
"funding_perp_annualized, funding_cross_annualized, "
"dealer_net_gamma, gamma_flip_level, oi_delta_pct_4h, "
"liquidation_long_risk, liquidation_short_risk, "
"macro_days_to_event, fetch_ok, fetch_errors_json) "
"VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",
(
_enc_dt(snapshot.timestamp),
snapshot.asset,
_enc_dec(snapshot.spot),
_enc_dec(snapshot.dvol),
_enc_dec(snapshot.realized_vol_30d),
_enc_dec(snapshot.iv_minus_rv),
_enc_dec(snapshot.funding_perp_annualized),
_enc_dec(snapshot.funding_cross_annualized),
_enc_dec(snapshot.dealer_net_gamma),
_enc_dec(snapshot.gamma_flip_level),
_enc_dec(snapshot.oi_delta_pct_4h),
snapshot.liquidation_long_risk,
snapshot.liquidation_short_risk,
snapshot.macro_days_to_event,
1 if snapshot.fetch_ok else 0,
snapshot.fetch_errors_json,
),
)
def list_market_snapshots(
self,
conn: sqlite3.Connection,
*,
asset: str,
start: datetime | None = None,
end: datetime | None = None,
limit: int = 5000,
) -> list[MarketSnapshotRecord]:
clauses: list[str] = ["asset = ?"]
params: list[Any] = [asset]
if start is not None:
clauses.append("timestamp >= ?")
params.append(_enc_dt(start))
if end is not None:
clauses.append("timestamp <= ?")
params.append(_enc_dt(end))
params.append(int(limit))
rows = conn.execute(
f"SELECT * FROM market_snapshots WHERE {' AND '.join(clauses)} "
f"ORDER BY timestamp DESC LIMIT ?",
params,
).fetchall()
return [_row_to_market_snapshot(r) for r in rows]
    # ------------------------------------------------------------------
    # manual_actions
    # ------------------------------------------------------------------
@@ -559,6 +620,31 @@ def _row_to_manual(row: sqlite3.Row) -> ManualAction:
    )
def _row_to_market_snapshot(row: sqlite3.Row) -> MarketSnapshotRecord:
return MarketSnapshotRecord(
timestamp=_dec_dt_required(row["timestamp"]),
asset=row["asset"],
spot=_dec_dec(row["spot"]),
dvol=_dec_dec(row["dvol"]),
realized_vol_30d=_dec_dec(row["realized_vol_30d"]),
iv_minus_rv=_dec_dec(row["iv_minus_rv"]),
funding_perp_annualized=_dec_dec(row["funding_perp_annualized"]),
funding_cross_annualized=_dec_dec(row["funding_cross_annualized"]),
dealer_net_gamma=_dec_dec(row["dealer_net_gamma"]),
gamma_flip_level=_dec_dec(row["gamma_flip_level"]),
oi_delta_pct_4h=_dec_dec(row["oi_delta_pct_4h"]),
liquidation_long_risk=row["liquidation_long_risk"],
liquidation_short_risk=row["liquidation_short_risk"],
macro_days_to_event=(
int(row["macro_days_to_event"])
if row["macro_days_to_event"] is not None
else None
),
fetch_ok=bool(int(row["fetch_ok"])),
fetch_errors_json=row["fetch_errors_json"],
)

def _dec_dec_required(value: Any) -> Decimal:
    out = _dec_dec(value)
    if out is None:
-11
@@ -71,11 +71,6 @@ def _wire_boot_dependencies(httpx_mock: HTTPXMock) -> None:
         json={"asset": "ETH", "current_funding_rate": 0.0001},
         is_reusable=True,
     )
-    httpx_mock.add_response(
-        url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
-        json={"total_value_eur": 1000.0},
-        is_reusable=True,
-    )

 @pytest.mark.asyncio
@@ -115,11 +110,5 @@ async def test_boot_detects_audit_truncation(
     orch = _build(tmp_path)
     _wire_boot_dependencies(httpx_mock)
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_system_error",
-        json={"ok": True},
-        is_reusable=True,
-    )
     await orch.boot()
     assert orch.context.kill_switch.is_armed() is True
+32 -30
@@ -154,18 +154,39 @@ def _wire_market_snapshot(
         json={"events": macro_events or []},
         is_reusable=True,
     )
+
+    # In-process portfolio aggregator: wire the underlying exchange and
+    # macro endpoints so total_equity_eur and asset_pct_of_portfolio
+    # produce the requested ``portfolio_eur`` and ``eth_pct``.
+    # FX rate fixed at 1.0 → EUR amount equals USD amount in tests.
     portfolio_eur_f = float(portfolio_eur)
     httpx_mock.add_response(
-        url="http://mcp-portfolio:9018/tools/get_holdings",
+        url="http://mcp-macro:9013/tools/get_asset_price",
+        json={"ticker": "EURUSD", "price": 1.0},
+        is_reusable=True,
+    )
+    httpx_mock.add_response(
+        url="http://mcp-deribit:9011/tools/get_account_summary",
+        json={"equity": portfolio_eur_f, "currency": "USDC"},
+        is_reusable=True,
+    )
+    httpx_mock.add_response(
+        url="http://mcp-deribit:9011/tools/get_positions",
         json=[
-            {"ticker": "AAPL", "current_value_eur": portfolio_eur_f * (1 - eth_pct)},
-            {"ticker": "ETH-USD", "current_value_eur": portfolio_eur_f * eth_pct},
+            {
+                "instrument_name": "ETH-15MAY26-2475-P",
+                "notional_usd": portfolio_eur_f * eth_pct,
+            }
         ],
         is_reusable=True,
     )
     httpx_mock.add_response(
-        url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
-        json={"total_value_eur": portfolio_eur_f},
+        url="http://mcp-hyperliquid:9012/tools/get_account_summary",
+        json={"equity": 0.0},
+        is_reusable=True,
+    )
+    httpx_mock.add_response(
+        url="http://mcp-hyperliquid:9012/tools/get_positions",
+        json=[],
         is_reusable=True,
     )
@@ -262,11 +283,12 @@ def _wire_combo_order(
 def _wire_telegram_notify_position_opened(httpx_mock: HTTPXMock) -> None:
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_position_opened",
-        json={"ok": True},
-        is_reusable=True,
-    )
+    """No-op: Telegram is now an in-process client with disabled mode in tests.
+
+    Kept for call-site compatibility; the function used to register an MCP
+    notify mock but post-refactor there is no HTTP endpoint to mock when
+    the bot has no Telegram credentials configured.
+    """

 # ---------------------------------------------------------------------------
@@ -355,11 +377,6 @@ async def test_below_capital_minimum_returns_no_entry(
     now: datetime,
     httpx_mock: HTTPXMock,
 ) -> None:
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify",
-        json={"ok": True},
-        is_reusable=True,
-    )
     # 500 EUR × 1.075 = 537 USD < 720 cfg minimum
     _wire_market_snapshot(httpx_mock, portfolio_eur=500.0)
     ctx = _ctx(cfg, runtime_paths, now)
@@ -377,11 +394,6 @@ async def test_macro_event_within_dte_blocks_entry(
     now: datetime,
     httpx_mock: HTTPXMock,
 ) -> None:
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify",
-        json={"ok": True},
-        is_reusable=True,
-    )
     macro_events = [
         {
             "name": "FOMC",
@@ -406,11 +418,6 @@ async def test_no_bias_returns_no_entry(
     now: datetime,
     httpx_mock: HTTPXMock,
 ) -> None:
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify",
-        json={"ok": True},
-        is_reusable=True,
-    )
     # Funding cross neutral (=0) and DVOL 40 → no IC, no directional;
     # entry validates clean otherwise.
     _wire_market_snapshot(
@@ -507,11 +514,6 @@ async def test_broker_reject_marks_position_cancelled(
         },
         is_reusable=True,
     )
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_alert",
-        json={"ok": True},
-        is_reusable=True,
-    )
     bull_cfg = golden_config(
         entry=type(cfg.entry)(
             **{**cfg.entry.model_dump(), "trend_bull_threshold_pct": Decimal("0")}
-26
@@ -60,11 +60,6 @@ def _wire_all_ok(httpx_mock: HTTPXMock) -> None:
         json={"asset": "ETH", "current_funding_rate": 0.0001},
         is_reusable=True,
     )
-    httpx_mock.add_response(
-        url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
-        json={"total_value_eur": 1000.0},
-        is_reusable=True,
-    )

 @pytest.mark.asyncio
@@ -112,11 +107,6 @@ async def test_environment_mismatch_counts_as_failure(
         json={"asset": "ETH", "current_funding_rate": 0.0001},
         is_reusable=True,
     )
-    httpx_mock.add_response(
-        url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
-        json={"total_value_eur": 1000.0},
-        is_reusable=True,
-    )
     res = await hc.run()
     assert res.state == "degraded"
     assert any("environment mismatch" in r for _s, r in res.failures)
@@ -149,17 +139,6 @@ async def test_three_consecutive_failures_arm_kill_switch(
         json={"asset": "ETH", "current_funding_rate": 0.0001},
         is_reusable=True,
     )
-    httpx_mock.add_response(
-        url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
-        json={"total_value_eur": 1000.0},
-        is_reusable=True,
-    )
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_alert",
-        json={"ok": True},
-        is_reusable=True,
-    )
     for _ in range(2):
         await hc.run()
     assert ctx.kill_switch.is_armed() is False
@@ -197,11 +176,6 @@ async def test_recovered_run_resets_counter(
         json={"asset": "ETH", "current_funding_rate": 0.0001},
         is_reusable=True,
     )
-    httpx_mock.add_response(
-        url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
-        json={"total_value_eur": 1000.0},
-        is_reusable=True,
-    )
     res = await hc.run()
     assert res.state == "degraded"
     assert res.consecutive_failures == 1
-10
@@ -231,11 +231,6 @@ async def test_monitor_closes_position_on_profit_take(
         },
         is_reusable=True,
     )
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_position_closed",
-        json={"ok": True},
-        is_reusable=True,
-    )

     res = await run_monitor_cycle(ctx, now=now)
     assert len(res.outcomes) == 1
@@ -296,11 +291,6 @@ async def test_monitor_uses_dvol_history_for_return_4h(
         },
         is_reusable=True,
     )
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_position_closed",
-        json={"ok": True},
-        is_reusable=True,
-    )

     res = await run_monitor_cycle(ctx, now=now)
     assert res.outcomes[0].action == "CLOSE_AVERSE"
+55 -13
@@ -11,6 +11,7 @@ from pytest_httpx import HTTPXMock

 from cerbero_bite.config import golden_config
 from cerbero_bite.config.mcp_endpoints import load_endpoints
+from cerbero_bite.config.runtime_flags import RuntimeFlags
 from cerbero_bite.runtime import Orchestrator
 from cerbero_bite.runtime.dependencies import build_runtime
@@ -56,14 +57,14 @@ def _wire_health_probes(httpx_mock: HTTPXMock) -> None:
         json={"asset": "ETH", "current_funding_rate": 0.0001},
         is_reusable=True,
     )
-    httpx_mock.add_response(
-        url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
-        json={"total_value_eur": 1000.0},
-        is_reusable=True,
-    )

-def _build_orch(tmp_path: Path, *, expected: str = "testnet") -> Orchestrator:
+def _build_orch(
+    tmp_path: Path,
+    *,
+    expected: str = "testnet",
+    flags: RuntimeFlags | None = None,
+) -> Orchestrator:
     ctx = build_runtime(
         cfg=golden_config(),
         endpoints=load_endpoints(env={}),
@@ -77,6 +78,8 @@ def _build_orch(tmp_path: Path, *, expected: str = "testnet") -> Orchestrator:
         ctx,
         expected_environment=expected,  # type: ignore[arg-type]
         eur_to_usd=Decimal("1.075"),
+        flags=flags
+        or RuntimeFlags(data_analysis_enabled=True, strategy_enabled=True),
     )
@@ -110,12 +113,6 @@
         json=[],
         is_reusable=True,
     )
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_system_error",
-        json={"ok": True},
-        is_reusable=True,
-    )
     orch = _build_orch(tmp_path, expected="testnet")
     await orch.boot()
     assert orch.context.kill_switch.is_armed() is True
@@ -125,4 +122,49 @@ def test_install_scheduler_registers_canonical_jobs(tmp_path: Path) -> None:
     orch = _build_orch(tmp_path)
     sched = orch.install_scheduler()
     job_ids = {j.id for j in sched.get_jobs()}
-    assert job_ids == {"entry", "monitor", "health", "backup"}
+    assert job_ids == {
+        "entry",
+        "monitor",
+        "health",
+        "backup",
+        "manual_actions",
+        "market_snapshot",
+    }
+
+
+def test_install_scheduler_skips_strategy_jobs_when_disabled(tmp_path: Path) -> None:
+    orch = _build_orch(
+        tmp_path,
+        flags=RuntimeFlags(data_analysis_enabled=True, strategy_enabled=False),
+    )
+    sched = orch.install_scheduler()
+    job_ids = {j.id for j in sched.get_jobs()}
+    assert "entry" not in job_ids
+    assert "monitor" not in job_ids
+    # data analysis stays on, plus the always-on infra jobs.
+    assert {"health", "backup", "manual_actions", "market_snapshot"}.issubset(job_ids)
+
+
+def test_install_scheduler_skips_market_snapshot_when_data_analysis_off(
+    tmp_path: Path,
+) -> None:
+    orch = _build_orch(
+        tmp_path,
+        flags=RuntimeFlags(data_analysis_enabled=False, strategy_enabled=True),
+    )
+    sched = orch.install_scheduler()
+    job_ids = {j.id for j in sched.get_jobs()}
+    assert "market_snapshot" not in job_ids
+    assert {"entry", "monitor", "health", "backup", "manual_actions"}.issubset(
+        job_ids
+    )
+
+
+def test_install_scheduler_analysis_only_default(tmp_path: Path) -> None:
+    """The default RuntimeFlags profile (analysis only) drops entry/monitor."""
+    orch = _build_orch(tmp_path, flags=RuntimeFlags())
+    sched = orch.install_scheduler()
+    job_ids = {j.id for j in sched.get_jobs()}
+    assert "entry" not in job_ids
+    assert "monitor" not in job_ids
+    assert "market_snapshot" in job_ids
-10
@@ -115,11 +115,6 @@ async def test_recovery_cancels_awaiting_fill_when_broker_lacks_legs(
         url="http://mcp-deribit:9011/tools/get_positions",
         json=[],
     )
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_system_error",
-        json={"ok": True},
-        is_reusable=True,
-    )

     await recover_state(ctx, now=_now())
@@ -154,11 +149,6 @@ async def test_recovery_alerts_on_open_position_missing_on_broker(
         url="http://mcp-deribit:9011/tools/get_positions",
         json=[],
     )
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_system_error",
-        json={"ok": True},
-        is_reusable=True,
-    )

     await recover_state(ctx, now=_now())
     assert ctx.kill_switch.is_armed() is True
+14 -31
@@ -9,13 +9,14 @@ from pathlib import Path

 import pytest
 from pytest_httpx import HTTPXMock

-from cerbero_bite.clients._base import HttpToolClient
 from cerbero_bite.clients.telegram import TelegramClient
 from cerbero_bite.runtime.alert_manager import AlertManager, Severity
 from cerbero_bite.safety import AuditLog, iter_entries
 from cerbero_bite.safety.kill_switch import KillSwitch
 from cerbero_bite.state import Repository, connect, run_migrations, transaction

+SEND_URL = "https://api.telegram.org/botTOK/sendMessage"
+

 def _make_alert_manager(tmp_path: Path) -> tuple[AlertManager, Path, Path, KillSwitch]:
     db_path = tmp_path / "state.sqlite"
@@ -39,14 +40,7 @@ def _make_alert_manager(tmp_path: Path) -> tuple[AlertManager, Path, Path, KillSwitch]:
         audit_log=audit,
         clock=lambda: next(times),
     )
-    telegram = TelegramClient(
-        HttpToolClient(
-            service="telegram",
-            base_url="http://mcp-telegram:9017",
-            token="t",
-            retry_max=1,
-        )
-    )
+    telegram = TelegramClient(bot_token="TOK", chat_id="42")
     return AlertManager(telegram=telegram, audit_log=audit, kill_switch=ks), audit_path, db_path, ks
@@ -65,17 +59,13 @@ async def test_low_emits_audit_only(tmp_path: Path, httpx_mock: HTTPXMock) -> None:

 @pytest.mark.asyncio
 async def test_medium_calls_telegram_notify(tmp_path: Path, httpx_mock: HTTPXMock) -> None:
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify", json={"ok": True}
-    )
+    httpx_mock.add_response(url=SEND_URL, json={"ok": True})
     am, audit_path, _, ks = _make_alert_manager(tmp_path)
     await am.medium(source="entry_cycle", message="snapshot delayed")
     requests = httpx_mock.get_requests()
     assert len(requests) == 1
     body = json.loads(requests[0].read())
-    assert body["message"] == "[entry_cycle] snapshot delayed"
-    assert body["priority"] == "high"
-    assert body["tag"] == "entry_cycle"
+    assert body["text"] == "[HIGH][entry_cycle] snapshot delayed"
     assert ks.is_armed() is False
     assert any(e.payload["severity"] == "medium" for e in iter_entries(audit_path))
@@ -84,17 +74,13 @@
 async def test_high_arms_kill_switch_and_calls_notify_alert(
     tmp_path: Path, httpx_mock: HTTPXMock
 ) -> None:
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_alert", json={"ok": True}
-    )
+    httpx_mock.add_response(url=SEND_URL, json={"ok": True})
     am, _, _, ks = _make_alert_manager(tmp_path)
     await am.high(source="health", message="3 consecutive MCP failures")
     body = json.loads(httpx_mock.get_request().read())
-    assert body == {
-        "source": "health",
-        "message": "3 consecutive MCP failures",
-        "priority": "high",
-    }
+    text = body["text"]
+    assert "ALERT [HIGH]" in text
+    assert "health" in text and "3 consecutive MCP failures" in text
     assert ks.is_armed() is True
@@ -102,9 +88,7 @@
 async def test_critical_arms_kill_switch_and_calls_notify_system_error(
     tmp_path: Path, httpx_mock: HTTPXMock
 ) -> None:
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_system_error", json={"ok": True}
-    )
+    httpx_mock.add_response(url=SEND_URL, json={"ok": True})
     am, _, _, ks = _make_alert_manager(tmp_path)
     await am.critical(
         source="audit_chain",
@@ -112,8 +96,9 @@
         component="safety.audit_log",
     )
     body = json.loads(httpx_mock.get_request().read())
-    assert body["component"] == "safety.audit_log"
-    assert body["priority"] == "critical"
+    text = body["text"]
+    assert "SYSTEM ERROR [CRITICAL]" in text
+    assert "safety.audit_log" in text
     assert ks.is_armed() is True
@@ -121,9 +106,7 @@
 async def test_critical_when_already_armed_is_idempotent(
     tmp_path: Path, httpx_mock: HTTPXMock
 ) -> None:
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_system_error", json={"ok": True}
-    )
+    httpx_mock.add_response(url=SEND_URL, json={"ok": True})
     am, _, _, ks = _make_alert_manager(tmp_path)
     ks.arm(reason="prior", source="manual")
     assert ks.is_armed() is True
+15 -33
@@ -7,25 +7,14 @@ contains the expected statuses.

 from __future__ import annotations

-from pathlib import Path
+import pytest
 from click.testing import CliRunner
 from pytest_httpx import HTTPXMock

 from cerbero_bite.cli import main as cli_main


-def _seed_token(tmp_path: Path) -> Path:
-    target = tmp_path / "core_token"
-    target.write_text("super-secret\n", encoding="utf-8")
-    return target
-
-
-def test_ping_reports_each_service(
-    tmp_path: Path, httpx_mock: HTTPXMock
-) -> None:
-    token_file = _seed_token(tmp_path)
+def test_ping_reports_each_service(httpx_mock: HTTPXMock) -> None:
     httpx_mock.add_response(
         url="http://mcp-deribit:9011/tools/environment_info",
         json={
@@ -49,29 +38,24 @@
         url="http://mcp-sentiment:9014/tools/get_cross_exchange_funding",
         json={"snapshot": {"ETH": {"binance": 0.0001}}},
     )
-    httpx_mock.add_response(
-        url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
-        json={"total_value_eur": 5000.0},
-    )
     result = CliRunner().invoke(
-        cli_main, ["ping", "--token-file", str(token_file), "--timeout", "1.0"]
+        cli_main, ["ping", "--token", "super-secret", "--timeout", "1.0"]
     )
     assert result.exit_code == 0, result.output
     assert "deribit" in result.output
     assert "hyperliquid" in result.output
     assert "macro" in result.output
     assert "sentiment" in result.output
-    assert "portfolio" in result.output
-    assert "telegram" in result.output  # listed even if skipped
-    # at least 5 OK statuses
-    assert result.output.count("OK") >= 5
+    # Telegram and Portfolio are no longer MCP services and are not
+    # listed by the ping command.
+    assert "portfolio" not in result.output
+    assert "OK" in result.output


 def test_ping_reports_failure_when_service_unreachable(
-    tmp_path: Path, httpx_mock: HTTPXMock
+    httpx_mock: HTTPXMock,
 ) -> None:
-    token_file = _seed_token(tmp_path)
     httpx_mock.add_response(
         url="http://mcp-deribit:9011/tools/environment_info",
         status_code=500,
@@ -90,21 +74,19 @@
         url="http://mcp-sentiment:9014/tools/get_cross_exchange_funding",
         json={"snapshot": {"ETH": {"binance": 0.0001}}},
     )
-    httpx_mock.add_response(
-        url="http://mcp-portfolio:9018/tools/get_total_portfolio_value",
-        json={"total_value_eur": 0.0},
-    )
     result = CliRunner().invoke(
-        cli_main, ["ping", "--token-file", str(token_file), "--timeout", "1.0"]
+        cli_main, ["ping", "--token", "super-secret", "--timeout", "1.0"]
     )
     assert result.exit_code == 0
     assert "FAIL" in result.output


-def test_ping_token_missing_exits_nonzero(tmp_path: Path) -> None:
-    result = CliRunner().invoke(
-        cli_main, ["ping", "--token-file", str(tmp_path / "nope")]
-    )
+def test_ping_token_missing_exits_nonzero(
+    monkeypatch: pytest.MonkeyPatch,
+) -> None:
+    # Ensure no env var leaks into the CLI invocation.
+    monkeypatch.delenv("CERBERO_BITE_MCP_TOKEN", raising=False)
+    result = CliRunner().invoke(cli_main, ["ping"])
     assert result.exit_code == 1
     assert "token error" in result.output
+22
@@ -47,6 +47,28 @@ async def test_call_attaches_bearer_token(httpx_mock: HTTPXMock) -> None:
     assert request is not None
     assert request.headers["Authorization"] == "Bearer abc123"
     assert request.headers["Content-Type"] == "application/json"
+    # Default bot tag is sent on every request.
+    assert request.headers["X-Bot-Tag"] == "BOT__CERBERO_BITE"
+
+
+@pytest.mark.asyncio
+async def test_call_attaches_custom_bot_tag(httpx_mock: HTTPXMock) -> None:
+    httpx_mock.add_response(json={"ok": True})
+    client = _make_client(bot_tag="BOT__SHADOW")
+    await client.call("any")
+    request = httpx_mock.get_request()
+    assert request is not None
+    assert request.headers["X-Bot-Tag"] == "BOT__SHADOW"
+
+
+def test_init_rejects_blank_bot_tag() -> None:
+    with pytest.raises(ValueError, match="non-empty"):
+        _make_client(bot_tag=" ")
+
+
+def test_init_rejects_too_long_bot_tag() -> None:
+    with pytest.raises(ValueError, match="64"):
+        _make_client(bot_tag="x" * 65)


 @pytest.mark.asyncio
+203 -58
@@ -1,95 +1,240 @@
"""Tests for PortfolioClient.""" """Tests for in-process PortfolioClient (composes deribit + hyperliquid + macro)."""
from __future__ import annotations from __future__ import annotations
from decimal import Decimal from decimal import Decimal
from typing import Any
import pytest import pytest
from pytest_httpx import HTTPXMock
from cerbero_bite.clients._base import HttpToolClient
from cerbero_bite.clients._exceptions import McpDataAnomalyError from cerbero_bite.clients._exceptions import McpDataAnomalyError
from cerbero_bite.clients.portfolio import PortfolioClient from cerbero_bite.clients.portfolio import PortfolioClient
# ---------------------------------------------------------------------------
# Test doubles
# ---------------------------------------------------------------------------
def _client() -> PortfolioClient:
http = HttpToolClient( class _FakeDeribit:
service="portfolio", SERVICE = "deribit"
base_url="http://mcp-portfolio:9018",
token="t", def __init__(
retry_max=1, self,
*,
equity_usd: Decimal | float = Decimal("0"),
positions: list[dict[str, Any]] | None = None,
) -> None:
self._equity = Decimal(str(equity_usd))
self._positions = positions or []
async def get_account_summary(self, currency: str = "USDC") -> dict[str, Any]:
assert currency == "USDC"
return {"equity": float(self._equity), "currency": "USDC"}
async def get_positions(self, currency: str = "USDC") -> list[dict[str, Any]]:
assert currency == "USDC"
return list(self._positions)
class _FakeHyperliquid:
SERVICE = "hyperliquid"
def __init__(
self,
*,
equity_usd: Decimal | float = Decimal("0"),
positions: list[dict[str, Any]] | None = None,
) -> None:
self._equity = Decimal(str(equity_usd))
self._positions = positions or []
async def get_account_summary(self) -> dict[str, Any]:
return {"equity": float(self._equity)}
async def get_positions(self) -> list[dict[str, Any]]:
return list(self._positions)
class _FakeMacro:
SERVICE = "macro"
def __init__(self, *, eur_usd: Decimal | float | None = Decimal("1.10")) -> None:
self._eur_usd = eur_usd
async def eur_usd_rate(self) -> Decimal:
if self._eur_usd is None:
raise McpDataAnomalyError(
"missing", service="macro", tool="get_asset_price"
) )
return PortfolioClient(http) return Decimal(str(self._eur_usd))
@pytest.mark.asyncio def _make(
async def test_total_equity_eur(httpx_mock: HTTPXMock) -> None: *,
httpx_mock.add_response( deribit_eq: Decimal | float = 0,
url="http://mcp-portfolio:9018/tools/get_total_portfolio_value", hl_eq: Decimal | float = 0,
json={"total_value_eur": 12345.67}, deribit_pos: list[dict[str, Any]] | None = None,
hl_pos: list[dict[str, Any]] | None = None,
eur_usd: Decimal | float | None = Decimal("1.10"),
) -> PortfolioClient:
return PortfolioClient(
deribit=_FakeDeribit(equity_usd=deribit_eq, positions=deribit_pos),
hyperliquid=_FakeHyperliquid(equity_usd=hl_eq, positions=hl_pos),
macro=_FakeMacro(eur_usd=eur_usd),
) )
out = await _client().total_equity_eur()
assert out == Decimal("12345.67")
# ---------------------------------------------------------------------------
# total_equity_usd / total_equity_eur
# ---------------------------------------------------------------------------
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_total_equity_anomaly_when_missing(httpx_mock: HTTPXMock) -> None: async def test_total_equity_usd_sums_both_exchanges() -> None:
httpx_mock.add_response(json={}) p = _make(deribit_eq="1500.50", hl_eq="982.50")
with pytest.raises(McpDataAnomalyError, match="total_value_eur"): assert await p.total_equity_usd() == Decimal("2483.00")
await _client().total_equity_eur()
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_total_equity_anomaly_on_unexpected_shape(httpx_mock: HTTPXMock) -> None: async def test_total_equity_eur_converts_with_fx() -> None:
httpx_mock.add_response(json=[1, 2, 3]) p = _make(deribit_eq="1100", hl_eq="0", eur_usd="1.10")
with pytest.raises(McpDataAnomalyError, match="unexpected shape"): # 1100 USD / 1.10 = 1000 EUR
await _client().total_equity_eur() assert await p.total_equity_eur() == Decimal("1000")
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_asset_pct_aggregates_matching_tickers(httpx_mock: HTTPXMock) -> None: async def test_total_equity_eur_zero_when_no_balance() -> None:
httpx_mock.add_response( p = _make(deribit_eq=0, hl_eq=0, eur_usd="1.20")
url="http://mcp-portfolio:9018/tools/get_holdings", assert await p.total_equity_eur() == Decimal("0")
json=[
{"ticker": "ETH-USD", "current_value_eur": 3000.0},
{"ticker": "ETHE", "current_value_eur": 1000.0}, # ETH ticker variant @pytest.mark.asyncio
{"ticker": "AAPL", "current_value_eur": 6000.0}, async def test_total_equity_eur_raises_on_non_positive_fx() -> None:
p = _make(deribit_eq="100", hl_eq="0", eur_usd="0")
with pytest.raises(McpDataAnomalyError, match="non-positive EURUSD"):
await p.total_equity_eur()
@pytest.mark.asyncio
async def test_total_equity_eur_propagates_macro_anomaly() -> None:
p = _make(deribit_eq="100", hl_eq="0", eur_usd=None)
with pytest.raises(McpDataAnomalyError):
await p.total_equity_eur()
# ---------------------------------------------------------------------------
# asset_pct_of_portfolio
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_asset_pct_aggregates_eth_across_both_exchanges() -> None:
p = _make(
deribit_eq="5000",
hl_eq="5000",
deribit_pos=[
{
"instrument_name": "ETH-15MAY26-2475-P",
"size": 10,
"mark_price": 100,
},
# BTC position should be ignored when asking for ETH
{
"instrument_name": "BTC-PERPETUAL",
"size": 1,
"mark_price": 75000,
},
],
hl_pos=[
{"coin": "ETH", "notional_usd": 1000},
], ],
) )
pct = await _client().asset_pct_of_portfolio("ETH") # ETH exposure: 10×100 (deribit) + 1000 (hl) = 2000
# 4000 / 10000 = 0.4 # total equity: 10000
assert pct == Decimal("0.4") pct = await p.asset_pct_of_portfolio("ETH")
assert pct == Decimal("0.2")
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_asset_pct_returns_zero_for_empty_portfolio( async def test_asset_pct_returns_zero_when_no_positions() -> None:
httpx_mock: HTTPXMock, p = _make(deribit_eq="1000", hl_eq="0")
) -> None: assert await p.asset_pct_of_portfolio("ETH") == Decimal("0")
httpx_mock.add_response(json=[])
assert await _client().asset_pct_of_portfolio("ETH") == Decimal("0")
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_asset_pct_skips_entries_without_value(httpx_mock: HTTPXMock) -> None: async def test_asset_pct_returns_zero_when_no_equity() -> None:
httpx_mock.add_response( p = _make(
json=[ deribit_eq=0,
{"ticker": "ETH", "current_value_eur": None}, hl_eq=0,
{"ticker": "AAPL", "current_value_eur": 1000.0}, deribit_pos=[
] {"instrument_name": "ETH-PERP", "notional_usd": 100},
],
) )
assert await _client().asset_pct_of_portfolio("ETH") == Decimal("0") assert await p.asset_pct_of_portfolio("ETH") == Decimal("0")
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_asset_pct_anomaly_when_response_not_list(httpx_mock: HTTPXMock) -> None: async def test_asset_pct_uses_explicit_notional_when_present() -> None:
httpx_mock.add_response(json={"holdings": []}) p = _make(
with pytest.raises(McpDataAnomalyError, match="unexpected shape"): deribit_eq="1000",
await _client().asset_pct_of_portfolio("ETH") hl_eq=0,
deribit_pos=[
# explicit notional_usd takes precedence over size×mark
def test_portfolio_client_rejects_wrong_service() -> None: {
bad = HttpToolClient( "instrument_name": "ETH-XYZ",
service="macro", base_url="http://x:1", token="t", retry_max=1 "notional_usd": 250,
"size": 999,
"mark_price": 999,
},
],
) )
with pytest.raises(ValueError, match="requires service 'portfolio'"): assert await p.asset_pct_of_portfolio("ETH") == Decimal("0.25")
PortfolioClient(bad)
@pytest.mark.asyncio
async def test_asset_pct_falls_back_to_size_times_mark() -> None:
p = _make(
deribit_eq="1000",
hl_eq=0,
deribit_pos=[
{"instrument_name": "ETH-XYZ", "size": 5, "mark_price": 40},
],
)
# 5×40 / 1000 = 0.2
assert await p.asset_pct_of_portfolio("ETH") == Decimal("0.2")
@pytest.mark.asyncio
async def test_asset_pct_takes_absolute_value_for_short_positions() -> None:
p = _make(
deribit_eq="1000",
hl_eq=0,
hl_pos=[{"coin": "ETH", "size": -10, "mark_price": 50}],
)
# |-10×50| / 1000 = 0.5
assert await p.asset_pct_of_portfolio("ETH") == Decimal("0.5")
@pytest.mark.asyncio
async def test_asset_pct_case_insensitive_match() -> None:
p = _make(
deribit_eq="1000",
hl_eq=0,
deribit_pos=[
{"instrument_name": "eth-perpetual", "notional_usd": 300},
],
)
assert await p.asset_pct_of_portfolio("eth") == Decimal("0.3")
@pytest.mark.asyncio
async def test_asset_pct_skips_non_dict_entries() -> None:
p = _make(
deribit_eq="1000",
hl_eq=0,
deribit_pos=[
"not a dict", # type: ignore[list-item]
{"instrument_name": "ETH", "notional_usd": 100},
],
)
assert await p.asset_pct_of_portfolio("ETH") == Decimal("0.1")
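Read together, the asset-percentage tests fix a notional rule: an explicit notional_usd wins over size × mark_price, and shorts count by absolute value. A sketch of that rule under those assumptions (helper name is hypothetical, not the production implementation):

from decimal import Decimal
from typing import Any


def position_notional_usd(pos: dict[str, Any]) -> Decimal:
    # Assumed rule, mirrored from the assertions above.
    if pos.get("notional_usd") is not None:
        return abs(Decimal(str(pos["notional_usd"])))
    size = Decimal(str(pos.get("size", 0)))
    mark = Decimal(str(pos.get("mark_price", 0)))
    return abs(size * mark)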
+170 -56
@@ -1,25 +1,27 @@
"""Tests for TelegramClient (notify-only mode).""" """Tests for in-process TelegramClient (Bot API, notify-only)."""
from __future__ import annotations from __future__ import annotations
import json import json
from decimal import Decimal from decimal import Decimal
import httpx
import pytest import pytest
from pytest_httpx import HTTPXMock from pytest_httpx import HTTPXMock
from cerbero_bite.clients._base import HttpToolClient from cerbero_bite.clients.telegram import (
from cerbero_bite.clients.telegram import TelegramClient TelegramClient,
TelegramError,
load_telegram_credentials,
)
SEND_URL = "https://api.telegram.org/botTOK/sendMessage"
def _client() -> TelegramClient: def _client(**kw) -> TelegramClient:
http = HttpToolClient( defaults = {"bot_token": "TOK", "chat_id": "42"}
service="telegram", defaults.update(kw)
base_url="http://mcp-telegram:9017", return TelegramClient(**defaults)
token="t",
retry_max=1,
)
return TelegramClient(http)
def _request_body(httpx_mock: HTTPXMock) -> dict: def _request_body(httpx_mock: HTTPXMock) -> dict:
@@ -28,34 +30,66 @@ def _request_body(httpx_mock: HTTPXMock) -> dict:
     return json.loads(request.read())
 
 
+# ---------------------------------------------------------------------------
+# enabled / disabled
+# ---------------------------------------------------------------------------
+
+
+def test_enabled_when_both_token_and_chat_id_present() -> None:
+    assert _client().enabled is True
+
+
+def test_disabled_when_token_missing() -> None:
+    c = TelegramClient(bot_token=None, chat_id="42")
+    assert c.enabled is False
+
+
+def test_disabled_when_chat_id_missing() -> None:
+    c = TelegramClient(bot_token="TOK", chat_id=None)
+    assert c.enabled is False
+
+
+def test_disabled_when_token_blank() -> None:
+    c = TelegramClient(bot_token=" ", chat_id="42")
+    assert c.enabled is False
+
+
 @pytest.mark.asyncio
-async def test_notify_sends_message_with_priority(httpx_mock: HTTPXMock) -> None:
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify",
-        json={"ok": True},
-    )
+async def test_disabled_notify_is_noop(httpx_mock: HTTPXMock) -> None:
+    c = TelegramClient(bot_token=None, chat_id=None)
+    await c.notify("hello")
+    assert httpx_mock.get_requests() == []
+
+
+# ---------------------------------------------------------------------------
+# notify formatting
+# ---------------------------------------------------------------------------
+
+
+@pytest.mark.asyncio
+async def test_notify_sends_with_priority_and_tag(httpx_mock: HTTPXMock) -> None:
+    httpx_mock.add_response(url=SEND_URL, json={"ok": True, "result": {}})
     await _client().notify("hello", priority="high", tag="entry")
     body = _request_body(httpx_mock)
-    assert body == {"message": "hello", "priority": "high", "tag": "entry"}
+    assert body["chat_id"] == "42"
+    assert body["parse_mode"] == "HTML"
+    assert body["text"] == "[HIGH][entry] hello"
+    assert body["disable_web_page_preview"] is True
 
 
 @pytest.mark.asyncio
 async def test_notify_default_priority_normal(httpx_mock: HTTPXMock) -> None:
-    httpx_mock.add_response(json={"ok": True})
+    httpx_mock.add_response(url=SEND_URL, json={"ok": True})
     await _client().notify("plain")
     body = _request_body(httpx_mock)
-    assert body["priority"] == "normal"
-    assert "tag" not in body
+    assert body["text"] == "[NORMAL] plain"
 
 
 @pytest.mark.asyncio
-async def test_notify_position_opened_serialises_decimals(
+async def test_notify_position_opened_formats_decimals(
     httpx_mock: HTTPXMock,
 ) -> None:
-    httpx_mock.add_response(
-        url="http://mcp-telegram:9017/tools/notify_position_opened",
-        json={"ok": True},
-    )
+    httpx_mock.add_response(url=SEND_URL, json={"ok": True})
     await _client().notify_position_opened(
         instrument="ETH-15MAY26-2475-P",
         side="SELL",
@@ -64,59 +98,139 @@ async def test_notify_position_opened_serialises_decimals(
greeks={"delta": Decimal("-0.04"), "vega": Decimal("0.20")}, greeks={"delta": Decimal("-0.04"), "vega": Decimal("0.20")},
expected_pnl_usd=Decimal("45.00"), expected_pnl_usd=Decimal("45.00"),
) )
body = _request_body(httpx_mock) text = _request_body(httpx_mock)["text"]
assert body["instrument"] == "ETH-15MAY26-2475-P" assert "POSITION OPENED" in text
assert body["greeks"] == {"delta": -0.04, "vega": 0.20} assert "ETH-15MAY26-2475-P" in text
assert body["expected_pnl"] == 45.0 assert "SELL" in text and "size: 2" in text and "bull_put" in text
assert body["size"] == 2.0 assert "delta=-0.0400" in text and "vega=+0.2000" in text
assert "$+45.00" in text
@pytest.mark.asyncio
async def test_notify_position_opened_without_greeks(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_position_opened(
instrument="BTC-PERPETUAL", side="BUY", size=1, strategy="hedge"
)
text = _request_body(httpx_mock)["text"]
assert "greeks" not in text
assert "expected pnl" not in text
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_notify_position_closed(httpx_mock: HTTPXMock) -> None: async def test_notify_position_closed(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={"ok": True}) httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_position_closed( await _client().notify_position_closed(
instrument="ETH-15MAY26-2475-P_2350-P", instrument="ETH-15MAY26-2475-P_2350-P",
realized_pnl_usd=Decimal("32.50"), realized_pnl_usd=Decimal("32.50"),
reason="CLOSE_PROFIT", reason="CLOSE_PROFIT",
) )
body = _request_body(httpx_mock) text = _request_body(httpx_mock)["text"]
assert body == { assert "POSITION CLOSED" in text
"instrument": "ETH-15MAY26-2475-P_2350-P", assert "ETH-15MAY26-2475-P_2350-P" in text
"realized_pnl": 32.5, assert "$+32.50" in text
"reason": "CLOSE_PROFIT", assert "CLOSE_PROFIT" in text
}
@pytest.mark.asyncio
async def test_notify_position_closed_negative_pnl(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_position_closed(
instrument="X", realized_pnl_usd=Decimal("-12.5"), reason="STOP"
)
text = _request_body(httpx_mock)["text"]
assert "$-12.50" in text
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_notify_alert(httpx_mock: HTTPXMock) -> None: async def test_notify_alert(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={"ok": True}) httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_alert( await _client().notify_alert(
source="kill_switch", message="armed manually", priority="critical" source="kill_switch", message="armed manually", priority="critical"
) )
body = _request_body(httpx_mock) text = _request_body(httpx_mock)["text"]
assert body == { assert "ALERT [CRITICAL]" in text
"source": "kill_switch", assert "kill_switch" in text and "armed manually" in text
"message": "armed manually",
"priority": "critical",
}
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_notify_system_error(httpx_mock: HTTPXMock) -> None: async def test_notify_system_error(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(json={"ok": True}) httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_system_error( await _client().notify_system_error(
message="deribit feed anomaly", message="deribit feed anomaly", component="clients.deribit"
component="clients.deribit",
) )
body = _request_body(httpx_mock) text = _request_body(httpx_mock)["text"]
assert body["message"] == "deribit feed anomaly" assert "SYSTEM ERROR [CRITICAL]" in text
assert body["component"] == "clients.deribit" assert "deribit feed anomaly" in text
assert body["priority"] == "critical" assert "clients.deribit" in text
def test_telegram_client_rejects_wrong_service() -> None: @pytest.mark.asyncio
bad = HttpToolClient( async def test_notify_system_error_without_component(httpx_mock: HTTPXMock) -> None:
service="macro", base_url="http://x:1", token="t", retry_max=1 httpx_mock.add_response(url=SEND_URL, json={"ok": True})
await _client().notify_system_error(message="boom")
text = _request_body(httpx_mock)["text"]
assert "component" not in text
# ---------------------------------------------------------------------------
# error paths
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_http_non_200_raises(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url=SEND_URL, status_code=500, text="upstream")
with pytest.raises(TelegramError, match="HTTP 500"):
await _client().notify("x")
@pytest.mark.asyncio
async def test_api_ok_false_raises(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(
url=SEND_URL, json={"ok": False, "description": "chat not found"}
) )
with pytest.raises(ValueError, match="requires service 'telegram'"): with pytest.raises(TelegramError, match="chat not found"):
TelegramClient(bad) await _client().notify("x")
# ---------------------------------------------------------------------------
# shared httpx client
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_uses_shared_http_client(httpx_mock: HTTPXMock) -> None:
httpx_mock.add_response(url=SEND_URL, json={"ok": True})
shared = httpx.AsyncClient()
try:
c = _client(http_client=shared)
await c.notify("x")
finally:
await shared.aclose()
assert len(httpx_mock.get_requests()) == 1
# ---------------------------------------------------------------------------
# env-var loader
# ---------------------------------------------------------------------------
def test_load_credentials_returns_none_when_unset() -> None:
assert load_telegram_credentials(env={}) == (None, None)
def test_load_credentials_strips_whitespace() -> None:
env = {
"CERBERO_BITE_TELEGRAM_BOT_TOKEN": " abc ",
"CERBERO_BITE_TELEGRAM_CHAT_ID": " -100 ",
}
assert load_telegram_credentials(env=env) == ("abc", "-100")
def test_load_credentials_treats_empty_as_none() -> None:
env = {
"CERBERO_BITE_TELEGRAM_BOT_TOKEN": "",
"CERBERO_BITE_TELEGRAM_CHAT_ID": " ",
}
assert load_telegram_credentials(env=env) == (None, None)
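The text assertions above imply a formatter roughly like the sketch below (a hypothetical reduction inferred from the tests; the production TelegramClient builds richer HTML messages):

def format_notify(message: str, priority: str = "normal", tag: str | None = None) -> str:
    # Reproduces "[HIGH][entry] hello" and "[NORMAL] plain" from the tests;
    # an assumed reduction, not the real formatter.
    prefix = f"[{priority.upper()}]"
    if tag:
        prefix += f"[{tag}]"
    return f"{prefix} {message}"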
+99
@@ -0,0 +1,99 @@
"""Tests for the GUI live-balances fetcher (soft-error handling)."""
from __future__ import annotations
from decimal import Decimal
from typing import Any
import pytest
from cerbero_bite.clients.deribit import DeribitClient
from cerbero_bite.gui.live_data import _fetch_deribit_currency
class _FakeDeribit:
def __init__(self, payload: dict[str, Any] | Exception) -> None:
self._payload = payload
async def get_account_summary(self, currency: str) -> dict[str, Any]:
del currency # not used by the fake; kept for signature parity
if isinstance(self._payload, Exception):
raise self._payload
return self._payload
@pytest.mark.asyncio
async def test_soft_error_payload_becomes_row_error() -> None:
"""MCP V2 returns 200 + ``error`` field when upstream auth fails."""
fake = _FakeDeribit(
{
"equity": 0,
"balance": 0,
"available_funds": 0,
"unrealized_pnl": 0,
"error": "Deribit auth failed (code=13004): invalid_credentials",
}
)
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.exchange == "deribit"
assert row.currency == "USDC"
assert row.equity is None
assert row.available is None
assert row.unrealized_pnl is None
assert row.error is not None
assert "invalid_credentials" in row.error
@pytest.mark.asyncio
async def test_clean_payload_populates_balance_fields() -> None:
fake = _FakeDeribit(
{
"equity": "12.5",
"available_funds": "10.0",
"unrealized_pnl": "-0.25",
}
)
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.error is None
assert row.equity == Decimal("12.5")
assert row.available == Decimal("10.0")
assert row.unrealized_pnl == Decimal("-0.25")
@pytest.mark.asyncio
async def test_exception_becomes_row_error() -> None:
fake = _FakeDeribit(RuntimeError("boom"))
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.equity is None
assert row.error is not None
assert "RuntimeError" in row.error
assert "boom" in row.error
@pytest.mark.asyncio
async def test_blank_error_field_is_ignored() -> None:
"""An ``error`` field that is empty/None must not trigger the soft-error path."""
fake = _FakeDeribit(
{"equity": "1.0", "available_funds": "1.0", "unrealized_pnl": "0.0", "error": None}
)
row = await _fetch_deribit_currency(
deribit=fake, # type: ignore[arg-type]
currency="USDC",
)
assert row.error is None
assert row.equity == Decimal("1.0")
# Sanity check: the production class exposes the method that ``_FakeDeribit``
# stands in for, so the fake stays drop-in compatible.
def test_fake_matches_production_signature() -> None:
assert hasattr(DeribitClient, "get_account_summary")
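A compressed sketch of the soft-error handling these tests describe; BalanceRow's fields and the helper's exact shape are assumptions based on the assertions, not the production code:

from dataclasses import dataclass
from decimal import Decimal


@dataclass
class BalanceRowSketch:
    exchange: str
    currency: str
    equity: Decimal | None = None
    available: Decimal | None = None
    unrealized_pnl: Decimal | None = None
    error: str | None = None


async def fetch_deribit_currency_sketch(deribit, currency: str) -> BalanceRowSketch:
    try:
        payload = await deribit.get_account_summary(currency)
    except Exception as exc:  # surface the failure instead of crashing the GUI
        return BalanceRowSketch(
            "deribit", currency, error=f"{type(exc).__name__}: {exc}"
        )
    soft = payload.get("error")
    if soft:  # MCP V2 soft error: HTTP 200 with a non-blank "error" field
        return BalanceRowSketch("deribit", currency, error=str(soft))
    return BalanceRowSketch(
        "deribit",
        currency,
        equity=Decimal(str(payload["equity"])),
        available=Decimal(str(payload["available_funds"])),
        unrealized_pnl=Decimal(str(payload["unrealized_pnl"])),
    )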
+205
@@ -0,0 +1,205 @@
"""Tests for runtime.manual_actions_consumer."""
from __future__ import annotations
import json
from datetime import UTC, datetime
from pathlib import Path
from unittest.mock import MagicMock
import pytest
from cerbero_bite.runtime.manual_actions_consumer import consume_manual_actions
from cerbero_bite.safety.audit_log import AuditLog
from cerbero_bite.safety.kill_switch import KillSwitch, KillSwitchError
from cerbero_bite.state import Repository, connect, run_migrations, transaction
from cerbero_bite.state.models import ManualAction
def _now() -> datetime:
return datetime(2026, 4, 30, 12, 0, tzinfo=UTC)
def _ctx(tmp_path: Path):
db_path = tmp_path / "state.sqlite"
audit_path = tmp_path / "audit.log"
repo = Repository()
conn = connect(db_path)
run_migrations(conn)
with transaction(conn):
repo.init_system_state(conn, config_version="1.0.0", now=_now())
conn.close()
audit = AuditLog(audit_path)
ks = KillSwitch(
connection_factory=lambda: connect(db_path),
repository=repo,
audit_log=audit,
clock=_now,
)
ctx = MagicMock()
ctx.db_path = db_path
ctx.repository = repo
ctx.kill_switch = ks
ctx.audit_log = audit
return ctx
def _enqueue(ctx, kind: str, payload: dict[str, object]) -> int:
conn = connect(ctx.db_path)
try:
with transaction(conn):
return ctx.repository.enqueue_manual_action(
conn,
ManualAction(
kind=kind, # type: ignore[arg-type]
payload_json=json.dumps(payload),
created_at=_now(),
),
)
finally:
conn.close()
def _fetch_action(ctx, action_id: int):
conn = connect(ctx.db_path)
try:
row = conn.execute(
"SELECT consumed_at, consumed_by, result FROM manual_actions WHERE id = ?",
(action_id,),
).fetchone()
finally:
conn.close()
return row
@pytest.mark.asyncio
async def test_arm_kill_arms_kill_switch(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
aid = _enqueue(ctx, "arm_kill", {"reason": "GUI typed yes"})
assert ctx.kill_switch.is_armed() is False
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
assert ctx.kill_switch.is_armed() is True
row = _fetch_action(ctx, aid)
assert row["consumed_by"] == "engine"
assert row["result"] == "ok"
assert row["consumed_at"] is not None
@pytest.mark.asyncio
async def test_disarm_kill_disarms_kill_switch(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
ctx.kill_switch.arm(reason="prior", source="manual")
assert ctx.kill_switch.is_armed() is True
aid = _enqueue(ctx, "disarm_kill", {"reason": "operator override"})
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
assert ctx.kill_switch.is_armed() is False
row = _fetch_action(ctx, aid)
assert row["result"] == "ok"
@pytest.mark.asyncio
async def test_consumer_drains_queue(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
_enqueue(ctx, "arm_kill", {"reason": "first"})
_enqueue(ctx, "disarm_kill", {"reason": "second"})
_enqueue(ctx, "arm_kill", {"reason": "third"})
n = await consume_manual_actions(ctx, now=_now())
assert n == 3
assert ctx.kill_switch.is_armed() is True
@pytest.mark.asyncio
async def test_unsupported_kind_marked_not_supported(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
aid = _enqueue(ctx, "force_close", {"proposal_id": "abc"})
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
row = _fetch_action(ctx, aid)
assert row["result"] == "not_supported"
@pytest.mark.asyncio
async def test_missing_payload_uses_default_reason(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
_enqueue(ctx, "arm_kill", {})
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
assert ctx.kill_switch.is_armed() is True
@pytest.mark.asyncio
async def test_kill_switch_error_caught_and_recorded(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
# Replace the kill switch with one whose arm raises.
bad_ks = MagicMock()
bad_ks.arm.side_effect = KillSwitchError("simulated")
bad_ks.is_armed.return_value = False
ctx.kill_switch = bad_ks
aid = _enqueue(ctx, "arm_kill", {"reason": "x"})
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
row = _fetch_action(ctx, aid)
assert "KillSwitchError" in (row["result"] or "")
@pytest.mark.asyncio
async def test_empty_queue_returns_zero(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
n = await consume_manual_actions(ctx, now=_now())
assert n == 0
@pytest.mark.asyncio
async def test_run_cycle_dispatches_to_runner(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
calls: list[str] = []
async def _entry() -> None:
calls.append("entry")
aid = _enqueue(ctx, "run_cycle", {"cycle": "entry"})
n = await consume_manual_actions(
ctx, cycle_runners={"entry": _entry}, now=_now()
)
assert n == 1
assert calls == ["entry"]
row = _fetch_action(ctx, aid)
assert row["result"] == "ok: ran entry"
@pytest.mark.asyncio
async def test_run_cycle_unknown_marked_error(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
async def _entry() -> None:
raise AssertionError("should not run")
aid = _enqueue(ctx, "run_cycle", {"cycle": "monitor"})
n = await consume_manual_actions(
ctx, cycle_runners={"entry": _entry}, now=_now()
)
assert n == 1
row = _fetch_action(ctx, aid)
assert "unknown cycle" in (row["result"] or "")
@pytest.mark.asyncio
async def test_run_cycle_without_runners_marks_not_supported(
tmp_path: Path,
) -> None:
ctx = _ctx(tmp_path)
aid = _enqueue(ctx, "run_cycle", {"cycle": "entry"})
n = await consume_manual_actions(ctx, now=_now())
assert n == 1
row = _fetch_action(ctx, aid)
assert row["result"] == "not_supported"
+166
@@ -0,0 +1,166 @@
"""Tests for runtime.market_snapshot_cycle (best-effort collector)."""
from __future__ import annotations
import json
from datetime import UTC, datetime
from decimal import Decimal
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock
import pytest
from cerbero_bite.clients._exceptions import McpDataAnomalyError
from cerbero_bite.clients.deribit import DealerGammaSnapshot
from cerbero_bite.clients.sentiment import LiquidationHeatmap
from cerbero_bite.config import golden_config
from cerbero_bite.runtime.market_snapshot_cycle import collect_market_snapshot
from cerbero_bite.state import Repository, connect, run_migrations, transaction
def _now() -> datetime:
return datetime(2026, 4, 30, 12, 0, tzinfo=UTC)
def _ctx(tmp_path: Path) -> MagicMock:
db_path = tmp_path / "state.sqlite"
repo = Repository()
conn = connect(db_path)
run_migrations(conn)
with transaction(conn):
repo.init_system_state(conn, config_version="1.0.0", now=_now())
conn.close()
ctx = MagicMock()
ctx.db_path = db_path
ctx.repository = repo
ctx.cfg = golden_config()
# Default: every feed succeeds with sane mock values.
ctx.deribit = MagicMock()
ctx.deribit.spot_perp_price = AsyncMock(return_value=Decimal("3000"))
ctx.deribit.latest_dvol = AsyncMock(return_value=Decimal("55"))
ctx.deribit.realized_vol = AsyncMock(
return_value={
"rv_14d": Decimal("28"),
"rv_30d": Decimal("35"),
"iv_minus_rv_30d": Decimal("20"),
}
)
ctx.deribit.dealer_gamma_profile = AsyncMock(
return_value=DealerGammaSnapshot(
spot_price=Decimal("3000"),
total_net_dealer_gamma=Decimal("-66000000"),
gamma_flip_level=Decimal("2900"),
strikes_analyzed=42,
)
)
ctx.hyperliquid = MagicMock()
ctx.hyperliquid.funding_rate_annualized = AsyncMock(
return_value=Decimal("0.45")
)
ctx.sentiment = MagicMock()
ctx.sentiment.funding_cross_median_annualized = AsyncMock(
return_value=Decimal("0.30")
)
ctx.sentiment.liquidation_heatmap = AsyncMock(
return_value=LiquidationHeatmap(
asset="ETH",
avg_funding_rate=Decimal("0.0003"),
oi_delta_pct_4h=Decimal("1.2"),
oi_delta_pct_24h=None,
long_squeeze_risk="low",
short_squeeze_risk="low",
)
)
ctx.macro = MagicMock()
ctx.macro.next_high_severity_within = AsyncMock(return_value=3)
return ctx
def _read_snapshots(ctx: MagicMock, asset: str) -> list[dict]:
import sqlite3
conn = connect(ctx.db_path)
conn.row_factory = sqlite3.Row
try:
rows = conn.execute(
"SELECT * FROM market_snapshots WHERE asset = ? ORDER BY timestamp",
(asset,),
).fetchall()
finally:
conn.close()
return [dict(r) for r in rows]
@pytest.mark.asyncio
async def test_happy_path_persists_one_row_per_asset(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
n = await collect_market_snapshot(ctx, assets=("ETH", "BTC"), now=_now())
assert n == 2
eth_rows = _read_snapshots(ctx, "ETH")
btc_rows = _read_snapshots(ctx, "BTC")
assert len(eth_rows) == 1
assert len(btc_rows) == 1
eth = eth_rows[0]
assert eth["fetch_ok"] == 1
assert eth["fetch_errors_json"] is None
assert Decimal(str(eth["spot"])) == Decimal("3000")
assert Decimal(str(eth["dealer_net_gamma"])) == Decimal("-66000000")
assert eth["macro_days_to_event"] == 3
@pytest.mark.asyncio
async def test_failure_in_one_metric_keeps_row_with_error(
tmp_path: Path,
) -> None:
ctx = _ctx(tmp_path)
ctx.deribit.dealer_gamma_profile = AsyncMock(
side_effect=McpDataAnomalyError(
"boom", service="deribit", tool="get_dealer_gamma_profile"
)
)
n = await collect_market_snapshot(ctx, assets=("ETH",), now=_now())
assert n == 1
rows = _read_snapshots(ctx, "ETH")
assert len(rows) == 1
assert rows[0]["fetch_ok"] == 0
errors = json.loads(rows[0]["fetch_errors_json"])
assert "dealer_gamma" in errors
assert rows[0]["dealer_net_gamma"] is None
# Other metrics still populated.
assert Decimal(str(rows[0]["spot"])) == Decimal("3000")
@pytest.mark.asyncio
async def test_btc_uses_btc_in_calls(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
await collect_market_snapshot(ctx, assets=("BTC",), now=_now())
ctx.deribit.spot_perp_price.assert_awaited_with("BTC")
ctx.hyperliquid.funding_rate_annualized.assert_awaited_with("BTC")
ctx.sentiment.liquidation_heatmap.assert_awaited_with("BTC")
@pytest.mark.asyncio
async def test_macro_failure_only_nulls_macro(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
ctx.macro.next_high_severity_within = AsyncMock(
side_effect=RuntimeError("calendar down")
)
await collect_market_snapshot(ctx, assets=("ETH",), now=_now())
rows = _read_snapshots(ctx, "ETH")
assert rows[0]["macro_days_to_event"] is None
assert rows[0]["fetch_ok"] == 0
errors = json.loads(rows[0]["fetch_errors_json"])
assert "macro" in errors
@pytest.mark.asyncio
async def test_returns_zero_for_empty_assets(tmp_path: Path) -> None:
ctx = _ctx(tmp_path)
n = await collect_market_snapshot(ctx, assets=(), now=_now())
assert n == 0
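These tests fix the collector's failure model: each metric is fetched independently, and a failure records an entry in fetch_errors_json while nulling only its own column. A minimal per-metric guard, assuming that shape (the helper name is hypothetical):

from collections.abc import Awaitable
from typing import Any


async def best_effort(name: str, coro: Awaitable[Any], errors: dict[str, str]) -> Any:
    # One guard per metric: a failure is recorded under its metric name and
    # yields None, leaving the other columns untouched.
    try:
        return await coro
    except Exception as exc:
        errors[name] = f"{type(exc).__name__}: {exc}"
        return None

A row would then be persisted with fetch_ok = 0 whenever errors is non-empty, mirroring test_failure_in_one_metric_keeps_row_with_error and test_macro_failure_only_nulls_macro.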
+59 -20
@@ -1,14 +1,14 @@
"""Tests for the MCP endpoint and token resolver.""" """Tests for the MCP endpoint, token and bot-tag resolver."""
from __future__ import annotations from __future__ import annotations
from pathlib import Path
import pytest import pytest
from cerbero_bite.config.mcp_endpoints import ( from cerbero_bite.config.mcp_endpoints import (
DEFAULT_BOT_TAG,
DEFAULT_ENDPOINTS, DEFAULT_ENDPOINTS,
MCP_SERVICES, MCP_SERVICES,
load_bot_tag,
load_endpoints, load_endpoints,
load_token, load_token,
) )
@@ -16,7 +16,7 @@ from cerbero_bite.config.mcp_endpoints import (
 def test_defaults_match_known_docker_dns() -> None:
     assert DEFAULT_ENDPOINTS["deribit"] == "http://mcp-deribit:9011"
-    assert DEFAULT_ENDPOINTS["telegram"] == "http://mcp-telegram:9017"
+    assert DEFAULT_ENDPOINTS["sentiment"] == "http://mcp-sentiment:9014"
 
 
 def test_load_endpoints_uses_defaults_when_env_empty() -> None:
@@ -46,31 +46,70 @@ def test_for_service_unknown_raises_key_error() -> None:
endpoints.for_service("nope") endpoints.for_service("nope")
def test_load_token_uses_explicit_path(tmp_path: Path) -> None: def test_load_token_uses_explicit_value() -> None:
target = tmp_path / "core.token" assert load_token(value="abcdef") == "abcdef"
target.write_text("abcdef\n", encoding="utf-8")
assert load_token(path=target) == "abcdef"
def test_load_token_uses_env_var(tmp_path: Path) -> None: def test_load_token_strips_whitespace_in_explicit_value() -> None:
target = tmp_path / "core.token" assert load_token(value=" abcdef\n") == "abcdef"
target.write_text("xyz", encoding="utf-8")
token = load_token(env={"CERBERO_BITE_CORE_TOKEN_FILE": str(target)})
def test_load_token_uses_env_var() -> None:
token = load_token(env={"CERBERO_BITE_MCP_TOKEN": "xyz"})
assert token == "xyz" assert token == "xyz"
def test_load_token_raises_when_file_missing(tmp_path: Path) -> None: def test_load_token_strips_whitespace_in_env_var() -> None:
with pytest.raises(FileNotFoundError): token = load_token(env={"CERBERO_BITE_MCP_TOKEN": " xyz\n"})
load_token(path=tmp_path / "missing") assert token == "xyz"
def test_load_token_raises_when_file_empty(tmp_path: Path) -> None: def test_load_token_raises_when_missing() -> None:
target = tmp_path / "empty" with pytest.raises(ValueError, match="CERBERO_BITE_MCP_TOKEN"):
target.write_text("", encoding="utf-8") load_token(env={})
def test_load_token_raises_when_empty() -> None:
with pytest.raises(ValueError, match="CERBERO_BITE_MCP_TOKEN"):
load_token(env={"CERBERO_BITE_MCP_TOKEN": " "})
def test_load_token_raises_when_explicit_value_blank() -> None:
with pytest.raises(ValueError, match="empty"): with pytest.raises(ValueError, match="empty"):
load_token(path=target) load_token(value=" ")
def test_load_bot_tag_default_when_unset() -> None:
assert load_bot_tag(env={}) == DEFAULT_BOT_TAG
def test_load_bot_tag_explicit_value_overrides_env() -> None:
tag = load_bot_tag(value="BOT__CUSTOM", env={"CERBERO_BITE_MCP_BOT_TAG": "x"})
assert tag == "BOT__CUSTOM"
def test_load_bot_tag_uses_env_when_set() -> None:
tag = load_bot_tag(env={"CERBERO_BITE_MCP_BOT_TAG": "BOT__SHADOW"})
assert tag == "BOT__SHADOW"
def test_load_bot_tag_strips_whitespace() -> None:
tag = load_bot_tag(env={"CERBERO_BITE_MCP_BOT_TAG": " BOT__X\n"})
assert tag == "BOT__X"
def test_load_bot_tag_falls_back_to_default_when_blank_env() -> None:
tag = load_bot_tag(env={"CERBERO_BITE_MCP_BOT_TAG": " "})
assert tag == DEFAULT_BOT_TAG
def test_load_bot_tag_rejects_too_long() -> None:
with pytest.raises(ValueError, match="exceeds 64"):
load_bot_tag(value="x" * 65)
def test_mcp_services_table_is_complete() -> None: def test_mcp_services_table_is_complete() -> None:
expected = {"deribit", "hyperliquid", "macro", "sentiment", "telegram", "portfolio"} # Telegram and Portfolio are now in-process and must NOT be listed
# as shared MCP services.
expected = {"deribit", "hyperliquid", "macro", "sentiment"}
assert set(MCP_SERVICES) == expected assert set(MCP_SERVICES) == expected
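A sketch of the resolution order these tests imply for the bot tag: explicit value, then env var, then default, with whitespace stripped and a 64-character cap. A hypothetical reduction, not the production load_bot_tag:

import os

DEFAULT_BOT_TAG = "BOT__CERBERO_BITE"


def load_bot_tag_sketch(value: str | None = None, env: dict[str, str] | None = None) -> str:
    env = dict(os.environ) if env is None else env
    raw = value if value is not None else env.get("CERBERO_BITE_MCP_BOT_TAG", "")
    tag = raw.strip() or DEFAULT_BOT_TAG  # blank falls back to the default
    if len(tag) > 64:
        raise ValueError(f"bot tag exceeds 64 characters ({len(tag)})")
    return tag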
+6 -2
@@ -5,6 +5,7 @@ from __future__ import annotations
 from datetime import UTC, datetime
 from pathlib import Path
 
+from cerbero_bite.clients.portfolio import PortfolioClient
 from cerbero_bite.config import golden_config
 from cerbero_bite.config.mcp_endpoints import load_endpoints
 from cerbero_bite.runtime import build_runtime
@@ -51,5 +52,8 @@ def test_build_runtime_clients_pinned_to_endpoints(tmp_path: Path) -> None:
assert ctx.macro.SERVICE == "macro" assert ctx.macro.SERVICE == "macro"
assert ctx.sentiment.SERVICE == "sentiment" assert ctx.sentiment.SERVICE == "sentiment"
assert ctx.hyperliquid.SERVICE == "hyperliquid" assert ctx.hyperliquid.SERVICE == "hyperliquid"
assert ctx.portfolio.SERVICE == "portfolio" # Portfolio is now an in-process aggregator over deribit/hyperliquid/macro;
assert ctx.telegram.SERVICE == "telegram" # it has no SERVICE attribute. Telegram is also in-process and disabled
# when env vars are unset.
assert isinstance(ctx.portfolio, PortfolioClient)
assert ctx.telegram.enabled is False
+63
@@ -0,0 +1,63 @@
"""Tests for the runtime flag loader."""
from __future__ import annotations
import pytest
from cerbero_bite.config.runtime_flags import (
DATA_ANALYSIS_ENV,
STRATEGY_ENV,
RuntimeFlags,
load_runtime_flags,
)
def test_default_profile_is_analysis_only() -> None:
flags = load_runtime_flags(env={})
assert flags == RuntimeFlags(
data_analysis_enabled=True, strategy_enabled=False
)
def test_strategy_can_be_explicitly_enabled() -> None:
flags = load_runtime_flags(env={STRATEGY_ENV: "true"})
assert flags.strategy_enabled is True
assert flags.data_analysis_enabled is True
def test_data_analysis_can_be_disabled() -> None:
flags = load_runtime_flags(env={DATA_ANALYSIS_ENV: "false"})
assert flags.data_analysis_enabled is False
assert flags.strategy_enabled is False
@pytest.mark.parametrize(
"raw,expected",
[
("1", True),
("0", False),
("yes", True),
("no", False),
("on", True),
("OFF", False),
("ENABLED", True),
("Disabled", False),
("True", True),
("False", False),
(" true ", True),
],
)
def test_parses_common_truthy_falsy_tokens(raw: str, expected: bool) -> None:
flags = load_runtime_flags(env={STRATEGY_ENV: raw})
assert flags.strategy_enabled is expected
def test_blank_value_falls_back_to_default() -> None:
flags = load_runtime_flags(env={DATA_ANALYSIS_ENV: " ", STRATEGY_ENV: ""})
assert flags.data_analysis_enabled is True
assert flags.strategy_enabled is False
def test_unknown_token_raises() -> None:
with pytest.raises(ValueError, match=DATA_ANALYSIS_ENV):
load_runtime_flags(env={DATA_ANALYSIS_ENV: "maybe"})
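For orientation, a sketch of the parser behaviour this suite pins down; the token sets and defaults mirror the assertions, while the structure of the real load_runtime_flags may differ:

import os
from dataclasses import dataclass

_TRUTHY = {"1", "true", "yes", "on", "enabled"}
_FALSY = {"0", "false", "no", "off", "disabled"}


@dataclass(frozen=True)
class FlagsSketch:
    data_analysis_enabled: bool
    strategy_enabled: bool


def _parse_bool(env: dict[str, str], name: str, default: bool) -> bool:
    raw = env.get(name, "").strip().lower()
    if not raw:
        return default  # unset or blank keeps the default profile
    if raw in _TRUTHY:
        return True
    if raw in _FALSY:
        return False
    raise ValueError(f"{name}: cannot parse {raw!r} as a boolean")


def load_flags_sketch(env: dict[str, str] | None = None) -> FlagsSketch:
    env = dict(os.environ) if env is None else env
    return FlagsSketch(
        data_analysis_enabled=_parse_bool(env, "CERBERO_BITE_ENABLE_DATA_ANALYSIS", True),
        strategy_enabled=_parse_bool(env, "CERBERO_BITE_ENABLE_STRATEGY", False),
    )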
Generated
+2
@@ -111,6 +111,7 @@ dependencies = [
{ name = "pydantic" }, { name = "pydantic" },
{ name = "pydantic-settings" }, { name = "pydantic-settings" },
{ name = "python-dateutil" }, { name = "python-dateutil" },
{ name = "python-dotenv" },
{ name = "pyyaml" }, { name = "pyyaml" },
{ name = "rich" }, { name = "rich" },
{ name = "sqlalchemy" }, { name = "sqlalchemy" },
@@ -161,6 +162,7 @@ requires-dist = [
{ name = "pydantic", specifier = ">=2.9" }, { name = "pydantic", specifier = ">=2.9" },
{ name = "pydantic-settings", specifier = ">=2.5" }, { name = "pydantic-settings", specifier = ">=2.5" },
{ name = "python-dateutil", specifier = ">=2.9" }, { name = "python-dateutil", specifier = ">=2.9" },
{ name = "python-dotenv", specifier = ">=1.2.2" },
{ name = "pyyaml", specifier = ">=6.0" }, { name = "pyyaml", specifier = ">=6.0" },
{ name = "rich", specifier = ">=13.9" }, { name = "rich", specifier = ">=13.9" },
{ name = "scipy", marker = "extra == 'backtest'", specifier = ">=1.14" }, { name = "scipy", marker = "extra == 'backtest'", specifier = ">=1.14" },