Phase 4: orchestrator + cycles auto-execute
runtime/ component that wires core + clients + state + safety into an autonomous notify-only engine: no manual confirmation; combo orders are placed directly when the rules pass. 311 tests pass, 94% total coverage, 90% on runtime/, mypy strict clean, ruff clean.

Modules:
- runtime/alert_manager.py: LOW/MEDIUM/HIGH/CRITICAL escalation tree → audit + Telegram + kill switch.
- runtime/dependencies.py: build_runtime() constructs a RuntimeContext with all MCP clients, the repository, audit log, kill switch, and alert manager.
- runtime/entry_cycle.py: weekly flow (parallel spot/dvol/funding/macro/holdings/equity snapshot → validate_entry → compute_bias → options_chain → select_strikes → liquidity_gate → sizing_engine → combo_builder.build → place_combo_order → notify_position_opened).
- runtime/monitor_cycle.py: 12h loop with dvol_history for the 4h return, exit_decision.evaluate, auto-execute close.
- runtime/health_check.py: parallel probe of MCP + SQLite + environment match; 3 consecutive strikes → HIGH kill switch.
- runtime/recovery.py: SQLite-vs-broker reconciliation at startup; mismatch → CRITICAL kill switch.
- runtime/scheduler.py: AsyncIOScheduler builder with cron jobs for entry (Mon 14:00), monitor (02:00/14:00), health (every 5 min).
- runtime/orchestrator.py: façade with boot() + run_entry/monitor/health + install_scheduler + run_forever, with env check against the strategy.

CLI:
- start: starts the blocking engine (asyncio.run + scheduler).
- dry-run --cycle entry|monitor|health: runs a single cycle for debugging/testing in production.
- stop: documents shutdown via SIGTERM to the container.

Documentation:
- docs/06-operational-flow.md rewritten for the notify-only auto-execute model (no manual confirmation, no memory, no brain-bridge).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
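The severity escalation described for runtime/alert_manager.py can be sketched as a simple lookup table. This is a minimal illustration, not the actual implementation: the `Severity` enum, `ESCALATION` table, and `actions_for` helper are hypothetical names, and the exact channel mapping per level is an assumption based on the "audit + Telegram + kill switch" description above.

```python
from enum import IntEnum


class Severity(IntEnum):
    """Alert levels, ordered so that comparisons reflect urgency."""

    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


# Hypothetical escalation table: which channels fire at each level.
# Assumed mapping; the real tree lives in runtime/alert_manager.py.
ESCALATION: dict[Severity, tuple[str, ...]] = {
    Severity.LOW: ("audit",),
    Severity.MEDIUM: ("audit", "telegram"),
    Severity.HIGH: ("audit", "telegram", "kill_switch"),
    Severity.CRITICAL: ("audit", "telegram", "kill_switch"),
}


def actions_for(severity: Severity) -> tuple[str, ...]:
    """Return the notification/safety actions to run for an alert."""
    return ESCALATION[severity]
```

With a table like this, the health check's "3 consecutive strikes" path would call `actions_for(Severity.HIGH)` and recovery's reconciliation mismatch `actions_for(Severity.CRITICAL)`, both of which include the kill switch.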
@@ -0,0 +1,55 @@
"""Tests for the runtime dependency container."""

from __future__ import annotations

from datetime import UTC, datetime
from pathlib import Path

from cerbero_bite.config import golden_config
from cerbero_bite.config.mcp_endpoints import load_endpoints
from cerbero_bite.runtime import build_runtime
from cerbero_bite.state import connect


def test_build_runtime_creates_state_and_audit_files(tmp_path: Path) -> None:
    db_path = tmp_path / "state.sqlite"
    audit_path = tmp_path / "audit.log"

    ctx = build_runtime(
        cfg=golden_config(),
        endpoints=load_endpoints(env={}),
        token="t",
        db_path=db_path,
        audit_path=audit_path,
        clock=lambda: datetime(2026, 4, 27, 14, 0, tzinfo=UTC),
    )

    assert db_path.exists()
    assert ctx.audit_log.path == audit_path
    # system_state singleton initialised
    conn = connect(db_path)
    try:
        state = ctx.repository.get_system_state(conn)
    finally:
        conn.close()
    assert state is not None
    assert state.config_version == ctx.cfg.config_version


def test_build_runtime_clients_pinned_to_endpoints(tmp_path: Path) -> None:
    ctx = build_runtime(
        cfg=golden_config(),
        endpoints=load_endpoints(
            env={"CERBERO_BITE_MCP_DERIBIT_URL": "http://localhost:9911"}
        ),
        token="t",
        db_path=tmp_path / "state.sqlite",
        audit_path=tmp_path / "audit.log",
    )
    # type checks: every client is the right concrete type
    assert ctx.deribit.SERVICE == "deribit"
    assert ctx.macro.SERVICE == "macro"
    assert ctx.sentiment.SERVICE == "sentiment"
    assert ctx.hyperliquid.SERVICE == "hyperliquid"
    assert ctx.portfolio.SERVICE == "portfolio"
    assert ctx.telegram.SERVICE == "telegram"