feat(ga): continuous fitness v1 with tanh(sharpe) + multiplicative drawdown penalty

Phase 1 v0 used `max(0, dsr - 0.5*max_dd)`, which brutally zeroed fitness
whenever max_dd > 2*dsr. Real run v4 ended with 55/55 strategies at fitness=0
(DSR ~0.001, max_dd > 0.5): zero selective pressure on the GA.

v1: base = 0.5*dsr + 0.5*0.5*(tanh(sharpe)+1) in [0,1], modulated by a
multiplicative penalty 1/(1+k*max_dd) in (0,1]. Hard kills (no-trade, HIGH
adversarial) are preserved. Fitness is always >0 for strategies with at least
1 trade, so the GA can prefer "less bad" over "catastrophic" even at negative
Sharpe.
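For context, a minimal standalone sketch of the v0 vs v1 behavior on the stuck
real-run case (function names here are illustrative, not the module's actual API):

```python
import math


def fitness_v0(dsr: float, max_dd: float) -> float:
    # v0: linear drawdown penalty, clamped to zero.
    return max(0.0, dsr - 0.5 * max_dd)


def fitness_v1(dsr: float, sharpe: float, max_dd: float, k: float = 1.0) -> float:
    # v1: continuous base in [0, 1], multiplicative drawdown penalty in (0, 1].
    sharpe_norm = 0.5 * (math.tanh(sharpe) + 1.0)  # maps R -> [0, 1]
    base = 0.5 * max(0.0, min(1.0, dsr)) + 0.5 * sharpe_norm
    return base / (1.0 + k * max_dd)


# Real-run-v4-style strategy: DSR ~0.001, max_dd > 0.5, negative Sharpe.
print(fitness_v0(0.001, 0.6))                       # 0.0 -> no selective pressure
print(fitness_v1(0.001, sharpe=-0.5, max_dd=0.6))   # > 0
print(fitness_v1(0.001, sharpe=-2.0, max_dd=0.6))   # > 0, but strictly smaller
```

Under v0 the whole population collapses to fitness 0; under v1 the two mediocre
strategies remain ordered, which is exactly the gradient the GA needs.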

Tests: 3 new (continuous mediocre, bounded, monotonic drawdown); the 4 existing
ones stay green. Suite 138 -> 141 passed. ruff + mypy strict clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-10 21:24:05 +02:00
parent d4fcb42fc5
commit d159075182
2 changed files with 88 additions and 15 deletions
+40 -13
@@ -1,17 +1,31 @@
"""Fitness function v0 della Phase 1.
"""Fitness function v1 della Phase 1.
Combina :class:`FalsificationReport` (metriche di robustezza) e
:class:`AdversarialReport` (findings euristici) in uno scalare ``>= 0`` che il
GA usa per selezione e ranking.
Logica deliberatamente coarse: DSR penalizzato dal max drawdown, con due
kill-switch hard (no-trade, finding HIGH adversarial) che azzerano la fitness.
La penalita' lineare sul drawdown e' un compromesso volutamente semplice;
versioni successive potranno usare Calmar o utility convessa.
Versione v1: rispetto alla v0 (DSR meno penalita' lineare di drawdown, clamp
a zero) la formula e' continua e quasi sempre strettamente positiva, in modo
da fornire un gradient anche su strategie mediocri o con Sharpe negativo.
Restano due kill-switch hard (no-trade, finding HIGH adversarial) che azzerano
la fitness.
Formula::
sharpe_norm = 0.5 * (tanh(sharpe) + 1.0) # in [0, 1]
base = dsr_weight * dsr + sharpe_weight * sharpe_norm
penalty = 1.0 / (1.0 + drawdown_penalty * max_drawdown)
fitness = max(0.0, base * penalty)
Con i default ``dsr_weight = sharpe_weight = 0.5`` la base e' in ``[0, 1]`` e
``penalty`` in ``(0, 1]``: fitness e' bounded in ``[0, 1]`` per input sani e
mai esattamente zero finche' Sharpe e' finito e ``max_dd`` finito.
"""
from __future__ import annotations
import math
from ..agents.adversarial import AdversarialReport, Severity
from ..agents.falsification import FalsificationReport
@@ -19,26 +33,39 @@ from ..agents.falsification import FalsificationReport
 def compute_fitness(
     falsification: FalsificationReport,
     adversarial: AdversarialReport,
-    drawdown_penalty: float = 0.5,
+    drawdown_penalty: float = 1.0,
+    dsr_weight: float = 0.5,
+    sharpe_weight: float = 0.5,
 ) -> float:
-    """Compute the scalar fitness of a strategy.
+    """Compute the scalar fitness of a strategy (v1, continuous).
 
     Args:
-        falsification: report with DSR, max_drawdown, n_trades.
+        falsification: report with DSR, Sharpe, max_drawdown, n_trades.
         adversarial: report with any heuristic findings.
-        drawdown_penalty: linear weight on the max drawdown (default 0.5).
+        drawdown_penalty: weight of the max drawdown in the denominator of
+            the multiplicative penalty (default 1.0). Higher values penalize
+            high-drawdown strategies more severely.
+        dsr_weight: weight of the DSR in the base (default 0.5).
+        sharpe_weight: weight of the normalized Sharpe in the base
+            (default 0.5).
 
     Returns:
-        Fitness ``>= 0``. Zero means the strategy should be discarded.
+        Fitness ``>= 0``. Zero means the strategy should be discarded
+        (no-trade or adversarial kill). Typical values for healthy
+        strategies: ``[0.05, 1.0]``.
 
     Logic:
         1. ``n_trades == 0`` → 0 (no evidence, cut immediately).
         2. At least one adversarial ``HIGH`` finding → 0 (kill).
-        3. Otherwise: ``dsr - drawdown_penalty * max_drawdown``, clamped to 0.
+        3. Otherwise combine DSR and ``tanh(sharpe)`` normalized into
+           ``[0, 1]``, modulated by a continuous drawdown penalty
+           ``1 / (1 + k * max_dd)``.
     """
     if falsification.n_trades == 0:
         return 0.0
     if any(f.severity == Severity.HIGH for f in adversarial.findings):
         return 0.0
-    raw = falsification.dsr - drawdown_penalty * falsification.max_drawdown
-    return max(0.0, float(raw))
+    dsr = max(0.0, min(1.0, float(falsification.dsr)))
+    sharpe_norm = 0.5 * (math.tanh(float(falsification.sharpe)) + 1.0)
+    base = dsr_weight * dsr + sharpe_weight * sharpe_norm
+    penalty = 1.0 / (1.0 + drawdown_penalty * float(falsification.max_drawdown))
+    return max(0.0, float(base * penalty))
+48 -2
@@ -1,13 +1,18 @@
+from itertools import pairwise
+
 from multi_swarm.agents.adversarial import AdversarialReport, Finding, Severity
 from multi_swarm.agents.falsification import FalsificationReport
 from multi_swarm.ga.fitness import compute_fitness
 
 
 def make_falsification(
-    dsr: float = 0.7, max_dd: float = 0.2, n_trades: int = 30
+    dsr: float = 0.7,
+    max_dd: float = 0.2,
+    n_trades: int = 30,
+    sharpe: float = 1.5,
 ) -> FalsificationReport:
     return FalsificationReport(
-        sharpe=1.5,
+        sharpe=sharpe,
         dsr=dsr,
         dsr_pvalue=0.05,
         max_drawdown=max_dd,
@@ -43,3 +48,44 @@ def test_fitness_zeroed_by_high_severity_finding() -> None:
         findings=[Finding(name="degenerate", severity=Severity.HIGH, detail="x")]
     )
     assert compute_fitness(f, a) == 0.0
+
+
+def test_fitness_continuous_signal_for_mediocre() -> None:
+    """Mediocre strategies (DSR ~0, negative Sharpe) still get fitness > 0,
+    and the less bad one is preferred."""
+    a = AdversarialReport()
+    less_bad = make_falsification(dsr=0.001, sharpe=-0.5, max_dd=0.3)
+    worse = make_falsification(dsr=0.001, sharpe=-2.0, max_dd=0.3)
+    f_less = compute_fitness(less_bad, a)
+    f_worse = compute_fitness(worse, a)
+    assert f_less > 0.0
+    assert f_worse > 0.0
+    assert f_less > f_worse
+
+
+def test_fitness_bounded() -> None:
+    """Fitness is bounded in [0, 2.0] for typical inputs."""
+    a = AdversarialReport()
+    cases = [
+        make_falsification(dsr=0.0, sharpe=-5.0, max_dd=0.0),
+        make_falsification(dsr=0.0, sharpe=0.0, max_dd=0.0),
+        make_falsification(dsr=0.5, sharpe=1.0, max_dd=0.2),
+        make_falsification(dsr=0.9, sharpe=2.0, max_dd=0.15),
+        make_falsification(dsr=1.0, sharpe=5.0, max_dd=0.0),
+        make_falsification(dsr=1.0, sharpe=10.0, max_dd=5.0),
+    ]
+    for f in cases:
+        v = compute_fitness(f, a)
+        assert 0.0 <= v <= 2.0, f"fitness {v} out of range for {f}"
+
+
+def test_fitness_normalizes_drawdown() -> None:
+    """With DSR and Sharpe fixed, fitness is monotonically decreasing in max_dd."""
+    a = AdversarialReport()
+    dds = [0.0, 0.1, 0.5, 1.0, 2.0, 5.0]
+    fitnesses = [
+        compute_fitness(make_falsification(dsr=0.5, sharpe=1.0, max_dd=dd), a)
+        for dd in dds
+    ]
+    for prev, curr in pairwise(fitnesses):
+        assert prev > curr, f"not monotonic: {fitnesses}"