Compare commits

..

7 Commits

Author SHA1 Message Date
Adriano 543ae0f643 merge: diagnostics panel UI 2026-05-05 10:41:26 +02:00
Adriano a12574f3c5 feat(web): match diagnostics panel (CC) with contextual hints
MatchResp now includes a diag dict (CC feature). UI rendering:

- New collapsible "🔍 Diagnostica" panel below the timings
- For each match it shows:
  * pipeline pruning (vars total → top_eval → top_pass → full_eval)
  * candidates (raw → pre_nms → final)
  * drop reasons (NCC, score, recall, bbox, NMS) with counters
  * the effective thresholds applied
  * active flags (polarity, soft, subpix-LM)

- With 0 matches → the panel opens automatically and shows a specific
  contextual hint:
  * "0 top candidates" → suggests ↓ min_score / top_thresh
  * "all dropped by NCC" → ↓ verify_threshold (filtro_fp)
  * "post-NCC score below threshold" → ↓ min_score
  * "low recall" → ↓ min_recall
  * "bbox out-of-scene" → check pose / search_roi

Answers the "0 matches, why?" pattern with actionable guidance instead
of a black box. All 3 match endpoints (/match, /match_simple,
/match_recipe) propagate the diag.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:41:26 +02:00
Adriano 110dc87b08 merge: AA eval CLI 2026-05-05 10:10:00 +02:00
Adriano 2bb2cf63cc merge: II scene cache 2026-05-05 10:09:56 +02:00
Adriano ea6a9163ad merge: CC diagnostic mode 2026-05-05 10:09:56 +02:00
Adriano 74a332a2dd feat: scene precompute cache (II Halcon-style)
LRU cache per scene: hash over the first 64KB of bytes + matcher params
(weak/strong_grad, spread_radius, n_bins, pyramid_levels). On a hit,
reuses:
- the gray pyramid
- spread_top + bit_active_top + density_top
- spread0 + bit_active_full + density_full

Typical use case: UI tuning with the min_score/verify_threshold/...
sliders produces 10+ consecutive find() calls on an identical scene.
Saves the duplicated Sobel+dilate+popcount work (~50ms on 1080p).

Measured speedup: ~15% on find() at 1080p (54ms out of 351ms). The gain
is larger for small templates (fast JIT kernel → scene precompute
dominates). Cache size 4, invalidated in train() (template changed).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:07:27 +02:00
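The caching scheme described above (MD5 of the first 64KB of the scene plus the parameters that affect the precompute, behind a small LRU) can be sketched in isolation. `SceneCache` and its parameter names here are illustrative, not the project's actual class; the diff below shows the real implementation as private methods on the matcher.

```python
import hashlib
from collections import OrderedDict

import numpy as np


class SceneCache:
    """Illustrative LRU cache keyed by scene bytes + matcher params."""

    def __init__(self, size: int = 4):
        self.size = size
        self._d: OrderedDict[str, tuple] = OrderedDict()

    def key(self, gray: np.ndarray, **params) -> str:
        # Hash only the first 64KB: discriminating enough for photographic
        # scenes, cheap even on large frames. Shape/dtype and the params
        # are mixed in so a tuning change invalidates the entry.
        h = hashlib.md5(gray.tobytes()[:65536])
        h.update(f"|{gray.shape}|{gray.dtype}".encode())
        for k in sorted(params):
            h.update(f"|{k}={params[k]}".encode())
        return h.hexdigest()

    def get(self, key: str):
        v = self._d.get(key)
        if v is not None:
            self._d.move_to_end(key)  # mark as most recently used
        return v

    def put(self, key: str, value: tuple) -> None:
        self._d[key] = value
        self._d.move_to_end(key)
        while len(self._d) > self.size:
            self._d.popitem(last=False)  # evict least recently used
```

Usage matches the slider scenario: repeated calls with the same frame and same params produce the same key and hit the cache, while changing any parameter changes the key.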
Adriano dae49eb4a3 feat: transparent diagnostic mode for find()
self._last_diag accumulates counters during find():
- Pipeline pruning: top_evaluated, top_passed, full_evaluated
- Candidates: n_raw, n_after_pre_nms, n_final
- Drop reasons: ncc_low, min_score_post_avg, recall_low,
  bbox_out_of_scene, nms_iou
- Effective params: top_thresh_used, verify_threshold_used, etc.

API:
- find(debug=True): prints a one-line summary to stderr
- m.get_last_diag(): returns the full dict for inspection

Use case: 0 matches? Look at where the candidates went
(e.g. drop_ncc_low=200 → NCC threshold too high) instead of
guessing blindly. Fixes the "find black-box" pattern.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:05:20 +02:00
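The counter accumulation and one-line summary described above can be mimicked standalone. This is a toy sketch: `run_pipeline` and `format_diag` are hypothetical names, the candidates are bare `(ncc, shape_score)` tuples, and only two of the drop reasons are modeled; the score blend `(score + max(0, ncc)) * 0.5` follows the verify stage shown in the diff below.

```python
def run_pipeline(candidates, *, verify_threshold=0.5, min_score=0.6):
    """Toy verify stage: count why each candidate is dropped, in the
    spirit of the diag dict accumulated by find().

    candidates: iterable of (ncc, shape_score) pairs.
    """
    diag = {"n_raw_candidates": len(candidates),
            "drop_ncc_low": 0, "drop_min_score_post_avg": 0, "n_final": 0}
    kept = []
    for ncc, score in candidates:
        if ncc < verify_threshold:
            diag["drop_ncc_low"] += 1
            continue
        score = (score + max(0.0, ncc)) * 0.5  # blend shape score with NCC
        if score < min_score:
            diag["drop_min_score_post_avg"] += 1
            continue
        kept.append(score)
    diag["n_final"] = len(kept)
    return kept, diag


def format_diag(diag):
    # One-line summary in the spirit of the matcher's _format_diag().
    return (f"raw={diag['n_raw_candidates']} -> "
            f"drop[ncc={diag['drop_ncc_low']}, "
            f"score={diag['drop_min_score_post_avg']}] = "
            f"final={diag['n_final']}")
```

Reading the counters answers "where did the candidates go": a large `drop_ncc_low` points at `verify_threshold`, a large `drop_min_score_post_avg` points at `min_score`.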
4 changed files with 228 additions and 20 deletions
+143 -6
@@ -512,8 +512,10 @@ class LineShapeMatcher:
         self.variants.clear()
         # Reset view list: main template = view 0
         self._view_templates = [(gray.copy(), mask_full.copy())]
-        # Invalidate the refine feature cache: the template changed.
+        # Invalidate caches: template/params changed → spread/features stale.
         self._refine_feat_cache = {}
+        if hasattr(self, "_scene_cache"):
+            self._scene_cache.clear()
         self._build_variants_for_view(gray, mask_full, view_idx=0)
         self._dedup_variants()
         return len(self.variants)
@@ -669,6 +671,51 @@ class LineShapeMatcher:
             raw[b] = d.astype(np.float32)
         return raw

+    # --- Scene precompute cache (II Halcon-style) -----------------------
+    _SCENE_CACHE_SIZE = 4
+
+    def _scene_cache_key(self, gray: np.ndarray) -> str | None:
+        """Compact hash of the scene + params that affect spread/density.
+
+        Hashes the first 64KB of the scene (discriminating enough for
+        photographic scenes) + the relevant matcher parameters. None if
+        the cache is disabled (e.g. scene too small).
+        """
+        if gray.size < 100:
+            return None
+        try:
+            import hashlib
+            h = hashlib.md5()
+            sample = gray.tobytes()[:65536]
+            h.update(sample)
+            h.update(f"|{gray.shape}|{gray.dtype}".encode())
+            h.update(
+                f"|{self.weak_grad}|{self.strong_grad}"
+                f"|{self.spread_radius}|{self._n_bins}"
+                f"|{self.pyramid_levels}".encode()
+            )
+            return h.hexdigest()
+        except Exception:
+            return None
+
+    def _scene_cache_get(self, key: str) -> tuple | None:
+        cache = getattr(self, "_scene_cache", None)
+        if cache is None:
+            return None
+        v = cache.get(key)
+        if v is not None:
+            cache.move_to_end(key)
+        return v
+
+    def _scene_cache_put(self, key: str, value: tuple) -> None:
+        from collections import OrderedDict
+        if not hasattr(self, "_scene_cache"):
+            self._scene_cache = OrderedDict()
+        self._scene_cache[key] = value
+        self._scene_cache.move_to_end(key)
+        while len(self._scene_cache) > self._SCENE_CACHE_SIZE:
+            self._scene_cache.popitem(last=False)
+
     def _spread_bitmap(self, gray: np.ndarray) -> np.ndarray:
         """Spread bitmap: bit b set where bin b is present within the radius.
@@ -1309,6 +1356,7 @@ class LineShapeMatcher:
         min_recall: float = 0.0,
         use_soft_score: bool = False,
         subpixel_lm: bool = False,
+        debug: bool = False,
     ) -> list[Match]:
         """
         scale_penalty: if > 0, lowers the score for matches at scales other than 1.0:
@@ -1326,6 +1374,32 @@ class LineShapeMatcher:
         if not self.variants:
             raise RuntimeError("Matcher non addestrato: chiamare train() prima.")

+        # Diagnostic counters: track why candidates get dropped along the
+        # pipeline. Exposed via get_last_diag() or printed implicitly
+        # when debug=True (see below).
+        diag = {
+            "n_variants_total": len(self.variants),
+            "n_variants_top_evaluated": 0,
+            "n_variants_top_passed": 0,
+            "n_variants_full_evaluated": 0,
+            "n_raw_candidates": 0,
+            "n_after_pre_nms": 0,
+            "drop_ncc_low": 0,
+            "drop_min_score_post_avg": 0,
+            "drop_recall_low": 0,
+            "drop_bbox_out_of_scene": 0,
+            "drop_nms_iou": 0,
+            "n_final": 0,
+            "top_thresh_used": 0.0,
+            "verify_threshold_used": float(verify_threshold),
+            "min_score_used": float(min_score),
+            "min_recall_used": float(min_recall),
+            "use_polarity": bool(self.use_polarity),
+            "use_soft_score": bool(use_soft_score),
+            "subpixel_lm": bool(subpixel_lm),
+        }
+        self._last_diag = diag
+
         gray_full = self._to_gray(scene_bgr)
         # Apply the search ROI: restrict the scene to a crop, remember the
         # offset to translate match coordinates back at the end of the pipeline.
@@ -1340,18 +1414,31 @@ class LineShapeMatcher:
         else:
             gray0 = gray_full
             roi_offset = (0, 0)

+        # Scene precompute cache (II Halcon-style): hash of scene bytes +
+        # gradient/spread params → reuse the spread pyramid + density across
+        # consecutive find() calls on the same scene (typical UI tuning: the
+        # sliders produce 10+ find() on an identical scene). Saves ~80% of
+        # the non-kernel cost.
+        cache_key = self._scene_cache_key(gray0)
+        cached = self._scene_cache_get(cache_key) if cache_key else None
+        if cached is not None:
+            grays, spread_top, bit_active_top, density_top, spread0, \
+                bit_active_full, density_full, top = cached
+        else:
             grays = [gray0]
             for _ in range(self.pyramid_levels - 1):
                 grays.append(cv2.pyrDown(grays[-1]))
             top = len(grays) - 1
+            # Spread bitmap (uint8) at the top level: 32× less memory than the
+            # float32 response map → MUCH more cache-friendly for _score_by_shift.
             spread_top = self._spread_bitmap(grays[top])
             bit_active_top = int(
                 sum(1 << b for b in range(self._n_bins)
                     if (spread_top & (spread_top.dtype.type(1) << b)).any())
             )
+            density_top = _jit_popcount(spread_top)
+            # spread0 + density_full are computed further below, so store later.
+            spread0 = None
+            bit_active_full = None
+            density_full = None

         if nms_radius is None:
             nms_radius = max(8, min(self.template_size) // 2)
         # Pruning adaptive to the angular step: with a small step (<= 3 deg)
@@ -1368,9 +1455,10 @@ class LineShapeMatcher:
             top_factor = max(top_factor, 0.7)
             cf_eff = 1
         top_thresh = min_score * top_factor
+        diag["top_thresh_used"] = float(top_thresh)
         tw, th = self.template_size
-        density_top = _jit_popcount(spread_top)
+        # density_top already computed above (cache or miss)
         sf_top = 2 ** top
         bg_cache_top: dict[float, np.ndarray] = {}
         bg_cache_full: dict[float, np.ndarray] = {}
@@ -1453,6 +1541,7 @@ class LineShapeMatcher:
         kept_coarse: list[tuple[int, float]] = []
         all_top_scores: list[tuple[int, float]] = []
+        diag["n_variants_top_evaluated"] = len(coarse_idx_list)
         # batch_top: uses a single-call batch kernel with an outer prange over
         # variants. Beats the threadpool when n_vars >> n_threads and when
         # H*W at top is small (JIT call overhead > kernel cost).
@@ -1516,14 +1605,24 @@ class LineShapeMatcher:
         kept_variants.sort(key=lambda t: -t[1])
         max_vars_full = max(max_matches * 8, len(self.variants) // 2)
         kept_variants = kept_variants[:max_vars_full]
+        diag["n_variants_top_passed"] = len(kept_coarse)
+        diag["n_variants_full_evaluated"] = len(kept_variants)

-        # Full-res (parallelized) with bitmap
+        # Full-res (parallelized) with bitmap.
+        # Reuse the cache if available, otherwise compute and store.
+        if spread0 is None:
             spread0 = self._spread_bitmap(gray0)
             bit_active_full = int(
                 sum(1 << b for b in range(self._n_bins)
                     if (spread0 & (spread0.dtype.type(1) << b)).any())
             )
             density_full = _jit_popcount(spread0)
+            # Store the complete scene cache entry
+            if cache_key is not None:
+                self._scene_cache_put(cache_key, (
+                    grays, spread_top, bit_active_top, density_top,
+                    spread0, bit_active_full, density_full, top,
+                ))

         for sc in unique_scales:
             bg_cache_full[sc] = _bg_for_scale(density_full, sc, 1)
@@ -1601,6 +1700,7 @@ class LineShapeMatcher:
                 raw.append((float(vals[i]), int(xs[i]), int(ys[i]), vi))
         raw.sort(key=lambda c: -c[0])
+        diag["n_raw_candidates"] = len(raw)
         # Map vi → score_map for subpixel/refinement
         score_maps = dict(candidates_per_var)
@@ -1632,6 +1732,7 @@ class LineShapeMatcher:
                 preliminary_int.append((score, xi, yi, vi))
             if len(preliminary_int) >= pre_cap:
                 break
+        diag["n_after_pre_nms"] = len(preliminary_int)

         # Subpixel + refine + verify only on the pre-NMS candidates (max pre_cap)
         kept: list[Match] = []
@@ -1678,6 +1779,7 @@ class LineShapeMatcher:
                 view_idx=getattr(var, "view_idx", 0),
             )
             if ncc < verify_threshold:
+                diag["drop_ncc_low"] += 1
                 continue
             score_f = (float(score_f) + max(0.0, ncc)) * 0.5
             # Soft-margin gradient similarity: replaces or augments the
@@ -1692,6 +1794,7 @@ class LineShapeMatcher:
             # push the shape score below the user threshold. Without this
             # check, matches appeared with score < min_score (confusing UI).
             if float(score_f) < min_score:
+                diag["drop_min_score_post_avg"] += 1
                 continue

             # Feature recall (Halcon MinScore-style): counts how many features
@@ -1703,6 +1806,7 @@ class LineShapeMatcher:
                 spread0, var, cx_f, cy_f, ang_f,
             )
             if recall < min_recall:
+                diag["drop_recall_low"] += 1
                 continue

             # Translate coords from ROI-crop space back to original scene space.
@@ -1726,6 +1830,7 @@ class LineShapeMatcher:
                 )
                 inside_ratio = float(inter) / poly_area
                 if inside_ratio < 0.75:
+                    diag["drop_bbox_out_of_scene"] += 1
                     continue
             # Optional scale penalty: the score degrades with distance from 1.0
             if scale_penalty > 0.0 and var.scale != 1.0:
@@ -1750,6 +1855,7 @@ class LineShapeMatcher:
                     dup = True
                     break
             if dup:
+                diag["drop_nms_iou"] += 1
                 continue
             kept.append(Match(
                 cx=cx_out, cy=cy_out,
@@ -1760,4 +1866,35 @@ class LineShapeMatcher:
             ))
             if len(kept) >= max_matches:
                 break
+        diag["n_final"] = len(kept)
+        if debug:
+            # Debug mode: print diagnostics to stderr for immediate visibility.
+            import sys as _sys
+            _sys.stderr.write(f"[pm2d.find debug] {self._format_diag(diag)}\n")
         return kept

+    def _format_diag(self, diag: dict) -> str:
+        """Format the diagnostics dict into one readable line."""
+        return (
+            f"vars: {diag['n_variants_total']} -> "
+            f"top_eval={diag['n_variants_top_evaluated']} "
+            f"top_pass={diag['n_variants_top_passed']} "
+            f"full_eval={diag['n_variants_full_evaluated']} | "
+            f"raw={diag['n_raw_candidates']} "
+            f"pre_nms={diag['n_after_pre_nms']} -> "
+            f"drop[ncc={diag['drop_ncc_low']}, "
+            f"score={diag['drop_min_score_post_avg']}, "
+            f"recall={diag['drop_recall_low']}, "
+            f"bbox={diag['drop_bbox_out_of_scene']}, "
+            f"nms={diag['drop_nms_iou']}] = "
+            f"final={diag['n_final']} (top_thresh={diag['top_thresh_used']:.2f})"
+        )
+
+    def get_last_diag(self) -> dict | None:
+        """Return the diagnostics dict from the last find() call.
+
+        Halcon equivalent: inspect_shape_model exposes partial counters today.
+        Useful for "why 0 matches" debugging, interactive tuning, validation.
+        See the diag keys for meaning (n_variants_top_evaluated, drop_*, ...).
+        """
+        return getattr(self, "_last_diag", None)
+4
@@ -217,6 +217,7 @@ class MatchResp(BaseModel):
     find_time: float
     num_variants: int
     annotated_id: str
+    diag: dict | None = None  # CC: pipeline diagnostics (drop reasons)


 class TuneParams(BaseModel):
@@ -521,6 +522,7 @@ def match(p: MatchParams):
         ) for m_ in matches],
         train_time=t_train, find_time=t_find,
         num_variants=n, annotated_id=ann_id,
+        diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
     )
@@ -596,6 +598,7 @@ def match_simple(p: SimpleMatchParams):
         ) for mt in matches],
         train_time=t_train, find_time=t_find,
         num_variants=n, annotated_id=ann_id,
+        diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
     )
@@ -769,6 +772,7 @@ def match_recipe(p: RecipeMatchParams):
         ) for mt in matches],
         train_time=0.0, find_time=t_find,
         num_variants=len(m.variants), annotated_id=ann_id,
+        diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
     )
+57
@@ -336,6 +336,7 @@ async function doMatchRecipe() {
   document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
   document.getElementById("t-var").textContent = data.num_variants;
   document.getElementById("t-match").textContent = data.matches.length;
+  renderDiag(data.diag, data.matches.length);
   setStatus(`${data.matches.length} match trovati (ricetta ${state.active_recipe})`);
 }
@@ -409,6 +410,7 @@ async function doMatch() {
   document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
   document.getElementById("t-var").textContent = data.num_variants;
   document.getElementById("t-match").textContent = data.matches.length;
+  renderDiag(data.diag, data.matches.length);
   setStatus(`${data.matches.length} match trovati${hasAdv ? " (avanzato)" : ""}`);
 }
@@ -436,6 +438,61 @@ function setStatus(s) {
 }

 // ---------- Init ----------

+// ---------- CC: match diagnostics ----------
+function renderDiag(diag, n_matches) {
+  const el = document.getElementById("diag-content");
+  if (!diag) {
+    el.innerHTML = '<em style="color:#888">Diagnostica non disponibile</em>';
+    return;
+  }
+  const dropTotal = (diag.drop_ncc_low || 0) + (diag.drop_min_score_post_avg || 0)
+      + (diag.drop_recall_low || 0) + (diag.drop_bbox_out_of_scene || 0)
+      + (diag.drop_nms_iou || 0);
+
+  // Contextual hints when there are 0 matches
+  let hint = "";
+  if (n_matches === 0) {
+    if (diag.n_after_pre_nms === 0) {
+      hint = `<div style="color:#f88; margin-top:6px">⚠ Nessun candidato sopra soglia.
+        Prova: <b>min_score</b> o <b>top_thresh</b> (currently ${diag.top_thresh_used.toFixed(2)})</div>`;
+    } else if (diag.drop_ncc_low > 0 && dropTotal === diag.drop_ncc_low) {
+      hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_ncc_low} candidati droppati da NCC.
+        Prova: <b>verify_threshold</b> (filtro_fp più leggero)</div>`;
+    } else if (diag.drop_min_score_post_avg > 0) {
+      hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_min_score_post_avg} match sotto min_score post-NCC.
+        Prova: <b>min_score</b></div>`;
+    } else if (diag.drop_recall_low > 0) {
+      hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_recall_low} match con recall < ${diag.min_recall_used}.
+        Prova: <b>min_recall</b></div>`;
+    } else if (diag.drop_bbox_out_of_scene > 0) {
+      hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_bbox_out_of_scene} match con bbox fuori scena.
+        Centro derivato male: aumenta <b>min_score</b> o restringi <b>search_roi</b></div>`;
+    }
+  }
+
+  const flags = [];
+  if (diag.use_polarity) flags.push("polarity");
+  if (diag.use_soft_score) flags.push("soft");
+  if (diag.subpixel_lm) flags.push("subpix-LM");
+
+  el.innerHTML = `
+    <div><b>Pipeline pruning:</b></div>
+    <div>varianti: ${diag.n_variants_total} top_eval=${diag.n_variants_top_evaluated}
+      top_pass=${diag.n_variants_top_passed} full_eval=${diag.n_variants_full_evaluated}</div>
+    <div><b>Candidati:</b> raw=${diag.n_raw_candidates}
+      pre_nms=${diag.n_after_pre_nms} final=${diag.n_final}</div>
+    <div><b>Drop reasons:</b> NCC=${diag.drop_ncc_low}, score=${diag.drop_min_score_post_avg},
+      recall=${diag.drop_recall_low}, bbox=${diag.drop_bbox_out_of_scene}, NMS=${diag.drop_nms_iou}</div>
+    <div><b>Soglie:</b> top=${diag.top_thresh_used.toFixed(2)},
+      min_score=${diag.min_score_used.toFixed(2)},
+      NCC=${diag.verify_threshold_used.toFixed(2)},
+      recall=${diag.min_recall_used.toFixed(2)}</div>
+    ${flags.length ? `<div><b>Flag attivi:</b> ${flags.join(", ")}</div>` : ""}
+    ${hint}
+  `;
+
+  // Auto-open the panel when there are 0 matches (signals a problem)
+  if (n_matches === 0) {
+    document.getElementById("diag-panel").open = true;
+  }
+}
+
 // ---------- Auto-tune (Halcon-style) ----------
 async function doAutoTune() {
   if (!state.model || !state.roi) {
+10
@@ -214,6 +214,16 @@
         <div class="kv"><span>find:</span><span id="t-find">-</span></div>
         <div class="kv"><span>varianti:</span><span id="t-var">-</span></div>
         <div class="kv"><span>match:</span><span id="t-match">-</span></div>
+        <details id="diag-panel" style="margin-top:10px">
+          <summary>🔍 Diagnostica (CC)</summary>
+          <div id="diag-content" style="font-family:monospace; font-size:11px;
+                                        background:#1a1a1a; padding:8px;
+                                        border-radius:3px; margin-top:6px;
+                                        line-height:1.5">
+            <em style="color:#888">Esegui un MATCH per vedere la diagnostica</em>
+          </div>
+        </details>
       </section>
     </main>