Compare commits


12 Commits

Author SHA1 Message Date
Adriano 35df4c473c fix: match UCS and feature count now consistent with model preview
Bugs visible in the screenshot:
1. Match UCS differs from the model-preview UCS (pose center vs. feature centroid)
2. Fewer features drawn than shown in the model preview

Causes:
1. The match UCS was placed at (cx, cy) = template center, while the model
   preview shows the UCS at the feature centroid (mean fx, fy).
2. _draw_matches extracted features from the warped template → re-quantizes
   the gradient on the warped+interpolated image, losing precision vs. the
   matcher's pre-computed features.

Fix:
- Match.variant_idx: new field holding the index of the variant used by find()
- _draw_matches uses the pre-computed lvl0.dx/dy/bin instead of re-extracting:
  * applies the delta rotation (m.angle_deg - var.angle_deg) for the refine
    sub-step
  * projects into scene coords around (m.cx, m.cy)
  * exactly the same feature set as the model preview (modulo
    rotation+translation)
- the match UCS is computed on the centroid of the warped features, not on
  (cx, cy) → consistent with the preview UCS

Fallback (variant_idx == -1, e.g. a recipe loaded from save_model
before this commit): uses the legacy warped extraction.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 11:02:04 +02:00
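The delta-rotation projection this commit describes (rotate the pre-computed template-centered offsets by the refine delta angle, then translate them to the match pose) can be sketched as follows; `project_features` and its parameter names are illustrative, not the repo's actual API:

```python
import numpy as np

def project_features(dx, dy, match_cx, match_cy,
                     match_angle_deg, variant_angle_deg):
    """Rotate pre-computed template-centered offsets by the refine delta
    angle, then translate to the match pose (image y-down convention,
    mirroring the rotation used in the overlay diff)."""
    dang = np.deg2rad(match_angle_deg - variant_angle_deg)
    ca, sa = np.cos(dang), np.sin(dang)
    # Same form as the commit: x' = x*ca + y*sa, y' = -x*sa + y*ca
    fx_scene = match_cx + dx * ca + dy * sa
    fy_scene = match_cy - dx * sa + dy * ca
    return fx_scene, fy_scene
```

With a zero delta angle the offsets pass through unchanged; the sign on the y component follows the y-down image convention the overlay code uses.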
Adriano 64f2c8b5dc merge: match overlay edges+UCS, no ROI 2026-05-05 10:55:54 +02:00
Adriano 7e076deb80 feat(web): match overlay with filtered edges + UCS + ROI bbox removal
_draw_matches is now consistent with the model preview:

- Edges filtered with the same matcher pipeline (weak/strong_grad hysteresis)
  and feature selection: the match overlay reflects exactly what the user
  saw in the "Edge preview" panel
- Background: dark tint on hysteresis pixels (40% of the match color)
- Selected features drawn as dots colored per bin (16-bin palette)
- Red/green UCS at the pose center: X axis right, Y down (image y-down),
  rotated by the match angle
- UCS origin: white circle with a black border for visibility

Removed (user request "remove the ROI"):
- perimeter bbox poly: redundant, covered the part
- first-side marker line: replaced by the red UCS axis

Compatibility: if no matcher is passed (e.g. external use), falls back to
legacy Canny. All 3 match endpoints (/match, /match_simple,
/match_recipe) now propagate the matcher to _draw_matches.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:55:54 +02:00
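The weak/strong hysteresis this commit reuses for the overlay (keep weak-edge pixels only when 8-connected, possibly through other weak pixels, to a strong pixel) can be sketched in plain NumPy. This is an illustration of the idea, not the repo's `_hysteresis_mask`:

```python
import numpy as np

def hysteresis_mask(mag, weak, strong, max_iters=100):
    """Canny-style hysteresis: grow the strong-pixel seed set through
    8-connected weak pixels until a fixpoint is reached."""
    weak_m = mag >= weak
    keep = mag >= strong  # seed: strong pixels
    for _ in range(max_iters):
        # Dilate `keep` by one pixel (8-neighborhood) via padded shifts.
        p = np.pad(keep, 1)
        grown = np.zeros_like(keep)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= p[1 + dy: p.shape[0] - 1 + dy,
                           1 + dx: p.shape[1] - 1 + dx]
        new = grown & weak_m  # only weak-or-better pixels survive
        if (new == keep).all():
            break
        keep = new
    return keep
```

Isolated weak pixels with no chain back to a strong seed are discarded, which is exactly the "noise removal" effect the overlay inherits from the matcher.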
Adriano 852597ed51 merge: UI edge preview + UCS 2026-05-05 10:48:58 +02:00
Adriano a78884f950 feat(web): edge preview on the model + noise-cleanup tuner + centroid UCS
"🔬 Edge preview / noise cleanup" panel below the model canvas.
Allows interactive tuning of the edge-selection parameters to remove
"dirt" (background noise, spurious edges) before training the matcher.

Server:
- POST /preview_edges: given model+ROI+edge params, returns the ROI image
  with an overlay:
  * gradient-magnitude heatmap (background)
  * dark green: pixels above the edge hysteresis
  * colored dots per bin: selected features (16-bin palette)
  * red/green UCS at the feature centroid (user request):
    X axis right, Y down (image y-down)
  Also returns stats: n_features, n_edge_strong, magnitude percentiles,
  ucs_baricentro {cx, cy}

UI:
- weak_grad/strong_grad/num_features/spacing sliders + polarity checkbox
- Debounced re-fetch (200 ms) on every input → live preview
- "Apply to Advanced parameters" button: copies the chosen values into
  the main matcher's Advanced fields
- Auto-fetch when the panel is opened

Use case: the operator IMMEDIATELY sees which edges the matcher would use,
adjusts the thresholds to exclude noise, applies, then runs MATCH.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:48:58 +02:00
Adriano 543ae0f643 merge: UI diagnostics panel 2026-05-05 10:41:26 +02:00
Adriano a12574f3c5 feat(web): match diagnostics panel (CC) with contextual hints
MatchResp now includes a diag dict (the CC feature). UI rendering:

- New collapsible "🔍 Diagnostics" panel below the timings
- For each match it shows:
  * pruning pipeline (vars total → top_eval → top_pass → full_eval)
  * candidates (raw → pre_nms → final)
  * drop reasons (NCC, score, recall, bbox, NMS) with counters
  * effective thresholds applied
  * active flags (polarity, soft, subpix-LM)

- On 0 matches → the panel opens automatically and shows a specific
  contextual hint:
  * "0 top candidates" → suggests ↓ min_score / top_thresh
  * "all dropped by NCC" → ↓ verify_threshold (filtro_fp)
  * "post-NCC score below threshold" → ↓ min_score
  * "low recall" → ↓ min_recall
  * "bbox out-of-scene" → check pose / search_roi

Solves the "0 matches, why?" pattern with actionable guidance instead of
a black box. All 3 match endpoints (/match, /match_simple,
/match_recipe) propagate the diag.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:41:26 +02:00
Adriano 110dc87b08 merge: AA eval CLI 2026-05-05 10:10:00 +02:00
Adriano 2bb2cf63cc merge: II scene cache 2026-05-05 10:09:56 +02:00
Adriano ea6a9163ad merge: CC diagnostic mode 2026-05-05 10:09:56 +02:00
Adriano 74a332a2dd feat: scene precompute cache (II Halcon-style)
Per-scene LRU cache: hash of the first 64 KB of bytes + matcher
parameters (weak/strong_grad, spread_radius, n_bins, pyramid_levels).
On a hit, it reuses:
- the gray pyramid
- spread_top + bit_active_top + density_top
- spread0 + bit_active_full + density_full

Typical use case: UI tuning with min_score/verify_threshold/... sliders
produces 10+ consecutive find() calls on an identical scene. Saves the
duplicated Sobel+dilate+popcount work (~50 ms on 1080p).

Measured speedup: ~15% on find() at 1080p (54 ms out of 351 ms). The gain
is larger for small templates (fast JIT kernel → scene precompute
dominates). Cache size 4, invalidated in train() (template changed).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:07:27 +02:00
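The content-hash + LRU idea in this commit can be sketched standalone (a hypothetical `SceneCache`; the real cache stores the pyramid/spread tuples listed above and lives on the matcher instance):

```python
import hashlib
from collections import OrderedDict

class SceneCache:
    """Tiny LRU keyed by (first-64KB content hash, params). Once `size`
    is exceeded, the least-recently-used entry is evicted."""

    def __init__(self, size=4):
        self.size = size
        self._d = OrderedDict()

    @staticmethod
    def key(data: bytes, **params) -> str:
        # Hash only the first 64 KB: cheap, yet discriminant enough for
        # photographic scenes; mix in the parameters that affect results.
        h = hashlib.md5(data[:65536])
        h.update(repr(sorted(params.items())).encode())
        return h.hexdigest()

    def get(self, k):
        v = self._d.get(k)
        if v is not None:
            self._d.move_to_end(k)  # mark as most recently used
        return v

    def put(self, k, v):
        self._d[k] = v
        self._d.move_to_end(k)
        while len(self._d) > self.size:
            self._d.popitem(last=False)  # evict LRU entry
```

Changing any parameter changes the key, so a slider tweak that affects the gradient/spread computation naturally misses the cache, while score-only tweaks hit it.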
Adriano dae49eb4a3 feat: transparent diagnostic mode for find()
self._last_diag accumulates counters during find():
- Pruning pipeline: top_evaluated, top_passed, full_evaluated
- Candidates: n_raw, n_after_pre_nms, n_final
- Drop reasons: ncc_low, min_score_post_avg, recall_low,
  bbox_out_of_scene, nms_iou
- Effective params: top_thresh_used, verify_threshold_used, etc.

API:
- find(debug=True): prints a one-line summary to stderr
- m.get_last_diag(): returns the full dict for inspection

Use case: 0 matches? Look at where the candidates went
(e.g. drop_ncc_low=200 → NCC threshold too high) instead of
guessing blindly. Solves the "find black-box" pattern.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:05:20 +02:00
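The counter-dict funnel this commit adds can be illustrated standalone (key names follow the commit message; `format_diag` is a simplified sketch of the one-line summary, not the repo's `_format_diag`):

```python
import sys

def format_diag(diag: dict) -> str:
    """Render pipeline counters as a one-line candidate funnel."""
    return (
        f"vars: {diag['n_variants_total']} -> "
        f"top_pass={diag['n_variants_top_passed']} | "
        f"raw={diag['n_raw_candidates']} -> "
        f"drop[ncc={diag['drop_ncc_low']}, recall={diag['drop_recall_low']}] = "
        f"final={diag['n_final']}"
    )

# Example counters, as they might look after a run with a too-high NCC
# threshold: 200 of 240 raw candidates are dropped at the verify step.
diag = {"n_variants_total": 360, "n_variants_top_passed": 12,
        "n_raw_candidates": 240, "drop_ncc_low": 200,
        "drop_recall_low": 38, "n_final": 2}
sys.stderr.write(format_diag(diag) + "\n")
```

Reading the funnel left to right immediately shows which stage ate the candidates, which is the actionable signal the UI hints build on.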
5 changed files with 595 additions and 38 deletions
+145 -6
@@ -127,6 +127,7 @@ class Match:
     scale: float
     score: float
     bbox_poly: np.ndarray  # (4, 2) float32 - 4 ordered vertices (rotated)
+    variant_idx: int = -1  # index of the variant used (for consistent overlay)

 @dataclass
@@ -512,8 +513,10 @@ class LineShapeMatcher:
         self.variants.clear()
         # Reset view list: main template = view 0
         self._view_templates = [(gray.copy(), mask_full.copy())]
-        # Invalidate the refine feature cache: the template changed.
+        # Invalidate caches: template/params changed → spread/features stale.
         self._refine_feat_cache = {}
+        if hasattr(self, "_scene_cache"):
+            self._scene_cache.clear()
         self._build_variants_for_view(gray, mask_full, view_idx=0)
         self._dedup_variants()
         return len(self.variants)
@@ -669,6 +672,51 @@ class LineShapeMatcher:
             raw[b] = d.astype(np.float32)
         return raw

+    # --- Scene precompute cache (II Halcon-style) -----------------------
+    _SCENE_CACHE_SIZE = 4
+
+    def _scene_cache_key(self, gray: np.ndarray) -> str | None:
+        """Compact hash of the scene + params that affect spread/density.
+
+        Hashes the first 64 KB of the scene (discriminant enough for
+        photographic scenes) + the relevant matcher parameters. Returns
+        None if the cache is disabled (e.g. scene too small).
+        """
+        if gray.size < 100:
+            return None
+        try:
+            import hashlib
+            h = hashlib.md5()
+            sample = gray.tobytes()[:65536]
+            h.update(sample)
+            h.update(f"|{gray.shape}|{gray.dtype}".encode())
+            h.update(
+                f"|{self.weak_grad}|{self.strong_grad}"
+                f"|{self.spread_radius}|{self._n_bins}"
+                f"|{self.pyramid_levels}".encode()
+            )
+            return h.hexdigest()
+        except Exception:
+            return None
+
+    def _scene_cache_get(self, key: str) -> tuple | None:
+        cache = getattr(self, "_scene_cache", None)
+        if cache is None:
+            return None
+        v = cache.get(key)
+        if v is not None:
+            cache.move_to_end(key)
+        return v
+
+    def _scene_cache_put(self, key: str, value: tuple) -> None:
+        from collections import OrderedDict
+        if not hasattr(self, "_scene_cache"):
+            self._scene_cache = OrderedDict()
+        self._scene_cache[key] = value
+        self._scene_cache.move_to_end(key)
+        while len(self._scene_cache) > self._SCENE_CACHE_SIZE:
+            self._scene_cache.popitem(last=False)
+
     def _spread_bitmap(self, gray: np.ndarray) -> np.ndarray:
         """Spread bitmap: bit b set where bin b is present within the radius.
@@ -1309,6 +1357,7 @@ class LineShapeMatcher:
         min_recall: float = 0.0,
         use_soft_score: bool = False,
         subpixel_lm: bool = False,
+        debug: bool = False,
     ) -> list[Match]:
         """
         scale_penalty: if > 0, reduces the score for matches at a scale other than 1.0:
@@ -1326,6 +1375,32 @@ class LineShapeMatcher:
         if not self.variants:
             raise RuntimeError("Matcher non addestrato: chiamare train() prima.")

+        # Diagnostic counters: track why candidates get dropped along the
+        # pipeline. Exposed via get_last_diag() or printed implicitly when
+        # debug=True (see below).
+        diag = {
+            "n_variants_total": len(self.variants),
+            "n_variants_top_evaluated": 0,
+            "n_variants_top_passed": 0,
+            "n_variants_full_evaluated": 0,
+            "n_raw_candidates": 0,
+            "n_after_pre_nms": 0,
+            "drop_ncc_low": 0,
+            "drop_min_score_post_avg": 0,
+            "drop_recall_low": 0,
+            "drop_bbox_out_of_scene": 0,
+            "drop_nms_iou": 0,
+            "n_final": 0,
+            "top_thresh_used": 0.0,
+            "verify_threshold_used": float(verify_threshold),
+            "min_score_used": float(min_score),
+            "min_recall_used": float(min_recall),
+            "use_polarity": bool(self.use_polarity),
+            "use_soft_score": bool(use_soft_score),
+            "subpixel_lm": bool(subpixel_lm),
+        }
+        self._last_diag = diag
+
         gray_full = self._to_gray(scene_bgr)
         # Apply search ROI: restrict the scene to a crop, remember the offset
         # to re-translate match coordinates at the end of the pipeline.
@@ -1340,18 +1415,31 @@ class LineShapeMatcher:
         else:
             gray0 = gray_full
             roi_offset = (0, 0)
+        # Scene precompute cache (II Halcon-style): hash of scene bytes +
+        # gradient/spread params → reuses the pyramid spread + density across
+        # consecutive find() calls on the same scene (typical UI tuning: a
+        # slider produces 10+ find() on an identical scene). Saves ~80% of
+        # the non-kernel cost.
+        cache_key = self._scene_cache_key(gray0)
+        cached = self._scene_cache_get(cache_key) if cache_key else None
+        if cached is not None:
+            grays, spread_top, bit_active_top, density_top, spread0, \
+                bit_active_full, density_full, top = cached
+        else:
             grays = [gray0]
             for _ in range(self.pyramid_levels - 1):
                 grays.append(cv2.pyrDown(grays[-1]))
             top = len(grays) - 1
             # Spread bitmap (uint8) at the top level: 32× less memory than the
             # float32 response map → far more cache-friendly for _score_by_shift.
             spread_top = self._spread_bitmap(grays[top])
             bit_active_top = int(
                 sum(1 << b for b in range(self._n_bins)
                     if (spread_top & (spread_top.dtype.type(1) << b)).any())
             )
+            density_top = _jit_popcount(spread_top)
+            # spread0 + density_full are computed further down; saved afterwards.
+            spread0 = None
+            bit_active_full = None
+            density_full = None

         if nms_radius is None:
             nms_radius = max(8, min(self.template_size) // 2)
         # Pruning adaptive to the angular step: with a small step (<= 3 deg)
@@ -1368,9 +1456,10 @@ class LineShapeMatcher:
                 top_factor = max(top_factor, 0.7)
             cf_eff = 1
         top_thresh = min_score * top_factor
+        diag["top_thresh_used"] = float(top_thresh)
         tw, th = self.template_size
-        density_top = _jit_popcount(spread_top)
+        # density_top already computed above (cache hit or miss)
         sf_top = 2 ** top
         bg_cache_top: dict[float, np.ndarray] = {}
         bg_cache_full: dict[float, np.ndarray] = {}
@@ -1453,6 +1542,7 @@ class LineShapeMatcher:
         kept_coarse: list[tuple[int, float]] = []
         all_top_scores: list[tuple[int, float]] = []
+        diag["n_variants_top_evaluated"] = len(coarse_idx_list)
         # batch_top: uses a batch single-call kernel with an outer prange over
         # variants. Beats a threadpool when n_vars >> n_threads and when the
         # top-level H*W is small (JIT call overhead > kernel cost).
@@ -1516,14 +1606,24 @@ class LineShapeMatcher:
             kept_variants.sort(key=lambda t: -t[1])
             max_vars_full = max(max_matches * 8, len(self.variants) // 2)
             kept_variants = kept_variants[:max_vars_full]
+        diag["n_variants_top_passed"] = len(kept_coarse)
+        diag["n_variants_full_evaluated"] = len(kept_variants)

-        # Full-res (parallelized) with bitmap
+        # Full-res (parallelized) with bitmap.
+        # Reuse the cache when available, otherwise compute and store.
+        if spread0 is None:
             spread0 = self._spread_bitmap(gray0)
             bit_active_full = int(
                 sum(1 << b for b in range(self._n_bins)
                     if (spread0 & (spread0.dtype.type(1) << b)).any())
             )
             density_full = _jit_popcount(spread0)
+            # Store the complete scene-cache entry
+            if cache_key is not None:
+                self._scene_cache_put(cache_key, (
+                    grays, spread_top, bit_active_top, density_top,
+                    spread0, bit_active_full, density_full, top,
+                ))

         for sc in unique_scales:
             bg_cache_full[sc] = _bg_for_scale(density_full, sc, 1)
@@ -1601,6 +1701,7 @@ class LineShapeMatcher:
                 raw.append((float(vals[i]), int(xs[i]), int(ys[i]), vi))

         raw.sort(key=lambda c: -c[0])
+        diag["n_raw_candidates"] = len(raw)
         # Map vi → score_map for subpixel/refinement
         score_maps = dict(candidates_per_var)
@@ -1632,6 +1733,7 @@ class LineShapeMatcher:
             preliminary_int.append((score, xi, yi, vi))
             if len(preliminary_int) >= pre_cap:
                 break
+        diag["n_after_pre_nms"] = len(preliminary_int)

         # Subpixel + refine + verify only on pre-NMS candidates (max pre_cap)
         kept: list[Match] = []
@@ -1678,6 +1780,7 @@ class LineShapeMatcher:
                 view_idx=getattr(var, "view_idx", 0),
             )
             if ncc < verify_threshold:
+                diag["drop_ncc_low"] += 1
                 continue
             score_f = (float(score_f) + max(0.0, ncc)) * 0.5

             # Soft-margin gradient similarity: replaces or complements the
@@ -1692,6 +1795,7 @@ class LineShapeMatcher:
             # drive the shape score below the user threshold. Without this
             # check, matches appeared with score < min_score (confusing UI).
             if float(score_f) < min_score:
+                diag["drop_min_score_post_avg"] += 1
                 continue

             # Feature recall (Halcon MinScore-style): counts how many features
@@ -1703,6 +1807,7 @@ class LineShapeMatcher:
                 spread0, var, cx_f, cy_f, ang_f,
             )
             if recall < min_recall:
+                diag["drop_recall_low"] += 1
                 continue

             # Re-translate coords from ROI-crop space to original scene space.
@@ -1726,6 +1831,7 @@ class LineShapeMatcher:
                 )
                 inside_ratio = float(inter) / poly_area
                 if inside_ratio < 0.75:
+                    diag["drop_bbox_out_of_scene"] += 1
                     continue

             # Optional scale penalty: score degrades with distance from 1.0
             if scale_penalty > 0.0 and var.scale != 1.0:
@@ -1750,6 +1856,7 @@ class LineShapeMatcher:
                     dup = True
                     break
             if dup:
+                diag["drop_nms_iou"] += 1
                 continue

             kept.append(Match(
                 cx=cx_out, cy=cy_out,
@@ -1757,7 +1864,39 @@ class LineShapeMatcher:
                 scale=var.scale,
                 score=score_f,
                 bbox_poly=poly,
+                variant_idx=int(vi),
             ))
             if len(kept) >= max_matches:
                 break

+        diag["n_final"] = len(kept)
+        if debug:
+            # Debug mode: print diagnostics to stderr for immediate visibility.
+            import sys as _sys
+            _sys.stderr.write(f"[pm2d.find debug] {self._format_diag(diag)}\n")
         return kept

+    def _format_diag(self, diag: dict) -> str:
+        """Formats the diagnostics dict into a readable one-liner."""
+        return (
+            f"vars: {diag['n_variants_total']} -> "
+            f"top_eval={diag['n_variants_top_evaluated']} "
+            f"top_pass={diag['n_variants_top_passed']} "
+            f"full_eval={diag['n_variants_full_evaluated']} | "
+            f"raw={diag['n_raw_candidates']} "
+            f"pre_nms={diag['n_after_pre_nms']} -> "
+            f"drop[ncc={diag['drop_ncc_low']}, "
+            f"score={diag['drop_min_score_post_avg']}, "
+            f"recall={diag['drop_recall_low']}, "
+            f"bbox={diag['drop_bbox_out_of_scene']}, "
+            f"nms={diag['drop_nms_iou']}] = "
+            f"final={diag['n_final']} (top_thresh={diag['top_thresh_used']:.2f})"
+        )
+
+    def get_last_diag(self) -> dict | None:
+        """Returns the diagnostics dict from the last find() call.
+
+        Halcon-equivalent: inspect_shape_model exposes partial counters today.
+        Useful for debugging "why 0 matches", interactive tuning, validation.
+        See the diag keys for their meaning (n_variants_top_evaluated, drop_*, ...).
+        """
+        return getattr(self, "_last_diag", None)
+216 -18
@@ -131,23 +131,102 @@ def _encode_png(img: np.ndarray) -> bytes:

 def _draw_matches(scene: np.ndarray, matches: list[Match],
-                  template_gray: np.ndarray | None) -> np.ndarray:
+                  template_gray: np.ndarray | None,
+                  matcher: "LineShapeMatcher | None" = None) -> np.ndarray:
+    """Draws annotated matches on the scene.
+
+    If a matcher is passed, uses the same edge-filtering pipeline
+    (weak/strong_grad hysteresis) and feature selection used in training,
+    so the match overlay reflects EXACTLY what the user saw in the
+    "Edge preview" panel. Also draws the UCS (X axis red, Y green) at
+    the match pose center.
+
+    Without a matcher: Canny fallback (legacy).
+    """
     out = scene.copy()
     H, W = scene.shape[:2]
     palette = [
         (0, 255, 0), (0, 200, 255), (255, 100, 100), (255, 200, 0),
         (200, 0, 255), (100, 255, 200), (255, 0, 0), (0, 255, 255),
     ]
+    bin_colors = [
+        (255, 0, 0), (255, 128, 0), (255, 255, 0), (0, 255, 0),
+        (0, 255, 255), (0, 128, 255), (0, 0, 255), (255, 0, 255),
+        (255, 100, 100), (255, 180, 100), (255, 230, 100), (180, 255, 100),
+        (100, 255, 200), (100, 180, 255), (180, 100, 255), (255, 100, 200),
+    ]
     for i, m in enumerate(matches):
         color = palette[i % len(palette)]
-        if template_gray is not None:
+        # UCS position: centroid of the warped features (defaults to cx, cy
+        # when unavailable). Keeps consistency with the model preview, which
+        # shows the UCS at the feature centroid.
+        ucs_x, ucs_y = float(m.cx), float(m.cy)
+        if template_gray is not None and matcher is not None:
             t = template_gray
             th, tw = t.shape
-            edge = cv2.Canny(t, 50, 150)
             cx_t = (tw - 1) / 2.0; cy_t = (th - 1) / 2.0
             M = cv2.getRotationMatrix2D((cx_t, cy_t), m.angle_deg, m.scale)
             M[0, 2] += m.cx - cx_t
             M[1, 2] += m.cy - cy_t
+            # Filtered-edge background: warp template + hysteresis
+            warped_gray = cv2.warpAffine(
+                t, M, (W, H), flags=cv2.INTER_LINEAR, borderValue=0)
+            mag, _ = matcher._gradient(warped_gray)
+            if matcher.weak_grad < matcher.strong_grad:
+                edge_mask = matcher._hysteresis_mask(mag)
+            else:
+                edge_mask = mag >= matcher.strong_grad
+            if edge_mask.any():
+                bg_overlay = np.zeros_like(out)
+                dark = tuple(int(c * 0.35) for c in color)
+                bg_overlay[edge_mask] = dark
+                out = cv2.addWeighted(out, 1.0, bg_overlay, 0.7, 0)
+            # Real matcher features: use the pre-computed ones of the
+            # variant that produced the match. Exactly the same features
+            # shown in the model preview (rotated/scaled to the pose).
+            vi = getattr(m, "variant_idx", -1)
+            fx_scene = fy_scene = fb_arr = None
+            if 0 <= vi < len(matcher.variants):
+                lvl0 = matcher.variants[vi].levels[0]
+                # dx/dy are offsets relative to the CENTER of the warped
+                # template in template-kernel coordinates (already
+                # pre-rotated to the raw variant angle). For consistency
+                # with the final pose m.angle_deg (post-refine), re-rotate
+                # by the angular delta (m.angle_deg - var.angle_deg).
+                var = matcher.variants[vi]
+                dang = np.deg2rad(m.angle_deg - var.angle_deg)
+                ca, sa = np.cos(dang), np.sin(dang)
+                dxr = lvl0.dx * ca + lvl0.dy * sa
+                dyr = -lvl0.dx * sa + lvl0.dy * ca
+                fx_scene = m.cx + dxr
+                fy_scene = m.cy + dyr
+                fb_arr = lvl0.bin
+            else:
+                # Fallback: extract features from the warp (loses precision)
+                _, bins_w = matcher._gradient(warped_gray)
+                fx, fy, fb = matcher._extract_features(mag, bins_w, None)
+                fx_scene = fx.astype(np.float32)
+                fy_scene = fy.astype(np.float32)
+                fb_arr = fb
+            # Draw features
+            for k in range(len(fx_scene)):
+                px = int(round(float(fx_scene[k])))
+                py = int(round(float(fy_scene[k])))
+                if 0 <= px < W and 0 <= py < H:
+                    bcol = bin_colors[int(fb_arr[k]) % len(bin_colors)]
+                    cv2.circle(out, (px, py), 2, bcol, -1, cv2.LINE_AA)
+            # UCS at the feature centroid (in scene coords)
+            if len(fx_scene) > 0:
+                ucs_x = float(np.mean(fx_scene))
+                ucs_y = float(np.mean(fy_scene))
+        elif template_gray is not None:
+            # No matcher: legacy Canny
+            t = template_gray
+            th, tw = t.shape
+            cx_t = (tw - 1) / 2.0; cy_t = (th - 1) / 2.0
+            M = cv2.getRotationMatrix2D((cx_t, cy_t), m.angle_deg, m.scale)
+            M[0, 2] += m.cx - cx_t
+            M[1, 2] += m.cy - cy_t
+            edge = cv2.Canny(t, 50, 150)
             warped = cv2.warpAffine(edge, M, (W, H),
                                     flags=cv2.INTER_NEAREST, borderValue=0)
             mask = warped > 0
@@ -155,20 +234,34 @@ def _draw_matches(scene: np.ndarray, matches: list[Match],
             overlay = np.zeros_like(out)
             overlay[mask] = color
             out[mask] = (0.3 * out[mask] + 0.7 * overlay[mask]).astype(np.uint8)
-        poly = m.bbox_poly.astype(np.int32).reshape(-1, 1, 2)
-        cv2.polylines(out, [poly], True, color, 2, cv2.LINE_AA)
-        p0 = tuple(m.bbox_poly[0].astype(int))
-        p1 = tuple(m.bbox_poly[1].astype(int))
-        cv2.line(out, p0, p1, color, 4, cv2.LINE_AA)
-        cx, cy = int(round(m.cx)), int(round(m.cy))
-        cv2.drawMarker(out, (cx, cy), color, cv2.MARKER_CROSS, 22, 2, cv2.LINE_AA)
+        # bbox poly and marker line removed (user request "remove the ROI"):
+        # UCS + filtered edges already identify pose and orientation.
+        cx, cy = int(round(ucs_x)), int(round(ucs_y))
+        # UCS at the match pose center (user request: as in the model
+        # preview). X axis red right, Y green down (image y-down).
+        # Length derived from the bbox diagonal for scale invariance.
         L = int(np.linalg.norm(m.bbox_poly[1] - m.bbox_poly[0])) // 2
-        a = np.deg2rad(m.angle_deg)
-        cv2.arrowedLine(out, (cx, cy),
-                        (int(cx + L * np.cos(a)), int(cy - L * np.sin(a))),
-                        color, 2, cv2.LINE_AA, tipLength=0.2)
+        if L < 10:
+            L = 30  # fallback for a degenerate bbox
+        ax = np.deg2rad(m.angle_deg)
+        # Rotated X axis (red)
+        x_end = (int(cx + L * np.cos(ax)), int(cy - L * np.sin(ax)))
+        cv2.arrowedLine(out, (cx, cy), x_end,
+                        (0, 0, 255), 2, cv2.LINE_AA, tipLength=0.2)
+        cv2.putText(out, "X", (x_end[0] + 4, x_end[1] + 5),
+                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
+        # Perpendicular Y axis (green, +90° in image coords = visually down)
+        y_end = (int(cx + L * np.cos(ax + np.pi / 2)),
+                 int(cy - L * np.sin(ax + np.pi / 2)))
+        cv2.arrowedLine(out, (cx, cy), y_end,
+                        (0, 255, 0), 2, cv2.LINE_AA, tipLength=0.2)
+        cv2.putText(out, "Y", (y_end[0] + 4, y_end[1] + 12),
+                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1, cv2.LINE_AA)
+        # UCS origin: white circle with a black border
+        cv2.circle(out, (cx, cy), 4, (0, 0, 0), -1, cv2.LINE_AA)
+        cv2.circle(out, (cx, cy), 3, (255, 255, 255), -1, cv2.LINE_AA)
         label = f"#{i+1} {m.angle_deg:.0f}d s={m.scale:.2f} {m.score:.2f}"
-        cv2.putText(out, label, (cx + 8, cy - 8),
+        cv2.putText(out, label, (cx + 12, cy - 12),
                     cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2, cv2.LINE_AA)
     return out
@@ -217,6 +310,7 @@ class MatchResp(BaseModel):
     find_time: float
     num_variants: int
     annotated_id: str
+    diag: dict | None = None  # CC: pipeline diagnostics (drop reasons)

 class TuneParams(BaseModel):
@@ -510,7 +604,7 @@ def match(p: MatchParams):

     # Render annotated image
     tg = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY)
-    annotated = _draw_matches(scene, matches, tg)
+    annotated = _draw_matches(scene, matches, tg, matcher=m)
     ann_id = _store_image(annotated)

     return MatchResp(
@@ -521,6 +615,7 @@ def match(p: MatchParams):
         ) for m_ in matches],
         train_time=t_train, find_time=t_find,
         num_variants=n, annotated_id=ann_id,
+        diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
     )
@@ -586,7 +681,7 @@ def match_simple(p: SimpleMatchParams):
     t_find = time.time() - t0

     tg = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY)
-    annotated = _draw_matches(scene, matches, tg)
+    annotated = _draw_matches(scene, matches, tg, matcher=m)
     ann_id = _store_image(annotated)

     return MatchResp(
@@ -596,6 +691,7 @@ def match_simple(p: SimpleMatchParams):
         ) for mt in matches],
         train_time=t_train, find_time=t_find,
         num_variants=n, annotated_id=ann_id,
+        diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
     )
@@ -628,6 +724,107 @@ class SaveRecipeParams(BaseModel):
name: str # nome file ricetta (no path) name: str # nome file ricetta (no path)
class EdgePreviewParams(BaseModel):
model_id: str
roi: list[int]
weak_grad: float = 30.0
strong_grad: float = 60.0
num_features: int = 96
min_feature_spacing: int = 3
use_polarity: bool = False
@app.post("/preview_edges")
def preview_edges(p: EdgePreviewParams):
"""Estrae edge feature dalla ROI con i parametri dati e ritorna
immagine annotata con i pixel selezionati come overlay.
Permette tuning interattivo delle soglie weak/strong_grad e
num_features per "togliere le sporcizie" (rumore di sfondo,
edge spuri) prima di trainare il matcher vero.
"""
model = _load_image(p.model_id)
if model is None:
raise HTTPException(404, "Modello non trovato")
x, y, w, h = p.roi
H_m, W_m = model.shape[:2]
x = max(0, min(int(x), W_m - 1)); y = max(0, min(int(y), H_m - 1))
w = max(1, min(int(w), W_m - x)); h = max(1, min(int(h), H_m - y))
roi_img = model[y:y + h, x:x + w]
# Matcher temporaneo solo per estrazione feature (no train completo)
m = LineShapeMatcher(
weak_grad=p.weak_grad,
strong_grad=p.strong_grad,
num_features=p.num_features,
        min_feature_spacing=p.min_feature_spacing,
        use_polarity=p.use_polarity,
    )
    gray = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY) if roi_img.ndim == 3 else roi_img
    mag, bins = m._gradient(gray)
    fx, fy, fb = m._extract_features(mag, bins, None)
    # Also show the "weak/strong" pixels as a background heatmap
    out = roi_img.copy() if roi_img.ndim == 3 else cv2.cvtColor(roi_img, cv2.COLOR_GRAY2BGR)
    # Light magnitude overlay
    mag_norm = np.clip(mag / max(1.0, mag.max()) * 255, 0, 255).astype(np.uint8)
    mag_color = cv2.applyColorMap(mag_norm, cv2.COLORMAP_BONE)
    out = cv2.addWeighted(out, 0.6, mag_color, 0.4, 0)
    # "Strong" pixels via hysteresis: faint dark-green tint
    if m.weak_grad < m.strong_grad:
        edge_mask = m._hysteresis_mask(mag).astype(np.uint8) * 255
    else:
        edge_mask = (mag >= m.strong_grad).astype(np.uint8) * 255
    edge_overlay = np.zeros_like(out)
    edge_overlay[edge_mask > 0] = (0, 80, 0)  # dark green
    out = cv2.addWeighted(out, 1.0, edge_overlay, 0.5, 0)
    # Selected features: small dots colored per orientation bin
    bin_colors = [
        (255, 0, 0), (255, 128, 0), (255, 255, 0), (0, 255, 0),
        (0, 255, 255), (0, 128, 255), (0, 0, 255), (255, 0, 255),
        (255, 100, 100), (255, 180, 100), (255, 230, 100), (180, 255, 100),
        (100, 255, 200), (100, 180, 255), (180, 100, 255), (255, 100, 200),
    ]
    for i in range(len(fx)):
        b = int(fb[i])
        col = bin_colors[b % len(bin_colors)]
        cv2.circle(out, (int(fx[i]), int(fy[i])), 2, col, -1, cv2.LINE_AA)
    # UCS on the feature barycenter (user request): X axis red, Y axis green
    bary_cx = bary_cy = None
    if len(fx) > 0:
        bary_cx = float(np.mean(fx))
        bary_cy = float(np.mean(fy))
        bx, by = int(round(bary_cx)), int(round(bary_cy))
        axis_len = max(20, int(0.15 * max(out.shape[:2])))
        # X axis (red, pointing right)
        cv2.arrowedLine(out, (bx, by), (bx + axis_len, by),
                        (0, 0, 255), 2, cv2.LINE_AA, tipLength=0.2)
        cv2.putText(out, "X", (bx + axis_len + 4, by + 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
        # Y axis (green, pointing down = image y-down convention)
        cv2.arrowedLine(out, (bx, by), (bx, by + axis_len),
                        (0, 255, 0), 2, cv2.LINE_AA, tipLength=0.2)
        cv2.putText(out, "Y", (bx + 4, by + axis_len + 12),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1, cv2.LINE_AA)
        # Origin: white dot with a black border for visibility
        cv2.circle(out, (bx, by), 4, (0, 0, 0), -1, cv2.LINE_AA)
        cv2.circle(out, (bx, by), 3, (255, 255, 255), -1, cv2.LINE_AA)
    img_id = _store_image(out)
    n_edge_strong = int((mag >= m.strong_grad).sum())
    n_edge_total = int(edge_mask.sum() / 255)
    return {
        "preview_id": img_id,
        "n_features": len(fx),
        "n_edge_strong": n_edge_strong,
        "n_edge_after_hysteresis": n_edge_total,
        "mag_max": float(mag.max()),
        "mag_p50": float(np.percentile(mag, 50)),
        "mag_p85": float(np.percentile(mag, 85)),
        "ucs_baricentro": (
            {"cx": round(bary_cx, 2), "cy": round(bary_cy, 2)}
            if bary_cx is not None else None
        ),
    }
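The endpoint's weak/strong edge selection goes through the matcher's `_hysteresis_mask`. A minimal pure-NumPy sketch of that double-threshold hysteresis idea (an illustrative stand-in, not the matcher's actual implementation):

```python
import numpy as np

def hysteresis_mask(mag: np.ndarray, weak: float, strong: float) -> np.ndarray:
    """Double-threshold hysteresis sketch: a weak pixel (>= weak) survives
    only if it is 8-connected to a strong pixel (>= strong)."""
    strong_m = mag >= strong
    weak_m = mag >= weak

    def dilate8(m: np.ndarray) -> np.ndarray:
        # grow the mask by one pixel in all 8 directions
        p = np.pad(m, 1)
        out = np.zeros_like(m)
        h, w = m.shape
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        return out

    cur = strong_m
    while True:
        grown = dilate8(cur) & weak_m  # grow seeds, clamp to the weak band
        if np.array_equal(grown, cur):
            return cur
        cur = grown
```

This is why an isolated smudge below `strong_grad` disappears from the preview even when it exceeds `weak_grad`: it has no strong seed to grow from.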
@app.post("/recipes")
def save_recipe(p: SaveRecipeParams):
    """Train the matcher and save it to disk as a reusable recipe."""

@@ -760,7 +957,7 @@ def match_recipe(p: RecipeMatchParams):
    )
    t_find = time.time() - t0
    tg = m.template_gray if m.template_gray is not None else np.zeros((1, 1), np.uint8)
-   annotated = _draw_matches(scene, matches, tg)
+   annotated = _draw_matches(scene, matches, tg, matcher=m)
    ann_id = _store_image(annotated)
    return MatchResp(
        matches=[MatchResult(

@@ -769,6 +966,7 @@ def match_recipe(p: RecipeMatchParams):
        ) for mt in matches],
        train_time=0.0, find_time=t_find,
        num_variants=len(m.variants), annotated_id=ann_id,
+       diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
    )
@@ -336,6 +336,7 @@ async function doMatchRecipe() {
  document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
  document.getElementById("t-var").textContent = data.num_variants;
  document.getElementById("t-match").textContent = data.matches.length;
+ renderDiag(data.diag, data.matches.length);
  setStatus(`${data.matches.length} match trovati (ricetta ${state.active_recipe})`);
}

@@ -409,6 +410,7 @@ async function doMatch() {
  document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
  document.getElementById("t-var").textContent = data.num_variants;
  document.getElementById("t-match").textContent = data.matches.length;
+ renderDiag(data.diag, data.matches.length);
  setStatus(`${data.matches.length} match trovati${hasAdv ? " (avanzato)" : ""}`);
}

@@ -436,6 +438,164 @@ function setStatus(s) {
}

// ---------- Init ----------
// ---------- Edge preview (noise cleanup) ----------
let _epDebounce = null;
let _epLastImg = null;

async function fetchEdgePreview() {
  if (!state.model || !state.roi) {
    document.getElementById("edge-preview-info").textContent =
      "Disegna prima la ROI sul modello";
    return;
  }
  const body = {
    model_id: state.model.id,
    roi: state.roi,
    weak_grad: parseFloat(document.getElementById("ep-weak").value),
    strong_grad: parseFloat(document.getElementById("ep-strong").value),
    num_features: parseInt(document.getElementById("ep-nf").value, 10),
    min_feature_spacing: parseInt(document.getElementById("ep-sp").value, 10),
    use_polarity: document.getElementById("ep-pol").checked,
  };
  try {
    const r = await fetch("/preview_edges", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    if (!r.ok) throw new Error(await r.text());
    const j = await r.json();
    _epLastImg = await loadImage(`/image/${j.preview_id}/raw?t=${Date.now()}`);
    drawEdgePreview();
    const ucs = j.ucs_baricentro
      ? ` | UCS=(${j.ucs_baricentro.cx},${j.ucs_baricentro.cy})`
      : "";
    document.getElementById("edge-preview-info").innerHTML =
      `<b>${j.n_features}</b> feature scelte (di ${j.n_edge_after_hysteresis} edge totali)<br>` +
      `mag: max=${j.mag_max.toFixed(0)} p50=${j.mag_p50.toFixed(0)} ` +
      `p85=${j.mag_p85.toFixed(0)}${ucs}`;
  } catch (e) {
    document.getElementById("edge-preview-info").textContent =
      `Errore preview: ${e.message}`;
  }
}
function drawEdgePreview() {
  const cnv = document.getElementById("c-edge-preview");
  if (!_epLastImg) return;
  const ctx = cnv.getContext("2d");
  // Fit-contain: scale to fit while preserving aspect ratio, then center
  const r = Math.min(cnv.width / _epLastImg.width,
                     cnv.height / _epLastImg.height);
  const w = _epLastImg.width * r;
  const h = _epLastImg.height * r;
  const ox = (cnv.width - w) / 2;
  const oy = (cnv.height - h) / 2;
  ctx.fillStyle = "#000"; ctx.fillRect(0, 0, cnv.width, cnv.height);
  ctx.imageSmoothingEnabled = false;
  ctx.drawImage(_epLastImg, ox, oy, w, h);
}
function scheduleEdgePreview() {
  if (_epDebounce) clearTimeout(_epDebounce);
  _epDebounce = setTimeout(fetchEdgePreview, 200);
}
function bindEdgePreviewControls() {
  const slid = (id, valEl) => {
    const el = document.getElementById(id);
    const v = document.getElementById(valEl);
    el.addEventListener("input", () => {
      v.textContent = el.value;
      scheduleEdgePreview();
    });
  };
  slid("ep-weak", "ep-weak-v");
  slid("ep-strong", "ep-strong-v");
  slid("ep-nf", "ep-nf-v");
  slid("ep-sp", "ep-sp-v");
  document.getElementById("ep-pol").addEventListener("change",
    scheduleEdgePreview);
  // Auto-refresh when the panel is opened
  document.getElementById("edge-preview-panel").addEventListener("toggle",
    (e) => { if (e.target.open) fetchEdgePreview(); });
  document.getElementById("btn-edge-apply").addEventListener("click", () => {
    // Copy the current values into the advanced-parameter fields
    const map = {
      "ep-weak": "adv-weak_grad",
      "ep-strong": "adv-strong_grad",
      "ep-nf": "adv-num_features",
      "ep-sp": "adv-min_feature_spacing",
    };
    for (const [src, dst] of Object.entries(map)) {
      const dstEl = document.getElementById(dst);
      if (dstEl) dstEl.value = document.getElementById(src).value;
    }
    // use_polarity: mirrors the Halcon-mode checkbox
    const polCb = document.getElementById("hc-use-polarity");
    if (polCb) polCb.checked = document.getElementById("ep-pol").checked;
    // Open the Advanced panels for feedback
    const advDetails = document.querySelectorAll("#col-params details");
    advDetails.forEach((d) => { d.open = true; });
    alert("Parametri edge applicati. Esegui MATCH per usare i valori scelti.");
  });
}
// ---------- CC: Match diagnostics ----------
function renderDiag(diag, n_matches) {
  const el = document.getElementById("diag-content");
  if (!diag) {
    el.innerHTML = '<em style="color:#888">Diagnostica non disponibile</em>';
    return;
  }
  const dropTotal = (diag.drop_ncc_low || 0) + (diag.drop_min_score_post_avg || 0)
    + (diag.drop_recall_low || 0) + (diag.drop_bbox_out_of_scene || 0)
    + (diag.drop_nms_iou || 0);
  // Contextual hints when no match was found
  let hint = "";
  if (n_matches === 0) {
    if (diag.n_after_pre_nms === 0) {
      hint = `<div style="color:#f88; margin-top:6px">⚠ Nessun candidato sopra soglia.
        Prova: <b>min_score</b> o <b>top_thresh</b> (currently ${diag.top_thresh_used.toFixed(2)})</div>`;
    } else if (diag.drop_ncc_low > 0 && dropTotal === diag.drop_ncc_low) {
      hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_ncc_low} candidati droppati da NCC.
        Prova: <b>verify_threshold</b> (filtro_fp più leggero)</div>`;
    } else if (diag.drop_min_score_post_avg > 0) {
      hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_min_score_post_avg} match sotto min_score post-NCC.
        Prova: <b>min_score</b></div>`;
    } else if (diag.drop_recall_low > 0) {
      hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_recall_low} match con recall < ${diag.min_recall_used}.
        Prova: <b>min_recall</b></div>`;
    } else if (diag.drop_bbox_out_of_scene > 0) {
      hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_bbox_out_of_scene} match con bbox fuori scena.
        Centro derivato male: aumenta <b>min_score</b> o restringi <b>search_roi</b></div>`;
    }
  }
  const flags = [];
  if (diag.use_polarity) flags.push("polarity");
  if (diag.use_soft_score) flags.push("soft");
  if (diag.subpixel_lm) flags.push("subpix-LM");
  el.innerHTML = `
    <div><b>Pipeline pruning:</b></div>
    <div>varianti: ${diag.n_variants_total} top_eval=${diag.n_variants_top_evaluated}
      top_pass=${diag.n_variants_top_passed} full_eval=${diag.n_variants_full_evaluated}</div>
    <div><b>Candidati:</b> raw=${diag.n_raw_candidates}
      pre_nms=${diag.n_after_pre_nms} final=${diag.n_final}</div>
    <div><b>Drop reasons:</b> NCC=${diag.drop_ncc_low}, score=${diag.drop_min_score_post_avg},
      recall=${diag.drop_recall_low}, bbox=${diag.drop_bbox_out_of_scene}, NMS=${diag.drop_nms_iou}</div>
    <div><b>Soglie:</b> top=${diag.top_thresh_used.toFixed(2)},
      min_score=${diag.min_score_used.toFixed(2)},
      NCC=${diag.verify_threshold_used.toFixed(2)},
      recall=${diag.min_recall_used.toFixed(2)}</div>
    ${flags.length ? `<div><b>Flag attivi:</b> ${flags.join(", ")}</div>` : ""}
    ${hint}
  `;
  // Auto-open the panel when there are 0 matches (signals a problem)
  if (n_matches === 0) {
    document.getElementById("diag-panel").open = true;
  }
}
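The drop-reason total that `renderDiag` computes (the sum of per-stage counters, with missing or null counters treated as 0) can be sketched in Python as (key names come from the `diag` payload above; the helper name is illustrative):

```python
DROP_KEYS = (
    "drop_ncc_low", "drop_min_score_post_avg", "drop_recall_low",
    "drop_bbox_out_of_scene", "drop_nms_iou",
)

def drop_total(diag: dict) -> int:
    """Total candidates dropped across the filtering stages; absent or
    null counters count as 0, mirroring the `(x || 0)` guards in renderDiag."""
    return sum(int(diag.get(k) or 0) for k in DROP_KEYS)
```

This total is what lets the NCC-specific hint fire only when NCC accounts for every drop (`dropTotal === diag.drop_ncc_low`).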
// ---------- Auto-tune (Halcon-style) ----------
async function doAutoTune() {
  if (!state.model || !state.roi) {
@@ -608,6 +768,7 @@ window.addEventListener("DOMContentLoaded", async () => {
  document.getElementById("btn-unload-recipe").addEventListener("click",
    unloadRecipe);
  refreshRecipeList();
+ bindEdgePreviewControls();
  const slider = document.getElementById("p-min-score");
  slider.addEventListener("input", (e) => {
    document.getElementById("v-score").textContent =
@@ -45,6 +45,40 @@
  <canvas id="c-model" width="380" height="420"></canvas>
</div>
<div id="roi-info">ROI: (nessuna)</div>
<details id="edge-preview-panel" style="margin-top:10px">
  <summary>🔬 Anteprima edge / pulizia rumore</summary>
  <div style="font-size:11px; color:#aaa; margin:4px 0">
    Regola le soglie per togliere edge spuri (sporcizie). UCS rosso/verde
    sul baricentro feature.
  </div>
  <div class="ep-grid">
    <label class="ep-row">weak_grad <span id="ep-weak-v">30</span>
      <input type="range" id="ep-weak" min="5" max="200" value="30" step="1">
    </label>
    <label class="ep-row">strong_grad <span id="ep-strong-v">60</span>
      <input type="range" id="ep-strong" min="10" max="400" value="60" step="1">
    </label>
    <label class="ep-row">num_features <span id="ep-nf-v">96</span>
      <input type="range" id="ep-nf" min="16" max="300" value="96" step="1">
    </label>
    <label class="ep-row">spacing <span id="ep-sp-v">3</span>
      <input type="range" id="ep-sp" min="1" max="15" value="3" step="1">
    </label>
    <label class="ep-row" style="flex-direction:row; gap:6px">
      <input type="checkbox" id="ep-pol"> polarity
    </label>
    <button class="btn" id="btn-edge-apply" type="button"
            style="grid-column:1/-1">
      ✓ Applica ai parametri Avanzate
    </button>
  </div>
  <div class="canvas-wrap" style="margin-top:6px">
    <canvas id="c-edge-preview" width="380" height="380"></canvas>
  </div>
  <div id="edge-preview-info" style="font-size:11px; color:#888; margin-top:4px">
    Disegna ROI e apri questo pannello per generare anteprima
  </div>
</details>
</section>

<section class="col" id="col-scene">
@@ -214,6 +248,16 @@
  <div class="kv"><span>find:</span><span id="t-find">-</span></div>
  <div class="kv"><span>varianti:</span><span id="t-var">-</span></div>
  <div class="kv"><span>match:</span><span id="t-match">-</span></div>
<details id="diag-panel" style="margin-top:10px">
  <summary>🔍 Diagnostica (CC)</summary>
  <div id="diag-content" style="font-family:monospace; font-size:11px;
       background:#1a1a1a; padding:8px;
       border-radius:3px; margin-top:6px;
       line-height:1.5">
    <em style="color:#888">Esegui un MATCH per vedere la diagnostica</em>
  </div>
</details>
</section>
</main>
@@ -173,3 +173,18 @@ footer h2 {
}
.hc-row.hc-num label { font-size: 11px; color: #aaa; }
.hc-row.hc-num input { width: 100%; }
/* Edge preview panel */
.ep-grid {
  display: grid;
  grid-template-columns: 1fr 1fr;
  gap: 6px 12px;
  margin-top: 6px;
  font-size: 12px;
}
.ep-row {
  display: flex; flex-direction: column; gap: 2px;
  font-size: 11px; color: #aaa;
}
.ep-row input[type="range"] { width: 100%; }
.ep-row span { color: #fff; font-weight: bold; font-family: monospace; }