Compare commits

..

19 Commits

Author SHA1 Message Date
Adriano 8029a1e12b merge: UCS coerente centro pose 2026-05-05 12:04:24 +02:00
Adriano d37833076e fix: UCS coerente sul centro pose, no traslazione fissata sbagliata
L'UCS del match precedentemente proiettava il baricentro feature
template alla pose, ma:
- Il baricentro veniva calcolato da una variante a 0° (v0) i cui dx/dy
  sono offsets relativi al centro PADDED (non al centro template puro)
- _extract_features dipende dai parametri matcher che possono differire
  da quelli del preview se la ricetta e' caricata
- Risultato: UCS appariva con offset costante errato rispetto al centro
  visibile del pezzo

Fix: UCS sul centro POSE del match (m.cx, m.cy) = posizione del centro
template originale nella scena (questo e' esattamente cio' che
_subpixel_peak ritorna). Coerente, prevedibile, "fissato" sul centro
del pezzo.

Per coerenza visiva, anche preview_edges sposta UCS dal baricentro al
CENTRO ROI (rh/2, rw/2). Cosi' il modello mostra UCS nello stesso
identico punto relativo dove apparira' nel match dopo
traslazione+rotazione della pose.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 12:04:24 +02:00
Adriano e1ed9206a3 merge: fix UCS match + edge modello overlay 2026-05-05 11:58:21 +02:00
Adriano e84ae199ac fix: UCS match dimensione + orientamento Y + overlay edge modello
3 problemi visibili da screenshot:

1. UCS match troppo grande: usava 0.4 * lato bbox (~114 px su template
   286). Anteprima modello usa 0.15 * max(lato_template) (~42 px).
   Fix: stessa formula scalata per m.scale → coerenza dimensionale.

2. Asse Y match orientamento sbagliato: a m.angle_deg=0 puntava
   in alto invece che in basso (errore segno trigonometrico:
   sin(ax + pi/2) ≠ cos(ax) per il segno y-down).
   Fix corretto:
   - X axis = (cos(ax), -sin(ax))   # rotazione cv2 di (1, 0)
   - Y axis = (sin(ax), cos(ax))    # rotazione cv2 di (0, 1)
   Verificato: a ax=0 → X destra, Y giu' (matches modello).

3. Overlay edge modello orientato (richiesta utente): warpa template
   alla pose (cx, cy, angle, scale), applica hysteresis identica al
   matcher, disegna pixel edge come overlay verde brillante (60% alpha).
   Permette di vedere visivamente l'allineamento del modello sul pezzo
   rilevato.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 11:58:21 +02:00
Adriano 5f0c4542d3 merge: param edge in find+ricetta, match solo UCS 2026-05-05 11:37:00 +02:00
Adriano 29c034fb05 fix: param edge usati anche in find/ricetta + match overlay solo UCS
Due richieste utente:

1. Param di pulizia rumore (weak/strong/num_features/spacing dal pannello
   "Anteprima edge") devono essere usati anche in find e salvati nelle
   ricette. Prima l'utente li regolava ma erano ignorati: il match usava
   sempre i valori auto_tune.

   Fix:
   - SimpleMatchParams.edge_* (4 campi opzionali): None = usa auto_tune,
     valore = override
   - _simple_to_technical applica gli override se presenti, propagati
     a min_feature_spacing nel matcher init
   - Cache key matcher include min_feature_spacing
   - SaveRecipeParams stessi 4 campi: la ricetta salva i param di
     pulizia rumore identici a quelli del preview
   - UI readEdgeOverrides() legge sempre i valori slider ed inietta
     in body sia di /match_simple sia di POST /recipes

2. Match overlay sulla scena: solo UCS (X rosso, Y verde) ruotato
   secondo m.angle_deg, posizionato sul baricentro feature del
   modello (proiettato alla pose). Niente edge filtrati, niente
   cerchietti feature, niente bbox, niente label/score sulla scena
   reale: l'overlay deve essere pulito, gli edge si vedono solo
   nell'anteprima modello.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 11:37:00 +02:00
Adriano 6fb1efcab8 merge: fix UCS match + feature pre-computate 2026-05-05 11:02:04 +02:00
Adriano 35df4c473c fix: UCS match e numero feature ora coerenti con anteprima modello
Bug visibili da screenshot:
1. UCS match diverso da UCS anteprima modello (centro pose vs baricentro)
2. Numero feature disegnate < di quelle anteprima modello

Cause:
1. Match UCS era posto su (cx, cy) = centro template, mentre l'anteprima
   modello mostra UCS sul baricentro feature (mean fx, fy).
2. _draw_matches estraeva feature dal template warpato → re-quantizza
   gradient su immagine warp+interp, perdendo precisione vs feature
   pre-computate del matcher.

Fix:
- Match.variant_idx: nuovo field con indice variante usata dal find()
- _draw_matches usa lvl0.dx/dy/bin pre-computati invece di re-estrarre:
  * applica delta-rotation (m.angle_deg - var.angle_deg) per refine
    sub-step
  * proietta in scene coords intorno a (m.cx, m.cy)
  * stesso identico set di feature dell'anteprima modello (modulo
    rotazione+traslazione)
- UCS match calcolato sul baricentro delle feature warpate, non su
  (cx, cy) → coerente con UCS anteprima

Fallback (variant_idx == -1, es. ricetta caricata da save_model
prima di questo commit): usa estrazione warpata legacy.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 11:02:04 +02:00
Adriano 64f2c8b5dc merge: match overlay edges+UCS, no ROI 2026-05-05 10:55:54 +02:00
Adriano 7e076deb80 feat(web): match overlay con edge filtrati + UCS + rimozione bbox ROI
_draw_matches ora coerente con anteprima modello:

- Edge filtrati con stessa pipeline matcher (hysteresis weak/strong_grad)
  e selezione feature: l'overlay del match riflette esattamente quello
  che l'utente ha visto nel preview "Anteprima edge"
- Background tinta scura su pixel hysteresis (40% colore match)
- Feature scelte come dot colorati per bin (palette 16 bin)
- UCS rosso/verde sul centro pose: asse X destra, Y giu' (image y-down),
  ruotato secondo angle del match
- Origine UCS: cerchio bianco con bordo nero per visibilita'

Rimossi (richiesta utente "togli la ROI"):
- bbox poly perimetrale: ridondante, copriva il pezzo
- linea marker primo lato: sostituita da UCS rosso

Compatibilita': se matcher non passato (es. uso esterno), fallback
Canny legacy. Tutti e 3 endpoint match (/match, /match_simple,
/match_recipe) ora propagano il matcher a _draw_matches.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:55:54 +02:00
Adriano 852597ed51 merge: UI edge preview + UCS 2026-05-05 10:48:58 +02:00
Adriano a78884f950 feat(web): anteprima edge sul modello + tracker pulizia rumore + UCS baricentro
Pannello "🔬 Anteprima edge / pulizia rumore" sotto il canvas modello.
Permette tuning interattivo dei parametri di selezione edge per
togliere "sporcizie" (rumore di sfondo, edge spuri) prima di
trainare il matcher.

Server:
- POST /preview_edges: dato modello+ROI+param edge, ritorna immagine
  ROI con overlay:
  * heatmap magnitude gradient (sfondo)
  * verde scuro: pixel sopra hysteresis edge
  * cerchietti colorati per bin: feature scelte (palette 16 bin)
  * UCS rosso/verde sul baricentro feature (richiesta utente):
    asse X destra, Y giu' (image y-down)
  Ritorna anche stats: n_features, n_edge_strong, percentili magnitude,
  ucs_baricentro {cx, cy}

UI:
- Slider weak_grad/strong_grad/num_features/spacing + checkbox polarity
- Re-fetch debounced (200ms) ad ogni input → preview live
- Bottone "Applica ai parametri Avanzate": copia i valori scelti
  nei campi Avanzate del matcher principale
- Auto-fetch quando il pannello viene aperto

Use case: operatore vede SUBITO quali edge il matcher userebbe,
regola soglie per escludere rumore, applica e poi MATCH.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:48:58 +02:00
Adriano 543ae0f643 merge: UI pannello diagnostica 2026-05-05 10:41:26 +02:00
Adriano a12574f3c5 feat(web): pannello diagnostica match (CC) con hint contestuali
MatchResp ora include diag dict (CC feature). UI rendering:

- Nuovo pannello pieghevole "🔍 Diagnostica" sotto i tempi
- Per ogni match mostra:
  * pipeline pruning (vars total → top_eval → top_pass → full_eval)
  * candidati (raw → pre_nms → final)
  * drop reasons (NCC, score, recall, bbox, NMS) con counter
  * soglie effettive applicate
  * flag attivi (polarity, soft, subpix-LM)

- Quando 0 match → pannello si apre automaticamente + mostra hint
  contestuale specifico:
  * "0 candidati top" → suggerisce ↓ min_score / top_thresh
  * "tutti dropped da NCC" → ↓ verify_threshold (filtro_fp)
  * "score post-NCC sotto" → ↓ min_score
  * "recall basso" → ↓ min_recall
  * "bbox out-of-scene" → check pose / search_roi

Risolve il pattern "0 match perche'?" con guida actionable invece
del black-box. Tutti e 3 endpoint match (/match, /match_simple,
/match_recipe) propagano il diag.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:41:26 +02:00
Adriano 110dc87b08 merge: AA eval CLI 2026-05-05 10:10:00 +02:00
Adriano 2bb2cf63cc merge: II scene cache 2026-05-05 10:09:56 +02:00
Adriano ea6a9163ad merge: CC diagnostic mode 2026-05-05 10:09:56 +02:00
Adriano 1cc7881a51 feat: pm2d.eval - validation harness CLI per LineShapeMatcher
Tool da CLI per misurare oggettivamente la qualita' del matcher
su dataset etichettato. Halcon ha questo solo nell'IDE (HDevelop),
qui esposto come modulo Python testabile in CI.

Format dataset JSON:
  - template + mask
  - params init matcher (override)
  - find_params (override per find())
  - scenes con ground_truth: lista pose attese (cx, cy, angle, scale,
    tolerance_px, tolerance_deg)

Metriche per scena: TP/FP/FN, precision, recall, IoU medio bbox,
tempo find. Aggregato: precision globale, recall, F1.

Match-to-GT criterio: distanza centro <= tolerance_px AND
|angle| <= tolerance_deg, oppure IoU bbox >= 0.3.

Use case:
- regressione: confronto config A vs B oggettivo
- tuning: trovare param ottimi via grid-search guidato da F1
- validazione pre-deploy: report TP/FP/FN su dataset prod

Esposto come entry-point pm2d-eval (pyproject.toml).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:09:45 +02:00
Adriano 74a332a2dd feat: scene precompute cache (II Halcon-style)
LRU cache per scena: hash su prime 64KB bytes + parametri matcher
(weak/strong_grad, spread_radius, n_bins, pyramid_levels). Quando
hit, riusa:
- piramide grays
- spread_top + bit_active_top + density_top
- spread0 + bit_active_full + density_full

Tipico use case: UI tuning con slider min_score/verify_threshold/...
produce 10+ find() consecutive su scena identica. Risparmia
Sobel+dilate+popcount duplicati (~50ms su 1080p).

Speedup misurato: ~15% find() su 1080p (54ms su 351ms). Vantaggio
maggiore su template piccoli (kernel JIT veloce → scena precompute
domina). Cache size 4, invalidata in train() (template cambiato).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:07:27 +02:00
7 changed files with 758 additions and 57 deletions
+217
View File
@@ -0,0 +1,217 @@
"""CLI validation harness per LineShapeMatcher.
Usage:
python -m pm2d.eval dataset.json [opzioni]
Formato dataset (JSON):
{
"template": "path/to/template.png",
"mask": "path/to/mask.png", # opzionale
"params": { # opzionali, override su matcher init
"use_polarity": true,
"angle_step_deg": 5,
...
},
"find_params": { # opzionali, passati a find()
"min_score": 0.6,
"use_soft_score": true,
...
},
"scenes": [
{
"image": "path/to/scene1.png",
"ground_truth": [
{"cx": 320.0, "cy": 240.0, "angle_deg": 12.0,
"scale": 1.0, "tolerance_px": 5.0,
"tolerance_deg": 3.0}
]
}
]
}
Output: report precision/recall/IoU/timing per ogni scena + aggregati.
"""
from __future__ import annotations
import argparse
import json
import math
import sys
import time
from pathlib import Path
import cv2
import numpy as np
from pm2d.line_matcher import LineShapeMatcher, _poly_iou, _oriented_bbox_polygon
def _load_image(path: str | Path) -> np.ndarray:
img = cv2.imread(str(path), cv2.IMREAD_UNCHANGED)
if img is None:
raise FileNotFoundError(f"Immagine non trovata: {path}")
if img.ndim == 2:
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
return img
def _gt_to_poly(gt: dict, tw: int, th: int) -> np.ndarray:
"""Costruisce bbox poligonale per un ground truth."""
s = float(gt.get("scale", 1.0))
return _oriented_bbox_polygon(
float(gt["cx"]), float(gt["cy"]),
tw * s, th * s, float(gt["angle_deg"]),
)
def _match_to_gt(match, gt: dict, tw: int, th: int,
iou_thr: float = 0.3) -> bool:
"""True se il match corrisponde al ground truth.
Criterio: distanza centro <= tolerance_px AND |angle_deg - gt| <= tolerance_deg
OR IoU bbox >= iou_thr (fallback per pose con tolerance ampie).
"""
tol_px = float(gt.get("tolerance_px", 5.0))
tol_deg = float(gt.get("tolerance_deg", 3.0))
dx = match.cx - float(gt["cx"])
dy = match.cy - float(gt["cy"])
dist = math.hypot(dx, dy)
da = abs((match.angle_deg - float(gt["angle_deg"]) + 180) % 360 - 180)
if dist <= tol_px and da <= tol_deg:
return True
# Fallback IoU
poly_gt = _gt_to_poly(gt, tw, th)
poly_m = match.bbox_poly
if _poly_iou(poly_m, poly_gt) >= iou_thr:
return True
return False
def evaluate_scene(matcher: LineShapeMatcher, scene_bgr: np.ndarray,
gt_list: list[dict], find_params: dict,
tw: int, th: int) -> dict:
"""Esegue match e calcola TP/FP/FN per una scena."""
t0 = time.time()
matches = matcher.find(scene_bgr, **find_params)
elapsed = time.time() - t0
gt_matched = [False] * len(gt_list)
match_is_tp = [False] * len(matches)
iou_per_match = [0.0] * len(matches)
for i, m in enumerate(matches):
for j, gt in enumerate(gt_list):
if gt_matched[j]:
continue
if _match_to_gt(m, gt, tw, th):
gt_matched[j] = True
match_is_tp[i] = True
# Calcolo IoU per metrica
poly_gt = _gt_to_poly(gt, tw, th)
iou_per_match[i] = _poly_iou(m.bbox_poly, poly_gt)
break
tp = sum(match_is_tp)
fp = len(matches) - tp
fn = len(gt_list) - sum(gt_matched)
return {
"n_matches": len(matches),
"n_gt": len(gt_list),
"tp": tp, "fp": fp, "fn": fn,
"find_time_s": elapsed,
"iou_mean": float(np.mean([i for i, t in zip(iou_per_match, match_is_tp) if t])
if tp > 0 else 0.0),
"diag": (matcher.get_last_diag()
if hasattr(matcher, "get_last_diag") else None),
}
def run(dataset_path: str, scene_filter: str | None = None,
verbose: bool = False) -> dict:
"""Esegue eval su dataset, ritorna report aggregato."""
dataset_path = Path(dataset_path)
base = dataset_path.parent
with open(dataset_path) as f:
ds = json.load(f)
template = _load_image(base / ds["template"])
mask = None
if ds.get("mask"):
mask_img = cv2.imread(str(base / ds["mask"]), cv2.IMREAD_GRAYSCALE)
if mask_img is not None:
mask = (mask_img > 128).astype(np.uint8) * 255
init_params = ds.get("params", {})
find_params = ds.get("find_params", {})
matcher = LineShapeMatcher(**init_params)
n_var = matcher.train(template, mask=mask)
tw, th = matcher.template_size
print(f"Template: {ds['template']} ({tw}x{th}), {n_var} varianti")
print(f"Param matcher: {init_params}")
print(f"Param find: {find_params}")
print()
scenes = ds["scenes"]
if scene_filter:
scenes = [s for s in scenes if scene_filter in s["image"]]
rows = []
tot_tp = tot_fp = tot_fn = 0
tot_time = 0.0
for sc in scenes:
scene = _load_image(base / sc["image"])
gt = sc.get("ground_truth", [])
result = evaluate_scene(matcher, scene, gt, find_params, tw, th)
rows.append({"scene": sc["image"], **result})
tot_tp += result["tp"]; tot_fp += result["fp"]; tot_fn += result["fn"]
tot_time += result["find_time_s"]
prec = result["tp"] / max(1, result["tp"] + result["fp"])
rec = result["tp"] / max(1, result["tp"] + result["fn"])
line = (f" {sc['image']:30s} "
f"TP={result['tp']} FP={result['fp']} FN={result['fn']} "
f"P={prec:.2f} R={rec:.2f} "
f"IoU={result['iou_mean']:.2f} "
f"t={result['find_time_s']*1000:.0f}ms")
print(line)
if verbose and result["diag"] and hasattr(matcher, "_format_diag"):
print(f" diag: {matcher._format_diag(result['diag'])}")
# Aggregati
precision = tot_tp / max(1, tot_tp + tot_fp)
recall = tot_tp / max(1, tot_tp + tot_fn)
f1 = 2 * precision * recall / max(1e-9, precision + recall)
print()
print(f"AGGREGATO: precision={precision:.3f} recall={recall:.3f} "
f"F1={f1:.3f} TP={tot_tp} FP={tot_fp} FN={tot_fn}")
print(f"TIME: total={tot_time:.2f}s avg={tot_time / max(1, len(scenes)) * 1000:.0f}ms/scene")
return {
"precision": precision, "recall": recall, "f1": f1,
"tp": tot_tp, "fp": tot_fp, "fn": tot_fn,
"total_time_s": tot_time, "n_scenes": len(scenes),
"per_scene": rows,
}
def main(argv: list[str] | None = None) -> int:
p = argparse.ArgumentParser(
description="pm2d-eval: validation harness per LineShapeMatcher"
)
p.add_argument("dataset", help="JSON dataset (template + scenes + GT)")
p.add_argument("--scene-filter", default=None,
help="Filtro substring sui nomi scena (debug)")
p.add_argument("--verbose", "-v", action="store_true",
help="Stampa diag dict per ogni scena")
p.add_argument("--out", default=None,
help="Salva report JSON su file")
args = p.parse_args(argv)
report = run(args.dataset, scene_filter=args.scene_filter,
verbose=args.verbose)
if args.out:
with open(args.out, "w") as f:
json.dump(report, f, indent=2)
print(f"Report salvato: {args.out}")
return 0 if report["f1"] > 0.5 else 1
if __name__ == "__main__":
sys.exit(main())
+90 -20
View File
@@ -127,6 +127,7 @@ class Match:
scale: float scale: float
score: float score: float
bbox_poly: np.ndarray # (4, 2) float32 - 4 vertici ordinati (ruotato) bbox_poly: np.ndarray # (4, 2) float32 - 4 vertici ordinati (ruotato)
variant_idx: int = -1 # indice variante usata (per overlay coerente)
@dataclass @dataclass
@@ -512,8 +513,10 @@ class LineShapeMatcher:
self.variants.clear() self.variants.clear()
# Reset view list: template principale = view 0 # Reset view list: template principale = view 0
self._view_templates = [(gray.copy(), mask_full.copy())] self._view_templates = [(gray.copy(), mask_full.copy())]
# Invalida cache feature di refine: il template e cambiato. # Invalida cache: template/param cambiati → spread/feature obsoleti.
self._refine_feat_cache = {} self._refine_feat_cache = {}
if hasattr(self, "_scene_cache"):
self._scene_cache.clear()
self._build_variants_for_view(gray, mask_full, view_idx=0) self._build_variants_for_view(gray, mask_full, view_idx=0)
self._dedup_variants() self._dedup_variants()
return len(self.variants) return len(self.variants)
@@ -669,6 +672,51 @@ class LineShapeMatcher:
raw[b] = d.astype(np.float32) raw[b] = d.astype(np.float32)
return raw return raw
# --- Scene precompute cache (II Halcon-style) -----------------------
_SCENE_CACHE_SIZE = 4
def _scene_cache_key(self, gray: np.ndarray) -> str | None:
"""Hash compatto della scena + param che influenzano spread/density.
Hash su prime 64KB della scena (sufficiente discriminante per
scene fotografiche) + parametri matcher rilevanti. None se cache
disabilitata (es. scene troppo piccole).
"""
if gray.size < 100:
return None
try:
import hashlib
h = hashlib.md5()
sample = gray.tobytes()[:65536]
h.update(sample)
h.update(f"|{gray.shape}|{gray.dtype}".encode())
h.update(
f"|{self.weak_grad}|{self.strong_grad}"
f"|{self.spread_radius}|{self._n_bins}"
f"|{self.pyramid_levels}".encode()
)
return h.hexdigest()
except Exception:
return None
def _scene_cache_get(self, key: str) -> tuple | None:
cache = getattr(self, "_scene_cache", None)
if cache is None:
return None
v = cache.get(key)
if v is not None:
cache.move_to_end(key)
return v
def _scene_cache_put(self, key: str, value: tuple) -> None:
from collections import OrderedDict
if not hasattr(self, "_scene_cache"):
self._scene_cache = OrderedDict()
self._scene_cache[key] = value
self._scene_cache.move_to_end(key)
while len(self._scene_cache) > self._SCENE_CACHE_SIZE:
self._scene_cache.popitem(last=False)
def _spread_bitmap(self, gray: np.ndarray) -> np.ndarray: def _spread_bitmap(self, gray: np.ndarray) -> np.ndarray:
"""Spread bitmap: bit b acceso dove bin b è presente nel raggio. """Spread bitmap: bit b acceso dove bin b è presente nel raggio.
@@ -1367,18 +1415,31 @@ class LineShapeMatcher:
else: else:
gray0 = gray_full gray0 = gray_full
roi_offset = (0, 0) roi_offset = (0, 0)
grays = [gray0]
for _ in range(self.pyramid_levels - 1):
grays.append(cv2.pyrDown(grays[-1]))
top = len(grays) - 1
# Spread bitmap (uint8) al top level: 32× meno memoria della response # Cache pre-compute scena (II Halcon-style): hash bytes scene + param
# map float32 → MOLTO più cache-friendly per _score_by_shift. # gradient/spread → riusa spread piramide + density tra find()
spread_top = self._spread_bitmap(grays[top]) # consecutive con stessa scena (typical UI tuning: slider produce
bit_active_top = int( # 10+ find() su scena identica). Risparmia ~80% del costo non-kernel.
sum(1 << b for b in range(self._n_bins) cache_key = self._scene_cache_key(gray0)
if (spread_top & (spread_top.dtype.type(1) << b)).any()) cached = self._scene_cache_get(cache_key) if cache_key else None
) if cached is not None:
grays, spread_top, bit_active_top, density_top, spread0, \
bit_active_full, density_full, top = cached
else:
grays = [gray0]
for _ in range(self.pyramid_levels - 1):
grays.append(cv2.pyrDown(grays[-1]))
top = len(grays) - 1
spread_top = self._spread_bitmap(grays[top])
bit_active_top = int(
sum(1 << b for b in range(self._n_bins)
if (spread_top & (spread_top.dtype.type(1) << b)).any())
)
density_top = _jit_popcount(spread_top)
# spread0 + density_full computati piu sotto, quindi salvo dopo.
spread0 = None
bit_active_full = None
density_full = None
if nms_radius is None: if nms_radius is None:
nms_radius = max(8, min(self.template_size) // 2) nms_radius = max(8, min(self.template_size) // 2)
# Pruning adattivo allo step angolare: con step piccolo (<= 3 deg) # Pruning adattivo allo step angolare: con step piccolo (<= 3 deg)
@@ -1398,7 +1459,7 @@ class LineShapeMatcher:
diag["top_thresh_used"] = float(top_thresh) diag["top_thresh_used"] = float(top_thresh)
tw, th = self.template_size tw, th = self.template_size
density_top = _jit_popcount(spread_top) # density_top gia' computato sopra (cache o miss)
sf_top = 2 ** top sf_top = 2 ** top
bg_cache_top: dict[float, np.ndarray] = {} bg_cache_top: dict[float, np.ndarray] = {}
bg_cache_full: dict[float, np.ndarray] = {} bg_cache_full: dict[float, np.ndarray] = {}
@@ -1548,13 +1609,21 @@ class LineShapeMatcher:
diag["n_variants_top_passed"] = len(kept_coarse) diag["n_variants_top_passed"] = len(kept_coarse)
diag["n_variants_full_evaluated"] = len(kept_variants) diag["n_variants_full_evaluated"] = len(kept_variants)
# Full-res (parallelizzato) con bitmap # Full-res (parallelizzato) con bitmap.
spread0 = self._spread_bitmap(gray0) # Riusa cache se disponibile, altrimenti computa e salva.
bit_active_full = int( if spread0 is None:
sum(1 << b for b in range(self._n_bins) spread0 = self._spread_bitmap(gray0)
if (spread0 & (spread0.dtype.type(1) << b)).any()) bit_active_full = int(
) sum(1 << b for b in range(self._n_bins)
density_full = _jit_popcount(spread0) if (spread0 & (spread0.dtype.type(1) << b)).any())
)
density_full = _jit_popcount(spread0)
# Salva cache scena complete
if cache_key is not None:
self._scene_cache_put(cache_key, (
grays, spread_top, bit_active_top, density_top,
spread0, bit_active_full, density_full, top,
))
for sc in unique_scales: for sc in unique_scales:
bg_cache_full[sc] = _bg_for_scale(density_full, sc, 1) bg_cache_full[sc] = _bg_for_scale(density_full, sc, 1)
@@ -1795,6 +1864,7 @@ class LineShapeMatcher:
scale=var.scale, scale=var.scale,
score=score_f, score=score_f,
bbox_poly=poly, bbox_poly=poly,
variant_idx=int(vi),
)) ))
if len(kept) >= max_matches: if len(kept) >= max_matches:
break break
+200 -37
View File
@@ -78,6 +78,7 @@ def _matcher_cache_key(roi: np.ndarray, tech: dict) -> str:
h.update(roi.tobytes()) h.update(roi.tobytes())
# Solo parametri che influenzano il training # Solo parametri che influenzano il training
relevant = ("num_features", "weak_grad", "strong_grad", relevant = ("num_features", "weak_grad", "strong_grad",
"min_feature_spacing",
"angle_min", "angle_max", "angle_step", "angle_min", "angle_max", "angle_step",
"scale_min", "scale_max", "scale_step", "scale_min", "scale_max", "scale_step",
"spread_radius", "pyramid_levels") "spread_radius", "pyramid_levels")
@@ -131,45 +132,72 @@ def _encode_png(img: np.ndarray) -> bytes:
def _draw_matches(scene: np.ndarray, matches: list[Match], def _draw_matches(scene: np.ndarray, matches: list[Match],
template_gray: np.ndarray | None) -> np.ndarray: template_gray: np.ndarray | None,
matcher: "LineShapeMatcher | None" = None) -> np.ndarray:
"""Disegna SOLO UCS (richiesta utente) per ogni match trovato.
UCS = sistema di coordinate (X rosso, Y verde) posizionato sul
baricentro feature del modello, ruotato secondo l'angolo del match.
Niente edge, niente cerchietti feature, niente bbox: i match sulla
scena reale devono essere puliti, gli edge filtrati si vedono solo
nell'anteprima modello.
"""
out = scene.copy() out = scene.copy()
H, W = scene.shape[:2] # Lunghezza assi UCS: stessa formula dell'anteprima modello
palette = [ # (0.15 * max lato template) scalata per m.scale → coerenza dimensionale.
(0, 255, 0), (0, 200, 255), (255, 100, 100), (255, 200, 0), if matcher is not None and matcher.template_size != (0, 0):
(200, 0, 255), (100, 255, 200), (255, 0, 0), (0, 255, 255), L_base = int(0.15 * max(matcher.template_size))
] else:
L_base = 30
H_scene, W_scene = scene.shape[:2]
for i, m in enumerate(matches): for i, m in enumerate(matches):
color = palette[i % len(palette)] # UCS posizionato esattamente sul CENTRO POSE del match (m.cx, m.cy):
if template_gray is not None: # equivale al centro template traslato alla scena, ruotato con
# m.angle_deg. Coerente con UCS dell'anteprima modello che ora
# e' anche sul centro ROI (vedi preview_edges).
ax = np.deg2rad(m.angle_deg)
ca, sa = np.cos(ax), np.sin(ax)
cx, cy = int(round(m.cx)), int(round(m.cy))
# Overlay edge del modello orientato (richiesta utente):
# warpa template alla pose, applica hysteresis identica al matcher,
# disegna pixel edge come overlay verde tenue.
if template_gray is not None and matcher is not None:
t = template_gray t = template_gray
th, tw = t.shape th, tw = t.shape
edge = cv2.Canny(t, 50, 150)
cx_t = (tw - 1) / 2.0; cy_t = (th - 1) / 2.0 cx_t = (tw - 1) / 2.0; cy_t = (th - 1) / 2.0
M = cv2.getRotationMatrix2D((cx_t, cy_t), m.angle_deg, m.scale) M = cv2.getRotationMatrix2D((cx_t, cy_t), m.angle_deg, m.scale)
M[0, 2] += m.cx - cx_t M[0, 2] += m.cx - cx_t
M[1, 2] += m.cy - cy_t M[1, 2] += m.cy - cy_t
warped = cv2.warpAffine(edge, M, (W, H), warped_gray = cv2.warpAffine(
flags=cv2.INTER_NEAREST, borderValue=0) t, M, (W_scene, H_scene),
mask = warped > 0 flags=cv2.INTER_LINEAR, borderValue=0)
if mask.any(): mag, _ = matcher._gradient(warped_gray)
overlay = np.zeros_like(out) if matcher.weak_grad < matcher.strong_grad:
overlay[mask] = color edge_mask = matcher._hysteresis_mask(mag)
out[mask] = (0.3 * out[mask] + 0.7 * overlay[mask]).astype(np.uint8) else:
poly = m.bbox_poly.astype(np.int32).reshape(-1, 1, 2) edge_mask = mag >= matcher.strong_grad
cv2.polylines(out, [poly], True, color, 2, cv2.LINE_AA) if edge_mask.any():
p0 = tuple(m.bbox_poly[0].astype(int)) edge_overlay = np.zeros_like(out)
p1 = tuple(m.bbox_poly[1].astype(int)) edge_overlay[edge_mask] = (0, 220, 0) # verde brillante
cv2.line(out, p0, p1, color, 4, cv2.LINE_AA) out = cv2.addWeighted(out, 1.0, edge_overlay, 0.6, 0)
cx, cy = int(round(m.cx)), int(round(m.cy)) L = max(20, int(L_base * m.scale))
cv2.drawMarker(out, (cx, cy), color, cv2.MARKER_CROSS, 22, 2, cv2.LINE_AA) # X axis = rotazione di (1, 0) con cv2 matrix → (cos, -sin)
L = int(np.linalg.norm(m.bbox_poly[1] - m.bbox_poly[0])) // 2 x_end = (int(cx + L * ca), int(cy - L * sa))
a = np.deg2rad(m.angle_deg) # Y axis = rotazione di (0, 1) con cv2 matrix → (sin, cos)
cv2.arrowedLine(out, (cx, cy), # A m.angle_deg=0 deve puntare GIU' (image y-down convenzione modello)
(int(cx + L * np.cos(a)), int(cy - L * np.sin(a))), y_end = (int(cx + L * sa), int(cy + L * ca))
color, 2, cv2.LINE_AA, tipLength=0.2) cv2.arrowedLine(out, (cx, cy), x_end,
label = f"#{i+1} {m.angle_deg:.0f}d s={m.scale:.2f} {m.score:.2f}" (0, 0, 255), 2, cv2.LINE_AA, tipLength=0.2)
cv2.putText(out, label, (cx + 8, cy - 8), cv2.putText(out, "X", (x_end[0] + 4, x_end[1] + 5),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2, cv2.LINE_AA) cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
cv2.arrowedLine(out, (cx, cy), y_end,
(0, 255, 0), 2, cv2.LINE_AA, tipLength=0.2)
cv2.putText(out, "Y", (y_end[0] + 4, y_end[1] + 12),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1, cv2.LINE_AA)
# Origine UCS: cerchio bianco con bordo nero
cv2.circle(out, (cx, cy), 4, (0, 0, 0), -1, cv2.LINE_AA)
cv2.circle(out, (cx, cy), 3, (255, 255, 255), -1, cv2.LINE_AA)
return out return out
@@ -217,6 +245,7 @@ class MatchResp(BaseModel):
find_time: float find_time: float
num_variants: int num_variants: int
annotated_id: str annotated_id: str
diag: dict | None = None # CC: diagnostica pipeline (drop reasons)
class TuneParams(BaseModel): class TuneParams(BaseModel):
@@ -271,6 +300,15 @@ class SimpleMatchParams(BaseModel):
penalita_scala: float = 0.0 # 0 = score shape invariante, >0 = penalizza scala != 1 penalita_scala: float = 0.0 # 0 = score shape invariante, >0 = penalizza scala != 1
min_score: float = 0.65 min_score: float = 0.65
max_matches: int = 25 max_matches: int = 25
# --- Override edge da pannello "Anteprima edge" (None = auto_tune) ---
# Quando settati, sovrascrivono i valori derivati da auto_tune e
# vengono usati identici sia nel training del matcher sia nel find.
# Salvati nella ricetta cosi' la stessa pulizia rumore e' replicata
# quando la ricetta viene caricata.
edge_weak_grad: float | None = None
edge_strong_grad: float | None = None
edge_num_features: int | None = None
edge_min_feature_spacing: int | None = None
# --- Halcon-mode flags (default off = backward compat) --- # --- Halcon-mode flags (default off = backward compat) ---
# Init-time (richiede ri-train se cambiato) # Init-time (richiede ri-train se cambiato)
use_polarity: bool = False # F: 16 bin orientation mod 2pi use_polarity: bool = False # F: 16 bin orientation mod 2pi
@@ -319,10 +357,24 @@ def _simple_to_technical(
smin, smax, sstep = SCALE_PRESETS.get(p.scala, (1.0, 1.0, 0.1)) smin, smax, sstep = SCALE_PRESETS.get(p.scala, (1.0, 1.0, 0.1))
ang_step = PRECISION_ANGLE_STEP.get(p.precisione, 5.0) ang_step = PRECISION_ANGLE_STEP.get(p.precisione, 5.0)
# Override edge dal pannello "Anteprima edge" se utente li ha settati.
# Questi sostituiscono i valori auto_tune nel training del matcher,
# garantendo che la selezione edge identica a quella del preview
# venga usata sia in training sia in find.
weak_g = (p.edge_weak_grad if p.edge_weak_grad is not None
else tune["weak_grad"])
strong_g = (p.edge_strong_grad if p.edge_strong_grad is not None
else tune["strong_grad"])
n_feat = (p.edge_num_features if p.edge_num_features is not None
else nf)
min_sp = (p.edge_min_feature_spacing if p.edge_min_feature_spacing is not None
else 3)
return { return {
"num_features": nf, "num_features": n_feat,
"weak_grad": tune["weak_grad"], "weak_grad": weak_g,
"strong_grad": tune["strong_grad"], "strong_grad": strong_g,
"min_feature_spacing": min_sp,
"spread_radius": spread, "spread_radius": spread,
"pyramid_levels": pyr, "pyramid_levels": pyr,
"angle_min": 0.0, "angle_min": 0.0,
@@ -510,7 +562,7 @@ def match(p: MatchParams):
# Render annotated image # Render annotated image
tg = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY) tg = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY)
annotated = _draw_matches(scene, matches, tg) annotated = _draw_matches(scene, matches, tg, matcher=m)
ann_id = _store_image(annotated) ann_id = _store_image(annotated)
return MatchResp( return MatchResp(
@@ -521,6 +573,7 @@ def match(p: MatchParams):
) for m_ in matches], ) for m_ in matches],
train_time=t_train, find_time=t_find, train_time=t_train, find_time=t_find,
num_variants=n, annotated_id=ann_id, num_variants=n, annotated_id=ann_id,
diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
) )
@@ -557,6 +610,7 @@ def match_simple(p: SimpleMatchParams):
scale_range=(tech["scale_min"], tech["scale_max"]), scale_range=(tech["scale_min"], tech["scale_max"]),
scale_step=tech["scale_step"], scale_step=tech["scale_step"],
spread_radius=tech["spread_radius"], spread_radius=tech["spread_radius"],
min_feature_spacing=tech.get("min_feature_spacing", 3),
pyramid_levels=tech["pyramid_levels"], pyramid_levels=tech["pyramid_levels"],
use_polarity=p.use_polarity, use_polarity=p.use_polarity,
use_gpu=p.use_gpu, use_gpu=p.use_gpu,
@@ -586,7 +640,7 @@ def match_simple(p: SimpleMatchParams):
t_find = time.time() - t0 t_find = time.time() - t0
tg = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY) tg = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY)
annotated = _draw_matches(scene, matches, tg) annotated = _draw_matches(scene, matches, tg, matcher=m)
ann_id = _store_image(annotated) ann_id = _store_image(annotated)
return MatchResp( return MatchResp(
@@ -596,6 +650,7 @@ def match_simple(p: SimpleMatchParams):
) for mt in matches], ) for mt in matches],
train_time=t_train, find_time=t_find, train_time=t_train, find_time=t_find,
num_variants=n, annotated_id=ann_id, num_variants=n, annotated_id=ann_id,
diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
) )
@@ -625,9 +680,112 @@ class SaveRecipeParams(BaseModel):
precisione: str = "normale" precisione: str = "normale"
use_polarity: bool = False use_polarity: bool = False
use_gpu: bool = False use_gpu: bool = False
# Override edge dal pannello "Anteprima edge" (None = auto_tune)
edge_weak_grad: float | None = None
edge_strong_grad: float | None = None
edge_num_features: int | None = None
edge_min_feature_spacing: int | None = None
name: str # nome file ricetta (no path) name: str # nome file ricetta (no path)
class EdgePreviewParams(BaseModel):
model_id: str
roi: list[int]
weak_grad: float = 30.0
strong_grad: float = 60.0
num_features: int = 96
min_feature_spacing: int = 3
use_polarity: bool = False
@app.post("/preview_edges")
def preview_edges(p: EdgePreviewParams):
"""Estrae edge feature dalla ROI con i parametri dati e ritorna
immagine annotata con i pixel selezionati come overlay.
Permette tuning interattivo delle soglie weak/strong_grad e
num_features per "togliere le sporcizie" (rumore di sfondo,
edge spuri) prima di trainare il matcher vero.
"""
model = _load_image(p.model_id)
if model is None:
raise HTTPException(404, "Modello non trovato")
x, y, w, h = p.roi
H_m, W_m = model.shape[:2]
x = max(0, min(int(x), W_m - 1)); y = max(0, min(int(y), H_m - 1))
w = max(1, min(int(w), W_m - x)); h = max(1, min(int(h), H_m - y))
roi_img = model[y:y + h, x:x + w]
# Matcher temporaneo solo per estrazione feature (no train completo)
m = LineShapeMatcher(
weak_grad=p.weak_grad,
strong_grad=p.strong_grad,
num_features=p.num_features,
min_feature_spacing=p.min_feature_spacing,
use_polarity=p.use_polarity,
)
gray = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY) if roi_img.ndim == 3 else roi_img
mag, bins = m._gradient(gray)
fx, fy, fb = m._extract_features(mag, bins, None)
# Mostra anche i pixel "weak/strong" come heatmap di sfondo
out = roi_img.copy() if roi_img.ndim == 3 else cv2.cvtColor(roi_img, cv2.COLOR_GRAY2BGR)
# Overlay magnitude leggera
mag_norm = np.clip(mag / max(1.0, mag.max()) * 255, 0, 255).astype(np.uint8)
mag_color = cv2.applyColorMap(mag_norm, cv2.COLORMAP_BONE)
out = cv2.addWeighted(out, 0.6, mag_color, 0.4, 0)
# Pixel "strong" con hysteresis: contorno verde scuro tenue
if m.weak_grad < m.strong_grad:
edge_mask = m._hysteresis_mask(mag).astype(np.uint8) * 255
else:
edge_mask = (mag >= m.strong_grad).astype(np.uint8) * 255
edge_overlay = np.zeros_like(out)
edge_overlay[edge_mask > 0] = (0, 80, 0) # verde scuro
out = cv2.addWeighted(out, 1.0, edge_overlay, 0.5, 0)
# Feature scelte: cerchietti colorati per bin
bin_colors = [
(255, 0, 0), (255, 128, 0), (255, 255, 0), (0, 255, 0),
(0, 255, 255), (0, 128, 255), (0, 0, 255), (255, 0, 255),
(255, 100, 100), (255, 180, 100), (255, 230, 100), (180, 255, 100),
(100, 255, 200), (100, 180, 255), (180, 100, 255), (255, 100, 200),
]
for i in range(len(fx)):
b = int(fb[i])
col = bin_colors[b % len(bin_colors)]
cv2.circle(out, (int(fx[i]), int(fy[i])), 2, col, -1, cv2.LINE_AA)
# UCS sul CENTRO ROI (coerente con _draw_matches che usa centro pose).
# In questo modo l'UCS visualizzato nel modello = UCS del match (modulo
# rotazione/traslazione data dalla pose del pezzo trovato).
rh, rw = roi_img.shape[:2]
bx, by = (rw - 1) // 2, (rh - 1) // 2
axis_len = max(20, int(0.15 * max(rw, rh)))
cv2.arrowedLine(out, (bx, by), (bx + axis_len, by),
(0, 0, 255), 2, cv2.LINE_AA, tipLength=0.2)
cv2.putText(out, "X", (bx + axis_len + 4, by + 5),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
cv2.arrowedLine(out, (bx, by), (bx, by + axis_len),
(0, 255, 0), 2, cv2.LINE_AA, tipLength=0.2)
cv2.putText(out, "Y", (bx + 4, by + axis_len + 12),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1, cv2.LINE_AA)
cv2.circle(out, (bx, by), 4, (0, 0, 0), -1, cv2.LINE_AA)
cv2.circle(out, (bx, by), 3, (255, 255, 255), -1, cv2.LINE_AA)
bary_cx, bary_cy = float(bx), float(by)
img_id = _store_image(out)
n_edge_strong = int((mag >= m.strong_grad).sum())
n_edge_total = int(edge_mask.sum() / 255)
return {
"preview_id": img_id,
"n_features": len(fx),
"n_edge_strong": n_edge_strong,
"n_edge_after_hysteresis": n_edge_total,
"mag_max": float(mag.max()),
"mag_p50": float(np.percentile(mag, 50)),
"mag_p85": float(np.percentile(mag, 85)),
"ucs_baricentro": (
{"cx": round(bary_cx, 2), "cy": round(bary_cy, 2)}
if bary_cx is not None else None
),
}
@app.post("/recipes") @app.post("/recipes")
def save_recipe(p: SaveRecipeParams): def save_recipe(p: SaveRecipeParams):
"""Allena matcher e salva su disco come ricetta riutilizzabile.""" """Allena matcher e salva su disco come ricetta riutilizzabile."""
@@ -641,6 +799,10 @@ def save_recipe(p: SaveRecipeParams):
tipo=p.tipo, simmetria=p.simmetria, scala=p.scala, tipo=p.tipo, simmetria=p.simmetria, scala=p.scala,
precisione=p.precisione, precisione=p.precisione,
use_polarity=p.use_polarity, use_gpu=p.use_gpu, use_polarity=p.use_polarity, use_gpu=p.use_gpu,
edge_weak_grad=p.edge_weak_grad,
edge_strong_grad=p.edge_strong_grad,
edge_num_features=p.edge_num_features,
edge_min_feature_spacing=p.edge_min_feature_spacing,
) )
tech = _simple_to_technical(sp, roi_img) tech = _simple_to_technical(sp, roi_img)
m = LineShapeMatcher( m = LineShapeMatcher(
@@ -760,7 +922,7 @@ def match_recipe(p: RecipeMatchParams):
) )
t_find = time.time() - t0 t_find = time.time() - t0
tg = m.template_gray if m.template_gray is not None else np.zeros((1, 1), np.uint8) tg = m.template_gray if m.template_gray is not None else np.zeros((1, 1), np.uint8)
annotated = _draw_matches(scene, matches, tg) annotated = _draw_matches(scene, matches, tg, matcher=m)
ann_id = _store_image(annotated) ann_id = _store_image(annotated)
return MatchResp( return MatchResp(
matches=[MatchResult( matches=[MatchResult(
@@ -769,6 +931,7 @@ def match_recipe(p: RecipeMatchParams):
) for mt in matches], ) for mt in matches],
train_time=0.0, find_time=t_find, train_time=0.0, find_time=t_find,
num_variants=len(m.variants), annotated_id=ann_id, num_variants=len(m.variants), annotated_id=ann_id,
diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
) )
+189
View File
@@ -53,10 +53,34 @@ function readUserParams() {
document.getElementById("p-penalita-scala").value), document.getElementById("p-penalita-scala").value),
min_score: parseFloat(document.getElementById("p-min-score").value), min_score: parseFloat(document.getElementById("p-min-score").value),
max_matches: parseInt(document.getElementById("p-max-matches").value, 10), max_matches: parseInt(document.getElementById("p-max-matches").value, 10),
...readEdgeOverrides(),
...readHalconFlags(), ...readHalconFlags(),
}; };
} }
function readEdgeOverrides() {
// Override edge dal pannello "Anteprima edge". Settati = utente li ha
// toccati (anche se uguali al default attuale). Vengono propagati a
// _simple_to_technical e usati identici sia in training sia in find.
// Inoltre salvati nella ricetta cosi' si replicano al load.
const _v = (id, parser) => {
const el = document.getElementById(id);
if (!el) return null;
const v = parser(el.value);
return Number.isFinite(v) ? v : null;
};
// Sempre passa i valori correnti degli slider: e' la richiesta utente
// che i param di pulizia rumore vengano usati anche nel find/ricetta.
const polCb = document.getElementById("hc-use-polarity");
return {
edge_weak_grad: _v("ep-weak", parseFloat),
edge_strong_grad: _v("ep-strong", parseFloat),
edge_num_features: _v("ep-nf", parseInt),
edge_min_feature_spacing: _v("ep-sp", parseInt),
use_polarity: polCb?.checked || document.getElementById("ep-pol")?.checked,
};
}
function readHalconFlags() { function readHalconFlags() {
// Halcon-mode toggle: tutti i flag default-off, esposti via "Modalità Halcon" // Halcon-mode toggle: tutti i flag default-off, esposti via "Modalità Halcon"
const $cb = (id) => document.getElementById(id)?.checked ?? false; const $cb = (id) => document.getElementById(id)?.checked ?? false;
@@ -336,6 +360,7 @@ async function doMatchRecipe() {
document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`; document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
document.getElementById("t-var").textContent = data.num_variants; document.getElementById("t-var").textContent = data.num_variants;
document.getElementById("t-match").textContent = data.matches.length; document.getElementById("t-match").textContent = data.matches.length;
renderDiag(data.diag, data.matches.length);
setStatus(`${data.matches.length} match trovati (ricetta ${state.active_recipe})`); setStatus(`${data.matches.length} match trovati (ricetta ${state.active_recipe})`);
} }
@@ -409,6 +434,7 @@ async function doMatch() {
document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`; document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
document.getElementById("t-var").textContent = data.num_variants; document.getElementById("t-var").textContent = data.num_variants;
document.getElementById("t-match").textContent = data.matches.length; document.getElementById("t-match").textContent = data.matches.length;
renderDiag(data.diag, data.matches.length);
setStatus(`${data.matches.length} match trovati${hasAdv ? " (avanzato)" : ""}`); setStatus(`${data.matches.length} match trovati${hasAdv ? " (avanzato)" : ""}`);
} }
@@ -436,6 +462,164 @@ function setStatus(s) {
} }
// ---------- Init ---------- // ---------- Init ----------
// ---------- Edge preview (clean rumore) ----------
let _epDebounce = null;
let _epLastImg = null;
async function fetchEdgePreview() {
if (!state.model || !state.roi) {
document.getElementById("edge-preview-info").textContent =
"Disegna prima la ROI sul modello";
return;
}
const body = {
model_id: state.model.id,
roi: state.roi,
weak_grad: parseFloat(document.getElementById("ep-weak").value),
strong_grad: parseFloat(document.getElementById("ep-strong").value),
num_features: parseInt(document.getElementById("ep-nf").value, 10),
min_feature_spacing: parseInt(document.getElementById("ep-sp").value, 10),
use_polarity: document.getElementById("ep-pol").checked,
};
try {
const r = await fetch("/preview_edges", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(body),
});
if (!r.ok) throw new Error(await r.text());
const j = await r.json();
_epLastImg = await loadImage(`/image/${j.preview_id}/raw?t=${Date.now()}`);
drawEdgePreview();
const ucs = j.ucs_baricentro
? ` | UCS=(${j.ucs_baricentro.cx},${j.ucs_baricentro.cy})`
: "";
document.getElementById("edge-preview-info").innerHTML =
`<b>${j.n_features}</b> feature scelte (di ${j.n_edge_after_hysteresis} edge totali)<br>` +
`mag: max=${j.mag_max.toFixed(0)} p50=${j.mag_p50.toFixed(0)} ` +
`p85=${j.mag_p85.toFixed(0)}${ucs}`;
} catch (e) {
document.getElementById("edge-preview-info").textContent =
`Errore preview: ${e.message}`;
}
}
function drawEdgePreview() {
const cnv = document.getElementById("c-edge-preview");
if (!_epLastImg) return;
const ctx = cnv.getContext("2d");
// Fit-contain
const r = Math.min(cnv.width / _epLastImg.width,
cnv.height / _epLastImg.height);
const w = _epLastImg.width * r;
const h = _epLastImg.height * r;
const ox = (cnv.width - w) / 2;
const oy = (cnv.height - h) / 2;
ctx.fillStyle = "#000"; ctx.fillRect(0, 0, cnv.width, cnv.height);
ctx.imageSmoothingEnabled = false;
ctx.drawImage(_epLastImg, ox, oy, w, h);
}
function scheduleEdgePreview() {
if (_epDebounce) clearTimeout(_epDebounce);
_epDebounce = setTimeout(fetchEdgePreview, 200);
}
function bindEdgePreviewControls() {
const slid = (id, valEl) => {
const el = document.getElementById(id);
const v = document.getElementById(valEl);
el.addEventListener("input", () => {
v.textContent = el.value;
scheduleEdgePreview();
});
};
slid("ep-weak", "ep-weak-v");
slid("ep-strong", "ep-strong-v");
slid("ep-nf", "ep-nf-v");
slid("ep-sp", "ep-sp-v");
document.getElementById("ep-pol").addEventListener("change",
scheduleEdgePreview);
// Auto-refresh quando il pannello viene aperto
document.getElementById("edge-preview-panel").addEventListener("toggle",
(e) => { if (e.target.open) fetchEdgePreview(); });
document.getElementById("btn-edge-apply").addEventListener("click", () => {
// Copia i valori correnti nei campi avanzati
const map = {
"ep-weak": "adv-weak_grad",
"ep-strong": "adv-strong_grad",
"ep-nf": "adv-num_features",
"ep-sp": "adv-min_feature_spacing",
};
for (const [src, dst] of Object.entries(map)) {
const dstEl = document.getElementById(dst);
if (dstEl) dstEl.value = document.getElementById(src).value;
}
// use_polarity: alla checkbox della modalita Halcon
const polCb = document.getElementById("hc-use-polarity");
if (polCb) polCb.checked = document.getElementById("ep-pol").checked;
// Apri pannello Avanzate per feedback
const advDetails = document.querySelectorAll("#col-params details");
advDetails.forEach((d) => { d.open = true; });
alert("Parametri edge applicati. Esegui MATCH per usare i valori scelti.");
});
}
// ---------- CC: Diagnostica match ----------
function renderDiag(diag, n_matches) {
const el = document.getElementById("diag-content");
if (!diag) {
el.innerHTML = '<em style="color:#888">Diagnostica non disponibile</em>';
return;
}
const dropTotal = (diag.drop_ncc_low || 0) + (diag.drop_min_score_post_avg || 0)
+ (diag.drop_recall_low || 0) + (diag.drop_bbox_out_of_scene || 0)
+ (diag.drop_nms_iou || 0);
// Hint contestuali se 0 match
let hint = "";
if (n_matches === 0) {
if (diag.n_after_pre_nms === 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ Nessun candidato sopra soglia.
Prova: <b>min_score</b> o <b>top_thresh</b> (currently ${diag.top_thresh_used.toFixed(2)})</div>`;
} else if (diag.drop_ncc_low > 0 && dropTotal === diag.drop_ncc_low) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_ncc_low} candidati droppati da NCC.
Prova: <b>verify_threshold</b> (filtro_fp più leggero)</div>`;
} else if (diag.drop_min_score_post_avg > 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_min_score_post_avg} match sotto min_score post-NCC.
Prova: <b>min_score</b></div>`;
} else if (diag.drop_recall_low > 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_recall_low} match con recall < ${diag.min_recall_used}.
Prova: <b>min_recall</b></div>`;
} else if (diag.drop_bbox_out_of_scene > 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_bbox_out_of_scene} match con bbox fuori scena.
Centro derivato male: aumenta <b>min_score</b> o restringi <b>search_roi</b></div>`;
}
}
const flags = [];
if (diag.use_polarity) flags.push("polarity");
if (diag.use_soft_score) flags.push("soft");
if (diag.subpixel_lm) flags.push("subpix-LM");
el.innerHTML = `
<div><b>Pipeline pruning:</b></div>
<div>varianti: ${diag.n_variants_total} top_eval=${diag.n_variants_top_evaluated}
top_pass=${diag.n_variants_top_passed} full_eval=${diag.n_variants_full_evaluated}</div>
<div><b>Candidati:</b> raw=${diag.n_raw_candidates}
pre_nms=${diag.n_after_pre_nms} final=${diag.n_final}</div>
<div><b>Drop reasons:</b> NCC=${diag.drop_ncc_low}, score=${diag.drop_min_score_post_avg},
recall=${diag.drop_recall_low}, bbox=${diag.drop_bbox_out_of_scene}, NMS=${diag.drop_nms_iou}</div>
<div><b>Soglie:</b> top=${diag.top_thresh_used.toFixed(2)},
min_score=${diag.min_score_used.toFixed(2)},
NCC=${diag.verify_threshold_used.toFixed(2)},
recall=${diag.min_recall_used.toFixed(2)}</div>
${flags.length ? `<div><b>Flag attivi:</b> ${flags.join(", ")}</div>` : ""}
${hint}
`;
// Auto-apri pannello se 0 match (segnala problema)
if (n_matches === 0) {
document.getElementById("diag-panel").open = true;
}
}
// ---------- Auto-tune (Halcon-style) ---------- // ---------- Auto-tune (Halcon-style) ----------
async function doAutoTune() { async function doAutoTune() {
if (!state.model || !state.roi) { if (!state.model || !state.roi) {
@@ -556,6 +740,10 @@ async function saveRecipe() {
precisione: user.precisione, precisione: user.precisione,
use_polarity: user.use_polarity, use_polarity: user.use_polarity,
use_gpu: user.use_gpu, use_gpu: user.use_gpu,
edge_weak_grad: user.edge_weak_grad,
edge_strong_grad: user.edge_strong_grad,
edge_num_features: user.edge_num_features,
edge_min_feature_spacing: user.edge_min_feature_spacing,
name: name, name: name,
}; };
try { try {
@@ -608,6 +796,7 @@ window.addEventListener("DOMContentLoaded", async () => {
document.getElementById("btn-unload-recipe").addEventListener("click", document.getElementById("btn-unload-recipe").addEventListener("click",
unloadRecipe); unloadRecipe);
refreshRecipeList(); refreshRecipeList();
bindEdgePreviewControls();
const slider = document.getElementById("p-min-score"); const slider = document.getElementById("p-min-score");
slider.addEventListener("input", (e) => { slider.addEventListener("input", (e) => {
document.getElementById("v-score").textContent = document.getElementById("v-score").textContent =
+44
View File
@@ -45,6 +45,40 @@
<canvas id="c-model" width="380" height="420"></canvas> <canvas id="c-model" width="380" height="420"></canvas>
</div> </div>
<div id="roi-info">ROI: (nessuna)</div> <div id="roi-info">ROI: (nessuna)</div>
<details id="edge-preview-panel" style="margin-top:10px">
<summary>🔬 Anteprima edge / pulizia rumore</summary>
<div style="font-size:11px; color:#aaa; margin:4px 0">
Regola le soglie per togliere edge spuri (sporcizie). UCS rosso/verde
sul baricentro feature.
</div>
<div class="ep-grid">
<label class="ep-row">weak_grad <span id="ep-weak-v">30</span>
<input type="range" id="ep-weak" min="5" max="200" value="30" step="1">
</label>
<label class="ep-row">strong_grad <span id="ep-strong-v">60</span>
<input type="range" id="ep-strong" min="10" max="400" value="60" step="1">
</label>
<label class="ep-row">num_features <span id="ep-nf-v">96</span>
<input type="range" id="ep-nf" min="16" max="300" value="96" step="1">
</label>
<label class="ep-row">spacing <span id="ep-sp-v">3</span>
<input type="range" id="ep-sp" min="1" max="15" value="3" step="1">
</label>
<label class="ep-row" style="flex-direction:row; gap:6px">
<input type="checkbox" id="ep-pol"> polarity
</label>
<button class="btn" id="btn-edge-apply" type="button"
style="grid-column:1/-1">
✓ Applica ai parametri Avanzate
</button>
</div>
<div class="canvas-wrap" style="margin-top:6px">
<canvas id="c-edge-preview" width="380" height="380"></canvas>
</div>
<div id="edge-preview-info" style="font-size:11px; color:#888; margin-top:4px">
Disegna ROI e apri questo pannello per generare anteprima
</div>
</details>
</section> </section>
<section class="col" id="col-scene"> <section class="col" id="col-scene">
@@ -214,6 +248,16 @@
<div class="kv"><span>find:</span><span id="t-find">-</span></div> <div class="kv"><span>find:</span><span id="t-find">-</span></div>
<div class="kv"><span>varianti:</span><span id="t-var">-</span></div> <div class="kv"><span>varianti:</span><span id="t-var">-</span></div>
<div class="kv"><span>match:</span><span id="t-match">-</span></div> <div class="kv"><span>match:</span><span id="t-match">-</span></div>
<details id="diag-panel" style="margin-top:10px">
<summary>🔍 Diagnostica (CC)</summary>
<div id="diag-content" style="font-family:monospace; font-size:11px;
background:#1a1a1a; padding:8px;
border-radius:3px; margin-top:6px;
line-height:1.5">
<em style="color:#888">Esegui un MATCH per vedere la diagnostica</em>
</div>
</details>
</section> </section>
</main> </main>
+15
View File
@@ -173,3 +173,18 @@ footer h2 {
} }
.hc-row.hc-num label { font-size: 11px; color: #aaa; } .hc-row.hc-num label { font-size: 11px; color: #aaa; }
.hc-row.hc-num input { width: 100%; } .hc-row.hc-num input { width: 100%; }
/* Edge preview panel */
.ep-grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 6px 12px;
margin-top: 6px;
font-size: 12px;
}
.ep-row {
display: flex; flex-direction: column; gap: 2px;
font-size: 11px; color: #aaa;
}
.ep-row input[type="range"] { width: 100%; }
.ep-row span { color: #fff; font-weight: bold; font-family: monospace; }
+3
View File
@@ -12,6 +12,9 @@ dependencies = [
"uvicorn[standard]>=0.34", "uvicorn[standard]>=0.34",
] ]
[project.scripts]
pm2d-eval = "pm2d.eval:main"
[dependency-groups] [dependency-groups]
dev = [ dev = [
"httpx>=0.28.1", "httpx>=0.28.1",