Compare commits

...

32 Commits

Author SHA1 Message Date
Adriano d335f866a3 merge: fast refine + visible UCS Y 2026-05-05 12:38:47 +02:00
Adriano 88f80a2cad fix: faster angle refine + cyan edge overlay (no clash with the Y axis)
Bugs visible in the screenshot:
1. Substantial slowdown: the previous fix added 16 golden iterations
   (was 8) + 3 parabolic-fit calls = ~19 _score_at_angle calls vs 11 before.
2. UCS Y axis invisible on the match: the edge overlay was bright green
   (0,220,0) and sat exactly on top of the green of the UCS Y axis.
3. Angle not correct: the final parabolic fit was unstable on symmetric
   templates (multiple nearby local maxima made it diverge away from
   the true peak found by the golden search).

Fix:
- _refine_angle: 10 golden iterations with tol 0.05 (a compromise between
  precision and speed). Removed the unstable final parabolic fit.
  search_radius stays at the full step (useful to recover bin extremes).
- Edge overlay color: cyan (BGR 255,200,0) instead of bright green.
  The green UCS Y axis is now clearly visible on top of the overlay.
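
A minimal sketch of the golden-section refine described above, assuming a
generic score callback; refine_angle, score_at, center_deg and radius_deg
are illustrative names:

    import math

    def refine_angle(score_at, center_deg, radius_deg, iters=10, tol=0.05):
        # Golden-section search for the angle maximizing score_at().
        inv_phi = (math.sqrt(5) - 1) / 2  # ~0.618
        a, b = center_deg - radius_deg, center_deg + radius_deg
        x1 = b - inv_phi * (b - a)
        x2 = a + inv_phi * (b - a)
        s1, s2 = score_at(x1), score_at(x2)
        for _ in range(iters):
            if b - a < tol:
                break
            if s1 < s2:   # the maximum lies in [x1, b]
                a, x1, s1 = x1, x2, s2
                x2 = a + inv_phi * (b - a)
                s2 = score_at(x2)
            else:         # the maximum lies in [a, x2]
                b, x2, s2 = x2, x1, s1
                x1 = b - inv_phi * (b - a)
                s1 = score_at(x1)
        return x1 if s1 >= s2 else x2

    # e.g. a smooth score peaking at 12.3 deg:
    print(refine_angle(lambda a: -(a - 12.3) ** 2, 10.0, 10.0))  # ~12.3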

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 12:38:47 +02:00
Adriano d52d0d0489 merge: rotation precision + default Nessuna 2026-05-05 12:32:17 +02:00
Adriano 9451a418a6 fix: rotation precision + UI symmetry default Nessuna
Rotation precision:
- _refine_angle: tol 0.1 -> 0.02 deg, 8 -> 16 golden-section iterations
- search_radius default = full step (was step/2): covers the worst case
  where the true peak is at the edge of the coarse angular bin
- Added a final parabolic fit on the 3 points around the best (precision
  <0.01 deg when the score map is smooth around the peak; sketched below)
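
The parabolic fit amounts to taking the vertex of the parabola through the
three samples around the discrete best; a minimal sketch with illustrative
names (x1 is the discrete best, h the sample spacing):

    def parabolic_peak(x1, h, s0, s1, s2):
        # Vertex of the parabola through (x1-h, s0), (x1, s1), (x1+h, s2).
        denom = s0 - 2.0 * s1 + s2
        if denom >= 0:        # not concave around x1: keep the discrete best
            return x1
        return x1 + 0.5 * h * (s0 - s2) / denom

    # e.g. angle samples at 10, 12, 14 deg:
    print(parabolic_peak(12.0, 2.0, 0.70, 0.82, 0.78))  # 12.5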

UI default:
- Symmetry "Nessuna" (none) as the default (it was "Invariante", which
  limited matching to a single pose - confusing for the typical operator).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 12:32:17 +02:00
Adriano 2c9160e4be merge: perf profile/bench/prune 2026-05-05 12:25:15 +02:00
Adriano 6d6dcc3b7a feat: profile mode + bench suite + skip empty bins + variant pruning via histogram
4 performance + visibility optimizations:

GGG. find(profile=True) → per-phase timing
- _checkpoint() records the ms between: to_gray, spread_top, top_pruning,
  full_kernel, refine_verify_nms
- get_last_profile() returns a dict of ms to identify the bottleneck
- Negligible runtime cost (~5 us per call)
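
A hedged usage sketch, assuming an already trained matcher m and a scene
image img (illustrative names; find(profile=True) and get_last_profile()
come from this commit):

    m.find(img, profile=True)
    for phase, ms in (m.get_last_profile() or {}).items():
        print(f"{phase:20s} {ms:7.2f} ms")
    # phases: to_gray, spread_top, top_pruning, full_kernel, refine_verify_nms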

HHH. pm2d.bench - runnable benchmark suite
- 3 scenarios (rect/L/circle x clean/cluttered scene)
- 5 configs (baseline, polarity, propagate, greedy, stride)
- Auto-adds gpu_umat when opencl_available()
- Table of ms/find + profile for every combo
- pm2d-bench entry point (--quick for a 2-iteration smoke test)

XX. Skip dilate for empty bins in _spread_bitmap
- Precompute the present bins via np.unique on the valid pixels
- On scenes with low orientation variance, skips 50-70% of the dilates
- Measured in the benchmark: spread_top from ~0.3ms to ~0.1ms in many cases

VV. Preliminary variant pruning via orientation histogram
- For each variant, compute overlap = (feature bins ∩ scene bins) /
  total feature bins
- If overlap < 0.5 * min_score → skip the variant (no kernel call);
  a sketch follows below
- n_variants_pruned_histogram counter in the diag
- Benefit: focused scenes (few dominant directions) skip template
  variants whose bins are absent from the scene
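
A sketch of the overlap test under illustrative names (feat_bins =
per-feature bin indices of one variant, scene_bins = boolean mask of the
bins active in the scene):

    import numpy as np

    def keep_variant(feat_bins, scene_bins, min_score):
        if feat_bins.size == 0:
            return True
        overlap = np.isin(feat_bins, np.flatnonzero(scene_bins)).mean()
        return overlap >= 0.5 * min_score   # below this the score cannot pass

    # 40% overlap < 0.5 * 0.9 = 0.45 -> variant skipped, no kernel call:
    print(keep_variant(np.array([0, 0, 1, 5, 5]),
                       np.array([True] + [False] * 15), 0.9))  # False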

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 12:25:15 +02:00
Adriano ee1c4a8f92 merge: fix spurious border edges in match overlay 2026-05-05 12:13:07 +02:00
Adriano 5002515b41 fix: remove spurious edges on the warped template borders (they looked like ROIs)
Bug: for every match, the model edge overlay also included the
PERIMETER of the warped template (black border borderValue=0 →
scene transition = strong artifact gradient). With N matches you saw
N green rectangles around the pieces, looking like "repeated ROIs".

Fix:
- Warp _train_mask to the pose as well
- Erode by (2*spread_radius+1) to discard the border transition band
  that produces the spurious gradient
- Mask edge_mask with warped_mask: only edges inside the piece
  are displayed

Result: a clean edge overlay showing only the true model edges
aligned to the found piece, no fake frames (sketched below).
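
A condensed sketch of the masking step, with illustrative names (M is the
pose affine already used to warp the template):

    import cv2
    import numpy as np

    def warped_interior_mask(train_mask, M, scene_w, scene_h, spread_radius):
        warped = cv2.warpAffine(train_mask, M, (scene_w, scene_h),
                                flags=cv2.INTER_NEAREST, borderValue=0)
        k = 2 * spread_radius + 1      # width of the transition band to drop
        return cv2.erode(warped, np.ones((k, k), np.uint8))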

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 12:13:07 +02:00
Adriano 8029a1e12b merge: UCS consistent with the pose center 2026-05-05 12:04:24 +02:00
Adriano d37833076e fix: UCS consistent on the pose center, no wrong fixed translation
The match UCS previously projected the template feature centroid to
the pose, but:
- The centroid was computed from a 0° variant (v0) whose dx/dy are
  offsets relative to the PADDED center (not the pure template center)
- _extract_features depends on matcher parameters that can differ
  from the preview's when a recipe is loaded
- Result: the UCS appeared with a constant wrong offset with respect
  to the visible center of the piece

Fix: UCS on the POSE center of the match (m.cx, m.cy) = position of the
original template center in the scene (which is exactly what
_subpixel_peak returns). Consistent, predictable, "pinned" to the
center of the piece.

For visual consistency, preview_edges also moves the UCS from the
centroid to the ROI CENTER (rh/2, rw/2). The model thus shows the UCS
at the exact same relative point where it will appear on the match
after the pose translation+rotation.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 12:04:24 +02:00
Adriano e1ed9206a3 merge: fix match UCS + model edge overlay 2026-05-05 11:58:21 +02:00
Adriano e84ae199ac fix: match UCS size + Y orientation + model edge overlay
3 problems visible in the screenshot:

1. Match UCS too large: it used 0.4 * bbox side (~114 px on a 286
   template). The model preview uses 0.15 * max(template side) (~42 px).
   Fix: same formula scaled by m.scale → dimensional consistency.

2. Match Y axis wrongly oriented: at m.angle_deg=0 it pointed up
   instead of down (trigonometric sign error: sin(ax + pi/2) ≠ cos(ax)
   once the y-down sign is applied).
   Correct fix (checked in the sketch after this list):
   - X axis = (cos(ax), -sin(ax))   # cv2 rotation of (1, 0)
   - Y axis = (sin(ax), cos(ax))    # cv2 rotation of (0, 1)
   Verified: at ax=0 → X right, Y down (matches the model).

3. Oriented model edge overlay (user request): warps the template to
   the pose (cx, cy, angle, scale), applies the same hysteresis as the
   matcher, draws the edge pixels as a bright green overlay (60% alpha).
   Lets the user visually check the model's alignment on the detected
   piece.
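
A quick numeric check of the two axis formulas, as a standalone
illustrative snippet:

    import numpy as np

    for ang in (0.0, 90.0):
        ax = np.deg2rad(ang)
        x_axis = (np.cos(ax), -np.sin(ax))   # cv2 rotation of (1, 0)
        y_axis = (np.sin(ax),  np.cos(ax))   # cv2 rotation of (0, 1)
        print(ang, np.round(x_axis, 3), np.round(y_axis, 3))
    # ang=0  -> X=(1, 0) right, Y=(0, 1) down (image y-down)
    # ang=90 -> X=(0, -1) up,   Y=(1, 0) right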

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 11:58:21 +02:00
Adriano 5f0c4542d3 merge: edge params in find+recipe, match UCS only 2026-05-05 11:37:00 +02:00
Adriano 29c034fb05 fix: edge params also used in find/recipe + match overlay UCS only
Two user requests:

1. The noise-cleanup params (weak/strong/num_features/spacing from the
   "Anteprima edge" panel) must also be used in find and saved in
   recipes. Before, the user tuned them but they were ignored: the
   match always used the auto_tune values.

   Fix:
   - SimpleMatchParams.edge_* (4 optional fields): None = use auto_tune,
     value = override
   - _simple_to_technical applies the overrides when present, propagated
     to min_feature_spacing in the matcher init
   - The matcher cache key includes min_feature_spacing
   - SaveRecipeParams gets the same 4 fields: the recipe saves
     noise-cleanup params identical to the preview's
   - UI readEdgeOverrides() always reads the slider values and injects
     them into the body of both /match_simple and POST /recipes

2. Match overlay on the scene: only the UCS (X red, Y green) rotated
   by m.angle_deg, positioned on the feature centroid of the model
   (projected to the pose). No filtered edges, no feature dots, no
   bbox, no label/score on the real scene: the overlay must be clean;
   the edges are shown only in the model preview.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 11:37:00 +02:00
Adriano 6fb1efcab8 merge: fix match UCS + precomputed features 2026-05-05 11:02:04 +02:00
Adriano 35df4c473c fix: match UCS and feature count now consistent with the model preview
Bugs visible in the screenshot:
1. Match UCS differs from the model preview UCS (pose center vs centroid)
2. Fewer features drawn than in the model preview

Causes:
1. The match UCS sat at (cx, cy) = template center, while the model
   preview shows the UCS at the feature centroid (mean fx, fy).
2. _draw_matches extracted features from the warped template →
   re-quantizes the gradient on the warp+interp image, losing precision
   vs the matcher's precomputed features.

Fix:
- Match.variant_idx: new field with the index of the variant used by find()
- _draw_matches uses the precomputed lvl0.dx/dy/bin instead of
  re-extracting:
  * applies the delta rotation (m.angle_deg - var.angle_deg) for the
    refine sub-step
  * projects into scene coords around (m.cx, m.cy)
  * exactly the same feature set as the model preview (modulo
    rotation+translation)
- Match UCS computed on the centroid of the warped features, not on
  (cx, cy) → consistent with the preview UCS

Fallback (variant_idx == -1, e.g. a recipe saved by save_model before
this commit): uses the legacy warped extraction.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 11:02:04 +02:00
Adriano 64f2c8b5dc merge: match overlay edges+UCS, no ROI 2026-05-05 10:55:54 +02:00
Adriano 7e076deb80 feat(web): match overlay with filtered edges + UCS + ROI bbox removal
_draw_matches now consistent with the model preview:

- Edges filtered with the same matcher pipeline (weak/strong_grad
  hysteresis) and feature selection: the match overlay reflects exactly
  what the user saw in the "Anteprima edge" preview
- Background tinted dark on hysteresis pixels (40% of the match color)
- Selected features as dots colored per bin (16-bin palette)
- Red/green UCS on the pose center: X axis right, Y down (image y-down),
  rotated by the match angle
- UCS origin: white circle with a black border for visibility

Removed (user request "remove the ROI"):
- perimeter bbox poly: redundant, covered the piece
- first-side marker line: replaced by the red UCS

Compatibility: if no matcher is passed (e.g. external use), falls back
to legacy Canny. All 3 match endpoints (/match, /match_simple,
/match_recipe) now propagate the matcher to _draw_matches.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:55:54 +02:00
Adriano 852597ed51 merge: UI edge preview + UCS 2026-05-05 10:48:58 +02:00
Adriano a78884f950 feat(web): edge preview on the model + noise-cleanup controls + centroid UCS
"🔬 Anteprima edge / pulizia rumore" panel below the model canvas.
Allows interactive tuning of the edge selection parameters to remove
"dirt" (background noise, spurious edges) before training the
matcher.

Server:
- POST /preview_edges: given model+ROI+edge params, returns the ROI
  image with an overlay:
  * gradient magnitude heatmap (background)
  * dark green: pixels above the edge hysteresis
  * dots colored per bin: the selected features (16-bin palette)
  * red/green UCS on the feature centroid (user request):
    X axis right, Y down (image y-down)
  Also returns stats: n_features, n_edge_strong, magnitude percentiles,
  ucs_baricentro {cx, cy}

UI:
- weak_grad/strong_grad/num_features/spacing sliders + polarity checkbox
- Debounced re-fetch (200ms) on every input → live preview
- "Applica ai parametri Avanzate" button: copies the chosen values
  into the main matcher's Advanced fields
- Auto-fetch when the panel is opened

Use case: the operator IMMEDIATELY sees which edges the matcher would
use, tunes the thresholds to exclude noise, applies, then runs MATCH.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:48:58 +02:00
Adriano 543ae0f643 merge: UI diagnostics panel 2026-05-05 10:41:26 +02:00
Adriano a12574f3c5 feat(web): match diagnostics panel (CC) with contextual hints
MatchResp now includes the diag dict (CC feature). UI rendering:

- New collapsible "🔍 Diagnostica" panel below the timings
- For each match it shows:
  * the pruning pipeline (vars total → top_eval → top_pass → full_eval)
  * candidates (raw → pre_nms → final)
  * drop reasons (NCC, score, recall, bbox, NMS) with counters
  * the effective thresholds applied
  * active flags (polarity, soft, subpix-LM)

- On 0 matches → the panel opens automatically and shows a specific
  contextual hint:
  * "0 top candidates" → suggests ↓ min_score / top_thresh
  * "all dropped by NCC" → ↓ verify_threshold (filtro_fp)
  * "post-NCC score below" → ↓ min_score
  * "low recall" → ↓ min_recall
  * "bbox out-of-scene" → check pose / search_roi

Solves the "0 matches, why?" pattern with actionable guidance instead
of a black box. All 3 match endpoints (/match, /match_simple,
/match_recipe) propagate the diag.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:41:26 +02:00
Adriano 110dc87b08 merge: AA eval CLI 2026-05-05 10:10:00 +02:00
Adriano 2bb2cf63cc merge: II scene cache 2026-05-05 10:09:56 +02:00
Adriano ea6a9163ad merge: CC diagnostic mode 2026-05-05 10:09:56 +02:00
Adriano 1cc7881a51 feat: pm2d.eval - CLI validation harness for LineShapeMatcher
CLI tool to objectively measure the matcher's quality on a labeled
dataset. Halcon has this only in the IDE (HDevelop); here it is
exposed as a Python module testable in CI.

JSON dataset format:
  - template + mask
  - matcher init params (override)
  - find_params (override for find())
  - scenes with ground_truth: list of expected poses (cx, cy, angle,
    scale, tolerance_px, tolerance_deg)

Per-scene metrics: TP/FP/FN, precision, recall, mean bbox IoU,
find time. Aggregate: global precision, recall, F1.

Match-to-GT criterion: center distance <= tolerance_px AND
|angle| <= tolerance_deg, or bbox IoU >= 0.3 (see the sketch below).
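
The wrap-safe angle difference implied by the criterion, as a standalone
illustrative sketch:

    def angle_diff_deg(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)

    print(angle_diff_deg(359.0, 2.0))  # 3.0, not 357.0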

Use cases:
- regression: objective comparison of config A vs config B
- tuning: find optimal params via F1-guided grid search
- pre-deploy validation: TP/FP/FN report on a production dataset

Exposed as the pm2d-eval entry point (pyproject.toml).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:09:45 +02:00
Adriano 74a332a2dd feat: scene precompute cache (II Halcon-style)
Per-scene LRU cache: hash of the first 64KB of bytes + matcher
parameters (weak/strong_grad, spread_radius, n_bins, pyramid_levels).
On a hit, it reuses:
- the gray pyramid
- spread_top + bit_active_top + density_top
- spread0 + bit_active_full + density_full

Typical use case: UI tuning with the min_score/verify_threshold/...
sliders produces 10+ consecutive find() calls on an identical scene.
Saves duplicated Sobel+dilate+popcount work (~50ms on 1080p).

Measured speedup: ~15% per find() on 1080p (54ms out of 351ms). The
benefit is larger on small templates (fast JIT kernel → scene
precompute dominates). Cache size 4, invalidated in train() (template
changed). A sketch of the key/LRU policy follows.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:07:27 +02:00
Adriano dae49eb4a3 feat: transparent diagnostic mode for find()
self._last_diag accumulates counters during find():
- Pruning pipeline: top_evaluated, top_passed, full_evaluated
- Candidates: n_raw, n_after_pre_nms, n_final
- Drop reasons: ncc_low, min_score_post_avg, recall_low,
  bbox_out_of_scene, nms_iou
- Effective params: top_thresh_used, verify_threshold_used, etc.

API:
- find(debug=True): prints a one-line summary to stderr
- m.get_last_diag(): returns the full dict for inspection

Use case: 0 matches? Look at where the candidates went
(e.g. drop_ncc_low=200 → NCC threshold too high) instead of guessing.
Solves the "black-box find" pattern.
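
A hedged usage sketch, assuming a trained matcher m and a scene
(illustrative names; the API calls and diag keys come from this commit):

    matches = m.find(scene, debug=True)    # one-line summary on stderr
    diag = m.get_last_diag()
    if not matches and diag["drop_ncc_low"] > 0:
        # candidates died at NCC verification -> relax verify_threshold
        matches = m.find(scene, verify_threshold=0.1)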

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:05:20 +02:00
Adriano 9218cb2741 chore: gitignore recipes/*.npz and remove Pippo.npz from tracking
Pre-trained recipes (compressed numpy binaries) are user data specific
to the machine/ROI/template and must not be versioned. Removed
Pippo.npz from the repo (kept on the local filesystem).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 23:21:46 +02:00
Adriano 159f9089a5 merge: UI recipe load 2026-05-04 23:20:52 +02:00
Adriano b718e81ccf feat(web): UI load/detach recipe + match with a loaded recipe
The "load" path of the V feature was missing: the user could save a
recipe but not load it from the UI. Added:

Server:
- POST /recipes/{name}/load: loads the .npz into the _RECIPE_MATCHERS cache
- POST /match_recipe: uses the loaded matcher without re-training (zero
  training time, only the find params are propagated)

UI:
- Dropdown of the available recipes (auto-refreshed from GET /recipes)
- "Carica" button activates the recipe + populates state.active_recipe
- "Stacca" button returns to the normal flow (training from the ROI)
- Status indicator shows the active recipe and its dimensions

doMatch dispatches automatically:
- active recipe → /match_recipe (no model/ROI needed)
- otherwise → /match or /match_simple as before

Use case: recipe tuned offline, deployed to production at runtime
without reloading model+ROI every time.
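
A minimal client sketch of the new flow; the endpoints come from this
commit, while the base URL and the (empty) request body are assumptions
since the payload schema is not shown here:

    import requests

    BASE = "http://localhost:8000"                      # assumed address
    requests.post(f"{BASE}/recipes/Pippo/load")         # activate recipe
    r = requests.post(f"{BASE}/match_recipe", json={})  # body: find params
    print(r.status_code)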

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 23:20:52 +02:00
Adriano d46197a81a merge: UI auto-tune button 2026-05-04 23:10:07 +02:00
9 changed files with 1316 additions and 61 deletions
+2
@@ -8,3 +8,5 @@ __pycache__/
 .DS_Store
 *.log
 models/
+# Pre-trained recipes (user-generated, do not version)
+recipes/*.npz
+179
@@ -0,0 +1,179 @@
"""Benchmark suite per LineShapeMatcher.
Usage:
python -m pm2d.bench [--quick]
Misura tempi find() su 3 template-tipo × 3 scene-tipo × N config:
- Template: rettangolo 80×80, L-shape 120×120, cerchio 150×150
- Scene: pulita 800×600, cluttered 1080×1920, multi-pezzo 1080×1920
- Config: baseline, polarity, gpu, pyramid_propagate, greediness=0.7
Per ogni config stampa: ms/find, ms per fase (profile), n. match.
Output tabellare per detectare regressioni in CI.
"""
from __future__ import annotations
import argparse
import time
import cv2
import numpy as np
from pm2d.line_matcher import LineShapeMatcher, opencl_available
# ---------- Template/scene synthesizers ----------
def _tpl_rect() -> np.ndarray:
t = np.zeros((80, 80, 3), np.uint8)
cv2.rectangle(t, (15, 15), (65, 65), (255, 255, 255), 3)
return t
def _tpl_lshape() -> np.ndarray:
t = np.zeros((120, 120, 3), np.uint8)
cv2.rectangle(t, (20, 20), (50, 100), (255, 255, 255), -1)
cv2.rectangle(t, (20, 70), (100, 100), (255, 255, 255), -1)
return t
def _tpl_circle() -> np.ndarray:
t = np.zeros((150, 150, 3), np.uint8)
cv2.circle(t, (75, 75), 60, (255, 255, 255), 4)
return t
def _scene_clean(W: int, H: int, n_pieces: int = 1) -> np.ndarray:
np.random.seed(0)
s = np.zeros((H, W, 3), np.uint8)
for _ in range(n_pieces):
cx = np.random.randint(80, W - 80)
cy = np.random.randint(80, H - 80)
cv2.rectangle(s, (cx - 25, cy - 25), (cx + 25, cy + 25), (255, 255, 255), 3)
return s
def _scene_cluttered(W: int, H: int) -> np.ndarray:
np.random.seed(0)
s = np.random.randint(50, 200, (H, W, 3), np.uint8)
cv2.rectangle(s, (300, 200), (350, 250), (255, 255, 255), 3)
cv2.rectangle(s, (1500, 800), (1550, 850), (255, 255, 255), 3)
return s
# ---------- Single benchmark ----------
def _bench_config(template, scene, config_name: str,
init_kw: dict, find_kw: dict,
n_iter: int = 5) -> dict:
m = LineShapeMatcher(**init_kw)
t0 = time.perf_counter()
n_var = m.train(template)
t_train = time.perf_counter() - t0
# Warmup (Numba JIT)
m.find(scene, **find_kw)
m.find(scene, **find_kw)
# Run
times_ms = []
for _ in range(n_iter):
t0 = time.perf_counter()
matches = m.find(scene, **find_kw)
times_ms.append((time.perf_counter() - t0) * 1000.0)
# Profile (1 iter)
m.find(scene, profile=True, **find_kw)
prof = m.get_last_profile() or {}
return {
"config": config_name,
"n_variants": n_var,
"t_train_s": round(t_train, 3),
"ms_avg": round(float(np.mean(times_ms)), 1),
"ms_min": round(float(np.min(times_ms)), 1),
"ms_max": round(float(np.max(times_ms)), 1),
"n_matches": len(matches),
"profile_ms": {k: round(v, 1) for k, v in prof.items()},
}
# ---------- Suite ----------
CONFIGS = [
("baseline",
{"angle_step_deg": 10, "pyramid_levels": 2},
{"min_score": 0.4, "verify_threshold": 0.2}),
("polarity",
{"angle_step_deg": 10, "pyramid_levels": 2, "use_polarity": True},
{"min_score": 0.4, "verify_threshold": 0.2}),
("propagate",
{"angle_step_deg": 10, "pyramid_levels": 3},
{"min_score": 0.4, "verify_threshold": 0.2,
"pyramid_propagate": True, "propagate_topk": 4}),
("greedy_07",
{"angle_step_deg": 10, "pyramid_levels": 2},
{"min_score": 0.4, "verify_threshold": 0.2, "greediness": 0.7}),
("stride2",
{"angle_step_deg": 10, "pyramid_levels": 2},
{"min_score": 0.4, "verify_threshold": 0.2, "coarse_stride": 2}),
]
if opencl_available():
CONFIGS.append(
("gpu_umat",
{"angle_step_deg": 10, "pyramid_levels": 2, "use_gpu": True},
{"min_score": 0.4, "verify_threshold": 0.2})
)
SCENARIOS = [
("rect_80 vs scene_800x600", _tpl_rect, lambda: _scene_clean(800, 600, 1)),
("lshape_120 vs scene_1080x1920_clutter",
_tpl_lshape, lambda: _scene_cluttered(1920, 1080)),
("circle_150 vs scene_clean_3pieces",
_tpl_circle, lambda: _scene_clean(1920, 1080, 3)),
]
def run(quick: bool = False) -> int:
n_iter = 2 if quick else 5
print(f"=== PM2D Benchmark Suite ({len(SCENARIOS)} scenarios x "
f"{len(CONFIGS)} configs, n_iter={n_iter}) ===\n")
rows = []
for sc_name, tpl_fn, scn_fn in SCENARIOS:
template = tpl_fn()
scene = scn_fn()
print(f"--- Scenario: {sc_name} (tpl={template.shape}, "
f"scn={scene.shape}) ---")
for cfg_name, init_kw, find_kw in CONFIGS:
r = _bench_config(template, scene, cfg_name, init_kw, find_kw,
n_iter=n_iter)
r["scenario"] = sc_name
rows.append(r)
prof_str = " ".join(
f"{k}={v:.1f}" for k, v in r["profile_ms"].items()
)
print(f" {cfg_name:14s} {r['ms_avg']:6.1f}ms "
f"(min {r['ms_min']:.1f} max {r['ms_max']:.1f}) "
f"vars={r['n_variants']:3d} "
f"matches={r['n_matches']:2d}")
if prof_str:
print(f" profile: {prof_str}")
print()
print("=== Done ===")
return 0
def main(argv: list[str] | None = None) -> int:
p = argparse.ArgumentParser(description="PM2D benchmark suite")
p.add_argument("--quick", action="store_true",
help="2 iterazioni per config invece di 5 (smoke test)")
args = p.parse_args(argv)
return run(quick=args.quick)
if __name__ == "__main__":
import sys
sys.exit(main())
+217
@@ -0,0 +1,217 @@
"""CLI validation harness per LineShapeMatcher.
Usage:
python -m pm2d.eval dataset.json [opzioni]
Formato dataset (JSON):
{
"template": "path/to/template.png",
"mask": "path/to/mask.png", # opzionale
"params": { # opzionali, override su matcher init
"use_polarity": true,
"angle_step_deg": 5,
...
},
"find_params": { # opzionali, passati a find()
"min_score": 0.6,
"use_soft_score": true,
...
},
"scenes": [
{
"image": "path/to/scene1.png",
"ground_truth": [
{"cx": 320.0, "cy": 240.0, "angle_deg": 12.0,
"scale": 1.0, "tolerance_px": 5.0,
"tolerance_deg": 3.0}
]
}
]
}
Output: report precision/recall/IoU/timing per ogni scena + aggregati.
"""
from __future__ import annotations
import argparse
import json
import math
import sys
import time
from pathlib import Path
import cv2
import numpy as np
from pm2d.line_matcher import LineShapeMatcher, _poly_iou, _oriented_bbox_polygon
def _load_image(path: str | Path) -> np.ndarray:
img = cv2.imread(str(path), cv2.IMREAD_UNCHANGED)
if img is None:
raise FileNotFoundError(f"Immagine non trovata: {path}")
if img.ndim == 2:
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
return img
def _gt_to_poly(gt: dict, tw: int, th: int) -> np.ndarray:
"""Costruisce bbox poligonale per un ground truth."""
s = float(gt.get("scale", 1.0))
return _oriented_bbox_polygon(
float(gt["cx"]), float(gt["cy"]),
tw * s, th * s, float(gt["angle_deg"]),
)
def _match_to_gt(match, gt: dict, tw: int, th: int,
iou_thr: float = 0.3) -> bool:
"""True se il match corrisponde al ground truth.
Criterio: distanza centro <= tolerance_px AND |angle_deg - gt| <= tolerance_deg
OR IoU bbox >= iou_thr (fallback per pose con tolerance ampie).
"""
tol_px = float(gt.get("tolerance_px", 5.0))
tol_deg = float(gt.get("tolerance_deg", 3.0))
dx = match.cx - float(gt["cx"])
dy = match.cy - float(gt["cy"])
dist = math.hypot(dx, dy)
da = abs((match.angle_deg - float(gt["angle_deg"]) + 180) % 360 - 180)
if dist <= tol_px and da <= tol_deg:
return True
# Fallback IoU
poly_gt = _gt_to_poly(gt, tw, th)
poly_m = match.bbox_poly
if _poly_iou(poly_m, poly_gt) >= iou_thr:
return True
return False
def evaluate_scene(matcher: LineShapeMatcher, scene_bgr: np.ndarray,
gt_list: list[dict], find_params: dict,
tw: int, th: int) -> dict:
"""Esegue match e calcola TP/FP/FN per una scena."""
t0 = time.time()
matches = matcher.find(scene_bgr, **find_params)
elapsed = time.time() - t0
gt_matched = [False] * len(gt_list)
match_is_tp = [False] * len(matches)
iou_per_match = [0.0] * len(matches)
for i, m in enumerate(matches):
for j, gt in enumerate(gt_list):
if gt_matched[j]:
continue
if _match_to_gt(m, gt, tw, th):
gt_matched[j] = True
match_is_tp[i] = True
                # Compute the IoU for the metric
poly_gt = _gt_to_poly(gt, tw, th)
iou_per_match[i] = _poly_iou(m.bbox_poly, poly_gt)
break
tp = sum(match_is_tp)
fp = len(matches) - tp
fn = len(gt_list) - sum(gt_matched)
return {
"n_matches": len(matches),
"n_gt": len(gt_list),
"tp": tp, "fp": fp, "fn": fn,
"find_time_s": elapsed,
"iou_mean": float(np.mean([i for i, t in zip(iou_per_match, match_is_tp) if t])
if tp > 0 else 0.0),
"diag": (matcher.get_last_diag()
if hasattr(matcher, "get_last_diag") else None),
}
def run(dataset_path: str, scene_filter: str | None = None,
verbose: bool = False) -> dict:
"""Esegue eval su dataset, ritorna report aggregato."""
dataset_path = Path(dataset_path)
base = dataset_path.parent
with open(dataset_path) as f:
ds = json.load(f)
template = _load_image(base / ds["template"])
mask = None
if ds.get("mask"):
mask_img = cv2.imread(str(base / ds["mask"]), cv2.IMREAD_GRAYSCALE)
if mask_img is not None:
mask = (mask_img > 128).astype(np.uint8) * 255
init_params = ds.get("params", {})
find_params = ds.get("find_params", {})
matcher = LineShapeMatcher(**init_params)
n_var = matcher.train(template, mask=mask)
tw, th = matcher.template_size
print(f"Template: {ds['template']} ({tw}x{th}), {n_var} varianti")
print(f"Param matcher: {init_params}")
print(f"Param find: {find_params}")
print()
scenes = ds["scenes"]
if scene_filter:
scenes = [s for s in scenes if scene_filter in s["image"]]
rows = []
tot_tp = tot_fp = tot_fn = 0
tot_time = 0.0
for sc in scenes:
scene = _load_image(base / sc["image"])
gt = sc.get("ground_truth", [])
result = evaluate_scene(matcher, scene, gt, find_params, tw, th)
rows.append({"scene": sc["image"], **result})
tot_tp += result["tp"]; tot_fp += result["fp"]; tot_fn += result["fn"]
tot_time += result["find_time_s"]
prec = result["tp"] / max(1, result["tp"] + result["fp"])
rec = result["tp"] / max(1, result["tp"] + result["fn"])
line = (f" {sc['image']:30s} "
f"TP={result['tp']} FP={result['fp']} FN={result['fn']} "
f"P={prec:.2f} R={rec:.2f} "
f"IoU={result['iou_mean']:.2f} "
f"t={result['find_time_s']*1000:.0f}ms")
print(line)
if verbose and result["diag"] and hasattr(matcher, "_format_diag"):
print(f" diag: {matcher._format_diag(result['diag'])}")
    # Aggregates
precision = tot_tp / max(1, tot_tp + tot_fp)
recall = tot_tp / max(1, tot_tp + tot_fn)
f1 = 2 * precision * recall / max(1e-9, precision + recall)
print()
print(f"AGGREGATO: precision={precision:.3f} recall={recall:.3f} "
f"F1={f1:.3f} TP={tot_tp} FP={tot_fp} FN={tot_fn}")
print(f"TIME: total={tot_time:.2f}s avg={tot_time / max(1, len(scenes)) * 1000:.0f}ms/scene")
return {
"precision": precision, "recall": recall, "f1": f1,
"tp": tot_tp, "fp": tot_fp, "fn": tot_fn,
"total_time_s": tot_time, "n_scenes": len(scenes),
"per_scene": rows,
}
def main(argv: list[str] | None = None) -> int:
p = argparse.ArgumentParser(
description="pm2d-eval: validation harness per LineShapeMatcher"
)
p.add_argument("dataset", help="JSON dataset (template + scenes + GT)")
p.add_argument("--scene-filter", default=None,
help="Filtro substring sui nomi scena (debug)")
p.add_argument("--verbose", "-v", action="store_true",
help="Stampa diag dict per ogni scena")
p.add_argument("--out", default=None,
help="Salva report JSON su file")
args = p.parse_args(argv)
report = run(args.dataset, scene_filter=args.scene_filter,
verbose=args.verbose)
if args.out:
with open(args.out, "w") as f:
json.dump(report, f, indent=2)
print(f"Report salvato: {args.out}")
return 0 if report["f1"] > 0.5 else 1
if __name__ == "__main__":
sys.exit(main())
+236 -10
@@ -127,6 +127,7 @@ class Match:
     scale: float
     score: float
     bbox_poly: np.ndarray  # (4, 2) float32 - 4 ordered vertices (rotated)
+    variant_idx: int = -1  # index of the variant used (for consistent overlay)


 @dataclass
@@ -512,8 +513,10 @@ class LineShapeMatcher:
         self.variants.clear()
         # Reset view list: main template = view 0
         self._view_templates = [(gray.copy(), mask_full.copy())]
-        # Invalidate the refine feature cache: the template has changed.
+        # Invalidate caches: template/params changed → spread/features stale.
         self._refine_feat_cache = {}
+        if hasattr(self, "_scene_cache"):
+            self._scene_cache.clear()
         self._build_variants_for_view(gray, mask_full, view_idx=0)
         self._dedup_variants()
         return len(self.variants)
@@ -669,6 +672,51 @@ class LineShapeMatcher:
             raw[b] = d.astype(np.float32)
         return raw

+    # --- Scene precompute cache (II Halcon-style) -----------------------
+    _SCENE_CACHE_SIZE = 4
+
+    def _scene_cache_key(self, gray: np.ndarray) -> str | None:
+        """Compact hash of the scene + the params affecting spread/density.
+
+        Hashes the first 64KB of the scene (a sufficient discriminator
+        for photographic scenes) + the relevant matcher parameters.
+        None if the cache is disabled (e.g. scenes too small).
+        """
+        if gray.size < 100:
+            return None
+        try:
+            import hashlib
+            h = hashlib.md5()
+            sample = gray.tobytes()[:65536]
+            h.update(sample)
+            h.update(f"|{gray.shape}|{gray.dtype}".encode())
+            h.update(
+                f"|{self.weak_grad}|{self.strong_grad}"
+                f"|{self.spread_radius}|{self._n_bins}"
+                f"|{self.pyramid_levels}".encode()
+            )
+            return h.hexdigest()
+        except Exception:
+            return None
+
+    def _scene_cache_get(self, key: str) -> tuple | None:
+        cache = getattr(self, "_scene_cache", None)
+        if cache is None:
+            return None
+        v = cache.get(key)
+        if v is not None:
+            cache.move_to_end(key)
+        return v
+
+    def _scene_cache_put(self, key: str, value: tuple) -> None:
+        from collections import OrderedDict
+        if not hasattr(self, "_scene_cache"):
+            self._scene_cache = OrderedDict()
+        self._scene_cache[key] = value
+        self._scene_cache.move_to_end(key)
+        while len(self._scene_cache) > self._SCENE_CACHE_SIZE:
+            self._scene_cache.popitem(last=False)
+
     def _spread_bitmap(self, gray: np.ndarray) -> np.ndarray:
         """Spread bitmap: bit b set where bin b is present within the radius.
@@ -688,7 +736,24 @@ class LineShapeMatcher:
         nb = self._n_bins
         dtype = np.uint16 if nb > 8 else np.uint8
         spread = np.zeros((H, W), dtype=dtype)
+        # XX optimization: skip the dilate for bins with no active pixels.
+        # On scenes with low orientation variance (e.g. industrial parts
+        # with few dominant directions) typically 50-70% of bins are empty.
+        # Precompute the present bins via the global mask; absent bins get
+        # no dilate (stay zero in the bitmap).
+        if isinstance(bins, np.ndarray):
+            valid_bins = bins[valid] if isinstance(valid, np.ndarray) else None
+            if valid_bins is not None and valid_bins.size > 0:
+                bin_present = np.zeros(nb, dtype=bool)
+                unique_bins = np.unique(valid_bins)
+                bin_present[unique_bins[unique_bins < nb]] = True
+            else:
+                bin_present = np.zeros(nb, dtype=bool)
+        else:
+            bin_present = np.ones(nb, dtype=bool)
         for b in range(nb):
+            if not bin_present[b]:
+                continue  # XX: no pixel of this bin above weak_grad
             mask_b = ((bins == b) & valid).astype(np.uint8)
             if self.use_gpu:
                 d = cv2.dilate(cv2.UMat(mask_b), kernel)
@@ -889,7 +954,7 @@ class LineShapeMatcher:
         # variant is quantized to multiples of angle_step (5 deg default).
         # Angular refine is essential for sub-step orientation.
         if search_radius is None:
-            search_radius = self._effective_angle_step() / 2.0
+            search_radius = self._effective_angle_step()

         h, w = template_gray.shape
         sw = max(16, int(round(w * scale)))
@@ -977,8 +1042,12 @@ class LineShapeMatcher:
         # Score at the origin as reference (ang offset 0)
         s0, cx0_s, cy0_s = _score_at_angle(0.0)
         best = (angle_deg, s0, cx0_s, cy0_s)
-        tol = 0.1  # degrees
-        for _ in range(8):
+        # Angular precision: 10 golden iterations with tol 0.05 deg.
+        # Speed/accuracy compromise: the extra parabolic fit was unstable
+        # on non-smooth score maps (symmetric templates produce multiple
+        # nearby local maxima that made it diverge).
+        tol = 0.05
+        for _ in range(10):
             if s1 > best[1]:
                 best = (angle_deg + x1, s1, cx1, cy1)
             if s2 > best[1]:
@@ -1309,6 +1378,8 @@ class LineShapeMatcher:
         min_recall: float = 0.0,
         use_soft_score: bool = False,
         subpixel_lm: bool = False,
+        debug: bool = False,
+        profile: bool = False,
     ) -> list[Match]:
         """
         scale_penalty: if > 0, reduces the score for matches at a scale != 1.0:
@@ -1326,7 +1397,48 @@ class LineShapeMatcher:
         if not self.variants:
             raise RuntimeError("Matcher non addestrato: chiamare train() prima.")

+        # Diagnostic counters: track why candidates get dropped along the
+        # pipeline. Exposed via get_last_diag() or printed implicitly
+        # when debug=True (see below).
+        diag = {
+            "n_variants_total": len(self.variants),
+            "n_variants_top_evaluated": 0,
+            "n_variants_top_passed": 0,
+            "n_variants_full_evaluated": 0,
+            "n_raw_candidates": 0,
+            "n_after_pre_nms": 0,
+            "drop_ncc_low": 0,
+            "drop_min_score_post_avg": 0,
+            "drop_recall_low": 0,
+            "drop_bbox_out_of_scene": 0,
+            "drop_nms_iou": 0,
+            "n_variants_pruned_histogram": 0,
+            "n_final": 0,
+            "top_thresh_used": 0.0,
+            "verify_threshold_used": float(verify_threshold),
+            "min_score_used": float(min_score),
+            "min_recall_used": float(min_recall),
+            "use_polarity": bool(self.use_polarity),
+            "use_soft_score": bool(use_soft_score),
+            "subpixel_lm": bool(subpixel_lm),
+        }
+        self._last_diag = diag
+
+        # GGG: profile mode → per-phase timing, exposed via get_last_profile()
+        import time as _time
+        prof = {} if profile else None
+        _t_prev = _time.perf_counter() if profile else 0.0
+
+        def _checkpoint(name: str):
+            nonlocal _t_prev
+            if prof is None:
+                return
+            now = _time.perf_counter()
+            prof[name] = (now - _t_prev) * 1000.0  # ms
+            _t_prev = now
+
+        self._last_profile = prof
+
         gray_full = self._to_gray(scene_bgr)
+        _checkpoint("to_gray")
         # Apply the search ROI: restrict the scene to a crop, remember the
         # offset to re-translate match coordinates at the end of the pipeline.
         if search_roi is not None:
@@ -1340,18 +1452,32 @@ class LineShapeMatcher:
         else:
             gray0 = gray_full
             roi_offset = (0, 0)

+        # Scene precompute cache (II Halcon-style): hash of the scene bytes
+        # + gradient/spread params → reuses the spread pyramid + density
+        # across consecutive find() calls on the same scene (typical UI
+        # tuning: sliders produce 10+ find() on an identical scene).
+        # Saves ~80% of the non-kernel cost.
+        cache_key = self._scene_cache_key(gray0)
+        cached = self._scene_cache_get(cache_key) if cache_key else None
+        if cached is not None:
+            grays, spread_top, bit_active_top, density_top, spread0, \
+                bit_active_full, density_full, top = cached
+        else:
-        grays = [gray0]
-        for _ in range(self.pyramid_levels - 1):
-            grays.append(cv2.pyrDown(grays[-1]))
-        top = len(grays) - 1
-        spread_top = self._spread_bitmap(grays[top])
-        bit_active_top = int(
-            sum(1 << b for b in range(self._n_bins)
-                if (spread_top & (spread_top.dtype.type(1) << b)).any())
-        )
+            grays = [gray0]
+            for _ in range(self.pyramid_levels - 1):
+                grays.append(cv2.pyrDown(grays[-1]))
+            top = len(grays) - 1
+            # Spread bitmap (uint8) at the top level: 32× less memory than
+            # the float32 response map → MUCH more cache-friendly for
+            # _score_by_shift.
+            spread_top = self._spread_bitmap(grays[top])
+            bit_active_top = int(
+                sum(1 << b for b in range(self._n_bins)
+                    if (spread_top & (spread_top.dtype.type(1) << b)).any())
+            )
+            density_top = _jit_popcount(spread_top)
+            # spread0 + density_full are computed further down, so save later.
+            spread0 = None
+            bit_active_full = None
+            density_full = None
+        _checkpoint("spread_top")

         if nms_radius is None:
             nms_radius = max(8, min(self.template_size) // 2)
         # Pruning adaptive to the angular step: with a small step (<= 3 deg)
@@ -1368,9 +1494,10 @@ class LineShapeMatcher:
             top_factor = max(top_factor, 0.7)
             cf_eff = 1
         top_thresh = min_score * top_factor
+        diag["top_thresh_used"] = float(top_thresh)
         tw, th = self.template_size
-        density_top = _jit_popcount(spread_top)
+        # density_top already computed above (cache hit or miss)
         sf_top = 2 ** top
         bg_cache_top: dict[float, np.ndarray] = {}
         bg_cache_full: dict[float, np.ndarray] = {}
@@ -1412,6 +1539,38 @@ class LineShapeMatcher:
                 end = min(n, i + half + 1)
                 neighbor_map[vi_c] = vi_sorted[start:end]

+        # VV: preliminary pruning via orientation histogram overlap.
+        # Active-scene-bins vs variant-feature-bins. If the variant has
+        # dominant bins the scene does not possess → score impossible,
+        # skip without calling the kernel. Cost: O(n_variants * 8 ops).
+        scene_bins = np.array(
+            [bool((bit_active_top >> b) & 1) for b in range(self._n_bins)],
+            dtype=bool,
+        )
+        if scene_bins.any():
+            n_scene_active = int(scene_bins.sum())
+            # Threshold: the variant must have >= 50% of its features in
+            # bins present in the scene. Below that the score is certainly
+            # < 0.5.
+            pruned_idx_list = []
+            n_pruned = 0
+            for vi in coarse_idx_list:
+                lvl = self.variants[vi].levels[
+                    min(top, len(self.variants[vi].levels) - 1)
+                ]
+                if len(lvl.bin) == 0:
+                    continue
+                feat_in_scene = int(np.isin(lvl.bin, np.where(scene_bins)[0]).sum())
+                ratio = feat_in_scene / len(lvl.bin)
+                if ratio < 0.5 * min_score:
+                    n_pruned += 1
+                    continue
+                pruned_idx_list.append(vi)
+            if n_pruned > 0 and pruned_idx_list:
+                coarse_idx_list = pruned_idx_list
+            diag["n_variants_pruned_histogram"] = n_pruned
+        else:
+            diag["n_variants_pruned_histogram"] = 0
+
         # Variant pruning via top-level (parallelized).
         # coarse_stride > 1: 1 pixel per stride (~stride^2 speed-up).
         # pyramid_propagate=True: top-K peaks to narrow the full-res search.
@@ -1453,6 +1612,7 @@ class LineShapeMatcher:
         kept_coarse: list[tuple[int, float]] = []
         all_top_scores: list[tuple[int, float]] = []
+        diag["n_variants_top_evaluated"] = len(coarse_idx_list)
         # batch_top: uses a single-call batch kernel with an outer prange over
         # variants. Wins over a threadpool when n_vars >> n_threads and when
         # the top-level H*W is small (JIT call overhead > kernel cost).
@@ -1506,6 +1666,7 @@ class LineShapeMatcher:
         kept_variants: list[tuple[int, float]] = [
             (vi, score_by_vi[vi]) for vi in expanded
         ]
+        _checkpoint("top_pruning")

         if not kept_variants:
             return []
@@ -1516,14 +1677,24 @@ class LineShapeMatcher:
         kept_variants.sort(key=lambda t: -t[1])
         max_vars_full = max(max_matches * 8, len(self.variants) // 2)
         kept_variants = kept_variants[:max_vars_full]
+        diag["n_variants_top_passed"] = len(kept_coarse)
+        diag["n_variants_full_evaluated"] = len(kept_variants)

-        # Full-res (parallelized) with bitmap
-        spread0 = self._spread_bitmap(gray0)
-        bit_active_full = int(
-            sum(1 << b for b in range(self._n_bins)
-                if (spread0 & (spread0.dtype.type(1) << b)).any())
-        )
-        density_full = _jit_popcount(spread0)
+        # Full-res (parallelized) with bitmap.
+        # Reuse the cache when available, otherwise compute and store.
+        if spread0 is None:
+            spread0 = self._spread_bitmap(gray0)
+            bit_active_full = int(
+                sum(1 << b for b in range(self._n_bins)
+                    if (spread0 & (spread0.dtype.type(1) << b)).any())
+            )
+            density_full = _jit_popcount(spread0)
+            # Save the complete scene cache
+            if cache_key is not None:
+                self._scene_cache_put(cache_key, (
+                    grays, spread_top, bit_active_top, density_top,
+                    spread0, bit_active_full, density_full, top,
+                ))

         for sc in unique_scales:
             bg_cache_full[sc] = _bg_for_scale(density_full, sc, 1)
@@ -1601,6 +1772,8 @@ class LineShapeMatcher:
                 raw.append((float(vals[i]), int(xs[i]), int(ys[i]), vi))
         raw.sort(key=lambda c: -c[0])
+        diag["n_raw_candidates"] = len(raw)
+        _checkpoint("full_kernel")

         # Map vi → score_map for subpixel/refinement
         score_maps = dict(candidates_per_var)
@@ -1632,6 +1805,7 @@ class LineShapeMatcher:
                 preliminary_int.append((score, xi, yi, vi))
                 if len(preliminary_int) >= pre_cap:
                     break
+        diag["n_after_pre_nms"] = len(preliminary_int)

         # Subpixel + refine + verify only on the pre-NMS candidates (max pre_cap)
         kept: list[Match] = []
@@ -1655,7 +1829,10 @@ class LineShapeMatcher:
             ang_f, score_f, cx_f, cy_f = self._refine_angle(
                 spread0, bit_active_full, self.template_gray, cx_f, cy_f,
                 var.angle_deg, var.scale, mask_full,
-                search_radius=self._effective_angle_step() / 2.0,
+                # Search radius extended to the full step (was step/2):
+                # covers the worst case where the true peak is at the edge
+                # of the coarse variant's angular bin.
+                search_radius=self._effective_angle_step(),
                 original_score=score,
             )
             # Halcon SubPixel='least_squares_high': iterative refinement
@@ -1678,6 +1855,7 @@ class LineShapeMatcher:
                 view_idx=getattr(var, "view_idx", 0),
             )
             if ncc < verify_threshold:
+                diag["drop_ncc_low"] += 1
                 continue
             score_f = (float(score_f) + max(0.0, ncc)) * 0.5
             # Soft-margin gradient similarity: replaces or complements the
@@ -1692,6 +1870,7 @@ class LineShapeMatcher:
             # knock the shape score below the user threshold. Without this
             # check, matches appeared with score < min_score (confusing UI).
             if float(score_f) < min_score:
+                diag["drop_min_score_post_avg"] += 1
                 continue

             # Feature recall (Halcon MinScore-style): counts how many features
@@ -1703,6 +1882,7 @@ class LineShapeMatcher:
                 spread0, var, cx_f, cy_f, ang_f,
             )
             if recall < min_recall:
+                diag["drop_recall_low"] += 1
                 continue

             # Re-translate coords from the ROI crop space to the original scene space.
@@ -1726,6 +1906,7 @@ class LineShapeMatcher:
             )
             inside_ratio = float(inter) / poly_area
             if inside_ratio < 0.75:
+                diag["drop_bbox_out_of_scene"] += 1
                 continue

             # Optional scale penalty: the score degrades with distance from 1.0
             if scale_penalty > 0.0 and var.scale != 1.0:
@@ -1750,6 +1931,7 @@ class LineShapeMatcher:
                     dup = True
                     break
             if dup:
+                diag["drop_nms_iou"] += 1
                 continue

             kept.append(Match(
                 cx=cx_out, cy=cy_out,
@@ -1757,7 +1939,51 @@ class LineShapeMatcher:
                 scale=var.scale,
                 score=score_f,
                 bbox_poly=poly,
+                variant_idx=int(vi),
             ))
             if len(kept) >= max_matches:
                 break

+        diag["n_final"] = len(kept)
+        _checkpoint("refine_verify_nms")
+        if profile:
+            self._last_profile = prof
+        if debug:
+            # Debug mode: print diagnostics to stderr for immediate visibility.
+            import sys as _sys
+            _sys.stderr.write(f"[pm2d.find debug] {self._format_diag(diag)}\n")
+
         return kept
+
+    def _format_diag(self, diag: dict) -> str:
+        """Format the diagnostics dict as a single readable line."""
+        return (
+            f"vars: {diag['n_variants_total']} -> "
+            f"top_eval={diag['n_variants_top_evaluated']} "
+            f"top_pass={diag['n_variants_top_passed']} "
+            f"full_eval={diag['n_variants_full_evaluated']} | "
+            f"raw={diag['n_raw_candidates']} "
+            f"pre_nms={diag['n_after_pre_nms']} -> "
+            f"drop[ncc={diag['drop_ncc_low']}, "
+            f"score={diag['drop_min_score_post_avg']}, "
+            f"recall={diag['drop_recall_low']}, "
+            f"bbox={diag['drop_bbox_out_of_scene']}, "
+            f"nms={diag['drop_nms_iou']}] = "
+            f"final={diag['n_final']} (top_thresh={diag['top_thresh_used']:.2f})"
+        )
+
+    def get_last_profile(self) -> dict | None:
+        """Return the per-phase timing of the last find(profile=True).
+
+        Keys: to_gray, spread_top, top_pruning, full_kernel,
+        refine_verify_nms (milliseconds). Useful to identify the
+        bottleneck worth optimizing.
+        """
+        return getattr(self, "_last_profile", None)
+
+    def get_last_diag(self) -> dict | None:
+        """Return the diagnostics dict of the last find() call.
+
+        Halcon equivalent: today inspect_shape_model exposes partial
+        counters. Useful to debug 'why 0 matches', interactive tuning,
+        validation. See the diag keys for their meaning
+        (n_variants_top_evaluated, drop_*, ...).
+        """
+        return getattr(self, "_last_diag", None)
+311 -35
@@ -78,6 +78,7 @@ def _matcher_cache_key(roi: np.ndarray, tech: dict) -> str:
     h.update(roi.tobytes())
     # Only the parameters that affect training
     relevant = ("num_features", "weak_grad", "strong_grad",
+                "min_feature_spacing",
                 "angle_min", "angle_max", "angle_step",
                 "scale_min", "scale_max", "scale_step",
                 "spread_radius", "pyramid_levels")
@@ -131,45 +132,89 @@ def _encode_png(img: np.ndarray) -> bytes:
 def _draw_matches(scene: np.ndarray, matches: list[Match],
-                  template_gray: np.ndarray | None) -> np.ndarray:
+                  template_gray: np.ndarray | None,
+                  matcher: "LineShapeMatcher | None" = None) -> np.ndarray:
+    """Draws ONLY the UCS (user request) for each found match.
+
+    UCS = coordinate system (X red, Y green) positioned on the model's
+    feature centroid, rotated by the match angle. No edges, no feature
+    dots, no bbox: matches on the real scene must stay clean; the
+    filtered edges are shown only in the model preview.
+    """
     out = scene.copy()
-    H, W = scene.shape[:2]
-    palette = [
-        (0, 255, 0), (0, 200, 255), (255, 100, 100), (255, 200, 0),
-        (200, 0, 255), (100, 255, 200), (255, 0, 0), (0, 255, 255),
-    ]
+    # UCS axis length: same formula as the model preview
+    # (0.15 * max template side) scaled by m.scale → dimensional consistency.
+    if matcher is not None and matcher.template_size != (0, 0):
+        L_base = int(0.15 * max(matcher.template_size))
+    else:
+        L_base = 30
+    H_scene, W_scene = scene.shape[:2]
     for i, m in enumerate(matches):
-        color = palette[i % len(palette)]
-        if template_gray is not None:
+        # UCS positioned exactly on the POSE CENTER of the match (m.cx,
+        # m.cy): equals the template center translated into the scene,
+        # rotated by m.angle_deg. Consistent with the model preview UCS,
+        # which is now also on the ROI center (see preview_edges).
+        ax = np.deg2rad(m.angle_deg)
+        ca, sa = np.cos(ax), np.sin(ax)
+        cx, cy = int(round(m.cx)), int(round(m.cy))
+
+        # Oriented model edge overlay (user request):
+        # warp the template to the pose, apply the same hysteresis as the
+        # matcher, draw the edge pixels as an overlay. Mask with the
+        # warped _train_mask + erode to remove edges on the BORDERS of the
+        # template rectangle (black border → scene transition = false edge
+        # that used to look like a "ROI" around every match).
+        if template_gray is not None and matcher is not None:
             t = template_gray
             th, tw = t.shape
-            edge = cv2.Canny(t, 50, 150)
             cx_t = (tw - 1) / 2.0; cy_t = (th - 1) / 2.0
             M = cv2.getRotationMatrix2D((cx_t, cy_t), m.angle_deg, m.scale)
             M[0, 2] += m.cx - cx_t
             M[1, 2] += m.cy - cy_t
-            warped = cv2.warpAffine(edge, M, (W, H),
-                                    flags=cv2.INTER_NEAREST, borderValue=0)
-            mask = warped > 0
-            if mask.any():
-                overlay = np.zeros_like(out)
-                overlay[mask] = color
-                out[mask] = (0.3 * out[mask] + 0.7 * overlay[mask]).astype(np.uint8)
-        poly = m.bbox_poly.astype(np.int32).reshape(-1, 1, 2)
-        cv2.polylines(out, [poly], True, color, 2, cv2.LINE_AA)
-        p0 = tuple(m.bbox_poly[0].astype(int))
-        p1 = tuple(m.bbox_poly[1].astype(int))
-        cv2.line(out, p0, p1, color, 4, cv2.LINE_AA)
-        cx, cy = int(round(m.cx)), int(round(m.cy))
-        cv2.drawMarker(out, (cx, cy), color, cv2.MARKER_CROSS, 22, 2, cv2.LINE_AA)
-        L = int(np.linalg.norm(m.bbox_poly[1] - m.bbox_poly[0])) // 2
-        a = np.deg2rad(m.angle_deg)
-        cv2.arrowedLine(out, (cx, cy),
-                        (int(cx + L * np.cos(a)), int(cy - L * np.sin(a))),
-                        color, 2, cv2.LINE_AA, tipLength=0.2)
-        label = f"#{i+1} {m.angle_deg:.0f}d s={m.scale:.2f} {m.score:.2f}"
-        cv2.putText(out, label, (cx + 8, cy - 8),
-                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2, cv2.LINE_AA)
+            warped_gray = cv2.warpAffine(
+                t, M, (W_scene, H_scene),
+                flags=cv2.INTER_LINEAR, borderValue=0)
+            # Mask: train_mask if available, otherwise a full rectangle
+            mask_src = (matcher._train_mask if matcher._train_mask is not None
+                        else np.full((th, tw), 255, dtype=np.uint8))
+            warped_mask = cv2.warpAffine(
+                mask_src, M, (W_scene, H_scene),
+                flags=cv2.INTER_NEAREST, borderValue=0)
+            # Erode by spread_radius to discard the border transition band
+            # that produces the spurious gradient
+            er_k = max(3, 2 * matcher.spread_radius + 1)
+            kernel_er = np.ones((er_k, er_k), np.uint8)
+            warped_mask = cv2.erode(warped_mask, kernel_er)
+            mag, _ = matcher._gradient(warped_gray)
+            if matcher.weak_grad < matcher.strong_grad:
+                edge_mask = matcher._hysteresis_mask(mag)
+            else:
+                edge_mask = mag >= matcher.strong_grad
+            edge_mask = edge_mask & (warped_mask > 0)
+            if edge_mask.any():
+                edge_overlay = np.zeros_like(out)
+                # Cyan (changed from green): does not collide with the
+                # green UCS Y axis, which used to vanish into the overlay.
+                edge_overlay[edge_mask] = (255, 200, 0)  # cyan (BGR)
+                out = cv2.addWeighted(out, 1.0, edge_overlay, 0.6, 0)
+
+        L = max(20, int(L_base * m.scale))
+        # X axis = rotation of (1, 0) by the cv2 matrix → (cos, -sin)
+        x_end = (int(cx + L * ca), int(cy - L * sa))
+        # Y axis = rotation of (0, 1) by the cv2 matrix → (sin, cos)
+        # At m.angle_deg=0 it must point DOWN (model's image y-down convention)
+        y_end = (int(cx + L * sa), int(cy + L * ca))
+        cv2.arrowedLine(out, (cx, cy), x_end,
+                        (0, 0, 255), 2, cv2.LINE_AA, tipLength=0.2)
+        cv2.putText(out, "X", (x_end[0] + 4, x_end[1] + 5),
+                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
+        cv2.arrowedLine(out, (cx, cy), y_end,
+                        (0, 255, 0), 2, cv2.LINE_AA, tipLength=0.2)
+        cv2.putText(out, "Y", (y_end[0] + 4, y_end[1] + 12),
+                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1, cv2.LINE_AA)
+        # UCS origin: white circle with a black border
+        cv2.circle(out, (cx, cy), 4, (0, 0, 0), -1, cv2.LINE_AA)
+        cv2.circle(out, (cx, cy), 3, (255, 255, 255), -1, cv2.LINE_AA)
     return out
@@ -217,6 +262,7 @@ class MatchResp(BaseModel):
     find_time: float
     num_variants: int
     annotated_id: str
+    diag: dict | None = None  # CC: pipeline diagnostics (drop reasons)


 class TuneParams(BaseModel):
@@ -271,6 +317,15 @@ class SimpleMatchParams(BaseModel):
     penalita_scala: float = 0.0  # 0 = scale-invariant shape score, >0 = penalize scale != 1
     min_score: float = 0.65
     max_matches: int = 25
+    # --- Edge overrides from the "Anteprima edge" panel (None = auto_tune) ---
+    # When set, they override the values derived from auto_tune and are
+    # used identically both in matcher training and in find. Saved into
+    # the recipe so the same noise cleanup is replicated when the recipe
+    # is loaded.
+    edge_weak_grad: float | None = None
+    edge_strong_grad: float | None = None
+    edge_num_features: int | None = None
+    edge_min_feature_spacing: int | None = None
     # --- Halcon-mode flags (default off = backward compat) ---
     # Init-time (requires re-train if changed)
     use_polarity: bool = False  # F: 16 bin orientation mod 2pi
@@ -319,10 +374,24 @@ def _simple_to_technical(
     smin, smax, sstep = SCALE_PRESETS.get(p.scala, (1.0, 1.0, 0.1))
     ang_step = PRECISION_ANGLE_STEP.get(p.precisione, 5.0)

+    # Edge overrides from the "Anteprima edge" panel, if the user set them.
+    # These replace the auto_tune values in matcher training, guaranteeing
+    # that the exact edge selection seen in the preview is used both in
+    # training and in find.
+    weak_g = (p.edge_weak_grad if p.edge_weak_grad is not None
+              else tune["weak_grad"])
+    strong_g = (p.edge_strong_grad if p.edge_strong_grad is not None
+                else tune["strong_grad"])
+    n_feat = (p.edge_num_features if p.edge_num_features is not None
+              else nf)
+    min_sp = (p.edge_min_feature_spacing if p.edge_min_feature_spacing is not None
+              else 3)
+
     return {
-        "num_features": nf,
-        "weak_grad": tune["weak_grad"],
-        "strong_grad": tune["strong_grad"],
+        "num_features": n_feat,
+        "weak_grad": weak_g,
+        "strong_grad": strong_g,
+        "min_feature_spacing": min_sp,
         "spread_radius": spread,
         "pyramid_levels": pyr,
         "angle_min": 0.0,
@@ -510,7 +579,7 @@ def match(p: MatchParams):
     # Render annotated image
     tg = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY)
-    annotated = _draw_matches(scene, matches, tg)
+    annotated = _draw_matches(scene, matches, tg, matcher=m)
     ann_id = _store_image(annotated)

     return MatchResp(
@@ -521,6 +590,7 @@ def match(p: MatchParams):
) for m_ in matches],
train_time=t_train, find_time=t_find,
num_variants=n, annotated_id=ann_id,
diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
)
@@ -557,6 +627,7 @@ def match_simple(p: SimpleMatchParams):
scale_range=(tech["scale_min"], tech["scale_max"]),
scale_step=tech["scale_step"],
spread_radius=tech["spread_radius"],
min_feature_spacing=tech.get("min_feature_spacing", 3),
pyramid_levels=tech["pyramid_levels"],
use_polarity=p.use_polarity,
use_gpu=p.use_gpu,
@@ -586,7 +657,7 @@ def match_simple(p: SimpleMatchParams):
t_find = time.time() - t0
tg = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY)
annotated = _draw_matches(scene, matches, tg, matcher=m)
ann_id = _store_image(annotated)
return MatchResp(
@@ -596,6 +667,7 @@ def match_simple(p: SimpleMatchParams):
) for mt in matches],
train_time=t_train, find_time=t_find,
num_variants=n, annotated_id=ann_id,
diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
)
@@ -625,9 +697,112 @@ class SaveRecipeParams(BaseModel):
precisione: str = "normale"
use_polarity: bool = False
use_gpu: bool = False
# Edge overrides from the "Edge preview" panel (None = auto_tune)
edge_weak_grad: float | None = None
edge_strong_grad: float | None = None
edge_num_features: int | None = None
edge_min_feature_spacing: int | None = None
name: str  # recipe file name (no path)
class EdgePreviewParams(BaseModel):
model_id: str
roi: list[int]
weak_grad: float = 30.0
strong_grad: float = 60.0
num_features: int = 96
min_feature_spacing: int = 3
use_polarity: bool = False
@app.post("/preview_edges")
def preview_edges(p: EdgePreviewParams):
"""Estrae edge feature dalla ROI con i parametri dati e ritorna
immagine annotata con i pixel selezionati come overlay.
Permette tuning interattivo delle soglie weak/strong_grad e
num_features per "togliere le sporcizie" (rumore di sfondo,
edge spuri) prima di trainare il matcher vero.
"""
model = _load_image(p.model_id)
if model is None:
raise HTTPException(404, "Modello non trovato")
x, y, w, h = p.roi
H_m, W_m = model.shape[:2]
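# Clamp the ROI to the model bounds (keep at least a 1x1 px window)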
x = max(0, min(int(x), W_m - 1)); y = max(0, min(int(y), H_m - 1))
w = max(1, min(int(w), W_m - x)); h = max(1, min(int(h), H_m - y))
roi_img = model[y:y + h, x:x + w]
# Temporary matcher used only for feature extraction (no full train)
m = LineShapeMatcher(
weak_grad=p.weak_grad,
strong_grad=p.strong_grad,
num_features=p.num_features,
min_feature_spacing=p.min_feature_spacing,
use_polarity=p.use_polarity,
)
gray = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY) if roi_img.ndim == 3 else roi_img
mag, bins = m._gradient(gray)
fx, fy, fb = m._extract_features(mag, bins, None)
# Also show the "weak/strong" pixels as a background heatmap
out = roi_img.copy() if roi_img.ndim == 3 else cv2.cvtColor(roi_img, cv2.COLOR_GRAY2BGR)
# Light magnitude overlay
mag_norm = np.clip(mag / max(1.0, mag.max()) * 255, 0, 255).astype(np.uint8)
mag_color = cv2.applyColorMap(mag_norm, cv2.COLORMAP_BONE)
out = cv2.addWeighted(out, 0.6, mag_color, 0.4, 0)
# Pixel "strong" con hysteresis: contorno verde scuro tenue
if m.weak_grad < m.strong_grad:
edge_mask = m._hysteresis_mask(mag).astype(np.uint8) * 255
else:
edge_mask = (mag >= m.strong_grad).astype(np.uint8) * 255
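# Note: Canny-style hysteresis is assumed here: seeds are pixels with
# mag >= strong_grad, grown through connected pixels with mag >= weak_grad
# (the actual logic lives in LineShapeMatcher._hysteresis_mask). The else
# branch is the degenerate case where the two thresholds collapse into a
# single hard threshold.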
edge_overlay = np.zeros_like(out)
edge_overlay[edge_mask > 0] = (0, 80, 0)  # dark green
out = cv2.addWeighted(out, 1.0, edge_overlay, 0.5, 0)
# Selected features: small circles colored per orientation bin
bin_colors = [
(255, 0, 0), (255, 128, 0), (255, 255, 0), (0, 255, 0),
(0, 255, 255), (0, 128, 255), (0, 0, 255), (255, 0, 255),
(255, 100, 100), (255, 180, 100), (255, 230, 100), (180, 255, 100),
(100, 255, 200), (100, 180, 255), (180, 100, 255), (255, 100, 200),
]
for i in range(len(fx)):
b = int(fb[i])
col = bin_colors[b % len(bin_colors)]
cv2.circle(out, (int(fx[i]), int(fy[i])), 2, col, -1, cv2.LINE_AA)
# UCS at the ROI CENTER (consistent with _draw_matches, which uses the
# pose center). This way the UCS shown on the model = the UCS of the
# match (up to the rotation/translation given by the found part's pose).
rh, rw = roi_img.shape[:2]
bx, by = (rw - 1) // 2, (rh - 1) // 2
axis_len = max(20, int(0.15 * max(rw, rh)))
cv2.arrowedLine(out, (bx, by), (bx + axis_len, by),
(0, 0, 255), 2, cv2.LINE_AA, tipLength=0.2)
cv2.putText(out, "X", (bx + axis_len + 4, by + 5),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
cv2.arrowedLine(out, (bx, by), (bx, by + axis_len),
(0, 255, 0), 2, cv2.LINE_AA, tipLength=0.2)
cv2.putText(out, "Y", (bx + 4, by + axis_len + 12),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1, cv2.LINE_AA)
cv2.circle(out, (bx, by), 4, (0, 0, 0), -1, cv2.LINE_AA)
cv2.circle(out, (bx, by), 3, (255, 255, 255), -1, cv2.LINE_AA)
bary_cx, bary_cy = float(bx), float(by)
img_id = _store_image(out)
n_edge_strong = int((mag >= m.strong_grad).sum())
n_edge_total = int(edge_mask.sum() / 255)
return {
"preview_id": img_id,
"n_features": len(fx),
"n_edge_strong": n_edge_strong,
"n_edge_after_hysteresis": n_edge_total,
"mag_max": float(mag.max()),
"mag_p50": float(np.percentile(mag, 50)),
"mag_p85": float(np.percentile(mag, 85)),
"ucs_baricentro": (
{"cx": round(bary_cx, 2), "cy": round(bary_cy, 2)}
if bary_cx is not None else None
),
}
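A minimal client-side sketch of the tuning loop this endpoint enables, using httpx (already in the dev dependencies); the base URL and model_id are illustrative and assume an image was uploaded beforehand:

```python
import httpx

body = {
    "model_id": "abc123",            # hypothetical id from a prior upload
    "roi": [40, 30, 200, 160],       # x, y, w, h on the model image
    "weak_grad": 30.0,
    "strong_grad": 60.0,
    "num_features": 96,
    "min_feature_spacing": 3,
    "use_polarity": False,
}
r = httpx.post("http://localhost:8000/preview_edges", json=body)
r.raise_for_status()
j = r.json()
print(f"{j['n_features']} features kept out of "
      f"{j['n_edge_after_hysteresis']} hysteresis edges")
print(f"magnitude: max={j['mag_max']:.0f} p50={j['mag_p50']:.0f} "
      f"p85={j['mag_p85']:.0f}")
```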
@app.post("/recipes") @app.post("/recipes")
def save_recipe(p: SaveRecipeParams): def save_recipe(p: SaveRecipeParams):
"""Allena matcher e salva su disco come ricetta riutilizzabile.""" """Allena matcher e salva su disco come ricetta riutilizzabile."""
@@ -641,6 +816,10 @@ def save_recipe(p: SaveRecipeParams):
tipo=p.tipo, simmetria=p.simmetria, scala=p.scala,
precisione=p.precisione,
use_polarity=p.use_polarity, use_gpu=p.use_gpu,
edge_weak_grad=p.edge_weak_grad,
edge_strong_grad=p.edge_strong_grad,
edge_num_features=p.edge_num_features,
edge_min_feature_spacing=p.edge_min_feature_spacing,
)
tech = _simple_to_technical(sp, roi_img)
m = LineShapeMatcher(
@@ -676,6 +855,103 @@ def list_recipes():
return {"files": files, "dir": str(RECIPES_DIR)}
# Cache of matchers loaded from .npz (feature V). Key: recipe name.
_RECIPE_MATCHERS: OrderedDict = OrderedDict()
_RECIPE_MATCHERS_SIZE = 4
@app.post("/recipes/{name}/load")
def load_recipe(name: str):
"""Carica ricetta .npz e popola cache matcher in memoria.
Una volta caricata, /match_recipe la usa direttamente senza
re-train. Halcon-equivalent read_shape_model + handle.
"""
safe_name = "".join(c for c in name if c.isalnum() or c in "._-")
if not safe_name.endswith(".npz"):
safe_name += ".npz"
path = RECIPES_DIR / safe_name
if not path.is_file():
raise HTTPException(404, f"Ricetta non trovata: {safe_name}")
m = LineShapeMatcher.load_model(str(path))
_RECIPE_MATCHERS[safe_name] = m
_RECIPE_MATCHERS.move_to_end(safe_name)
while len(_RECIPE_MATCHERS) > _RECIPE_MATCHERS_SIZE:
_RECIPE_MATCHERS.popitem(last=False)
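# LRU bookkeeping: move_to_end marks the entry as most recently used;
# once the cache exceeds _RECIPE_MATCHERS_SIZE, popitem(last=False)
# evicts the least recently used recipe.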
return {
"name": safe_name,
"n_variants": len(m.variants),
"template_size": list(m.template_size),
"use_polarity": m.use_polarity,
}
class RecipeMatchParams(BaseModel):
recipe: str
scene_id: str
# Find-time params only (training already done offline)
min_score: float = 0.65
max_matches: int = 25
min_recall: float = 0.0
use_soft_score: bool = False
subpixel_lm: bool = False
nms_iou_threshold: float = 0.3
coarse_stride: int = 1
pyramid_propagate: bool = False
greediness: float = 0.0
refine_pose_joint: bool = False
search_roi: list[int] | None = None
verify_threshold: float = 0.5
scale_penalty: float = 0.0
@app.post("/match_recipe", response_model=MatchResp)
def match_recipe(p: RecipeMatchParams):
"""Match con ricetta pre-trained: zero training, solo find."""
safe_name = p.recipe if p.recipe.endswith(".npz") else f"{p.recipe}.npz"
m = _RECIPE_MATCHERS.get(safe_name)
if m is None:
# Auto-load on demand
path = RECIPES_DIR / safe_name
if not path.is_file():
raise HTTPException(404, f"Ricetta non trovata: {safe_name}")
m = LineShapeMatcher.load_model(str(path))
_RECIPE_MATCHERS[safe_name] = m
scene = _load_image(p.scene_id)
if scene is None:
raise HTTPException(404, "Scena non trovata")
search_roi_t = tuple(p.search_roi) if p.search_roi else None
t0 = time.time()
matches = m.find(
scene,
min_score=p.min_score, max_matches=p.max_matches,
verify_threshold=p.verify_threshold,
scale_penalty=p.scale_penalty,
min_recall=p.min_recall,
use_soft_score=p.use_soft_score,
subpixel_lm=p.subpixel_lm,
nms_iou_threshold=p.nms_iou_threshold,
coarse_stride=p.coarse_stride,
pyramid_propagate=p.pyramid_propagate,
greediness=p.greediness,
refine_pose_joint=p.refine_pose_joint,
search_roi=search_roi_t,
)
t_find = time.time() - t0
tg = m.template_gray if m.template_gray is not None else np.zeros((1, 1), np.uint8)
annotated = _draw_matches(scene, matches, tg, matcher=m)
ann_id = _store_image(annotated)
return MatchResp(
matches=[MatchResult(
cx=mt.cx, cy=mt.cy, angle_deg=mt.angle_deg, scale=mt.scale,
score=mt.score, bbox_poly=mt.bbox_poly.tolist(),
) for mt in matches],
train_time=0.0, find_time=t_find,
num_variants=len(m.variants), annotated_id=ann_id,
diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
)
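For reference, a find-only call against a saved recipe might look like the sketch below (recipe name, scene_id and host are illustrative; the recipe is auto-loaded from RECIPES_DIR if not already cached):

```python
import httpx

r = httpx.post("http://localhost:8000/match_recipe", json={
    "recipe": "my_part",        # ".npz" is appended server-side if missing
    "scene_id": "scene-001",    # hypothetical id of an uploaded scene
    "min_score": 0.7,
    "max_matches": 10,
})
r.raise_for_status()
data = r.json()
print(f"{len(data['matches'])} matches in {data['find_time']:.3f}s, "
      f"train_time={data['train_time']}, variants={data['num_variants']}")
```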
# Mount static
app.mount("/static", StaticFiles(directory=STATIC_DIR), name="static")
--- next file (+282) ---
@@ -19,6 +19,7 @@ const PALETTE = [
const state = {
model: null, scene: null, roi: null, drag: null,
matches: [], annotatedImg: null,
active_recipe: null, // V: loaded recipe (name string) or null
};
// ---------- Forms ----------
@@ -52,10 +53,34 @@ function readUserParams() {
document.getElementById("p-penalita-scala").value), document.getElementById("p-penalita-scala").value),
min_score: parseFloat(document.getElementById("p-min-score").value), min_score: parseFloat(document.getElementById("p-min-score").value),
max_matches: parseInt(document.getElementById("p-max-matches").value, 10), max_matches: parseInt(document.getElementById("p-max-matches").value, 10),
...readEdgeOverrides(),
...readHalconFlags(), ...readHalconFlags(),
}; };
} }
function readEdgeOverrides() {
// Edge overrides from the "Edge preview" panel. Set = the user touched
// them (even when equal to the current default). They are propagated to
// _simple_to_technical and used identically in training and in find.
// They are also saved in the recipe so they replicate on load.
const _v = (id, parser) => {
const el = document.getElementById(id);
if (!el) return null;
const v = parser(el.value);
return Number.isFinite(v) ? v : null;
};
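// A null here (missing element or non-numeric value) is serialized as
// JSON null, which the backend maps to None = "fall back to auto_tune".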
// Always pass the sliders' current values: the user explicitly asked
// that the noise clean-up params also be used in find / in the recipe.
const polCb = document.getElementById("hc-use-polarity");
return {
edge_weak_grad: _v("ep-weak", parseFloat),
edge_strong_grad: _v("ep-strong", parseFloat),
edge_num_features: _v("ep-nf", parseInt),
edge_min_feature_spacing: _v("ep-sp", parseInt),
use_polarity: polCb?.checked || document.getElementById("ep-pol")?.checked || false,
};
}
function readHalconFlags() {
// Halcon-mode toggle: all flags default-off, exposed via "Modalità Halcon"
const $cb = (id) => document.getElementById(id)?.checked ?? false;
@@ -307,7 +332,43 @@ function setupROI() {
}
// ---------- Match action ----------
async function doMatchRecipe() {
if (!state.scene) { setStatus("Carica scena"); return; }
setStatus(`Match ricetta ${state.active_recipe}...`);
const hc = readHalconFlags();
const body = {
recipe: state.active_recipe,
scene_id: state.scene.id,
min_score: parseFloat(document.getElementById("p-min-score").value),
max_matches: parseInt(document.getElementById("p-max-matches").value, 10),
verify_threshold: 0.50,
...hc,
};
const r = await fetch("/match_recipe", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(body),
});
if (!r.ok) { setStatus(`Errore: ${await r.text()}`); return; }
const data = await r.json();
state.matches = data.matches;
state.annotatedImg = await loadImage(
`/image/${data.annotated_id}/raw?t=${Date.now()}`);
renderScene();
renderLegend();
document.getElementById("t-train").textContent = "—";
document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
document.getElementById("t-var").textContent = data.num_variants;
document.getElementById("t-match").textContent = data.matches.length;
renderDiag(data.diag, data.matches.length);
setStatus(`${data.matches.length} match trovati (ricetta ${state.active_recipe})`);
}
async function doMatch() {
// Path V: recipe loaded → bypass training, find-only on the scene
if (state.active_recipe) {
return doMatchRecipe();
}
if (!state.model) { setStatus("Carica modello"); return; }
if (!state.scene) { setStatus("Carica scena"); return; }
if (!state.roi) { setStatus("Seleziona ROI sul modello"); return; }
@@ -373,6 +434,7 @@ async function doMatch() {
document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`; document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
document.getElementById("t-var").textContent = data.num_variants; document.getElementById("t-var").textContent = data.num_variants;
document.getElementById("t-match").textContent = data.matches.length; document.getElementById("t-match").textContent = data.matches.length;
renderDiag(data.diag, data.matches.length);
setStatus(`${data.matches.length} match trovati${hasAdv ? " (avanzato)" : ""}`);
}
@@ -400,6 +462,164 @@ function setStatus(s) {
}
// ---------- Init ----------
// ---------- Edge preview (noise clean-up) ----------
let _epDebounce = null;
let _epLastImg = null;
async function fetchEdgePreview() {
if (!state.model || !state.roi) {
document.getElementById("edge-preview-info").textContent =
"Disegna prima la ROI sul modello";
return;
}
const body = {
model_id: state.model.id,
roi: state.roi,
weak_grad: parseFloat(document.getElementById("ep-weak").value),
strong_grad: parseFloat(document.getElementById("ep-strong").value),
num_features: parseInt(document.getElementById("ep-nf").value, 10),
min_feature_spacing: parseInt(document.getElementById("ep-sp").value, 10),
use_polarity: document.getElementById("ep-pol").checked,
};
try {
const r = await fetch("/preview_edges", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(body),
});
if (!r.ok) throw new Error(await r.text());
const j = await r.json();
_epLastImg = await loadImage(`/image/${j.preview_id}/raw?t=${Date.now()}`);
drawEdgePreview();
const ucs = j.ucs_baricentro
? ` | UCS=(${j.ucs_baricentro.cx},${j.ucs_baricentro.cy})`
: "";
document.getElementById("edge-preview-info").innerHTML =
`<b>${j.n_features}</b> feature scelte (di ${j.n_edge_after_hysteresis} edge totali)<br>` +
`mag: max=${j.mag_max.toFixed(0)} p50=${j.mag_p50.toFixed(0)} ` +
`p85=${j.mag_p85.toFixed(0)}${ucs}`;
} catch (e) {
document.getElementById("edge-preview-info").textContent =
`Errore preview: ${e.message}`;
}
}
function drawEdgePreview() {
const cnv = document.getElementById("c-edge-preview");
if (!_epLastImg) return;
const ctx = cnv.getContext("2d");
// Fit-contain
const r = Math.min(cnv.width / _epLastImg.width,
cnv.height / _epLastImg.height);
const w = _epLastImg.width * r;
const h = _epLastImg.height * r;
const ox = (cnv.width - w) / 2;
const oy = (cnv.height - h) / 2;
ctx.fillStyle = "#000"; ctx.fillRect(0, 0, cnv.width, cnv.height);
ctx.imageSmoothingEnabled = false;
ctx.drawImage(_epLastImg, ox, oy, w, h);
}
function scheduleEdgePreview() {
if (_epDebounce) clearTimeout(_epDebounce);
_epDebounce = setTimeout(fetchEdgePreview, 200);
}
function bindEdgePreviewControls() {
const slid = (id, valEl) => {
const el = document.getElementById(id);
const v = document.getElementById(valEl);
el.addEventListener("input", () => {
v.textContent = el.value;
scheduleEdgePreview();
});
};
slid("ep-weak", "ep-weak-v");
slid("ep-strong", "ep-strong-v");
slid("ep-nf", "ep-nf-v");
slid("ep-sp", "ep-sp-v");
document.getElementById("ep-pol").addEventListener("change",
scheduleEdgePreview);
// Auto-refresh when the panel is opened
document.getElementById("edge-preview-panel").addEventListener("toggle",
(e) => { if (e.target.open) fetchEdgePreview(); });
document.getElementById("btn-edge-apply").addEventListener("click", () => {
// Copy the current values into the Advanced fields
const map = {
"ep-weak": "adv-weak_grad",
"ep-strong": "adv-strong_grad",
"ep-nf": "adv-num_features",
"ep-sp": "adv-min_feature_spacing",
};
for (const [src, dst] of Object.entries(map)) {
const dstEl = document.getElementById(dst);
if (dstEl) dstEl.value = document.getElementById(src).value;
}
// use_polarity: mirror it to the Halcon-mode checkbox
const polCb = document.getElementById("hc-use-polarity");
if (polCb) polCb.checked = document.getElementById("ep-pol").checked;
// Open the Advanced panel for visual feedback
const advDetails = document.querySelectorAll("#col-params details");
advDetails.forEach((d) => { d.open = true; });
alert("Parametri edge applicati. Esegui MATCH per usare i valori scelti.");
});
}
// ---------- CC: Match diagnostics ----------
function renderDiag(diag, n_matches) {
const el = document.getElementById("diag-content");
if (!diag) {
el.innerHTML = '<em style="color:#888">Diagnostica non disponibile</em>';
return;
}
const dropTotal = (diag.drop_ncc_low || 0) + (diag.drop_min_score_post_avg || 0)
+ (diag.drop_recall_low || 0) + (diag.drop_bbox_out_of_scene || 0)
+ (diag.drop_nms_iou || 0);
// Contextual hints when there are 0 matches
let hint = "";
if (n_matches === 0) {
if (diag.n_after_pre_nms === 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ Nessun candidato sopra soglia.
Prova: ↓ <b>min_score</b> o ↓ <b>top_thresh</b> (currently ${diag.top_thresh_used.toFixed(2)})</div>`;
} else if (diag.drop_ncc_low > 0 && dropTotal === diag.drop_ncc_low) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_ncc_low} candidati droppati da NCC.
Prova: ↓ <b>verify_threshold</b> (filtro_fp più leggero)</div>`;
} else if (diag.drop_min_score_post_avg > 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_min_score_post_avg} match sotto min_score post-NCC.
Prova: ↓ <b>min_score</b></div>`;
} else if (diag.drop_recall_low > 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_recall_low} match con recall < ${diag.min_recall_used}.
Prova: ↓ <b>min_recall</b></div>`;
} else if (diag.drop_bbox_out_of_scene > 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_bbox_out_of_scene} match con bbox fuori scena.
Centro derivato male: aumenta <b>min_score</b> o restringi <b>search_roi</b></div>`;
}
}
const flags = [];
if (diag.use_polarity) flags.push("polarity");
if (diag.use_soft_score) flags.push("soft");
if (diag.subpixel_lm) flags.push("subpix-LM");
el.innerHTML = `
<div><b>Pipeline pruning:</b></div>
<div>varianti: ${diag.n_variants_total} → top_eval=${diag.n_variants_top_evaluated}
→ top_pass=${diag.n_variants_top_passed} → full_eval=${diag.n_variants_full_evaluated}</div>
<div><b>Candidati:</b> raw=${diag.n_raw_candidates}
→ pre_nms=${diag.n_after_pre_nms} → final=${diag.n_final}</div>
<div><b>Drop reasons:</b> NCC=${diag.drop_ncc_low}, score=${diag.drop_min_score_post_avg},
recall=${diag.drop_recall_low}, bbox=${diag.drop_bbox_out_of_scene}, NMS=${diag.drop_nms_iou}</div>
<div><b>Soglie:</b> top=${diag.top_thresh_used.toFixed(2)},
min_score=${diag.min_score_used.toFixed(2)},
NCC=${diag.verify_threshold_used.toFixed(2)},
recall=${diag.min_recall_used.toFixed(2)}</div>
${flags.length ? `<div><b>Flag attivi:</b> ${flags.join(", ")}</div>` : ""}
${hint}
`;
// Auto-open the panel on 0 matches (flags a problem)
if (n_matches === 0) {
document.getElementById("diag-panel").open = true;
}
}
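For reference, a diag payload carrying the fields read above might look like this sketch (values illustrative; the exact contents come from LineShapeMatcher.get_last_diag()):

```python
diag = {
    "n_variants_total": 288, "n_variants_top_evaluated": 288,
    "n_variants_top_passed": 41, "n_variants_full_evaluated": 41,
    "n_raw_candidates": 120, "n_after_pre_nms": 9, "n_final": 3,
    "drop_ncc_low": 4, "drop_min_score_post_avg": 1, "drop_recall_low": 0,
    "drop_bbox_out_of_scene": 0, "drop_nms_iou": 1,
    "top_thresh_used": 0.52, "min_score_used": 0.65,
    "verify_threshold_used": 0.50, "min_recall_used": 0.0,
    "use_polarity": False, "use_soft_score": False, "subpixel_lm": False,
}
```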
// ---------- Auto-tune (Halcon-style) ----------
async function doAutoTune() {
if (!state.model || !state.roi) {
@@ -447,6 +667,57 @@ async function doAutoTune() {
}
}
// ---------- V: Recipe load/list/unload ----------
async function refreshRecipeList() {
try {
const r = await fetch("/recipes");
if (!r.ok) return;
const j = await r.json();
const sel = document.getElementById("hc-recipe-list");
const cur = sel.value;
sel.innerHTML = '<option value="">— ricette disponibili —</option>';
for (const f of j.files) {
const o = document.createElement("option");
o.value = f.name;
o.textContent = `${f.name} (${(f.size / 1024).toFixed(1)} KB)`;
sel.appendChild(o);
}
if (cur) sel.value = cur;
} catch (e) { /* silent */ }
}
async function loadRecipe() {
const sel = document.getElementById("hc-recipe-list");
const name = sel.value;
if (!name) {
alert("Seleziona una ricetta dalla lista.");
return;
}
try {
const r = await fetch(`/recipes/${encodeURIComponent(name)}/load`, {
method: "POST",
});
if (!r.ok) throw new Error(await r.text());
const j = await r.json();
state.active_recipe = j.name;
document.getElementById("recipe-status").textContent =
`Caricata: ${j.name}, ${j.n_variants} varianti, ` +
`${j.template_size[0]}x${j.template_size[1]} px` +
(j.use_polarity ? " (polarity)" : "");
document.getElementById("recipe-status").style.color = "#0c0";
document.getElementById("btn-unload-recipe").disabled = false;
} catch (e) {
alert(`Errore caricamento: ${e.message}`);
}
}
function unloadRecipe() {
state.active_recipe = null;
document.getElementById("recipe-status").textContent = "Nessuna ricetta caricata";
document.getElementById("recipe-status").style.color = "#888";
document.getElementById("btn-unload-recipe").disabled = true;
}
// ---------- V: Save recipe ----------
async function saveRecipe() {
if (!state.model || !state.roi) {
@@ -469,6 +740,10 @@ async function saveRecipe() {
precisione: user.precisione,
use_polarity: user.use_polarity,
use_gpu: user.use_gpu,
edge_weak_grad: user.edge_weak_grad,
edge_strong_grad: user.edge_strong_grad,
edge_num_features: user.edge_num_features,
edge_min_feature_spacing: user.edge_min_feature_spacing,
name: name,
};
try {
@@ -480,6 +755,7 @@ async function saveRecipe() {
if (!r.ok) throw new Error(await r.text());
const j = await r.json();
alert(`Ricetta salvata: ${j.name}\n${j.n_variants} varianti, ${j.size} bytes`);
refreshRecipeList();
} catch (e) {
alert(`Errore salvataggio: ${e.message}`);
}
@@ -515,6 +791,12 @@ window.addEventListener("DOMContentLoaded", async () => {
document.getElementById("btn-autotune").addEventListener("click", doAutoTune); document.getElementById("btn-autotune").addEventListener("click", doAutoTune);
document.getElementById("btn-save-recipe").addEventListener("click", document.getElementById("btn-save-recipe").addEventListener("click",
saveRecipe); saveRecipe);
document.getElementById("btn-load-recipe").addEventListener("click",
loadRecipe);
document.getElementById("btn-unload-recipe").addEventListener("click",
unloadRecipe);
refreshRecipeList();
bindEdgePreviewControls();
const slider = document.getElementById("p-min-score");
slider.addEventListener("input", (e) => {
document.getElementById("v-score").textContent =
--- next file (+55 -1) ---
@@ -45,6 +45,40 @@
<canvas id="c-model" width="380" height="420"></canvas> <canvas id="c-model" width="380" height="420"></canvas>
</div> </div>
<div id="roi-info">ROI: (nessuna)</div> <div id="roi-info">ROI: (nessuna)</div>
<details id="edge-preview-panel" style="margin-top:10px">
<summary>🔬 Anteprima edge / pulizia rumore</summary>
<div style="font-size:11px; color:#aaa; margin:4px 0">
Regola le soglie per togliere edge spuri (sporcizie). UCS rosso/verde
sul baricentro feature.
</div>
<div class="ep-grid">
<label class="ep-row">weak_grad <span id="ep-weak-v">30</span>
<input type="range" id="ep-weak" min="5" max="200" value="30" step="1">
</label>
<label class="ep-row">strong_grad <span id="ep-strong-v">60</span>
<input type="range" id="ep-strong" min="10" max="400" value="60" step="1">
</label>
<label class="ep-row">num_features <span id="ep-nf-v">96</span>
<input type="range" id="ep-nf" min="16" max="300" value="96" step="1">
</label>
<label class="ep-row">spacing <span id="ep-sp-v">3</span>
<input type="range" id="ep-sp" min="1" max="15" value="3" step="1">
</label>
<label class="ep-row" style="flex-direction:row; gap:6px">
<input type="checkbox" id="ep-pol"> polarity
</label>
<button class="btn" id="btn-edge-apply" type="button"
style="grid-column:1/-1">
✓ Applica ai parametri Avanzate
</button>
</div>
<div class="canvas-wrap" style="margin-top:6px">
<canvas id="c-edge-preview" width="380" height="380"></canvas>
</div>
<div id="edge-preview-info" style="font-size:11px; color:#888; margin-top:4px">
Disegna ROI e apri questo pannello per generare anteprima
</div>
</details>
</section>
<section class="col" id="col-scene">
@@ -68,8 +102,8 @@
<div class="field">
<label>Simmetria</label>
<select id="p-simmetria">
<option value="nessuna" selected>Nessuna (0..360°)</option>
<option value="invariante">Invariante (cerchi — no rotazione)</option>
<option value="bilaterale">Bilaterale (speculare 180°)</option>
<option value="rot_3">Rotazionale 3× (120°)</option>
<option value="rot_4">Rotazionale 4× (90°)</option>
@@ -190,6 +224,16 @@
<input type="text" id="hc-recipe-name" placeholder="nome_ricetta" style="flex:1"> <input type="text" id="hc-recipe-name" placeholder="nome_ricetta" style="flex:1">
<button class="btn" id="btn-save-recipe" type="button">💾 Salva</button> <button class="btn" id="btn-save-recipe" type="button">💾 Salva</button>
</div> </div>
<div style="display:flex; gap:6px; margin-top:6px; align-items:center">
<select id="hc-recipe-list" style="flex:1">
<option value="">— ricette disponibili —</option>
</select>
<button class="btn" id="btn-load-recipe" type="button">📂 Carica</button>
<button class="btn" id="btn-unload-recipe" type="button" disabled>✖ Stacca</button>
</div>
<div id="recipe-status" style="margin-top:4px; font-size:11px; color:#888">
Nessuna ricetta caricata
</div>
</div>
</div>
</details>
@@ -204,6 +248,16 @@
<div class="kv"><span>find:</span><span id="t-find">-</span></div> <div class="kv"><span>find:</span><span id="t-find">-</span></div>
<div class="kv"><span>varianti:</span><span id="t-var">-</span></div> <div class="kv"><span>varianti:</span><span id="t-var">-</span></div>
<div class="kv"><span>match:</span><span id="t-match">-</span></div> <div class="kv"><span>match:</span><span id="t-match">-</span></div>
<details id="diag-panel" style="margin-top:10px">
<summary>🔍 Diagnostica (CC)</summary>
<div id="diag-content" style="font-family:monospace; font-size:11px;
background:#1a1a1a; padding:8px;
border-radius:3px; margin-top:6px;
line-height:1.5">
<em style="color:#888">Esegui un MATCH per vedere la diagnostica</em>
</div>
</details>
</section>
</main>
--- next file (+15) ---
@@ -173,3 +173,18 @@ footer h2 {
}
.hc-row.hc-num label { font-size: 11px; color: #aaa; }
.hc-row.hc-num input { width: 100%; }
/* Edge preview panel */
.ep-grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 6px 12px;
margin-top: 6px;
font-size: 12px;
}
.ep-row {
display: flex; flex-direction: column; gap: 2px;
font-size: 11px; color: #aaa;
}
.ep-row input[type="range"] { width: 100%; }
.ep-row span { color: #fff; font-weight: bold; font-family: monospace; }
--- next file (+4) ---
@@ -12,6 +12,10 @@ dependencies = [
"uvicorn[standard]>=0.34", "uvicorn[standard]>=0.34",
] ]
[project.scripts]
pm2d-eval = "pm2d.eval:main"
pm2d-bench = "pm2d.bench:main"
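A console script resolves to a plain function call at runtime, so after installation running pm2d-bench is equivalent to this snippet (assuming pm2d.bench exposes a main() returning an exit code or None):

```python
from pm2d.bench import main
raise SystemExit(main())
```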
[dependency-groups]
dev = [
"httpx>=0.28.1",