Compare commits

..

13 Commits

Author SHA1 Message Date
Adriano 6fb1efcab8 merge: fix UCS match + pre-computed features 2026-05-05 11:02:04 +02:00
Adriano 35df4c473c fix: UCS match and feature count now consistent with model preview
Bugs visible in the screenshots:
1. Match UCS differs from the model-preview UCS (pose centre vs feature centroid)
2. Fewer features drawn on the match than in the model preview

Causes:
1. The match UCS was placed at (cx, cy) = template centre, while the model
   preview shows the UCS at the feature centroid (mean fx, fy).
2. _draw_matches extracted features from the warped template → re-quantizes
   the gradient on the warped+interpolated image, losing precision vs the
   matcher's pre-computed features.

Fix:
- Match.variant_idx: new field holding the index of the variant used by find()
- _draw_matches uses the pre-computed lvl0.dx/dy/bin instead of re-extracting:
  * applies the delta rotation (m.angle_deg - var.angle_deg) for the refine
    sub-step (sketched below this entry)
  * projects into scene coords around (m.cx, m.cy)
  * exactly the same feature set as the model preview (modulo
    rotation + translation)
- the match UCS is computed at the centroid of the warped features, not at
  (cx, cy) → consistent with the preview UCS

Fallback (variant_idx == -1, e.g. a recipe saved by save_model before
this commit): use the legacy warped extraction.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 11:02:04 +02:00
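A minimal sketch of the projection above, assuming plain numpy arrays dx/dy holding the per-feature offsets stored in lvl0.dx/lvl0.dy (the standalone function and its name are illustrative, not part of the codebase):

import numpy as np

def project_features(dx, dy, m_cx, m_cy, m_angle_deg, var_angle_deg):
    # Only the residual (post-refine) rotation is applied: dx/dy are
    # already pre-rotated to the variant's own angle.
    dang = np.deg2rad(m_angle_deg - var_angle_deg)
    ca, sa = np.cos(dang), np.sin(dang)
    fx = m_cx + dx * ca + dy * sa  # same convention as _draw_matches
    fy = m_cy - dx * sa + dy * ca  # (image y-down rotation)
    return fx, fy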
Adriano 64f2c8b5dc merge: match overlay edges+UCS, no ROI 2026-05-05 10:55:54 +02:00
Adriano 7e076deb80 feat(web): match overlay with filtered edges + UCS + ROI bbox removal
_draw_matches is now consistent with the model preview:

- Edges filtered with the same matcher pipeline (weak/strong_grad
  hysteresis) and the same feature selection: the match overlay reflects
  exactly what the user saw in the "Edge preview" panel (a hysteresis
  sketch follows this entry)
- Background: dark tint on hysteresis pixels (40% of the match colour)
- Selected features drawn as dots coloured per bin (16-bin palette)
- Red/green UCS at the pose centre: X axis right, Y down (image y-down),
  rotated by the match angle
- UCS origin: white circle with black border for visibility

Removed (user request: "remove the ROI"):
- perimeter bbox poly: redundant, covered the part
- first-side marker line: replaced by the red UCS axis

Compatibility: if no matcher is passed (e.g. external use), fall back
to legacy Canny. All 3 match endpoints (/match, /match_simple,
/match_recipe) now propagate the matcher to _draw_matches.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:55:54 +02:00
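_hysteresis_mask is internal to the matcher; the following is a self-contained sketch of the weak/strong hysteresis idea the overlay relies on (an assumed re-implementation for illustration, not the library code):

import cv2
import numpy as np

def hysteresis_mask(mag, weak_grad, strong_grad):
    # Canny-style hysteresis: keep weak-edge pixels only when their
    # connected component also contains at least one strong pixel.
    weak = (mag >= weak_grad).astype(np.uint8)
    strong = mag >= strong_grad
    n_labels, labels = cv2.connectedComponents(weak, connectivity=8)
    keep = np.zeros(n_labels, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False  # label 0 is the background
    return keep[labels]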
Adriano 852597ed51 merge: UI edge preview + UCS 2026-05-05 10:48:58 +02:00
Adriano a78884f950 feat(web): edge preview on the model + noise clean-up sliders + centroid UCS
"🔬 Edge preview / noise clean-up" panel below the model canvas.
Allows interactive tuning of the edge-selection parameters to remove
"dirt" (background noise, spurious edges) before training the matcher.

Server:
- POST /preview_edges: given model + ROI + edge params, returns the ROI
  image with an overlay (a client-side usage sketch follows this entry):
  * gradient-magnitude heatmap (background)
  * dark green: pixels above the edge hysteresis
  * coloured dots per bin: selected features (16-bin palette)
  * red/green UCS at the feature centroid (user request):
    X axis right, Y down (image y-down)
  Also returns stats: n_features, n_edge_strong, magnitude percentiles,
  ucs_baricentro {cx, cy}

UI:
- weak_grad/strong_grad/num_features/spacing sliders + polarity checkbox
- Debounced re-fetch (200 ms) on every input → live preview
- "Apply to Advanced parameters" button: copies the chosen values
  into the main matcher's Advanced fields
- Auto-fetch when the panel is opened

Use case: the operator immediately sees which edges the matcher would
use, tunes the thresholds to exclude noise, applies them, then runs
MATCH.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:48:58 +02:00
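A hedged sketch of the /preview_edges round-trip from a Python client (host, port and the model_id value are assumptions; the request and response fields are the ones listed above):

import requests

model_id = "abc123"  # hypothetical id returned by the model upload endpoint
resp = requests.post("http://localhost:8000/preview_edges", json={
    "model_id": model_id,
    "roi": [40, 30, 200, 160],  # x, y, w, h inside the model image
    "weak_grad": 30.0,
    "strong_grad": 60.0,
    "num_features": 96,
    "min_feature_spacing": 3,
    "use_polarity": False,
})
resp.raise_for_status()
stats = resp.json()
print(stats["n_features"], stats["mag_p85"], stats["ucs_baricentro"])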
Adriano 543ae0f643 merge: UI diagnostics panel 2026-05-05 10:41:26 +02:00
Adriano a12574f3c5 feat(web): match diagnostics panel (CC) with contextual hints
MatchResp now includes a diag dict (CC feature). UI rendering:

- New collapsible "🔍 Diagnostics" panel below the timings
- For each match run it shows:
  * pruning pipeline (vars total → top_eval → top_pass → full_eval)
  * candidates (raw → pre_nms → final)
  * drop reasons (NCC, score, recall, bbox, NMS) with counters
  * the effective thresholds applied
  * active flags (polarity, soft, subpix-LM)

- On 0 matches → the panel opens automatically and shows a specific
  contextual hint (a sketch of the mapping follows this entry):
  * "0 top candidates" → suggests lowering min_score / top_thresh
  * "all dropped by NCC" → lower verify_threshold (FP filter)
  * "post-NCC score below threshold" → lower min_score
  * "low recall" → lower min_recall
  * "bbox out of scene" → check pose / search_roi

Turns the "0 matches, why?" pattern into actionable guidance instead
of a black box. All 3 match endpoints (/match, /match_simple,
/match_recipe) propagate the diag.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:41:26 +02:00
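A minimal sketch of that hint cascade in plain Python (the keys are the diag counters exposed by get_last_diag(); the UI's renderDiag applies the same order):

def zero_match_hint(diag: dict) -> str:
    # Ordered cascade: the first matching drop reason wins.
    if diag["n_after_pre_nms"] == 0:
        return "no candidate above threshold -> lower min_score / top_thresh"
    if diag["drop_ncc_low"] > 0:
        return "dropped by NCC verify -> lower verify_threshold"
    if diag["drop_min_score_post_avg"] > 0:
        return "score below min_score after NCC averaging -> lower min_score"
    if diag["drop_recall_low"] > 0:
        return "feature recall too low -> lower min_recall"
    if diag["drop_bbox_out_of_scene"] > 0:
        return "bbox mostly out of scene -> check pose / search_roi"
    return "inspect the full diag dict"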
Adriano 110dc87b08 merge: AA eval CLI 2026-05-05 10:10:00 +02:00
Adriano 2bb2cf63cc merge: II scene cache 2026-05-05 10:09:56 +02:00
Adriano ea6a9163ad merge: CC diagnostic mode 2026-05-05 10:09:56 +02:00
Adriano 1cc7881a51 feat: pm2d.eval - CLI validation harness for LineShapeMatcher
CLI tool to measure matcher quality objectively on a labelled dataset.
Halcon offers this only inside the IDE (HDevelop); here it is exposed
as a Python module testable in CI.

JSON dataset format:
  - template + mask
  - matcher init params (override)
  - find_params (override for find())
  - scenes with ground_truth: list of expected poses (cx, cy, angle,
    scale, tolerance_px, tolerance_deg)

Per-scene metrics: TP/FP/FN, precision, recall, mean bbox IoU,
find time. Aggregate: global precision, recall, F1.

Match-to-GT criterion: centre distance <= tolerance_px AND
|angle| <= tolerance_deg, or bbox IoU >= 0.3.

Use cases:
- regression: objective comparison of config A vs config B
- tuning: find optimal params via F1-guided grid search (see the
  sketch after this entry)
- pre-deploy validation: TP/FP/FN report on a production dataset

Exposed as the pm2d-eval entry point (pyproject.toml).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:09:45 +02:00
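A hedged sketch of the F1-guided grid search mentioned above, built on pm2d.eval.run(): it rewrites the dataset's "params" block per combination and keeps the best config (the grid values and dataset path are assumptions):

import itertools
import json
import tempfile
from pathlib import Path

from pm2d.eval import run

dataset = Path("dataset.json")  # assumed location
base = json.loads(dataset.read_text())
grid = {"angle_step_deg": [3, 5], "num_features": [64, 96, 128]}
best_params, best_f1 = None, -1.0
for combo in itertools.product(*grid.values()):
    ds = {**base, "params": {**base.get("params", {}),
                             **dict(zip(grid, combo))}}
    # Write next to the original so relative template/scene paths resolve.
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False,
                                     dir=dataset.parent) as f:
        json.dump(ds, f)
    report = run(f.name)
    Path(f.name).unlink()
    if report["f1"] > best_f1:
        best_params, best_f1 = dict(zip(grid, combo)), report["f1"]
print("best:", best_params, "F1:", best_f1)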
Adriano dae49eb4a3 feat: transparent diagnostic mode for find()
self._last_diag accumulates counters during find():
- Pruning pipeline: top_evaluated, top_passed, full_evaluated
- Candidates: n_raw, n_after_pre_nms, n_final
- Drop reasons: ncc_low, min_score_post_avg, recall_low,
  bbox_out_of_scene, nms_iou
- Effective params: top_thresh_used, verify_threshold_used, etc.

API:
- find(debug=True): prints a one-line summary to stderr
- m.get_last_diag(): returns the full dict for inspection

Use case: 0 matches? See where the candidates went
(e.g. drop_ncc_low=200 → NCC threshold too high) instead of
guessing blindly. Fixes the "find() as a black box" pattern;
a usage sketch follows this entry.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:05:20 +02:00
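A typical triage flow (sketch; matcher is a trained LineShapeMatcher and scene a BGR numpy array, both placeholders):

matches = matcher.find(scene, min_score=0.6, debug=True)  # one-line summary on stderr
if not matches:
    diag = matcher.get_last_diag()
    # e.g. drop_ncc_low=200 -> verify_threshold too high for this scene
    print({k: v for k, v in diag.items() if k.startswith("drop_")})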
7 changed files with 727 additions and 18 deletions
+217
@@ -0,0 +1,217 @@
"""CLI validation harness for LineShapeMatcher.

Usage:
    python -m pm2d.eval dataset.json [options]

Dataset format (JSON):
{
  "template": "path/to/template.png",
  "mask": "path/to/mask.png",   # optional
  "params": {                   # optional, overrides for matcher init
    "use_polarity": true,
    "angle_step_deg": 5,
    ...
  },
  "find_params": {              # optional, passed to find()
    "min_score": 0.6,
    "use_soft_score": true,
    ...
  },
  "scenes": [
    {
      "image": "path/to/scene1.png",
      "ground_truth": [
        {"cx": 320.0, "cy": 240.0, "angle_deg": 12.0,
         "scale": 1.0, "tolerance_px": 5.0,
         "tolerance_deg": 3.0}
      ]
    }
  ]
}

Output: precision/recall/IoU/timing report per scene + aggregates.
"""
from __future__ import annotations

import argparse
import json
import math
import sys
import time
from pathlib import Path

import cv2
import numpy as np

from pm2d.line_matcher import LineShapeMatcher, _poly_iou, _oriented_bbox_polygon


def _load_image(path: str | Path) -> np.ndarray:
    img = cv2.imread(str(path), cv2.IMREAD_UNCHANGED)
    if img is None:
        raise FileNotFoundError(f"Image not found: {path}")
    if img.ndim == 2:
        img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    return img


def _gt_to_poly(gt: dict, tw: int, th: int) -> np.ndarray:
    """Builds the oriented bbox polygon for a ground-truth pose."""
    s = float(gt.get("scale", 1.0))
    return _oriented_bbox_polygon(
        float(gt["cx"]), float(gt["cy"]),
        tw * s, th * s, float(gt["angle_deg"]),
    )


def _match_to_gt(match, gt: dict, tw: int, th: int,
                 iou_thr: float = 0.3) -> bool:
    """True if the match corresponds to the ground truth.

    Criterion: centre distance <= tolerance_px AND |angle_deg - gt| <= tolerance_deg,
    OR bbox IoU >= iou_thr (fallback for poses with wide tolerances).
    """
    tol_px = float(gt.get("tolerance_px", 5.0))
    tol_deg = float(gt.get("tolerance_deg", 3.0))
    dx = match.cx - float(gt["cx"])
    dy = match.cy - float(gt["cy"])
    dist = math.hypot(dx, dy)
    # Wrap the angle difference into [-180, 180) before taking |.|
    da = abs((match.angle_deg - float(gt["angle_deg"]) + 180) % 360 - 180)
    if dist <= tol_px and da <= tol_deg:
        return True
    # IoU fallback
    poly_gt = _gt_to_poly(gt, tw, th)
    poly_m = match.bbox_poly
    if _poly_iou(poly_m, poly_gt) >= iou_thr:
        return True
    return False


def evaluate_scene(matcher: LineShapeMatcher, scene_bgr: np.ndarray,
                   gt_list: list[dict], find_params: dict,
                   tw: int, th: int) -> dict:
    """Runs the match and computes TP/FP/FN for one scene."""
    t0 = time.time()
    matches = matcher.find(scene_bgr, **find_params)
    elapsed = time.time() - t0
    gt_matched = [False] * len(gt_list)
    match_is_tp = [False] * len(matches)
    iou_per_match = [0.0] * len(matches)
    for i, m in enumerate(matches):
        for j, gt in enumerate(gt_list):
            if gt_matched[j]:
                continue
            if _match_to_gt(m, gt, tw, th):
                gt_matched[j] = True
                match_is_tp[i] = True
                # IoU computed only for the metric
                poly_gt = _gt_to_poly(gt, tw, th)
                iou_per_match[i] = _poly_iou(m.bbox_poly, poly_gt)
                break
    tp = sum(match_is_tp)
    fp = len(matches) - tp
    fn = len(gt_list) - sum(gt_matched)
    return {
        "n_matches": len(matches),
        "n_gt": len(gt_list),
        "tp": tp, "fp": fp, "fn": fn,
        "find_time_s": elapsed,
        "iou_mean": float(np.mean([i for i, t in zip(iou_per_match, match_is_tp) if t])
                          if tp > 0 else 0.0),
        "diag": (matcher.get_last_diag()
                 if hasattr(matcher, "get_last_diag") else None),
    }


def run(dataset_path: str, scene_filter: str | None = None,
        verbose: bool = False) -> dict:
    """Runs the eval on a dataset, returns the aggregate report."""
    dataset_path = Path(dataset_path)
    base = dataset_path.parent
    with open(dataset_path) as f:
        ds = json.load(f)
    template = _load_image(base / ds["template"])
    mask = None
    if ds.get("mask"):
        mask_img = cv2.imread(str(base / ds["mask"]), cv2.IMREAD_GRAYSCALE)
        if mask_img is not None:
            mask = (mask_img > 128).astype(np.uint8) * 255
    init_params = ds.get("params", {})
    find_params = ds.get("find_params", {})
    matcher = LineShapeMatcher(**init_params)
    n_var = matcher.train(template, mask=mask)
    tw, th = matcher.template_size
    print(f"Template: {ds['template']} ({tw}x{th}), {n_var} variants")
    print(f"Matcher params: {init_params}")
    print(f"Find params: {find_params}")
    print()
    scenes = ds["scenes"]
    if scene_filter:
        scenes = [s for s in scenes if scene_filter in s["image"]]
    rows = []
    tot_tp = tot_fp = tot_fn = 0
    tot_time = 0.0
    for sc in scenes:
        scene = _load_image(base / sc["image"])
        gt = sc.get("ground_truth", [])
        result = evaluate_scene(matcher, scene, gt, find_params, tw, th)
        rows.append({"scene": sc["image"], **result})
        tot_tp += result["tp"]; tot_fp += result["fp"]; tot_fn += result["fn"]
        tot_time += result["find_time_s"]
        prec = result["tp"] / max(1, result["tp"] + result["fp"])
        rec = result["tp"] / max(1, result["tp"] + result["fn"])
        line = (f"  {sc['image']:30s} "
                f"TP={result['tp']} FP={result['fp']} FN={result['fn']} "
                f"P={prec:.2f} R={rec:.2f} "
                f"IoU={result['iou_mean']:.2f} "
                f"t={result['find_time_s']*1000:.0f}ms")
        print(line)
        if verbose and result["diag"] and hasattr(matcher, "_format_diag"):
            print(f"    diag: {matcher._format_diag(result['diag'])}")
    # Aggregates
    precision = tot_tp / max(1, tot_tp + tot_fp)
    recall = tot_tp / max(1, tot_tp + tot_fn)
    f1 = 2 * precision * recall / max(1e-9, precision + recall)
    print()
    print(f"AGGREGATE: precision={precision:.3f} recall={recall:.3f} "
          f"F1={f1:.3f} TP={tot_tp} FP={tot_fp} FN={tot_fn}")
    print(f"TIME: total={tot_time:.2f}s avg={tot_time / max(1, len(scenes)) * 1000:.0f}ms/scene")
    return {
        "precision": precision, "recall": recall, "f1": f1,
        "tp": tot_tp, "fp": tot_fp, "fn": tot_fn,
        "total_time_s": tot_time, "n_scenes": len(scenes),
        "per_scene": rows,
    }


def main(argv: list[str] | None = None) -> int:
    p = argparse.ArgumentParser(
        description="pm2d-eval: validation harness for LineShapeMatcher"
    )
    p.add_argument("dataset", help="JSON dataset (template + scenes + GT)")
    p.add_argument("--scene-filter", default=None,
                   help="Substring filter on scene names (debug)")
    p.add_argument("--verbose", "-v", action="store_true",
                   help="Print the diag dict for every scene")
    p.add_argument("--out", default=None,
                   help="Save the JSON report to a file")
    args = p.parse_args(argv)
    report = run(args.dataset, scene_filter=args.scene_filter,
                 verbose=args.verbose)
    if args.out:
        with open(args.out, "w") as f:
            json.dump(report, f, indent=2)
        print(f"Report saved: {args.out}")
    return 0 if report["f1"] > 0.5 else 1


if __name__ == "__main__":
    sys.exit(main())
+71
@@ -127,6 +127,7 @@ class Match:
scale: float
score: float
bbox_poly: np.ndarray # (4, 2) float32 - 4 ordered vertices (rotated)
variant_idx: int = -1 # index of the variant used (for a consistent overlay)
@dataclass
@@ -1356,6 +1357,7 @@ class LineShapeMatcher:
min_recall: float = 0.0,
use_soft_score: bool = False,
subpixel_lm: bool = False,
debug: bool = False,
) -> list[Match]:
"""
scale_penalty: if > 0, lowers the score for matches at a scale other than 1.0:
@@ -1373,6 +1375,32 @@ class LineShapeMatcher:
if not self.variants:
raise RuntimeError("Matcher not trained: call train() first.")
# Diagnostic counters: track why candidates get dropped along the
# pipeline. Exposed via get_last_diag() or printed implicitly
# when debug=True (see below).
diag = {
"n_variants_total": len(self.variants),
"n_variants_top_evaluated": 0,
"n_variants_top_passed": 0,
"n_variants_full_evaluated": 0,
"n_raw_candidates": 0,
"n_after_pre_nms": 0,
"drop_ncc_low": 0,
"drop_min_score_post_avg": 0,
"drop_recall_low": 0,
"drop_bbox_out_of_scene": 0,
"drop_nms_iou": 0,
"n_final": 0,
"top_thresh_used": 0.0,
"verify_threshold_used": float(verify_threshold),
"min_score_used": float(min_score),
"min_recall_used": float(min_recall),
"use_polarity": bool(self.use_polarity),
"use_soft_score": bool(use_soft_score),
"subpixel_lm": bool(subpixel_lm),
}
self._last_diag = diag
gray_full = self._to_gray(scene_bgr)
# Apply the search ROI: restrict the scene to a crop, remember the offset
# to translate match coordinates back at the end of the pipeline.
@@ -1428,6 +1456,7 @@ class LineShapeMatcher:
top_factor = max(top_factor, 0.7)
cf_eff = 1
top_thresh = min_score * top_factor
diag["top_thresh_used"] = float(top_thresh)
tw, th = self.template_size
# density_top already computed above (cache hit or miss)
@@ -1513,6 +1542,7 @@ class LineShapeMatcher:
kept_coarse: list[tuple[int, float]] = []
all_top_scores: list[tuple[int, float]] = []
diag["n_variants_top_evaluated"] = len(coarse_idx_list)
# batch_top: uses the single-call batch kernel with an outer prange over
# variants. Beats the threadpool when n_vars >> n_threads and when
# H*W top is small (JIT call overhead > kernel cost).
@@ -1576,6 +1606,8 @@ class LineShapeMatcher:
kept_variants.sort(key=lambda t: -t[1])
max_vars_full = max(max_matches * 8, len(self.variants) // 2)
kept_variants = kept_variants[:max_vars_full]
diag["n_variants_top_passed"] = len(kept_coarse)
diag["n_variants_full_evaluated"] = len(kept_variants)
# Full-res (parallelized) with bitmap.
# Reuse the cache if available, otherwise compute and store it.
@@ -1669,6 +1701,7 @@ class LineShapeMatcher:
raw.append((float(vals[i]), int(xs[i]), int(ys[i]), vi))
raw.sort(key=lambda c: -c[0])
diag["n_raw_candidates"] = len(raw)
# Map vi → score_map for subpixel/refinement
score_maps = dict(candidates_per_var)
@@ -1700,6 +1733,7 @@ class LineShapeMatcher:
preliminary_int.append((score, xi, yi, vi))
if len(preliminary_int) >= pre_cap:
break
diag["n_after_pre_nms"] = len(preliminary_int)
# Subpixel + refine + verify only on the pre-NMS candidates (max pre_cap)
kept: list[Match] = []
@@ -1746,6 +1780,7 @@ class LineShapeMatcher:
view_idx=getattr(var, "view_idx", 0),
)
if ncc < verify_threshold:
diag["drop_ncc_low"] += 1
continue
score_f = (float(score_f) + max(0.0, ncc)) * 0.5
# Soft-margin gradient similarity: replaces or complements the
@@ -1760,6 +1795,7 @@ class LineShapeMatcher:
# drive the shape score below the user threshold. Without this
# check, matches with score < min_score showed up (confusing in the UI).
if float(score_f) < min_score:
diag["drop_min_score_post_avg"] += 1
continue
# Feature recall (Halcon MinScore-style): counts how many features
@@ -1771,6 +1807,7 @@ class LineShapeMatcher:
spread0, var, cx_f, cy_f, ang_f,
)
if recall < min_recall:
diag["drop_recall_low"] += 1
continue
# Translate coords back from the ROI-crop space to the original scene space.
@@ -1794,6 +1831,7 @@ class LineShapeMatcher:
)
inside_ratio = float(inter) / poly_area
if inside_ratio < 0.75:
diag["drop_bbox_out_of_scene"] += 1
continue
# Optional scale penalty: the score degrades with distance from 1.0
if scale_penalty > 0.0 and var.scale != 1.0:
@@ -1818,6 +1856,7 @@ class LineShapeMatcher:
dup = True
break
if dup:
diag["drop_nms_iou"] += 1
continue
kept.append(Match(
cx=cx_out, cy=cy_out,
@@ -1825,7 +1864,39 @@ class LineShapeMatcher:
scale=var.scale,
score=score_f,
bbox_poly=poly,
variant_idx=int(vi),
))
if len(kept) >= max_matches:
break
diag["n_final"] = len(kept)
if debug:
# Debug mode: print the diagnostics to stderr for immediate visibility.
import sys as _sys
_sys.stderr.write(f"[pm2d.find debug] {self._format_diag(diag)}\n")
return kept
def _format_diag(self, diag: dict) -> str:
"""Formats the diag dict as a single readable line."""
return (
f"vars: {diag['n_variants_total']} -> "
f"top_eval={diag['n_variants_top_evaluated']} "
f"top_pass={diag['n_variants_top_passed']} "
f"full_eval={diag['n_variants_full_evaluated']} | "
f"raw={diag['n_raw_candidates']} "
f"pre_nms={diag['n_after_pre_nms']} -> "
f"drop[ncc={diag['drop_ncc_low']}, "
f"score={diag['drop_min_score_post_avg']}, "
f"recall={diag['drop_recall_low']}, "
f"bbox={diag['drop_bbox_out_of_scene']}, "
f"nms={diag['drop_nms_iou']}] = "
f"final={diag['n_final']} (top_thresh={diag['top_thresh_used']:.2f})"
)
def get_last_diag(self) -> dict | None:
"""Returns the diag dict from the last find() call.
Halcon equivalent: inspect_shape_model, which exposes partial counters.
Useful for debugging "why 0 matches", interactive tuning, validation.
See the diag keys for their meaning (n_variants_top_evaluated, drop_*, ...).
"""
return getattr(self, "_last_diag", None)
+216 -18
@@ -131,23 +131,102 @@ def _encode_png(img: np.ndarray) -> bytes:
def _draw_matches(scene: np.ndarray, matches: list[Match],
-                 template_gray: np.ndarray | None) -> np.ndarray:
+                 template_gray: np.ndarray | None,
+                 matcher: "LineShapeMatcher | None" = None) -> np.ndarray:
"""Draws annotated matches onto the scene.
If a matcher is passed, the same edge-filtering pipeline
(weak/strong_grad hysteresis) and feature selection used in training
are applied, so the match overlay reflects EXACTLY what the user
saw in the "Edge preview" panel. It also draws the UCS
(X axis red, Y green) at the match pose centre.
Without a matcher: Canny fallback (legacy).
"""
out = scene.copy()
H, W = scene.shape[:2]
palette = [
(0, 255, 0), (0, 200, 255), (255, 100, 100), (255, 200, 0),
(200, 0, 255), (100, 255, 200), (255, 0, 0), (0, 255, 255),
]
bin_colors = [
(255, 0, 0), (255, 128, 0), (255, 255, 0), (0, 255, 0),
(0, 255, 255), (0, 128, 255), (0, 0, 255), (255, 0, 255),
(255, 100, 100), (255, 180, 100), (255, 230, 100), (180, 255, 100),
(100, 255, 200), (100, 180, 255), (180, 100, 255), (255, 100, 200),
]
for i, m in enumerate(matches):
color = palette[i % len(palette)]
- if template_gray is not None:
# UCS position: centroid of the warped features (default = cx, cy if unavailable).
# Keeps it consistent with the model preview, which shows the UCS at the centroid.
ucs_x, ucs_y = float(m.cx), float(m.cy)
if template_gray is not None and matcher is not None:
t = template_gray
th, tw = t.shape
- edge = cv2.Canny(t, 50, 150)
cx_t = (tw - 1) / 2.0; cy_t = (th - 1) / 2.0
M = cv2.getRotationMatrix2D((cx_t, cy_t), m.angle_deg, m.scale)
M[0, 2] += m.cx - cx_t
M[1, 2] += m.cy - cy_t
# Filtered-edge background: warp the template + hysteresis
warped_gray = cv2.warpAffine(
t, M, (W, H), flags=cv2.INTER_LINEAR, borderValue=0)
mag, _ = matcher._gradient(warped_gray)
if matcher.weak_grad < matcher.strong_grad:
edge_mask = matcher._hysteresis_mask(mag)
else:
edge_mask = mag >= matcher.strong_grad
if edge_mask.any():
bg_overlay = np.zeros_like(out)
dark = tuple(int(c * 0.35) for c in color)
bg_overlay[edge_mask] = dark
out = cv2.addWeighted(out, 1.0, bg_overlay, 0.7, 0)
# Real matcher features: use the pre-computed ones from the
# variant that produced the match. Exactly the same features
# shown in the model preview (rotated/scaled to the pose).
vi = getattr(m, "variant_idx", -1)
fx_scene = fy_scene = fb_arr = None
if 0 <= vi < len(matcher.variants):
lvl0 = matcher.variants[vi].levels[0]
# dx/dy are offsets relative to the CENTRE of the warped template
# in template-kernel coordinates (already pre-rotated to the raw
# variant angle). For consistency with the final pose m.angle_deg
# (post-refine), re-rotate by the angular delta
# (m.angle_deg - var.angle_deg).
var = matcher.variants[vi]
dang = np.deg2rad(m.angle_deg - var.angle_deg)
ca, sa = np.cos(dang), np.sin(dang)
dxr = lvl0.dx * ca + lvl0.dy * sa
dyr = -lvl0.dx * sa + lvl0.dy * ca
fx_scene = m.cx + dxr
fy_scene = m.cy + dyr
fb_arr = lvl0.bin
else:
# Fallback: extract features from the warped image (loses precision)
_, bins_w = matcher._gradient(warped_gray)
fx, fy, fb = matcher._extract_features(mag, bins_w, None)
fx_scene = fx.astype(np.float32)
fy_scene = fy.astype(np.float32)
fb_arr = fb
# Draw the features
for k in range(len(fx_scene)):
px = int(round(float(fx_scene[k])))
py = int(round(float(fy_scene[k])))
if 0 <= px < W and 0 <= py < H:
bcol = bin_colors[int(fb_arr[k]) % len(bin_colors)]
cv2.circle(out, (px, py), 2, bcol, -1, cv2.LINE_AA)
# UCS at the feature centroid (in scene coords)
if len(fx_scene) > 0:
ucs_x = float(np.mean(fx_scene))
ucs_y = float(np.mean(fy_scene))
elif template_gray is not None:
# Without a matcher: legacy Canny
t = template_gray
th, tw = t.shape
cx_t = (tw - 1) / 2.0; cy_t = (th - 1) / 2.0
M = cv2.getRotationMatrix2D((cx_t, cy_t), m.angle_deg, m.scale)
M[0, 2] += m.cx - cx_t
M[1, 2] += m.cy - cy_t
edge = cv2.Canny(t, 50, 150)
warped = cv2.warpAffine(edge, M, (W, H),
flags=cv2.INTER_NEAREST, borderValue=0)
mask = warped > 0
@@ -155,20 +234,34 @@ def _draw_matches(scene: np.ndarray, matches: list[Match],
overlay = np.zeros_like(out)
overlay[mask] = color
out[mask] = (0.3 * out[mask] + 0.7 * overlay[mask]).astype(np.uint8)
- poly = m.bbox_poly.astype(np.int32).reshape(-1, 1, 2)
- cv2.polylines(out, [poly], True, color, 2, cv2.LINE_AA)
- p0 = tuple(m.bbox_poly[0].astype(int))
- p1 = tuple(m.bbox_poly[1].astype(int))
- cv2.line(out, p0, p1, color, 4, cv2.LINE_AA)
- cx, cy = int(round(m.cx)), int(round(m.cy))
- cv2.drawMarker(out, (cx, cy), color, cv2.MARKER_CROSS, 22, 2, cv2.LINE_AA)
# bbox poly and marker line removed (user request: "remove the ROI"):
# the UCS + filtered edges already identify pose and orientation.
cx, cy = int(round(ucs_x)), int(round(ucs_y))
# UCS at the match pose centre (user request: same as in the model
# preview). X axis red pointing right, Y green pointing down (image y-down).
# Length derived from the bbox diagonal, for scale invariance.
L = int(np.linalg.norm(m.bbox_poly[1] - m.bbox_poly[0])) // 2
- a = np.deg2rad(m.angle_deg)
- cv2.arrowedLine(out, (cx, cy),
-                 (int(cx + L * np.cos(a)), int(cy - L * np.sin(a))),
-                 color, 2, cv2.LINE_AA, tipLength=0.2)
if L < 10:
L = 30 # fallback for a degenerate bbox
ax = np.deg2rad(m.angle_deg)
# Rotated X axis (red)
x_end = (int(cx + L * np.cos(ax)), int(cy - L * np.sin(ax)))
cv2.arrowedLine(out, (cx, cy), x_end,
(0, 0, 255), 2, cv2.LINE_AA, tipLength=0.2)
cv2.putText(out, "X", (x_end[0] + 4, x_end[1] + 5),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
# Perpendicular Y axis (green, +90° in image coords = visually down)
y_end = (int(cx + L * np.cos(ax + np.pi / 2)),
int(cy - L * np.sin(ax + np.pi / 2)))
cv2.arrowedLine(out, (cx, cy), y_end,
(0, 255, 0), 2, cv2.LINE_AA, tipLength=0.2)
cv2.putText(out, "Y", (y_end[0] + 4, y_end[1] + 12),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1, cv2.LINE_AA)
# UCS origin: white circle with black border
cv2.circle(out, (cx, cy), 4, (0, 0, 0), -1, cv2.LINE_AA)
cv2.circle(out, (cx, cy), 3, (255, 255, 255), -1, cv2.LINE_AA)
label = f"#{i+1} {m.angle_deg:.0f}d s={m.scale:.2f} {m.score:.2f}"
- cv2.putText(out, label, (cx + 8, cy - 8),
+ cv2.putText(out, label, (cx + 12, cy - 12),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2, cv2.LINE_AA)
return out
@@ -217,6 +310,7 @@ class MatchResp(BaseModel):
find_time: float
num_variants: int
annotated_id: str
diag: dict | None = None # CC: pipeline diagnostics (drop reasons)
class TuneParams(BaseModel):
@@ -510,7 +604,7 @@ def match(p: MatchParams):
# Render annotated image
tg = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY)
- annotated = _draw_matches(scene, matches, tg)
+ annotated = _draw_matches(scene, matches, tg, matcher=m)
ann_id = _store_image(annotated)
return MatchResp(
@@ -521,6 +615,7 @@ def match(p: MatchParams):
) for m_ in matches],
train_time=t_train, find_time=t_find,
num_variants=n, annotated_id=ann_id,
diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
)
@@ -586,7 +681,7 @@ def match_simple(p: SimpleMatchParams):
t_find = time.time() - t0
tg = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY)
- annotated = _draw_matches(scene, matches, tg)
+ annotated = _draw_matches(scene, matches, tg, matcher=m)
ann_id = _store_image(annotated)
return MatchResp(
@@ -596,6 +691,7 @@ def match_simple(p: SimpleMatchParams):
) for mt in matches],
train_time=t_train, find_time=t_find,
num_variants=n, annotated_id=ann_id,
diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
)
@@ -628,6 +724,107 @@ class SaveRecipeParams(BaseModel):
name: str # recipe file name (no path)
class EdgePreviewParams(BaseModel):
model_id: str
roi: list[int]
weak_grad: float = 30.0
strong_grad: float = 60.0
num_features: int = 96
min_feature_spacing: int = 3
use_polarity: bool = False
@app.post("/preview_edges")
def preview_edges(p: EdgePreviewParams):
"""Extracts edge features from the ROI with the given parameters and
returns the annotated image with the selected pixels as an overlay.
Allows interactive tuning of the weak/strong_grad thresholds and of
num_features to "remove the dirt" (background noise, spurious edges)
before training the real matcher.
"""
model = _load_image(p.model_id)
if model is None:
raise HTTPException(404, "Model not found")
x, y, w, h = p.roi
H_m, W_m = model.shape[:2]
x = max(0, min(int(x), W_m - 1)); y = max(0, min(int(y), H_m - 1))
w = max(1, min(int(w), W_m - x)); h = max(1, min(int(h), H_m - y))
roi_img = model[y:y + h, x:x + w]
# Temporary matcher used only for feature extraction (no full training)
m = LineShapeMatcher(
weak_grad=p.weak_grad,
strong_grad=p.strong_grad,
num_features=p.num_features,
min_feature_spacing=p.min_feature_spacing,
use_polarity=p.use_polarity,
)
gray = cv2.cvtColor(roi_img, cv2.COLOR_BGR2GRAY) if roi_img.ndim == 3 else roi_img
mag, bins = m._gradient(gray)
fx, fy, fb = m._extract_features(mag, bins, None)
# Also show the "weak/strong" pixels as a background heatmap
out = roi_img.copy() if roi_img.ndim == 3 else cv2.cvtColor(roi_img, cv2.COLOR_GRAY2BGR)
# Light magnitude overlay
mag_norm = np.clip(mag / max(1.0, mag.max()) * 255, 0, 255).astype(np.uint8)
mag_color = cv2.applyColorMap(mag_norm, cv2.COLORMAP_BONE)
out = cv2.addWeighted(out, 0.6, mag_color, 0.4, 0)
# "Strong" pixels with hysteresis: faint dark-green outline
if m.weak_grad < m.strong_grad:
edge_mask = m._hysteresis_mask(mag).astype(np.uint8) * 255
else:
edge_mask = (mag >= m.strong_grad).astype(np.uint8) * 255
edge_overlay = np.zeros_like(out)
edge_overlay[edge_mask > 0] = (0, 80, 0) # dark green
out = cv2.addWeighted(out, 1.0, edge_overlay, 0.5, 0)
# Selected features: coloured dots per bin
bin_colors = [
(255, 0, 0), (255, 128, 0), (255, 255, 0), (0, 255, 0),
(0, 255, 255), (0, 128, 255), (0, 0, 255), (255, 0, 255),
(255, 100, 100), (255, 180, 100), (255, 230, 100), (180, 255, 100),
(100, 255, 200), (100, 180, 255), (180, 100, 255), (255, 100, 200),
]
for i in range(len(fx)):
b = int(fb[i])
col = bin_colors[b % len(bin_colors)]
cv2.circle(out, (int(fx[i]), int(fy[i])), 2, col, -1, cv2.LINE_AA)
# UCS at the feature centroid (user request): X axis red, Y axis green
bary_cx = bary_cy = None
if len(fx) > 0:
bary_cx = float(np.mean(fx))
bary_cy = float(np.mean(fy))
bx, by = int(round(bary_cx)), int(round(bary_cy))
axis_len = max(20, int(0.15 * max(out.shape[:2])))
# X axis (red, pointing right)
cv2.arrowedLine(out, (bx, by), (bx + axis_len, by),
(0, 0, 255), 2, cv2.LINE_AA, tipLength=0.2)
cv2.putText(out, "X", (bx + axis_len + 4, by + 5),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
# Y axis (green, pointing down = image y-down convention)
cv2.arrowedLine(out, (bx, by), (bx, by + axis_len),
(0, 255, 0), 2, cv2.LINE_AA, tipLength=0.2)
cv2.putText(out, "Y", (bx + 4, by + axis_len + 12),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1, cv2.LINE_AA)
# Origin: white circle with black border
cv2.circle(out, (bx, by), 4, (0, 0, 0), -1, cv2.LINE_AA)
cv2.circle(out, (bx, by), 3, (255, 255, 255), -1, cv2.LINE_AA)
img_id = _store_image(out)
n_edge_strong = int((mag >= m.strong_grad).sum())
n_edge_total = int(edge_mask.sum() / 255)
return {
"preview_id": img_id,
"n_features": len(fx),
"n_edge_strong": n_edge_strong,
"n_edge_after_hysteresis": n_edge_total,
"mag_max": float(mag.max()),
"mag_p50": float(np.percentile(mag, 50)),
"mag_p85": float(np.percentile(mag, 85)),
"ucs_baricentro": (
{"cx": round(bary_cx, 2), "cy": round(bary_cy, 2)}
if bary_cx is not None else None
),
}
@app.post("/recipes")
def save_recipe(p: SaveRecipeParams):
"""Trains the matcher and saves it to disk as a reusable recipe."""
@@ -760,7 +957,7 @@ def match_recipe(p: RecipeMatchParams):
)
t_find = time.time() - t0
tg = m.template_gray if m.template_gray is not None else np.zeros((1, 1), np.uint8)
- annotated = _draw_matches(scene, matches, tg)
+ annotated = _draw_matches(scene, matches, tg, matcher=m)
ann_id = _store_image(annotated)
return MatchResp(
matches=[MatchResult(
@@ -769,6 +966,7 @@ def match_recipe(p: RecipeMatchParams):
) for mt in matches],
train_time=0.0, find_time=t_find,
num_variants=len(m.variants), annotated_id=ann_id,
diag=m.get_last_diag() if hasattr(m, "get_last_diag") else None,
)
+161
@@ -336,6 +336,7 @@ async function doMatchRecipe() {
document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
document.getElementById("t-var").textContent = data.num_variants;
document.getElementById("t-match").textContent = data.matches.length;
renderDiag(data.diag, data.matches.length);
setStatus(`${data.matches.length} match trovati (ricetta ${state.active_recipe})`);
}
@@ -409,6 +410,7 @@ async function doMatch() {
document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
document.getElementById("t-var").textContent = data.num_variants;
document.getElementById("t-match").textContent = data.matches.length;
renderDiag(data.diag, data.matches.length);
setStatus(`${data.matches.length} match trovati${hasAdv ? " (avanzato)" : ""}`);
}
@@ -436,6 +438,164 @@ function setStatus(s) {
}
// ---------- Init ----------
// ---------- Edge preview (noise clean-up) ----------
let _epDebounce = null;
let _epLastImg = null;
async function fetchEdgePreview() {
if (!state.model || !state.roi) {
document.getElementById("edge-preview-info").textContent =
"Draw the ROI on the model first";
return;
}
const body = {
model_id: state.model.id,
roi: state.roi,
weak_grad: parseFloat(document.getElementById("ep-weak").value),
strong_grad: parseFloat(document.getElementById("ep-strong").value),
num_features: parseInt(document.getElementById("ep-nf").value, 10),
min_feature_spacing: parseInt(document.getElementById("ep-sp").value, 10),
use_polarity: document.getElementById("ep-pol").checked,
};
try {
const r = await fetch("/preview_edges", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(body),
});
if (!r.ok) throw new Error(await r.text());
const j = await r.json();
_epLastImg = await loadImage(`/image/${j.preview_id}/raw?t=${Date.now()}`);
drawEdgePreview();
const ucs = j.ucs_baricentro
? ` | UCS=(${j.ucs_baricentro.cx},${j.ucs_baricentro.cy})`
: "";
document.getElementById("edge-preview-info").innerHTML =
`<b>${j.n_features}</b> features selected (of ${j.n_edge_after_hysteresis} total edges)<br>` +
`mag: max=${j.mag_max.toFixed(0)} p50=${j.mag_p50.toFixed(0)} ` +
`p85=${j.mag_p85.toFixed(0)}${ucs}`;
} catch (e) {
document.getElementById("edge-preview-info").textContent =
`Preview error: ${e.message}`;
}
}
function drawEdgePreview() {
const cnv = document.getElementById("c-edge-preview");
if (!_epLastImg) return;
const ctx = cnv.getContext("2d");
// Fit-contain
const r = Math.min(cnv.width / _epLastImg.width,
cnv.height / _epLastImg.height);
const w = _epLastImg.width * r;
const h = _epLastImg.height * r;
const ox = (cnv.width - w) / 2;
const oy = (cnv.height - h) / 2;
ctx.fillStyle = "#000"; ctx.fillRect(0, 0, cnv.width, cnv.height);
ctx.imageSmoothingEnabled = false;
ctx.drawImage(_epLastImg, ox, oy, w, h);
}
function scheduleEdgePreview() {
if (_epDebounce) clearTimeout(_epDebounce);
_epDebounce = setTimeout(fetchEdgePreview, 200);
}
function bindEdgePreviewControls() {
const slid = (id, valEl) => {
const el = document.getElementById(id);
const v = document.getElementById(valEl);
el.addEventListener("input", () => {
v.textContent = el.value;
scheduleEdgePreview();
});
};
slid("ep-weak", "ep-weak-v");
slid("ep-strong", "ep-strong-v");
slid("ep-nf", "ep-nf-v");
slid("ep-sp", "ep-sp-v");
document.getElementById("ep-pol").addEventListener("change",
scheduleEdgePreview);
// Auto-refresh when the panel is opened
document.getElementById("edge-preview-panel").addEventListener("toggle",
(e) => { if (e.target.open) fetchEdgePreview(); });
document.getElementById("btn-edge-apply").addEventListener("click", () => {
// Copy the current values into the Advanced fields
const map = {
"ep-weak": "adv-weak_grad",
"ep-strong": "adv-strong_grad",
"ep-nf": "adv-num_features",
"ep-sp": "adv-min_feature_spacing",
};
for (const [src, dst] of Object.entries(map)) {
const dstEl = document.getElementById(dst);
if (dstEl) dstEl.value = document.getElementById(src).value;
}
// use_polarity: maps to the Halcon-mode checkbox
const polCb = document.getElementById("hc-use-polarity");
if (polCb) polCb.checked = document.getElementById("ep-pol").checked;
// Open the Advanced panel as feedback
const advDetails = document.querySelectorAll("#col-params details");
advDetails.forEach((d) => { d.open = true; });
alert("Edge parameters applied. Run MATCH to use the chosen values.");
});
}
// ---------- CC: match diagnostics ----------
function renderDiag(diag, n_matches) {
const el = document.getElementById("diag-content");
if (!diag) {
el.innerHTML = '<em style="color:#888">Diagnostics not available</em>';
return;
}
const dropTotal = (diag.drop_ncc_low || 0) + (diag.drop_min_score_post_avg || 0)
+ (diag.drop_recall_low || 0) + (diag.drop_bbox_out_of_scene || 0)
+ (diag.drop_nms_iou || 0);
// Contextual hints when there are 0 matches
let hint = "";
if (n_matches === 0) {
if (diag.n_after_pre_nms === 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ No candidate above threshold.
Try lowering: <b>min_score</b> or <b>top_thresh</b> (currently ${diag.top_thresh_used.toFixed(2)})</div>`;
} else if (diag.drop_ncc_low > 0 && dropTotal === diag.drop_ncc_low) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_ncc_low} candidates dropped by NCC.
Try lowering: <b>verify_threshold</b> (lighter FP filter)</div>`;
} else if (diag.drop_min_score_post_avg > 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_min_score_post_avg} matches below min_score after NCC.
Try lowering: <b>min_score</b></div>`;
} else if (diag.drop_recall_low > 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_recall_low} matches with recall < ${diag.min_recall_used}.
Try lowering: <b>min_recall</b></div>`;
} else if (diag.drop_bbox_out_of_scene > 0) {
hint = `<div style="color:#f88; margin-top:6px">⚠ ${diag.drop_bbox_out_of_scene} matches with the bbox out of scene.
Badly derived centre: raise <b>min_score</b> or narrow <b>search_roi</b></div>`;
}
}
}
const flags = [];
if (diag.use_polarity) flags.push("polarity");
if (diag.use_soft_score) flags.push("soft");
if (diag.subpixel_lm) flags.push("subpix-LM");
el.innerHTML = `
<div><b>Pipeline pruning:</b></div>
<div>variants: ${diag.n_variants_total} top_eval=${diag.n_variants_top_evaluated}
top_pass=${diag.n_variants_top_passed} full_eval=${diag.n_variants_full_evaluated}</div>
<div><b>Candidates:</b> raw=${diag.n_raw_candidates}
pre_nms=${diag.n_after_pre_nms} final=${diag.n_final}</div>
<div><b>Drop reasons:</b> NCC=${diag.drop_ncc_low}, score=${diag.drop_min_score_post_avg},
recall=${diag.drop_recall_low}, bbox=${diag.drop_bbox_out_of_scene}, NMS=${diag.drop_nms_iou}</div>
<div><b>Thresholds:</b> top=${diag.top_thresh_used.toFixed(2)},
min_score=${diag.min_score_used.toFixed(2)},
NCC=${diag.verify_threshold_used.toFixed(2)},
recall=${diag.min_recall_used.toFixed(2)}</div>
${flags.length ? `<div><b>Active flags:</b> ${flags.join(", ")}</div>` : ""}
${hint}
`;
// Auto-open the panel on 0 matches (signals the problem)
if (n_matches === 0) {
document.getElementById("diag-panel").open = true;
}
}
// ---------- Auto-tune (Halcon-style) ----------
async function doAutoTune() {
if (!state.model || !state.roi) {
@@ -608,6 +768,7 @@ window.addEventListener("DOMContentLoaded", async () => {
document.getElementById("btn-unload-recipe").addEventListener("click",
unloadRecipe);
refreshRecipeList();
bindEdgePreviewControls();
const slider = document.getElementById("p-min-score");
slider.addEventListener("input", (e) => {
document.getElementById("v-score").textContent =
+44
@@ -45,6 +45,40 @@
<canvas id="c-model" width="380" height="420"></canvas>
</div>
<div id="roi-info">ROI: (nessuna)</div>
<details id="edge-preview-panel" style="margin-top:10px">
<summary>🔬 Edge preview / noise clean-up</summary>
<div style="font-size:11px; color:#aaa; margin:4px 0">
Adjust the thresholds to remove spurious edges (dirt). Red/green UCS
at the feature centroid.
</div>
<div class="ep-grid">
<label class="ep-row">weak_grad <span id="ep-weak-v">30</span>
<input type="range" id="ep-weak" min="5" max="200" value="30" step="1">
</label>
<label class="ep-row">strong_grad <span id="ep-strong-v">60</span>
<input type="range" id="ep-strong" min="10" max="400" value="60" step="1">
</label>
<label class="ep-row">num_features <span id="ep-nf-v">96</span>
<input type="range" id="ep-nf" min="16" max="300" value="96" step="1">
</label>
<label class="ep-row">spacing <span id="ep-sp-v">3</span>
<input type="range" id="ep-sp" min="1" max="15" value="3" step="1">
</label>
<label class="ep-row" style="flex-direction:row; gap:6px">
<input type="checkbox" id="ep-pol"> polarity
</label>
<button class="btn" id="btn-edge-apply" type="button"
style="grid-column:1/-1">
✓ Apply to Advanced parameters
</button>
</div>
<div class="canvas-wrap" style="margin-top:6px">
<canvas id="c-edge-preview" width="380" height="380"></canvas>
</div>
<div id="edge-preview-info" style="font-size:11px; color:#888; margin-top:4px">
Draw a ROI and open this panel to generate the preview
</div>
</details>
</section>
<section class="col" id="col-scene">
@@ -214,6 +248,16 @@
<div class="kv"><span>find:</span><span id="t-find">-</span></div>
<div class="kv"><span>variants:</span><span id="t-var">-</span></div>
<div class="kv"><span>match:</span><span id="t-match">-</span></div>
<details id="diag-panel" style="margin-top:10px">
<summary>🔍 Diagnostics (CC)</summary>
<div id="diag-content" style="font-family:monospace; font-size:11px;
background:#1a1a1a; padding:8px;
border-radius:3px; margin-top:6px;
line-height:1.5">
<em style="color:#888">Run a MATCH to see the diagnostics</em>
</div>
</details>
</section>
</main>
+15
@@ -173,3 +173,18 @@ footer h2 {
}
.hc-row.hc-num label { font-size: 11px; color: #aaa; }
.hc-row.hc-num input { width: 100%; }
/* Edge preview panel */
.ep-grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 6px 12px;
margin-top: 6px;
font-size: 12px;
}
.ep-row {
display: flex; flex-direction: column; gap: 2px;
font-size: 11px; color: #aaa;
}
.ep-row input[type="range"] { width: 100%; }
.ep-row span { color: #fff; font-weight: bold; font-family: monospace; }
+3
@@ -12,6 +12,9 @@ dependencies = [
"uvicorn[standard]>=0.34",
]
[project.scripts]
pm2d-eval = "pm2d.eval:main"
[dependency-groups]
dev = [
"httpx>=0.28.1",