Compare commits

..

11 Commits

Author SHA1 Message Date
Adriano 74a332a2dd feat: scene precompute cache (II Halcon-style)
Per-scene LRU cache: hash over the first 64KB of scene bytes + matcher
parameters (weak/strong_grad, spread_radius, n_bins, pyramid_levels).
On a hit, reuses:
- grays pyramid
- spread_top + bit_active_top + density_top
- spread0 + bit_active_full + density_full

Typical use case: UI tuning with the min_score/verify_threshold/...
sliders produces 10+ consecutive find() calls on an identical scene.
Saves duplicated Sobel+dilate+popcount work (~50ms on 1080p).

Measured speedup: ~15% on find() at 1080p (54ms out of 351ms). The gain
is larger for small templates (fast JIT kernel → scene precompute
dominates). Cache size 4, invalidated in train() (template changed).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:07:27 +02:00
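The key/eviction scheme described in this commit can be sketched independently of the matcher. This is an illustrative standalone version (the function and class names here are hypothetical, not the repo's):

```python
import hashlib
from collections import OrderedDict

import numpy as np

CACHE_SIZE = 4  # as in the commit: at most 4 scenes retained


def scene_key(gray: np.ndarray, **params) -> str:
    """Hash the first 64KB of the scene plus the matcher params."""
    h = hashlib.md5()
    h.update(gray.tobytes()[:65536])
    h.update(f"|{gray.shape}|{gray.dtype}".encode())
    for k in sorted(params):
        h.update(f"|{k}={params[k]}".encode())
    return h.hexdigest()


class SceneCache:
    """Tiny LRU over OrderedDict, mirroring the commit's eviction rule."""

    def __init__(self, size: int = CACHE_SIZE):
        self._d: OrderedDict = OrderedDict()
        self._size = size

    def get(self, key):
        v = self._d.get(key)
        if v is not None:
            self._d.move_to_end(key)  # mark as most recently used
        return v

    def put(self, key, value):
        self._d[key] = value
        self._d.move_to_end(key)
        while len(self._d) > self._size:
            self._d.popitem(last=False)  # evict least recently used
```

Hashing only a 64KB prefix keeps key computation essentially free; two distinct scenes of identical shape that agree on their first 64KB would collide, which the commit accepts as unlikely for photographic scenes.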
Adriano 9218cb2741 chore: gitignore recipes/*.npz and remove Pippo.npz from tracking
Pre-trained recipes (compressed numpy binaries) are user data specific
to the machine/ROI/template and must not be versioned.
Removed Pippo.npz from the repo (kept on the local filesystem).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 23:21:46 +02:00
Adriano 159f9089a5 merge: UI recipe load 2026-05-04 23:20:52 +02:00
Adriano b718e81ccf feat(web): UI load/detach recipe + match with loaded recipe
The "load" path of the V feature was missing: the user could save a
recipe but not load it from the UI. Added:

Server:
- POST /recipes/{name}/load: loads the .npz into the _RECIPE_MATCHERS cache
- POST /match_recipe: uses the loaded matcher without re-training (zero
  training time, only find params are propagated)

UI:
- Dropdown of available recipes (auto-refreshed from GET /recipes)
- "Carica" button activates the recipe + populates state.active_recipe
- "Stacca" button returns to the normal flow (training from the ROI)
- Status indicator shows the active recipe and its dimensions

doMatch dispatches automatically:
- active recipe → /match_recipe (no model/ROI needed)
- otherwise → /match or /match_simple as before

Use case: a recipe tuned offline, deployed to the production runtime
without reloading model+ROI every time.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 23:20:52 +02:00
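Since /recipes/{name}/load takes the recipe name straight from the URL, the name is whitelisted before touching the filesystem. A minimal standalone sketch of that sanitization rule (whitelist alphanumerics plus "._-", force the .npz suffix):

```python
def sanitize_recipe_name(name: str) -> str:
    """Keep only alphanumerics plus '._-' and force the .npz suffix,
    so a recipe name taken from the URL cannot escape the recipes dir."""
    safe = "".join(c for c in name if c.isalnum() or c in "._-")
    if not safe.endswith(".npz"):
        safe += ".npz"
    return safe
```

Note that a path-traversal attempt simply collapses into a harmless flat filename, because the '/' separators are stripped by the whitelist.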
Adriano d46197a81a merge: UI auto-tune button 2026-05-04 23:10:07 +02:00
Adriano 37c645984f feat(web): Auto-tune button in the toolbar (Halcon-style)
The UI already exposed the /auto_tune endpoint, but there was no
user-facing trigger. Added a toolbar button next to MATCH:
- Computes all technical parameters from the selected ROI (gradient,
  features, pyramid, angle_step, symmetry)
- Runs self-validation training+find on the template
- Applies the derived values to the fields of the Advanced section
- Shows an alert with a summary + diagnostic meta
  (detected symmetry, self-validation result, etc.)

The /auto_tune endpoint now also returns the meta fields (_self_score,
_validation, _symmetry_order, _orient_entropy) for UI feedback instead
of filtering them out.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 23:10:07 +02:00
Adriano 0e148667ec merge: auto_tune self-validation 2026-05-04 23:04:10 +02:00
Adriano b5bbca0e85 merge: hysteresis edge linking 2026-05-04 23:04:10 +02:00
Adriano ca3882c59c feat: auto_tune self-validation (Halcon-style inspect_shape_model)
New helper _self_validate(): after parameter estimation, it runs a
dry-run training+find on the template itself and adjusts the parameters
if they are suboptimal.

Auto-correction loop (analogous to Halcon inspect_shape_model):
1. If the top pyramid level has <8 features → reduce pyramid_levels
2. If train produces 0 variants → halve weak/strong_grad
3. If find on the template fails → lower thresholds + num_features
4. If the self-score is < 0.7 → lower weak_grad

Cost: 1 minimal train (1 variant) + 1 find on the template canvas plus
padding, ~50ms on a 100x100 template. Worth it to avoid match-time
errors on real scenes with badly estimated parameters.

Exposed via auto_tune(self_validate=True), the default; the meta fields
'_self_score' and '_validation' land in the result dict for UI logging.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 23:04:01 +02:00
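The four correction rules above can be sketched as a pure function over the params dict, with `train`/`find` as injected stand-ins for the real matcher (names and thresholds mirror the commit message, not the exact repo code):

```python
def self_validate(params: dict, train, find) -> dict:
    """One pass of the correction loop described above.

    `train(params)` -> (n_variants, n_top_features);
    `find(params)` -> self-match score, or None on failure.
    Mutates and returns params, mirroring the commit's chaining style.
    """
    n_var, n_top = train(params)
    if n_var == 0:
        # rule 2: no variants generated → halve both gradient thresholds
        params["weak_grad"] = max(15.0, params["weak_grad"] * 0.5)
        params["strong_grad"] = max(30.0, params["strong_grad"] * 0.5)
        return params
    if n_top < 8 and params["pyramid_levels"] > 1:
        # rule 1: starved top level → drop one pyramid level
        params["pyramid_levels"] -= 1
        return params
    score = find(params)
    if score is None:
        # rule 3: self-find failed → relax thresholds and feature budget
        params["weak_grad"] = max(15.0, params["weak_grad"] * 0.7)
        params["num_features"] = max(48, int(params["num_features"] * 0.8))
    elif score < 0.7:
        # rule 4: low self-score → lower weak_grad only
        params["weak_grad"] = max(15.0, params["weak_grad"] * 0.85)
    return params
```

Each rule returns early so at most one correction is applied per validation pass, which keeps the adjustment conservative.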
Adriano 7f6571bdd1 feat: hysteresis edge linking (Halcon Contrast='auto' two-threshold)
_hysteresis_mask: edge linking via connected components.
- seed = mag >= strong_grad
- weak = mag >= weak_grad
- Promotes to feature every weak component that contains at least one
  strong pixel (8-neighbor connectivity)

Simultaneously reduces:
- False positives: an isolated weak edge (pure noise) is excluded
- False negatives: a weak edge connected to a strong edge is included
  (continuity of thin, low-contrast borders)

Automatically active when weak_grad < strong_grad. If they are equal,
falls back to standard single-threshold selection. Fully backward
compatible given the defaults weak=30, strong=60.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 23:01:54 +02:00
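The promotion rule can be sketched without OpenCV, using a plain BFS from the strong seeds through the weak mask (the repo uses cv2.connectedComponents instead; this is an equivalent, slower illustration):

```python
from collections import deque

import numpy as np


def hysteresis_mask(mag: np.ndarray, weak: float, strong: float) -> np.ndarray:
    """Keep every pixel >= weak that is 8-connected to a pixel >= strong."""
    weak_m = mag >= weak
    seen = mag >= strong            # strong seeds are always kept
    h, w = mag.shape
    q = deque(zip(*np.nonzero(seen)))
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):       # expand into 8-neighbor weak pixels
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak_m[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return seen
```

BFS flood fill and connected-component labeling give the same result here; the labeling version is preferable in practice because it is vectorized.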
Adriano 7cb1ae2df7 merge: UI wiring for Halcon mode 2026-05-04 22:49:17 +02:00
6 changed files with 487 additions and 23 deletions
+2
@@ -8,3 +8,5 @@ __pycache__/
.DS_Store
*.log
models/
# Pre-trained recipes (user-generated, do not version)
recipes/*.npz
+105
@@ -152,11 +152,103 @@ def _cache_key(template_bgr: np.ndarray, mask: np.ndarray | None) -> str:
    return h.hexdigest()
def _self_validate(template_bgr: np.ndarray, params: dict,
                   mask: np.ndarray | None = None) -> dict:
    """Halcon-style self-validation: train the matcher with the tentative
    parameters and verify that the template itself is found with recall ≥ 1.0.
    If recall < target or the score is low, adjust the parameters:
    - raise weak_grad if there are too many spurious edges (solid recall
      but many false peaks)
    - lower strong_grad if too many features are discarded (low feature count)
    - reduce pyramid_levels if variants[0].levels[top] has <8 features
    Halcon uses this loop internally in inspect_shape_model. Cost: 1
    train + 1 find on the template (~50ms on a 100x100 template). Worth it
    if it avoids match-time errors on real scenes.
    Mutates `params` in place and returns the same dict for chaining.
    """
    # Lazy import: avoids a cycle (line_matcher imports nothing from auto_tune)
    from pm2d.line_matcher import LineShapeMatcher

    # Degenerate case: too few features pre-validation → lower the thresholds
    if params.get("_n_strong_pixels", 0) < 30:
        params["weak_grad"] = max(15.0, params["weak_grad"] * 0.6)
        params["strong_grad"] = max(30.0, params["strong_grad"] * 0.6)
    # Minimal train: a single pose at orientation 0 (degenerate range that
    # still produces 1 variant via the fallback in _angle_list).
    m = LineShapeMatcher(
        num_features=params["num_features"],
        weak_grad=params["weak_grad"],
        strong_grad=params["strong_grad"],
        angle_range_deg=(0.0, 0.0),  # fallback _angle_list = [0.0]
        angle_step_deg=10.0,
        scale_range=(1.0, 1.0),
        spread_radius=params["spread_radius"],
        pyramid_levels=params["pyramid_levels"],
    )
    n_var = m.train(template_bgr, mask=mask)
    if n_var == 0:
        # Thresholds too high: no variants generated → halve them
        params["weak_grad"] = max(15.0, params["weak_grad"] * 0.5)
        params["strong_grad"] = max(30.0, params["strong_grad"] * 0.5)
        params["_validation"] = "fallback: thresholds halved (no variants)"
        return params
    # Check the feature density at the top level (collapse risk)
    top_lvl = m.variants[0].levels[-1]
    if top_lvl.n < 8 and params["pyramid_levels"] > 1:
        params["pyramid_levels"] = max(1, params["pyramid_levels"] - 1)
        params["_validation"] = (
            f"pyramid_levels reduced to {params['pyramid_levels']} "
            f"(top had {top_lvl.n} features)"
        )
        return params
    # Self-find: search for the template in its own image
    h, w = template_bgr.shape[:2]
    # Embed the template in a slightly larger scene to avoid border effects
    pad = 20
    canvas = np.full(
        (h + 2 * pad, w + 2 * pad, 3 if template_bgr.ndim == 3 else 1),
        128, dtype=np.uint8,
    )
    canvas[pad:pad + h, pad:pad + w] = template_bgr
    matches = m.find(
        canvas, min_score=0.3, max_matches=5,
        verify_ncc=False,  # template itself → NCC = 1 always, skip for speed
        refine_angle=False, subpixel=False,
        nms_iou_threshold=0.3,
    )
    if not matches:
        # No match on its own template: parameters too restrictive
        params["weak_grad"] = max(15.0, params["weak_grad"] * 0.7)
        params["strong_grad"] = max(30.0, params["strong_grad"] * 0.7)
        params["num_features"] = max(48, int(params["num_features"] * 0.8))
        params["_validation"] = "thresholds/features reduced (no self-match)"
        return params
    # Measure the top match score
    top_score = float(matches[0].score)
    params["_self_score"] = round(top_score, 3)
    if top_score < 0.7:
        # Low score on the template itself = genuinely suboptimal parameters
        params["weak_grad"] = max(15.0, params["weak_grad"] * 0.85)
        params["_validation"] = (
            f"weak_grad reduced (self-score was {top_score:.2f})"
        )
    else:
        params["_validation"] = f"OK (self-score {top_score:.2f})"
    return params
def auto_tune(
    template_bgr: np.ndarray,
    mask: np.ndarray | None = None,
    angle_tolerance_deg: float | None = None,
    angle_center_deg: float = 0.0,
    self_validate: bool = True,
) -> dict:
    """Analyze the template and return a dict of suggested parameters.
@@ -168,6 +260,11 @@ def auto_tune(
        mechanical): much faster training (24x fewer variants for
        tol=15° vs the full 360°).
        self_validate: if True (default), after the parameter estimation
            runs a dry-run of the matching on the template itself and
            adjusts weak_grad/strong_grad/pyramid_levels if the tentative
            parameters do not guarantee a self-match (Halcon-style
            inspect_shape_model).

    Result cached in memory (LRU): calling again with the same ROI is O(1).
    """
    ck = _cache_key(template_bgr, mask)
@@ -265,7 +362,15 @@ def auto_tune(
        "_symmetry_order": sym["order"],
        "_symmetry_conf": round(sym["confidence"], 2),
        "_orient_entropy": round(stats["orient_entropy"], 2),
        "_n_strong_pixels": stats["n_strong"],
    }
    # Halcon-style self-validation: dry-run training+find on the template to
    # auto-correct tentative parameters that would not guarantee a match.
    if self_validate:
        result = _self_validate(template_bgr, result, mask=mask)
        # Round the numeric values after any adjustments
        result["weak_grad"] = round(result["weak_grad"], 1)
        result["strong_grad"] = round(result["strong_grad"], 1)
    # Store in LRU cache
    _TUNE_CACHE[ck] = dict(result)
    _TUNE_CACHE.move_to_end(ck)
+112 -8
@@ -241,13 +241,49 @@ class LineShapeMatcher:
        bins = np.clip(bins, 0, N_BINS - 1)
        return mag, bins
    def _hysteresis_mask(self, mag: np.ndarray) -> np.ndarray:
        """Edge mask with hysteresis (Halcon Contrast='auto' two-threshold).
        Procedure:
        1. seed = pixels with mag >= strong_grad (sharp edges)
        2. weak = pixels with mag >= weak_grad (candidate edges)
        3. Expand the seeds inside weak via 8-neighbor connected components
        Result: a weak edge connected to a strong edge is PROMOTED to a
        valid feature; an isolated weak edge (noise) is DISCARDED.
        Reduces both false positives (pure noise) and false negatives
        (broken continuity on thin, low-contrast edges).
        """
        weak = (mag >= self.weak_grad).astype(np.uint8)
        strong = (mag >= self.strong_grad).astype(np.uint8)
        # connectedComponents on weak: for each component, if it contains
        # at least one strong pixel → the whole component is accepted
        n_lab, labels = cv2.connectedComponents(weak, connectivity=8)
        if n_lab <= 1:
            return strong.astype(bool)
        # Labels of the strong pixels: markers for components to accept
        strong_labels = np.unique(labels[strong > 0])
        strong_labels = strong_labels[strong_labels > 0]  # 0 = background
        if len(strong_labels) == 0:
            return strong.astype(bool)
        # Mask = belongs to the label of a "promoted" component
        keep = np.isin(labels, strong_labels)
        return keep
    def _extract_features(
        self, mag: np.ndarray, bins: np.ndarray, mask: np.ndarray | None,
    ) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
        if mask is not None:
            mag = np.where(mask > 0, mag, 0)
        # Halcon-style edge selection: hysteresis between weak_grad and
        # strong_grad. Weak edges connected to strong edges are included
        # (edge continuity). If weak_grad >= strong_grad → fall back to a
        # single strong threshold.
        if self.weak_grad < self.strong_grad:
            edge = self._hysteresis_mask(mag)
        else:
            edge = mag >= self.strong_grad
        ys, xs = np.where(edge)
        if len(xs) == 0:
            return (np.zeros(0, np.int32),) * 3
        vals = mag[ys, xs]
@@ -476,8 +512,10 @@ class LineShapeMatcher:
        self.variants.clear()
        # Reset the view list: the main template = view 0
        self._view_templates = [(gray.copy(), mask_full.copy())]
        # Invalidate caches: template/params changed → spread/features stale.
        self._refine_feat_cache = {}
        if hasattr(self, "_scene_cache"):
            self._scene_cache.clear()
        self._build_variants_for_view(gray, mask_full, view_idx=0)
        self._dedup_variants()
        return len(self.variants)
@@ -633,6 +671,51 @@ class LineShapeMatcher:
            raw[b] = d.astype(np.float32)
        return raw
    # --- Scene precompute cache (II Halcon-style) -----------------------
    _SCENE_CACHE_SIZE = 4

    def _scene_cache_key(self, gray: np.ndarray) -> str | None:
        """Compact hash of the scene + params that affect spread/density.
        Hashes the first 64KB of the scene (a sufficient discriminator for
        photographic scenes) + the relevant matcher parameters. Returns
        None if the cache is disabled (e.g. scene too small).
        """
        if gray.size < 100:
            return None
        try:
            import hashlib
            h = hashlib.md5()
            sample = gray.tobytes()[:65536]
            h.update(sample)
            h.update(f"|{gray.shape}|{gray.dtype}".encode())
            h.update(
                f"|{self.weak_grad}|{self.strong_grad}"
                f"|{self.spread_radius}|{self._n_bins}"
                f"|{self.pyramid_levels}".encode()
            )
            return h.hexdigest()
        except Exception:
            return None

    def _scene_cache_get(self, key: str) -> tuple | None:
        cache = getattr(self, "_scene_cache", None)
        if cache is None:
            return None
        v = cache.get(key)
        if v is not None:
            cache.move_to_end(key)
        return v

    def _scene_cache_put(self, key: str, value: tuple) -> None:
        from collections import OrderedDict
        if not hasattr(self, "_scene_cache"):
            self._scene_cache = OrderedDict()
        self._scene_cache[key] = value
        self._scene_cache.move_to_end(key)
        while len(self._scene_cache) > self._SCENE_CACHE_SIZE:
            self._scene_cache.popitem(last=False)
    def _spread_bitmap(self, gray: np.ndarray) -> np.ndarray:
        """Spread bitmap: bit b set where bin b is present within the radius.
@@ -1304,18 +1387,31 @@ class LineShapeMatcher:
        else:
            gray0 = gray_full
            roi_offset = (0, 0)
        # Scene precompute cache (II Halcon-style): hash of scene bytes +
        # gradient/spread params → reuse the pyramid spread + density across
        # consecutive find() calls on the same scene (typical UI tuning: a
        # slider produces 10+ find() on an identical scene). Saves ~80% of
        # the non-kernel cost.
        cache_key = self._scene_cache_key(gray0)
        cached = self._scene_cache_get(cache_key) if cache_key else None
        if cached is not None:
            grays, spread_top, bit_active_top, density_top, spread0, \
                bit_active_full, density_full, top = cached
        else:
            grays = [gray0]
            for _ in range(self.pyramid_levels - 1):
                grays.append(cv2.pyrDown(grays[-1]))
            top = len(grays) - 1
            # Spread bitmap (uint8) at the top level: 32× less memory than
            # the float32 response map → MUCH more cache-friendly for
            # _score_by_shift.
            spread_top = self._spread_bitmap(grays[top])
            bit_active_top = int(
                sum(1 << b for b in range(self._n_bins)
                    if (spread_top & (spread_top.dtype.type(1) << b)).any())
            )
            density_top = _jit_popcount(spread_top)
            # spread0 + density_full are computed further below, so the
            # cache entry is saved afterwards.
            spread0 = None
            bit_active_full = None
            density_full = None
        if nms_radius is None:
            nms_radius = max(8, min(self.template_size) // 2)
        # Pruning adaptive to the angular step: with a small step (<= 3 deg)
@@ -1334,7 +1430,7 @@ class LineShapeMatcher:
        top_thresh = min_score * top_factor
        tw, th = self.template_size
        # density_top already computed above (cache hit or miss)
        sf_top = 2 ** top
        bg_cache_top: dict[float, np.ndarray] = {}
        bg_cache_full: dict[float, np.ndarray] = {}
@@ -1481,13 +1577,21 @@ class LineShapeMatcher:
        max_vars_full = max(max_matches * 8, len(self.variants) // 2)
        kept_variants = kept_variants[:max_vars_full]
        # Full-res (parallelized) with bitmap.
        # Reuse the cache if available, otherwise compute and save.
        if spread0 is None:
            spread0 = self._spread_bitmap(gray0)
            bit_active_full = int(
                sum(1 << b for b in range(self._n_bins)
                    if (spread0 & (spread0.dtype.type(1) << b)).any())
            )
            density_full = _jit_popcount(spread0)
            # Save the complete scene cache entry
            if cache_key is not None:
                self._scene_cache_put(cache_key, (
                    grays, spread_top, bit_active_top, density_top,
                    spread0, bit_active_full, density_full, top,
                ))
        for sc in unique_scales:
            bg_cache_full[sc] = _bg_for_scale(density_full, sc, 1)
+99 -1
@@ -607,7 +607,9 @@ def tune(p: TuneParams):
    x, y, w, h = p.roi
    roi_img = model[y:y + h, x:x + w]
    t = auto_tune(roi_img)
    # Expose the technical parameters + diagnostic meta (_self_score,
    # _validation, _symmetry_order, _orient_entropy) for UI feedback.
    return t


# --- V: Save/Load pre-trained recipes ---
@@ -674,6 +676,102 @@ def list_recipes():
    return {"files": files, "dir": str(RECIPES_DIR)}
# Cache of matchers loaded from .npz (V feature). Key: recipe name.
_RECIPE_MATCHERS: OrderedDict = OrderedDict()
_RECIPE_MATCHERS_SIZE = 4


@app.post("/recipes/{name}/load")
def load_recipe(name: str):
    """Load a .npz recipe and populate the in-memory matcher cache.
    Once loaded, /match_recipe uses it directly without re-training.
    Halcon equivalent: read_shape_model + handle.
    """
    safe_name = "".join(c for c in name if c.isalnum() or c in "._-")
    if not safe_name.endswith(".npz"):
        safe_name += ".npz"
    path = RECIPES_DIR / safe_name
    if not path.is_file():
        raise HTTPException(404, f"Ricetta non trovata: {safe_name}")
    m = LineShapeMatcher.load_model(str(path))
    _RECIPE_MATCHERS[safe_name] = m
    _RECIPE_MATCHERS.move_to_end(safe_name)
    while len(_RECIPE_MATCHERS) > _RECIPE_MATCHERS_SIZE:
        _RECIPE_MATCHERS.popitem(last=False)
    return {
        "name": safe_name,
        "n_variants": len(m.variants),
        "template_size": list(m.template_size),
        "use_polarity": m.use_polarity,
    }


class RecipeMatchParams(BaseModel):
    recipe: str
    scene_id: str
    # Find-time params only (training already done offline)
    min_score: float = 0.65
    max_matches: int = 25
    min_recall: float = 0.0
    use_soft_score: bool = False
    subpixel_lm: bool = False
    nms_iou_threshold: float = 0.3
    coarse_stride: int = 1
    pyramid_propagate: bool = False
    greediness: float = 0.0
    refine_pose_joint: bool = False
    search_roi: list[int] | None = None
    verify_threshold: float = 0.5
    scale_penalty: float = 0.0


@app.post("/match_recipe", response_model=MatchResp)
def match_recipe(p: RecipeMatchParams):
    """Match with a pre-trained recipe: zero training, find only."""
    safe_name = p.recipe if p.recipe.endswith(".npz") else f"{p.recipe}.npz"
    m = _RECIPE_MATCHERS.get(safe_name)
    if m is None:
        # Auto-load on demand
        path = RECIPES_DIR / safe_name
        if not path.is_file():
            raise HTTPException(404, f"Ricetta non trovata: {safe_name}")
        m = LineShapeMatcher.load_model(str(path))
        _RECIPE_MATCHERS[safe_name] = m
    scene = _load_image(p.scene_id)
    if scene is None:
        raise HTTPException(404, "Scena non trovata")
    search_roi_t = tuple(p.search_roi) if p.search_roi else None
    t0 = time.time()
    matches = m.find(
        scene,
        min_score=p.min_score, max_matches=p.max_matches,
        verify_threshold=p.verify_threshold,
        scale_penalty=p.scale_penalty,
        min_recall=p.min_recall,
        use_soft_score=p.use_soft_score,
        subpixel_lm=p.subpixel_lm,
        nms_iou_threshold=p.nms_iou_threshold,
        coarse_stride=p.coarse_stride,
        pyramid_propagate=p.pyramid_propagate,
        greediness=p.greediness,
        refine_pose_joint=p.refine_pose_joint,
        search_roi=search_roi_t,
    )
    t_find = time.time() - t0
    tg = m.template_gray if m.template_gray is not None else np.zeros((1, 1), np.uint8)
    annotated = _draw_matches(scene, matches, tg)
    ann_id = _store_image(annotated)
    return MatchResp(
        matches=[MatchResult(
            cx=mt.cx, cy=mt.cy, angle_deg=mt.angle_deg, scale=mt.scale,
            score=mt.score, bbox_poly=mt.bbox_poly.tolist(),
        ) for mt in matches],
        train_time=0.0, find_time=t_find,
        num_variants=len(m.variants), annotated_id=ann_id,
    )
# Mount static
app.mount("/static", StaticFiles(directory=STATIC_DIR), name="static")
+141
@@ -19,6 +19,7 @@ const PALETTE = [
const state = {
  model: null, scene: null, roi: null, drag: null,
  matches: [], annotatedImg: null,
  active_recipe: null, // V: loaded recipe (name string) or null
};

// ---------- Forms ----------
@@ -307,7 +308,42 @@ function setupROI() {
}

// ---------- Match action ----------
async function doMatchRecipe() {
  if (!state.scene) { setStatus("Carica scena"); return; }
  setStatus(`Match ricetta ${state.active_recipe}...`);
  const hc = readHalconFlags();
  const body = {
    recipe: state.active_recipe,
    scene_id: state.scene.id,
    min_score: parseFloat(document.getElementById("p-min-score").value),
    max_matches: parseInt(document.getElementById("p-max-matches").value, 10),
    verify_threshold: 0.50,
    ...hc,
  };
  const r = await fetch("/match_recipe", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!r.ok) { setStatus(`Errore: ${await r.text()}`); return; }
  const data = await r.json();
  state.matches = data.matches;
  state.annotatedImg = await loadImage(
    `/image/${data.annotated_id}/raw?t=${Date.now()}`);
  renderScene();
  renderLegend();
  document.getElementById("t-train").textContent = "—";
  document.getElementById("t-find").textContent = `${data.find_time.toFixed(2)}s`;
  document.getElementById("t-var").textContent = data.num_variants;
  document.getElementById("t-match").textContent = data.matches.length;
  setStatus(`${data.matches.length} match trovati (ricetta ${state.active_recipe})`);
}
async function doMatch() {
  // V path: with a loaded recipe → bypass training, find-only on the scene
  if (state.active_recipe) {
    return doMatchRecipe();
  }
  if (!state.model) { setStatus("Carica modello"); return; }
  if (!state.scene) { setStatus("Carica scena"); return; }
  if (!state.roi) { setStatus("Seleziona ROI sul modello"); return; }
@@ -400,6 +436,104 @@ function setStatus(s) {
}

// ---------- Init ----------
// ---------- Auto-tune (Halcon-style) ----------
async function doAutoTune() {
  if (!state.model || !state.roi) {
    alert("Seleziona modello e disegna ROI prima di Auto-tune.");
    return;
  }
  const status = document.getElementById("status");
  status.textContent = "Analisi ROI in corso...";
  try {
    const r = await fetch("/auto_tune", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model_id: state.model.id,
        roi: state.roi,
      }),
    });
    if (!r.ok) throw new Error(await r.text());
    const t = await r.json();
    // Apply to the advanced fields (automatic override)
    for (const [key] of ADV_PARAMS) {
      const el = document.getElementById(`adv-${key}`);
      if (el && t[key] !== undefined) el.value = String(t[key]);
    }
    // Expand the Advanced section to show the applied values
    const advDetails = document.querySelector("#col-params details:last-of-type");
    if (advDetails) advDetails.open = true;
    // Diagnostic feedback
    const lines = [
      `weak/strong: ${t.weak_grad} / ${t.strong_grad}`,
      `feature: ${t.num_features}, piramide: ${t.pyramid_levels}`,
      `angle: [${t.angle_min}..${t.angle_max}]@${t.angle_step}°`,
    ];
    if (t._symmetry_order > 1) {
      lines.push(`simmetria rotaz. ${t._symmetry_order}x (conf ${t._symmetry_conf})`);
    }
    if (t._self_score !== undefined) {
      lines.push(`self-validation: ${t._validation}`);
    }
    status.textContent = `Auto-tune OK — ${lines[0]}`;
    alert("Auto-tune completato:\n\n" + lines.join("\n"));
  } catch (e) {
    status.textContent = `Auto-tune errore: ${e.message}`;
    alert(`Errore auto-tune: ${e.message}`);
  }
}

// ---------- V: Recipe load/list/unload ----------
async function refreshRecipeList() {
  try {
    const r = await fetch("/recipes");
    if (!r.ok) return;
    const j = await r.json();
    const sel = document.getElementById("hc-recipe-list");
    const cur = sel.value;
    sel.innerHTML = '<option value="">— ricette disponibili —</option>';
    for (const f of j.files) {
      const o = document.createElement("option");
      o.value = f.name;
      o.textContent = `${f.name} (${(f.size / 1024).toFixed(1)} KB)`;
      sel.appendChild(o);
    }
    if (cur) sel.value = cur;
  } catch (e) { /* silent */ }
}

async function loadRecipe() {
  const sel = document.getElementById("hc-recipe-list");
  const name = sel.value;
  if (!name) {
    alert("Seleziona una ricetta dalla lista.");
    return;
  }
  try {
    const r = await fetch(`/recipes/${encodeURIComponent(name)}/load`, {
      method: "POST",
    });
    if (!r.ok) throw new Error(await r.text());
    const j = await r.json();
    state.active_recipe = j.name;
    document.getElementById("recipe-status").textContent =
      `Caricata: ${j.name}, ${j.n_variants} varianti, ` +
      `${j.template_size[0]}x${j.template_size[1]} px` +
      (j.use_polarity ? " (polarity)" : "");
    document.getElementById("recipe-status").style.color = "#0c0";
    document.getElementById("btn-unload-recipe").disabled = false;
  } catch (e) {
    alert(`Errore caricamento: ${e.message}`);
  }
}

function unloadRecipe() {
  state.active_recipe = null;
  document.getElementById("recipe-status").textContent = "Nessuna ricetta caricata";
  document.getElementById("recipe-status").style.color = "#888";
  document.getElementById("btn-unload-recipe").disabled = true;
}
// ---------- V: Save recipe ----------
async function saveRecipe() {
  if (!state.model || !state.roi) {
@@ -433,6 +567,7 @@ async function saveRecipe() {
    if (!r.ok) throw new Error(await r.text());
    const j = await r.json();
    alert(`Ricetta salvata: ${j.name}\n${j.n_variants} varianti, ${j.size} bytes`);
    refreshRecipeList();
  } catch (e) {
    alert(`Errore salvataggio: ${e.message}`);
  }
@@ -465,8 +600,14 @@ window.addEventListener("DOMContentLoaded", async () => {
    e.target.value = ""; // allows re-uploading the same file
  });
  document.getElementById("btn-match").addEventListener("click", doMatch);
  document.getElementById("btn-autotune").addEventListener("click", doAutoTune);
  document.getElementById("btn-save-recipe").addEventListener("click",
    saveRecipe);
  document.getElementById("btn-load-recipe").addEventListener("click",
    loadRecipe);
  document.getElementById("btn-unload-recipe").addEventListener("click",
    unloadRecipe);
  refreshRecipeList();
  const slider = document.getElementById("p-min-score");
  slider.addEventListener("input", (e) => {
    document.getElementById("v-score").textContent =
+14
@@ -26,6 +26,10 @@
    <div class="picker-list"></div>
  </div>
  <button class="btn btn-go" id="btn-match">▶ MATCH</button>
  <button class="btn" id="btn-autotune"
          title="Analizza ROI e deriva parametri ottimali (Halcon-style)">
    ⚙ Auto-tune
  </button>
  <label class="btn" title="Carica nuovo file nella cartella immagini">
    ⬆ Carica file
    <input type="file" id="file-upload" accept="image/*" hidden>
@@ -186,6 +190,16 @@
    <input type="text" id="hc-recipe-name" placeholder="nome_ricetta" style="flex:1">
    <button class="btn" id="btn-save-recipe" type="button">💾 Salva</button>
  </div>
  <div style="display:flex; gap:6px; margin-top:6px; align-items:center">
    <select id="hc-recipe-list" style="flex:1">
      <option value="">— ricette disponibili —</option>
    </select>
    <button class="btn" id="btn-load-recipe" type="button">📂 Carica</button>
    <button class="btn" id="btn-unload-recipe" type="button" disabled>✖ Stacca</button>
  </div>
  <div id="recipe-status" style="margin-top:4px; font-size:11px; color:#888">
    Nessuna ricetta caricata
  </div>
</div>
</div>
</details>