feat: PM2D standalone shape-based matcher

Standalone 2D Pattern Matching program with a cv2/tk GUI and a pure,
reusable algorithm core. Two backends:
- LineShapeMatcher (default): Python port of line2Dup (linemod-style)
  - gradient orientation quantized into 8 bins modulo π, plus spreading
  - sparse top-magnitude features with minimum spacing
  - score via vectorized NumPy shift-add (O(N_features·H·W))
  - multi-resolution pyramid with variant pruning at the top level
  - binary mask support for non-rectangular models
- EdgeShapeMatcher (fallback): Canny + multi-rotation matchTemplate

GUI kept separate from the algorithm. Benchmark on clip.png (13 instances):
- edge backend: 84 s, 6/13 found, scores ~0.3
- line backend: 4.1 s, 13/13 found, scores 0.98-1.00

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
.gitignore:
@@ -0,0 +1,10 @@
__pycache__/
*.py[cod]
*.egg-info/
.venv/
.env
.vscode/
.idea/
.DS_Store
*.log
models/
README.md:
@@ -0,0 +1,142 @@
# Shape Model 2D — Standalone PM 2D

Standalone shape-based 2D Pattern Matching program.

Two algorithmic backends:

| Backend | Module | Algorithm | Time on clip.png (13 instances) |
|---|---|---|---|
| `line` (default) | `pm2d.line_matcher.LineShapeMatcher` | Linemod-style: quantized gradient orientation + spread + response maps + sparse features | **3.5 s, 12/13 at score 1.0** |
| `edge` | `pm2d.matcher.EdgeShapeMatcher` | Canny edges + multi-rotation `matchTemplate` | 84 s, 6/13 at score ~0.3 |

Algorithmic (non-SIMD) port of `meiqua/shape_based_matching/line2Dup`. MIPP (a C++ SIMD wrapper) makes no sense in Python; NumPy already provides the vectorization.
## Structure

```
Shape_model_2d/
├── pm2d/
│   ├── __init__.py
│   ├── matcher.py        # EdgeShapeMatcher (fallback, simple)
│   ├── line_matcher.py   # LineShapeMatcher (default, optimized)
│   └── gui.py            # OpenCV GUI + tk file dialogs
├── main.py               # entry point
├── Test/                 # test images
├── pyproject.toml
└── README.md
```

GUI and algorithm are separate: the matchers are reusable from any script/backend.
## Setup

```bash
uv sync
```

## Running

```bash
uv run python main.py
```

Flow: model file dialog → ROI → scene file dialog → results.
## Algorithm API (backend `line`, recommended)

```python
import cv2
from pm2d import LineShapeMatcher


template = cv2.imread("model.png")
scene = cv2.imread("scene.png")

m = LineShapeMatcher(
    num_features=96,         # sparse features per variant
    weak_grad=30,            # gradient threshold for spreading
    strong_grad=60,          # gradient threshold for feature extraction
    angle_range_deg=(0, 360),
    angle_step_deg=5.0,
    scale_range=(0.9, 1.1),  # scale invariance
    scale_step=0.05,
    spread_radius=5,         # dilation radius for robustness
    pyramid_levels=3,        # speed via top-level variant pruning
    top_score_factor=0.5,    # top-level threshold = min_score * factor
)
m.train(template)  # ~0.2 s
matches = m.find(scene, min_score=0.55, max_matches=25)

for x in matches:
    print(x.cx, x.cy, x.angle_deg, x.scale, x.score)
```
### Model on a partial region (not an isolated blob)

`train()` accepts an optional **binary mask** to restrict the features to a
portion of the ROI (e.g. the inner part of a complex object, a distinctive
detail, etc.):

```python
mask = np.zeros_like(template[:, :, 0])
cv2.fillPoly(mask, [poligono_utente], 255)
m.train(template, mask=mask)
```

Only gradients inside the mask contribute to the features.
## How the `line` backend works

### Training (expensive, ~0.2 s / 72 variants)

For each (angle, scale) pair of the template:
1. Rotate + scale onto a canvas with diagonal padding
2. Sobel → `magnitude` + `orientation` (atan2)
3. Quantize the orientation into **8 bins modulo π** (edges are symmetric)
4. Extract **N sparse features**: top-magnitude pixels above `strong_grad`, with a minimum spacing to avoid clusters
5. Store the features as `(dx, dy, bin)` relative to the model center
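Steps 2-3 can be sketched in a few lines of NumPy. This is a minimal illustration, not the actual implementation: it uses `np.gradient` as a stand-in for the Sobel pair, so the magnitudes differ from the real code.

```python
import numpy as np

N_BINS = 8  # orientation bins over [0, pi): gradient edges are symmetric

def quantize_orientation(gray):
    """Gradient -> magnitude + orientation bin (modulo pi)."""
    g = gray.astype(np.float32)
    gy, gx = np.gradient(g)                    # stand-in for the Sobel pair
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                   # [-pi, pi]
    ang = np.where(ang < 0, ang + np.pi, ang)  # fold to [0, pi)
    bins = np.clip(np.floor(ang / np.pi * N_BINS).astype(np.int16), 0, N_BINS - 1)
    return mag, bins

# A vertical edge has a horizontal gradient -> orientation bin 0
img = np.zeros((8, 8), np.uint8)
img[:, 4:] = 255
mag, bins = quantize_orientation(img)
```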
### Matching (fast)

The scene is processed **once per pyramid level**:
- Sobel → mag → quantized orientation → bin invalidated where `mag < weak_grad`
- **Spread**: per-bin morphological dilation (localization tolerance)
- **Response maps** `(8, H, W)`: response[b][y,x] = 1 where orientation b is present

For each variant:

```
score[y, x] = Σ_i resp[bin_i][y + dy_i, x + dx_i] / N_features
```

Implemented with **vectorized NumPy shift+add** (O(N_features · H · W) instead of O(kh·kw·H·W) as with `matchTemplate`).
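The score formula can be sketched with plain NumPy. This toy version uses `np.roll` for brevity, which wraps at the borders; the actual implementation uses slicing to avoid wrap-around.

```python
import numpy as np

def score_map(resp, dx, dy, bins):
    """score[y, x] = sum_i resp[bins[i]][y + dy[i], x + dx[i]] / N."""
    _, H, W = resp.shape
    acc = np.zeros((H, W), np.float32)
    for ddx, ddy, b in zip(dx, dy, bins):
        # Shifting resp[b] by (-ddy, -ddx) makes acc[y, x] read resp[b][y+ddy, x+ddx]
        acc += np.roll(np.roll(resp[b], -ddy, axis=0), -ddx, axis=1)
    return acc / max(len(dx), 1)

# One feature at offset (+1, +1), orientation bin 2, present in the scene at (3, 3):
resp = np.zeros((8, 5, 5), np.float32)
resp[2, 3, 3] = 1.0
s = score_map(resp, dx=[1], dy=[1], bins=[2])  # peak lands at (y=2, x=2)
```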

### Multi-resolution pyramid

- **Top level** (resolution /4 by default with `pyramid_levels=3`): a low-resolution score is used to prune variants. If no pixel reaches `min_score * top_score_factor`, the variant is discarded.
- **Full resolution**: computed only for the surviving variants → on the clip benchmark, ~5-10 variants out of 72 survive, a 7-14× speed-up over running all of them at full resolution.
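As a toy illustration of the pruning rule (the function and its inputs are hypothetical; the real code scans the top-level score maps directly):

```python
def prune_variants(top_scores, min_score, factor):
    """Keep variant indices whose best top-level score clears min_score * factor."""
    thresh = min_score * factor
    return [vi for vi, best in top_scores.items() if best >= thresh]

# With min_score=0.55 and factor=0.5 the top-level threshold is 0.275
kept = prune_variants({0: 0.9, 1: 0.1, 2: 0.4}, min_score=0.55, factor=0.5)
```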

## Main parameters

| Parameter | Default | Meaning |
|---|---|---|
| `num_features` | 96 | sparse features per variant |
| `weak_grad` | 30 | weak threshold (for spreading) |
| `strong_grad` | 60 | strong threshold (for feature extraction) |
| `spread_radius` | 5 | spread dilation radius (positional tolerance) |
| `min_feature_spacing` | 3 | minimum spacing between features, to avoid clusters |
| `angle_range_deg` | `(0,360)` | rotation range |
| `angle_step_deg` | 5.0 | angular step |
| `scale_range` | `(1,1)` | scale range |
| `scale_step` | 0.1 | scale step |
| `pyramid_levels` | 3 | pyramid levels (more = more aggressive pruning) |
| `top_score_factor` | 0.5 | top-level threshold = min_score * factor |
| `min_score` | 0.55 | final score threshold [0..1] |
| `max_matches` | 25 | maximum number of matches |
| `nms_radius` | `min(w,h)/2` | NMS radius on centroids |

## Roadmap

- Subpixel refinement (parabolic interpolation on the score peaks)
- Local ICP for pose refinement
- Orientation constraints: pose clustering to eliminate cross-variant duplicates
- Numba JIT for the shift+add loop (potential 3-5× on large scenes)
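For the first roadmap item, the standard 1-D trick is to fit a parabola through a score peak and its two neighbours. The sketch below is a hypothetical helper, not yet part of the codebase:

```python
def parabolic_offset(sm1, s0, sp1):
    """Subpixel offset of a peak given scores at x-1, x, x+1.

    Fits a parabola through the three samples; the vertex offset lies in
    [-0.5, 0.5] and is added to the integer peak position.
    """
    denom = sm1 - 2.0 * s0 + sp1
    if denom >= 0:  # flat or not a strict maximum: no refinement
        return 0.0
    return 0.5 * (sm1 - sp1) / denom

# A symmetric peak needs no shift; a higher right neighbour pulls it right
assert parabolic_offset(0.8, 1.0, 0.8) == 0.0
```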
(4 binary image files added, not shown: 283 KiB, 150 KiB, 79 KiB, 130 KiB)
main.py:
@@ -0,0 +1,26 @@
"""Standalone entry point for shape-based 2D Pattern Matching.

Run with: uv run python main.py
"""
from pathlib import Path

from pm2d.gui import run


if __name__ == "__main__":
    test_dir = Path(__file__).parent / "Test"
    run(
        initial_dir=str(test_dir) if test_dir.is_dir() else None,
        angle_range_deg=(0.0, 360.0),
        angle_step_deg=5.0,
        scale_range=(1.0, 1.0),
        scale_step=0.1,
        num_features=96,
        weak_grad=30.0,
        strong_grad=60.0,
        spread_radius=5,
        pyramid_levels=3,
        min_score=0.55,
        max_matches=25,
        backend="line",
    )
pm2d/__init__.py:
@@ -0,0 +1,7 @@
from pm2d.matcher import EdgeShapeMatcher, Match, Template
from pm2d.line_matcher import LineShapeMatcher, Match as LineMatch

__all__ = [
    "EdgeShapeMatcher", "Match", "Template",
    "LineShapeMatcher", "LineMatch",
]
pm2d/gui.py:
@@ -0,0 +1,195 @@
"""Standalone OpenCV GUI for 2D Pattern Matching.

Flow:
1. Open the model image (tk file dialog)
2. Select the ROI with cv2.selectROI
3. Open the scene image
4. Run the matching
5. Display the results (centroid, angle, score, bbox)

All algorithmic logic lives in pm2d.matcher / pm2d.line_matcher.
"""

from __future__ import annotations

import sys
from pathlib import Path
from tkinter import Tk, filedialog

import cv2
import numpy as np

from pm2d.matcher import EdgeShapeMatcher
from pm2d.line_matcher import LineShapeMatcher, Match


WINDOW_MODEL = "Model (select ROI - ENTER confirms, c cancels)"
WINDOW_RESULT = "Matching result"
def pick_file(title: str, initialdir: str | None = None) -> str | None:
    """Tk file picker (hidden root window)."""
    root = Tk()
    root.withdraw()
    path = filedialog.askopenfilename(
        title=title,
        initialdir=initialdir,
        filetypes=[
            ("Images", "*.png *.jpg *.jpeg *.bmp *.tif *.tiff"),
            ("All files", "*.*"),
        ],
    )
    root.destroy()
    return path or None


def load_image(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    if img is None:
        raise FileNotFoundError(f"Cannot read image: {path}")
    return img
def select_roi(image: np.ndarray) -> np.ndarray | None:
    """Opens the ROI selection window. Returns the BGR ROI, or None if cancelled."""
    disp = _fit_for_display(image, max_side=1200)
    scale = disp.shape[1] / image.shape[1]
    r = cv2.selectROI(WINDOW_MODEL, disp, showCrosshair=True, fromCenter=False)
    cv2.destroyWindow(WINDOW_MODEL)
    x, y, w, h = r
    if w == 0 or h == 0:
        return None
    # Map back to original image coordinates
    x0 = int(round(x / scale))
    y0 = int(round(y / scale))
    w0 = int(round(w / scale))
    h0 = int(round(h / scale))
    x0 = max(0, x0); y0 = max(0, y0)
    w0 = max(1, min(w0, image.shape[1] - x0))
    h0 = max(1, min(h0, image.shape[0] - y0))
    return image[y0:y0 + h0, x0:x0 + w0].copy()


def _fit_for_display(image: np.ndarray, max_side: int = 1200) -> np.ndarray:
    h, w = image.shape[:2]
    m = max(h, w)
    if m <= max_side:
        return image
    s = max_side / m
    return cv2.resize(image, (int(w * s), int(h * s)), interpolation=cv2.INTER_AREA)
def draw_matches(scene: np.ndarray, matches: list[Match]) -> np.ndarray:
    """Draws centroid, orientation axis, and bbox for every match."""
    out = scene.copy()
    for i, m in enumerate(matches):
        color = _color_for(i)
        # Rotated bbox: the template rotated by angle_deg has an axis-aligned
        # bbox in the variant's frame; to draw it exactly we would derive the
        # rotated rectangle of the original template around the centroid.
        x, y, w, h = m.bbox
        # axis-aligned box of the variant
        cv2.rectangle(out, (x, y), (x + w, y + h), color, 1, cv2.LINE_AA)
        # Centroid
        cx, cy = int(round(m.cx)), int(round(m.cy))
        cv2.drawMarker(out, (cx, cy), color, cv2.MARKER_CROSS, 22, 2, cv2.LINE_AA)
        cv2.circle(out, (cx, cy), 4, color, -1, cv2.LINE_AA)
        # Orientation axis (length ~ half the bbox's larger side)
        L = max(h, w) // 2
        ang_rad = np.deg2rad(m.angle_deg)
        ex = int(round(cx + L * np.cos(ang_rad)))
        ey = int(round(cy - L * np.sin(ang_rad)))  # image y axis points down
        cv2.arrowedLine(out, (cx, cy), (ex, ey), color, 2, cv2.LINE_AA, tipLength=0.2)
        # Label
        label = f"#{i+1} {m.angle_deg:.0f}d s={m.scale:.2f} {m.score:.2f}"
        cv2.putText(out, label, (cx + 8, cy - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1, cv2.LINE_AA)
    return out


def _color_for(i: int) -> tuple[int, int, int]:
    palette = [
        (0, 255, 0), (0, 200, 255), (255, 100, 100),
        (255, 200, 0), (200, 0, 255), (100, 255, 200),
        (255, 0, 0), (0, 255, 255),
    ]
    return palette[i % len(palette)]
def show_results(scene: np.ndarray, matches: list[Match]) -> None:
    print(f"\n=== {len(matches)} matches found ===")
    for i, m in enumerate(matches):
        print(f"  #{i+1}: cx={m.cx:.1f} cy={m.cy:.1f} "
              f"angle={m.angle_deg:.1f}d scale={m.scale:.2f} score={m.score:.3f}")
    overlay = draw_matches(scene, matches)
    disp = _fit_for_display(overlay, max_side=1400)
    cv2.imshow(WINDOW_RESULT, disp)
    print("\nPress any key in the window to close.")
    cv2.waitKey(0)
    cv2.destroyAllWindows()
def run(
    initial_dir: str | None = None,
    angle_step_deg: float = 5.0,
    angle_range_deg: tuple[float, float] = (0.0, 360.0),
    scale_range: tuple[float, float] = (1.0, 1.0),
    scale_step: float = 0.1,
    num_features: int = 96,
    weak_grad: float = 30.0,
    strong_grad: float = 60.0,
    spread_radius: int = 5,
    pyramid_levels: int = 3,
    min_score: float = 0.55,
    max_matches: int = 25,
    backend: str = "line",
) -> None:
    """Complete GUI entry point."""
    print("[1/4] Select the MODEL image...")
    model_path = pick_file("MODEL image", initialdir=initial_dir)
    if not model_path:
        print("Cancelled."); return
    model_img = load_image(model_path)
    print(f"  loaded: {model_path} shape={model_img.shape}")

    print("[2/4] Select the ROI on the model (drag, ENTER confirms).")
    roi = select_roi(model_img)
    if roi is None:
        print("Empty ROI, cancelled."); return
    print(f"  ROI: {roi.shape[1]}x{roi.shape[0]} px")

    print("[3/4] Select the SCENE image...")
    scene_path = pick_file("SCENE image",
                           initialdir=str(Path(model_path).parent))
    if not scene_path:
        print("Cancelled."); return
    scene = load_image(scene_path)
    print(f"  loaded: {scene_path} shape={scene.shape}")

    print(f"[4/4] Train + match (backend={backend})...")
    if backend == "edge":
        matcher: EdgeShapeMatcher | LineShapeMatcher = EdgeShapeMatcher(
            angle_step_deg=angle_step_deg, angle_range_deg=angle_range_deg,
            scale_range=scale_range, scale_step=scale_step,
        )
    else:
        matcher = LineShapeMatcher(
            num_features=num_features,
            weak_grad=weak_grad, strong_grad=strong_grad,
            angle_step_deg=angle_step_deg, angle_range_deg=angle_range_deg,
            scale_range=scale_range, scale_step=scale_step,
            spread_radius=spread_radius, pyramid_levels=pyramid_levels,
        )
    import time
    t0 = time.time()
    n = matcher.train(roi)
    print(f"  train: {n} variants in {time.time()-t0:.2f}s")
    t0 = time.time()
    matches = matcher.find(scene, min_score=min_score, max_matches=max_matches)
    print(f"  find: {len(matches)} matches in {time.time()-t0:.2f}s")
    show_results(scene, matches)


if __name__ == "__main__":
    test_dir = "/home/adriano/Documenti/Git_XYZ/VisionSuite/Shape_model_2d/Test"
    run(initial_dir=test_dir if Path(test_dir).is_dir() else None)
pm2d/line_matcher.py:
@@ -0,0 +1,351 @@
"""Linemod-style shape-based matcher (line2Dup) - pure Python + numpy/OpenCV.

Algorithmic port of the idea behind `meiqua/shape_based_matching` (no
MIPP/SIMD; the equivalent is achieved with numpy vectorization).

Training (expensive, done once per recipe):
- For each (angle, scale) variant of the template:
  1. Sobel → magnitude + orientation
  2. Quantize the orientation into N_BINS bins (modulo π, edges are symmetric)
  3. Extract sparse top-magnitude features with a minimum spacing
  4. Store the features as (dx, dy, bin) lists relative to the model center

Matching (fast):
- The scene is processed only once per pyramid level:
  Sobel → magnitude → quantized orientation → spread (per-bin dilation) →
  response maps (N_BINS, H, W); bit b is set where orientation b is present.
- For each variant:
  score_map[y,x] = Σ resp[b_i][y+dy_i, x+dx_i] / N_features
  implemented as a vectorized shift-add (numpy).
- Pyramid: top-level matching (cheap, reduced threshold) +
  full-resolution refinement around the candidates.

Training supports a binary `mask` to model only a partial region of the
ROI (non-rectangular model).
"""

from __future__ import annotations

from dataclasses import dataclass

import cv2
import numpy as np

N_BINS = 8  # quantized orientations, modulo π
@dataclass
class Match:
    cx: float
    cy: float
    angle_deg: float
    scale: float
    score: float
    bbox: tuple[int, int, int, int]


@dataclass
class _Variant:
    """Precomputed template (one pose)."""
    angle_deg: float
    scale: float
    # Features as 3 parallel arrays (dx, dy, bin) relative to the model center
    dx: np.ndarray   # int32, shape (N,)
    dy: np.ndarray   # int32, shape (N,)
    bin: np.ndarray  # int8, shape (N,)
    # Kernel bbox (for visualization / search limits)
    kh: int
    kw: int
    cx_local: float  # model center within the kernel bbox (visual bbox only)
    cy_local: float
    n_features: int


class LineShapeMatcher:
    """Linemod-style shape-based matcher - Python/numpy, no SIMD."""
    def __init__(
        self,
        num_features: int = 96,
        weak_grad: float = 30.0,
        strong_grad: float = 60.0,
        angle_range_deg: tuple[float, float] = (0.0, 360.0),
        angle_step_deg: float = 5.0,
        scale_range: tuple[float, float] = (1.0, 1.0),
        scale_step: float = 0.1,
        spread_radius: int = 4,
        min_feature_spacing: int = 3,
        pyramid_levels: int = 2,
        top_score_factor: float = 0.5,
    ) -> None:
        self.num_features = num_features
        self.weak_grad = weak_grad
        self.strong_grad = strong_grad
        self.angle_range_deg = angle_range_deg
        self.angle_step_deg = angle_step_deg
        self.scale_range = scale_range
        self.scale_step = scale_step
        self.spread_radius = spread_radius
        self.min_feature_spacing = min_feature_spacing
        self.pyramid_levels = max(1, pyramid_levels)
        self.top_score_factor = top_score_factor

        self.variants: list[_Variant] = []
        self.template_size: tuple[int, int] = (0, 0)

    # --- Helpers -------------------------------------------------------
    @staticmethod
    def _to_gray(img: np.ndarray) -> np.ndarray:
        if img.ndim == 3:
            return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return img

    @staticmethod
    def _gradient(gray: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)
        ang = np.arctan2(gy, gx)
        ang_mod = np.where(ang < 0, ang + np.pi, ang)
        bins = np.floor(ang_mod / np.pi * N_BINS).astype(np.int16)
        bins = np.clip(bins, 0, N_BINS - 1)
        return mag, bins
    def _extract_features(
        self, mag: np.ndarray, bins: np.ndarray, mask: np.ndarray | None,
    ) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
        if mask is not None:
            mag = np.where(mask > 0, mag, 0)
        strong = mag >= self.strong_grad
        ys, xs = np.where(strong)
        if len(xs) == 0:
            return (np.zeros(0, np.int32),) * 3
        vals = mag[ys, xs]
        order = np.argsort(-vals)
        spc = max(1, self.min_feature_spacing)
        occupied = np.zeros(mag.shape, dtype=bool)
        picked_x: list[int] = []
        picked_y: list[int] = []
        picked_b: list[int] = []
        for idx in order:
            y, x = int(ys[idx]), int(xs[idx])
            if occupied[y, x]:
                continue
            picked_x.append(x); picked_y.append(y)
            picked_b.append(int(bins[y, x]))
            y0 = max(0, y - spc); y1 = min(mag.shape[0], y + spc + 1)
            x0 = max(0, x - spc); x1 = min(mag.shape[1], x + spc + 1)
            occupied[y0:y1, x0:x1] = True
            if len(picked_x) >= self.num_features:
                break
        return (np.array(picked_x, np.int32),
                np.array(picked_y, np.int32),
                np.array(picked_b, np.int8))
    def _scale_list(self) -> list[float]:
        s0, s1 = self.scale_range
        if s0 >= s1 or self.scale_step <= 0:
            return [float(s0)]
        n = int(np.floor((s1 - s0) / self.scale_step)) + 1
        return [float(s0 + i * self.scale_step) for i in range(n)]

    def _angle_list(self) -> list[float]:
        a0, a1 = self.angle_range_deg
        if self.angle_step_deg <= 0 or a0 >= a1:
            return [float(a0)]
        n = int(np.floor((a1 - a0) / self.angle_step_deg))
        return [float(a0 + i * self.angle_step_deg) for i in range(n)]

    # --- Training ------------------------------------------------------
    def train(self, template_bgr: np.ndarray, mask: np.ndarray | None = None) -> int:
        """Generates rotated+scaled variants with sparse features.

        mask: optional binary mask (same shape as the template) to restrict
        the model to a non-rectangular region.
        """
        gray = self._to_gray(template_bgr)
        h, w = gray.shape
        self.template_size = (w, h)
        if mask is None:
            mask_full = np.full((h, w), 255, dtype=np.uint8)
        else:
            mask_full = (mask > 0).astype(np.uint8) * 255

        self.variants.clear()
        for s in self._scale_list():
            sw = max(16, int(round(w * s)))
            sh = max(16, int(round(h * s)))
            gray_s = cv2.resize(gray, (sw, sh), interpolation=cv2.INTER_LINEAR)
            mask_s = cv2.resize(mask_full, (sw, sh), interpolation=cv2.INTER_NEAREST)
            diag = int(np.ceil(np.hypot(sh, sw))) + 6
            py = (diag - sh) // 2
            px = (diag - sw) // 2
            gray_p = cv2.copyMakeBorder(
                gray_s, py, diag - sh - py, px, diag - sw - px,
                cv2.BORDER_REPLICATE,
            )
            mask_p = cv2.copyMakeBorder(
                mask_s, py, diag - sh - py, px, diag - sw - px,
                cv2.BORDER_CONSTANT, value=0,
            )
            center = (diag / 2.0, diag / 2.0)

            for ang in self._angle_list():
                M = cv2.getRotationMatrix2D(center, ang, 1.0)
                gray_r = cv2.warpAffine(
                    gray_p, M, (diag, diag),
                    flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REPLICATE,
                )
                mask_r = cv2.warpAffine(
                    mask_p, M, (diag, diag),
                    flags=cv2.INTER_NEAREST, borderValue=0,
                )
                mag, bins = self._gradient(gray_r)
                fx, fy, fb = self._extract_features(mag, bins, mask_r)
                if len(fx) < 8:
                    continue

                # Features relative to the model center (rotation center)
                cx_c = diag / 2.0
                cy_c = diag / 2.0
                dx = (fx - cx_c).astype(np.int32)
                dy = (fy - cy_c).astype(np.int32)

                # Bbox size for visualization
                x0 = int(dx.min()); x1 = int(dx.max())
                y0 = int(dy.min()); y1 = int(dy.max())
                kw = x1 - x0 + 1
                kh = y1 - y0 + 1
                cx_local = -x0  # center position within the bbox
                cy_local = -y0

                self.variants.append(_Variant(
                    angle_deg=float(ang),
                    scale=float(s),
                    dx=dx, dy=dy, bin=fb,
                    kh=kh, kw=kw,
                    cx_local=float(cx_local), cy_local=float(cy_local),
                    n_features=len(fx),
                ))
        return len(self.variants)

    # --- Matching ------------------------------------------------------
    def _response_map(self, gray: np.ndarray) -> np.ndarray:
        """Builds the response map, shape (N_BINS, H, W), float32 0/1."""
        mag, bins = self._gradient(gray)
        valid = mag >= self.weak_grad
        k = 2 * self.spread_radius + 1
        kernel = np.ones((k, k), dtype=np.uint8)
        resp = np.zeros((N_BINS, gray.shape[0], gray.shape[1]), dtype=np.float32)
        for b in range(N_BINS):
            mask_b = ((bins == b) & valid).astype(np.uint8)
            d = cv2.dilate(mask_b, kernel)
            resp[b] = d.astype(np.float32)
        return resp

    @staticmethod
    def _score_by_shift(
        resp: np.ndarray, dx: np.ndarray, dy: np.ndarray, bins: np.ndarray,
    ) -> np.ndarray:
        """score[y,x] = Σ_i resp[bin_i][y+dy_i, x+dx_i] / len(dx).

        Vectorized implementation using slicing.
        """
        _, H, W = resp.shape
        acc = np.zeros((H, W), dtype=np.float32)
        for i in range(len(dx)):
            ddx = int(dx[i]); ddy = int(dy[i]); b = int(bins[i])
            # dst[y, x] += resp[b][y+ddy, x+ddx]
            y0s = max(0, -ddy); y1s = min(H, H - ddy)
            x0s = max(0, -ddx); x1s = min(W, W - ddx)
            if y0s >= y1s or x0s >= x1s:
                continue
            y0r = y0s + ddy; y1r = y1s + ddy
            x0r = x0s + ddx; x1r = x1s + ddx
            acc[y0s:y1s, x0s:x1s] += resp[b, y0r:y1r, x0r:x1r]
        if len(dx) > 0:
            acc /= len(dx)
        return acc
    def find(
        self,
        scene_bgr: np.ndarray,
        min_score: float = 0.6,
        max_matches: int = 20,
        nms_radius: int | None = None,
    ) -> list[Match]:
        if not self.variants:
            raise RuntimeError("Matcher not trained: call train() first.")

        gray0 = self._to_gray(scene_bgr)
        grays = [gray0]
        for _ in range(self.pyramid_levels - 1):
            grays.append(cv2.pyrDown(grays[-1]))
        top = len(grays) - 1
        sf = 2 ** top

        # Top-level response map (used ONLY for variant pruning)
        resp_top = self._response_map(grays[top])
        if nms_radius is None:
            nms_radius = max(8, min(self.template_size) // 2)
        top_thresh = min_score * self.top_score_factor

        # Variant pruning via the top level
        kept_variants: list[int] = []
        for vi, var in enumerate(self.variants):
            dx_t = (var.dx // sf).astype(np.int32)
            dy_t = (var.dy // sf).astype(np.int32)
            key = ((dx_t.astype(np.int64) << 24)
                   | (dy_t.astype(np.int64) << 8)
                   | var.bin.astype(np.int64))
            _, uniq_idx = np.unique(key, return_index=True)
            score = self._score_by_shift(
                resp_top, dx_t[uniq_idx], dy_t[uniq_idx], var.bin[uniq_idx],
            )
            if score.size and score.max() >= top_thresh:
                kept_variants.append(vi)

        if not kept_variants:
            return []

        # Full resolution: score_by_shift only for the surviving variants
        resp0 = self._response_map(gray0)
        refined: list[tuple[float, float, float, int]] = []
        for vi in kept_variants:
            var = self.variants[vi]
            score = self._score_by_shift(resp0, var.dx, var.dy, var.bin)
            # Peaks above threshold
            ys, xs = np.where(score >= min_score)
            if len(ys) == 0:
                continue
            vals = score[ys, xs]
            # Descending order (keep only the top-K to avoid huge lists)
            K = min(len(vals), max_matches * 5)
            ord_idx = np.argpartition(-vals, K - 1)[:K]
            for i in ord_idx:
                refined.append((float(vals[i]),
                                float(xs[i]), float(ys[i]), vi))

        refined.sort(key=lambda c: -c[0])

        kept: list[Match] = []
        r2 = nms_radius * nms_radius
        for score, cx, cy, vi in refined:
            if any((k.cx - cx) ** 2 + (k.cy - cy) ** 2 < r2 for k in kept):
                continue
            var = self.variants[vi]
            bx = int(round(cx - var.cx_local))
            by = int(round(cy - var.cy_local))
            kept.append(Match(
                cx=cx, cy=cy,
                angle_deg=var.angle_deg,
                scale=var.scale,
                score=score,
                bbox=(bx, by, var.kw, var.kh),
            ))
            if len(kept) >= max_matches:
                break
        return kept
pm2d/matcher.py:
@@ -0,0 +1,320 @@
"""Shape-based 2D Pattern Matching via multi-rotation/scale edge template matching.

Algorithm equivalent to the Fase Alpha of the Vision Suite technical document:
- Canny edge extraction from the template (illumination invariance)
- Generation of edge-template variants for each (angle, scale)
- NCC matchTemplate on the edge scene for each variant
- Local peaks with spatial NMS for multiple instances

Usage: see `EdgeShapeMatcher.train` and `EdgeShapeMatcher.find`.
"""

from __future__ import annotations

from dataclasses import dataclass

import cv2
import numpy as np
@dataclass
class Match:
    """A single instance found in the scene."""

    cx: float         # centroid x [px] in the scene
    cy: float         # centroid y [px] in the scene
    angle_deg: float  # rotation [0, 360)
    scale: float      # scale factor (1.0 = original template)
    score: float      # NCC similarity [0, 1]
    bbox: tuple[int, int, int, int]  # x, y, w, h of the rotated/scaled template


@dataclass
class Template:
    """Precomputed template variant at a given (angle, scale)."""

    angle_deg: float
    scale: float
    edge: np.ndarray  # rotated+scaled edge image (uint8 0/255)
    mask: np.ndarray  # support mask (uint8 0/255)
    cx_local: float   # centroid in the variant's local frame
    cy_local: float
class EdgeShapeMatcher:
    """Shape-based matcher on Canny edges with precomputed rotation + scale."""

    def __init__(
        self,
        canny_low: int = 50,
        canny_high: int = 150,
        angle_step_deg: float = 5.0,
        angle_range_deg: tuple[float, float] = (0.0, 360.0),
        scale_range: tuple[float, float] = (1.0, 1.0),
        scale_step: float = 0.1,
        match_method: int = cv2.TM_CCOEFF_NORMED,
        pyramid_levels: int = 3,
        top_score_factor: float = 0.6,
    ) -> None:
        self.canny_low = canny_low
        self.canny_high = canny_high
        self.angle_step_deg = angle_step_deg
        self.angle_range_deg = angle_range_deg
        self.scale_range = scale_range
        self.scale_step = scale_step
        self.match_method = match_method
        self.pyramid_levels = max(1, pyramid_levels)
        self.top_score_factor = top_score_factor
        self.templates: list[Template] = []
        self.template_size: tuple[int, int] = (0, 0)  # original w, h
    @staticmethod
    def _to_gray(img: np.ndarray) -> np.ndarray:
        if img.ndim == 3:
            return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return img

    def _edges(self, gray: np.ndarray) -> np.ndarray:
        return cv2.Canny(gray, self.canny_low, self.canny_high)

    def _scale_list(self) -> list[float]:
        s0, s1 = self.scale_range
        if s0 >= s1 or self.scale_step <= 0:
            return [float(s0)]
        n = int(np.floor((s1 - s0) / self.scale_step)) + 1
        return [float(s0 + i * self.scale_step) for i in range(n)]

    def _angle_list(self) -> list[float]:
        a0, a1 = self.angle_range_deg
        if self.angle_step_deg <= 0 or a0 >= a1:
            return [float(a0)]
        n = int(np.floor((a1 - a0) / self.angle_step_deg))
        return [float(a0 + i * self.angle_step_deg) for i in range(n)]
    def train(self, template_bgr: np.ndarray) -> int:
        """Generates variants for all (angle, scale) combinations."""
        gray = self._to_gray(template_bgr)
        h, w = gray.shape
        self.template_size = (w, h)
        edge_orig = self._edges(gray)
        mask_orig = np.full((h, w), 255, dtype=np.uint8)

        self.templates.clear()
        scales = self._scale_list()
        angles = self._angle_list()

        for s in scales:
            sw = max(8, int(round(w * s)))
            sh = max(8, int(round(h * s)))
            edge_s = cv2.resize(edge_orig, (sw, sh), interpolation=cv2.INTER_LINEAR)
            mask_s = cv2.resize(mask_orig, (sw, sh), interpolation=cv2.INTER_NEAREST)
            # Re-threshold after the resize
            _, edge_s = cv2.threshold(edge_s, 64, 255, cv2.THRESH_BINARY)

            # Diagonal padding so rotation never crops
            diag = int(np.ceil(np.hypot(sh, sw))) + 4
            pad_y = (diag - sh) // 2
            pad_x = (diag - sw) // 2
            edge_p = cv2.copyMakeBorder(
                edge_s, pad_y, diag - sh - pad_y, pad_x, diag - sw - pad_x,
                cv2.BORDER_CONSTANT, value=0,
            )
            mask_p = cv2.copyMakeBorder(
                mask_s, pad_y, diag - sh - pad_y, pad_x, diag - sw - pad_x,
                cv2.BORDER_CONSTANT, value=0,
            )
            center = (diag / 2.0, diag / 2.0)

            for ang in angles:
                M = cv2.getRotationMatrix2D(center, ang, 1.0)
                edge_r = cv2.warpAffine(
                    edge_p, M, (diag, diag),
                    flags=cv2.INTER_LINEAR, borderValue=0,
                )
                mask_r = cv2.warpAffine(
                    mask_p, M, (diag, diag),
                    flags=cv2.INTER_NEAREST, borderValue=0,
                )

                # Crop to the minimal bounding box of the mask
                ys, xs = np.where(mask_r > 0)
                if len(xs) == 0:
                    continue
                x0, x1 = xs.min(), xs.max() + 1
                y0, y1 = ys.min(), ys.max() + 1
                edge_c = edge_r[y0:y1, x0:x1]
                mask_c = mask_r[y0:y1, x0:x1]

                cx_local = (mask_c.shape[1] - 1) / 2.0
                cy_local = (mask_c.shape[0] - 1) / 2.0

                self.templates.append(
                    Template(
                        angle_deg=float(ang),
                        scale=float(s),
                        edge=edge_c,
|
||||
mask=mask_c,
|
||||
cx_local=cx_local,
|
||||
cy_local=cy_local,
|
||||
)
|
||||
)
|
||||
return len(self.templates)
|
||||
|
||||
def _pyrdown_binary(self, img: np.ndarray) -> np.ndarray:
|
||||
"""pyrDown + re-thresh per mantenere binario 0/255."""
|
||||
d = cv2.pyrDown(img)
|
||||
_, d = cv2.threshold(d, 32, 255, cv2.THRESH_BINARY)
|
||||
return d
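The re-threshold in `_pyrdown_binary` matters because `cv2.pyrDown` Gaussian-blurs before decimating, so a 0/255 edge map comes back grayscale; the low threshold (32) deliberately keeps thin one-pixel edges alive at coarser levels. A NumPy-only stand-in that shows the idea, using a 2×2 box average as a simplification of pyrDown's Gaussian kernel (not the OpenCV implementation):

```python
import numpy as np

def pyrdown_binary_box(img: np.ndarray) -> np.ndarray:
    # Downsample 2x with a 2x2 box average, then re-threshold so the
    # result stays strictly binary {0, 255}.
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].astype(np.float32)
    d = blocks.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.where(d > 32, 255, 0).astype(np.uint8)

edge = np.zeros((8, 8), np.uint8)
edge[3, :] = 255                   # one-pixel horizontal edge
small = pyrdown_binary_box(edge)   # 4x4; the edge averages to 127.5 > 32
```

A thin edge contributes only half of each 2×2 block, so a mid-level threshold like 128 would erase it; that is the rationale for thresholding well below the midpoint.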
    def find(
        self,
        scene_bgr: np.ndarray,
        min_score: float = 0.5,
        max_matches: int = 10,
        nms_radius: int | None = None,
    ) -> list[Match]:
        """Search for template instances in the scene with a pyramid strategy.

        - Top level: brute-force matching at low resolution (fast, reduced threshold)
        - Refinement: local re-match at full resolution around each candidate
        """
        if not self.templates:
            raise RuntimeError("Matcher not trained: call train() first.")

        gray = self._to_gray(scene_bgr)
        scene_edge0 = self._edges(gray)

        # Edge pyramid of the scene
        scene_pyr = [scene_edge0]
        for _ in range(self.pyramid_levels - 1):
            scene_pyr.append(self._pyrdown_binary(scene_pyr[-1]))
        top = len(scene_pyr) - 1
        sf = 2 ** top  # scale factor, top level -> level 0
        scene_top = scene_pyr[top]

        if nms_radius is None:
            nms_radius = max(8, min(self.template_size) // 2)

        top_thresh = min_score * self.top_score_factor

        # Brute-force pass at the top pyramid level
        candidates: list[tuple[float, int, int, int]] = []
        for ti, tpl in enumerate(self.templates):
            edge_top = tpl.edge.copy()
            mask_top = tpl.mask.copy()
            for _ in range(top):
                edge_top = self._pyrdown_binary(edge_top)
                mask_top = self._pyrdown_binary(mask_top)
            th, tw = edge_top.shape
            if th < 6 or tw < 6:
                continue
            if scene_top.shape[0] < th or scene_top.shape[1] < tw:
                continue
            res = cv2.matchTemplate(
                scene_top, edge_top, self.match_method, mask=mask_top,
            )
            res = np.nan_to_num(res, nan=-1.0, posinf=-1.0, neginf=-1.0)
            ys, xs = np.where(res >= top_thresh)
            for y, x in zip(ys, xs):
                candidates.append((float(res[y, x]), int(x), int(y), ti))

        # Full-resolution refinement: local window around each top-level candidate
        refined: list[tuple[float, int, int, int]] = []
        margin = sf + 4
        for _, xt, yt, ti in candidates:
            tpl = self.templates[ti]
            th, tw = tpl.edge.shape
            x0 = xt * sf
            y0 = yt * sf
            sx0 = max(0, x0 - margin)
            sy0 = max(0, y0 - margin)
            sx1 = min(scene_edge0.shape[1], x0 + tw + margin)
            sy1 = min(scene_edge0.shape[0], y0 + th + margin)
            roi = scene_edge0[sy0:sy1, sx0:sx1]
            if roi.shape[0] < th or roi.shape[1] < tw:
                continue
            res = cv2.matchTemplate(
                roi, tpl.edge, self.match_method, mask=tpl.mask,
            )
            res = np.nan_to_num(res, nan=-1.0, posinf=-1.0, neginf=-1.0)
            _, max_val, _, max_loc = cv2.minMaxLoc(res)
            if max_val < min_score:
                continue
            bx = sx0 + max_loc[0]
            by = sy0 + max_loc[1]
            refined.append((float(max_val), bx, by, ti))

        refined.sort(key=lambda c: -c[0])

        # Spatial NMS on match centroids
        kept: list[Match] = []
        r2 = nms_radius * nms_radius
        for score, x, y, ti in refined:
            tpl = self.templates[ti]
            cx = x + tpl.cx_local
            cy = y + tpl.cy_local
            if any((k.cx - cx) ** 2 + (k.cy - cy) ** 2 < r2 for k in kept):
                continue
            th, tw = tpl.edge.shape
            kept.append(
                Match(
                    cx=cx, cy=cy,
                    angle_deg=tpl.angle_deg,
                    scale=tpl.scale,
                    score=score,
                    bbox=(x, y, tw, th),
                )
            )
            if len(kept) >= max_matches:
                break
        return kept
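The NMS step above is a plain greedy pass: candidates sorted best-first, and a match is kept only if its centroid lies at least `nms_radius` away from every centroid already kept. The same logic in isolation, with hypothetical `(score, cx, cy)` tuples standing in for `Match` objects:

```python
def greedy_centroid_nms(candidates, radius, max_matches=10):
    # candidates: iterable of (score, cx, cy) tuples.
    # Returns the kept subset, best-first, with pairwise centroid
    # distance >= radius (compared via squared distance, no sqrt).
    kept = []
    r2 = radius * radius
    for score, cx, cy in sorted(candidates, key=lambda c: -c[0]):
        if any((kx - cx) ** 2 + (ky - cy) ** 2 < r2 for _, kx, ky in kept):
            continue
        kept.append((score, cx, cy))
        if len(kept) >= max_matches:
            break
    return kept

dets = [(0.98, 10.0, 10.0), (0.75, 12.0, 11.0), (0.90, 60.0, 40.0)]
kept = greedy_centroid_nms(dets, radius=8)
# the 0.75 candidate is suppressed: its centroid sits ~2.2 px from the 0.98 one
```

Greedy centroid NMS is O(kept²) per candidate in the worst case, which is fine here because `max_matches` caps the kept list at a handful of entries.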
    # --- Model persistence ---

    def save(self, path: str) -> None:
        """Save the matcher to disk (.npz)."""
        meta = np.array(
            [(t.angle_deg, t.scale, t.cx_local, t.cy_local) for t in self.templates],
            dtype=np.float32,
        )
        params = np.array(
            [self.canny_low, self.canny_high, self.angle_step_deg,
             self.angle_range_deg[0], self.angle_range_deg[1],
             self.scale_range[0], self.scale_range[1], self.scale_step,
             self.template_size[0], self.template_size[1], self.match_method,
             self.pyramid_levels, self.top_score_factor],
            dtype=np.float32,
        )
        arrays = {f"edge_{i}": t.edge for i, t in enumerate(self.templates)}
        arrays.update({f"mask_{i}": t.mask for i, t in enumerate(self.templates)})
        np.savez_compressed(path, params=params, meta=meta, **arrays)

    @classmethod
    def load(cls, path: str) -> "EdgeShapeMatcher":
        z = np.load(path)
        p = z["params"]
        m = cls(
            canny_low=int(p[0]),
            canny_high=int(p[1]),
            angle_step_deg=float(p[2]),
            angle_range_deg=(float(p[3]), float(p[4])),
            scale_range=(float(p[5]), float(p[6])),
            scale_step=float(p[7]),
            match_method=int(p[10]),
            pyramid_levels=int(p[11]) if len(p) > 11 else 3,
            top_score_factor=float(p[12]) if len(p) > 12 else 0.6,
        )
        m.template_size = (int(p[8]), int(p[9]))
        meta = z["meta"]
        for i in range(len(meta)):
            m.templates.append(
                Template(
                    angle_deg=float(meta[i, 0]),
                    scale=float(meta[i, 1]),
                    edge=z[f"edge_{i}"],
                    mask=z[f"mask_{i}"],
                    cx_local=float(meta[i, 2]),
                    cy_local=float(meta[i, 3]),
                )
            )
        return m
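`save()`/`load()` lean on `np.savez_compressed`'s keyword packing: per-template scalars go into one float32 `meta` table, per-template arrays into indexed keys (`edge_0`, `mask_0`, …), and `len(meta)` recovers the template count on load. A minimal round-trip sketch of that scheme with made-up data (two fake templates, no cv2 needed):

```python
import os
import tempfile
import numpy as np

# Hypothetical meta rows: (angle_deg, scale, cx_local, cy_local) per template.
meta = np.array([(0.0, 1.0, 7.5, 7.5), (45.0, 1.0, 9.0, 9.0)], dtype=np.float32)
edges = [np.eye(16, dtype=np.uint8) * 255, np.ones((19, 19), np.uint8) * 255]

path = os.path.join(tempfile.mkdtemp(), "model.npz")
np.savez_compressed(path, meta=meta,
                    **{f"edge_{i}": e for i, e in enumerate(edges)})

z = np.load(path)
n = len(z["meta"])                        # number of stored templates
restored = [z[f"edge_{i}"] for i in range(n)]
```

Indexed keys avoid `dtype=object` arrays entirely, so loading never needs `allow_pickle=True`, and the variants can have different shapes (each crop has its own bounding box).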
@@ -0,0 +1,8 @@
[project]
name = "shape-model-2d"
version = "0.1.0"
requires-python = ">=3.13"
dependencies = [
    "numpy>=1.24",
    "opencv-python>=4.8",
]
File diff suppressed because it is too large
@@ -0,0 +1,86 @@
version = 1
revision = 3
requires-python = ">=3.13"

[[package]]
name = "numpy"
version = "2.4.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d7/9f/b8cef5bffa569759033adda9481211426f12f53299629b410340795c2514/numpy-2.4.4.tar.gz", hash = "sha256:2d390634c5182175533585cc89f3608a4682ccb173cc9bb940b2881c8d6f8fa0", size = 20731587, upload-time = "2026-03-29T13:22:01.298Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/14/1d/d0a583ce4fefcc3308806a749a536c201ed6b5ad6e1322e227ee4848979d/numpy-2.4.4-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:08f2e31ed5e6f04b118e49821397f12767934cfdd12a1ce86a058f91e004ee50", size = 16684933, upload-time = "2026-03-29T13:19:22.47Z" },
    { url = "https://files.pythonhosted.org/packages/c1/62/2b7a48fbb745d344742c0277f01286dead15f3f68e4f359fbfcf7b48f70f/numpy-2.4.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e823b8b6edc81e747526f70f71a9c0a07ac4e7ad13020aa736bb7c9d67196115", size = 14694532, upload-time = "2026-03-29T13:19:25.581Z" },
    { url = "https://files.pythonhosted.org/packages/e5/87/499737bfba066b4a3bebff24a8f1c5b2dee410b209bc6668c9be692580f0/numpy-2.4.4-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:4a19d9dba1a76618dd86b164d608566f393f8ec6ac7c44f0cc879011c45e65af", size = 5199661, upload-time = "2026-03-29T13:19:28.31Z" },
    { url = "https://files.pythonhosted.org/packages/cd/da/464d551604320d1491bc345efed99b4b7034143a85787aab78d5691d5a0e/numpy-2.4.4-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:d2a8490669bfe99a233298348acc2d824d496dee0e66e31b66a6022c2ad74a5c", size = 6547539, upload-time = "2026-03-29T13:19:30.97Z" },
    { url = "https://files.pythonhosted.org/packages/7d/90/8d23e3b0dafd024bf31bdec225b3bb5c2dbfa6912f8a53b8659f21216cbf/numpy-2.4.4-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:45dbed2ab436a9e826e302fcdcbe9133f9b0006e5af7168afb8963a6520da103", size = 15668806, upload-time = "2026-03-29T13:19:33.887Z" },
    { url = "https://files.pythonhosted.org/packages/d1/73/a9d864e42a01896bb5974475438f16086be9ba1f0d19d0bb7a07427c4a8b/numpy-2.4.4-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c901b15172510173f5cb310eae652908340f8dede90fff9e3bf6c0d8dfd92f83", size = 16632682, upload-time = "2026-03-29T13:19:37.336Z" },
    { url = "https://files.pythonhosted.org/packages/34/fb/14570d65c3bde4e202a031210475ae9cde9b7686a2e7dc97ee67d2833b35/numpy-2.4.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:99d838547ace2c4aace6c4f76e879ddfe02bb58a80c1549928477862b7a6d6ed", size = 17019810, upload-time = "2026-03-29T13:19:40.963Z" },
    { url = "https://files.pythonhosted.org/packages/8a/77/2ba9d87081fd41f6d640c83f26fb7351e536b7ce6dd9061b6af5904e8e46/numpy-2.4.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:0aec54fd785890ecca25a6003fd9a5aed47ad607bbac5cd64f836ad8666f4959", size = 18357394, upload-time = "2026-03-29T13:19:44.859Z" },
    { url = "https://files.pythonhosted.org/packages/a2/23/52666c9a41708b0853fa3b1a12c90da38c507a3074883823126d4e9d5b30/numpy-2.4.4-cp313-cp313-win32.whl", hash = "sha256:07077278157d02f65c43b1b26a3886bce886f95d20aabd11f87932750dfb14ed", size = 5959556, upload-time = "2026-03-29T13:19:47.661Z" },
    { url = "https://files.pythonhosted.org/packages/57/fb/48649b4971cde70d817cf97a2a2fdc0b4d8308569f1dd2f2611959d2e0cf/numpy-2.4.4-cp313-cp313-win_amd64.whl", hash = "sha256:5c70f1cc1c4efbe316a572e2d8b9b9cc44e89b95f79ca3331553fbb63716e2bf", size = 12317311, upload-time = "2026-03-29T13:19:50.67Z" },
    { url = "https://files.pythonhosted.org/packages/ba/d8/11490cddd564eb4de97b4579ef6bfe6a736cc07e94c1598590ae25415e01/numpy-2.4.4-cp313-cp313-win_arm64.whl", hash = "sha256:ef4059d6e5152fa1a39f888e344c73fdc926e1b2dd58c771d67b0acfbf2aa67d", size = 10222060, upload-time = "2026-03-29T13:19:54.229Z" },
    { url = "https://files.pythonhosted.org/packages/99/5d/dab4339177a905aad3e2221c915b35202f1ec30d750dd2e5e9d9a72b804b/numpy-2.4.4-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:4bbc7f303d125971f60ec0aaad5e12c62d0d2c925f0ab1273debd0e4ba37aba5", size = 14822302, upload-time = "2026-03-29T13:19:57.585Z" },
    { url = "https://files.pythonhosted.org/packages/eb/e4/0564a65e7d3d97562ed6f9b0fd0fb0a6f559ee444092f105938b50043876/numpy-2.4.4-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:4d6d57903571f86180eb98f8f0c839fa9ebbfb031356d87f1361be91e433f5b7", size = 5327407, upload-time = "2026-03-29T13:20:00.601Z" },
    { url = "https://files.pythonhosted.org/packages/29/8d/35a3a6ce5ad371afa58b4700f1c820f8f279948cca32524e0a695b0ded83/numpy-2.4.4-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:4636de7fd195197b7535f231b5de9e4b36d2c440b6e566d2e4e4746e6af0ca93", size = 6647631, upload-time = "2026-03-29T13:20:02.855Z" },
    { url = "https://files.pythonhosted.org/packages/f4/da/477731acbd5a58a946c736edfdabb2ac5b34c3d08d1ba1a7b437fa0884df/numpy-2.4.4-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ad2e2ef14e0b04e544ea2fa0a36463f847f113d314aa02e5b402fdf910ef309e", size = 15727691, upload-time = "2026-03-29T13:20:06.004Z" },
    { url = "https://files.pythonhosted.org/packages/e6/db/338535d9b152beabeb511579598418ba0212ce77cf9718edd70262cc4370/numpy-2.4.4-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5a285b3b96f951841799528cd1f4f01cd70e7e0204b4abebac9463eecfcf2a40", size = 16681241, upload-time = "2026-03-29T13:20:09.417Z" },
    { url = "https://files.pythonhosted.org/packages/e2/a9/ad248e8f58beb7a0219b413c9c7d8151c5d285f7f946c3e26695bdbbe2df/numpy-2.4.4-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:f8474c4241bc18b750be2abea9d7a9ec84f46ef861dbacf86a4f6e043401f79e", size = 17085767, upload-time = "2026-03-29T13:20:13.126Z" },
    { url = "https://files.pythonhosted.org/packages/b5/1a/3b88ccd3694681356f70da841630e4725a7264d6a885c8d442a697e1146b/numpy-2.4.4-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:4e874c976154687c1f71715b034739b45c7711bec81db01914770373d125e392", size = 18403169, upload-time = "2026-03-29T13:20:17.096Z" },
    { url = "https://files.pythonhosted.org/packages/c2/c9/fcfd5d0639222c6eac7f304829b04892ef51c96a75d479214d77e3ce6e33/numpy-2.4.4-cp313-cp313t-win32.whl", hash = "sha256:9c585a1790d5436a5374bac930dad6ed244c046ed91b2b2a3634eb2971d21008", size = 6083477, upload-time = "2026-03-29T13:20:20.195Z" },
    { url = "https://files.pythonhosted.org/packages/d5/e3/3938a61d1c538aaec8ed6fd6323f57b0c2d2d2219512434c5c878db76553/numpy-2.4.4-cp313-cp313t-win_amd64.whl", hash = "sha256:93e15038125dc1e5345d9b5b68aa7f996ec33b98118d18c6ca0d0b7d6198b7e8", size = 12457487, upload-time = "2026-03-29T13:20:22.946Z" },
    { url = "https://files.pythonhosted.org/packages/97/6a/7e345032cc60501721ef94e0e30b60f6b0bd601f9174ebd36389a2b86d40/numpy-2.4.4-cp313-cp313t-win_arm64.whl", hash = "sha256:0dfd3f9d3adbe2920b68b5cd3d51444e13a10792ec7154cd0a2f6e74d4ab3233", size = 10292002, upload-time = "2026-03-29T13:20:25.909Z" },
    { url = "https://files.pythonhosted.org/packages/6e/06/c54062f85f673dd5c04cbe2f14c3acb8c8b95e3384869bb8cc9bff8cb9df/numpy-2.4.4-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:f169b9a863d34f5d11b8698ead99febeaa17a13ca044961aa8e2662a6c7766a0", size = 16684353, upload-time = "2026-03-29T13:20:29.504Z" },
    { url = "https://files.pythonhosted.org/packages/4c/39/8a320264a84404c74cc7e79715de85d6130fa07a0898f67fb5cd5bd79908/numpy-2.4.4-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:2483e4584a1cb3092da4470b38866634bafb223cbcd551ee047633fd2584599a", size = 14704914, upload-time = "2026-03-29T13:20:33.547Z" },
    { url = "https://files.pythonhosted.org/packages/91/fb/287076b2614e1d1044235f50f03748f31fa287e3dbe6abeb35cdfa351eca/numpy-2.4.4-cp314-cp314-macosx_14_0_arm64.whl", hash = "sha256:2d19e6e2095506d1736b7d80595e0f252d76b89f5e715c35e06e937679ea7d7a", size = 5210005, upload-time = "2026-03-29T13:20:36.45Z" },
    { url = "https://files.pythonhosted.org/packages/63/eb/fcc338595309910de6ecabfcef2419a9ce24399680bfb149421fa2df1280/numpy-2.4.4-cp314-cp314-macosx_14_0_x86_64.whl", hash = "sha256:6a246d5914aa1c820c9443ddcee9c02bec3e203b0c080349533fae17727dfd1b", size = 6544974, upload-time = "2026-03-29T13:20:39.014Z" },
    { url = "https://files.pythonhosted.org/packages/44/5d/e7e9044032a716cdfaa3fba27a8e874bf1c5f1912a1ddd4ed071bf8a14a6/numpy-2.4.4-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:989824e9faf85f96ec9c7761cd8d29c531ad857bfa1daa930cba85baaecf1a9a", size = 15684591, upload-time = "2026-03-29T13:20:42.146Z" },
    { url = "https://files.pythonhosted.org/packages/98/7c/21252050676612625449b4807d6b695b9ce8a7c9e1c197ee6216c8a65c7c/numpy-2.4.4-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:27a8d92cd10f1382a67d7cf4db7ce18341b66438bdd9f691d7b0e48d104c2a9d", size = 16637700, upload-time = "2026-03-29T13:20:46.204Z" },
    { url = "https://files.pythonhosted.org/packages/b1/29/56d2bbef9465db24ef25393383d761a1af4f446a1df9b8cded4fe3a5a5d7/numpy-2.4.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:e44319a2953c738205bf3354537979eaa3998ed673395b964c1176083dd46252", size = 17035781, upload-time = "2026-03-29T13:20:50.242Z" },
    { url = "https://files.pythonhosted.org/packages/e3/2b/a35a6d7589d21f44cea7d0a98de5ddcbb3d421b2622a5c96b1edf18707c3/numpy-2.4.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:e892aff75639bbef0d2a2cfd55535510df26ff92f63c92cd84ef8d4ba5a5557f", size = 18362959, upload-time = "2026-03-29T13:20:54.019Z" },
    { url = "https://files.pythonhosted.org/packages/64/c9/d52ec581f2390e0f5f85cbfd80fb83d965fc15e9f0e1aec2195faa142cde/numpy-2.4.4-cp314-cp314-win32.whl", hash = "sha256:1378871da56ca8943c2ba674530924bb8ca40cd228358a3b5f302ad60cf875fc", size = 6008768, upload-time = "2026-03-29T13:20:56.912Z" },
    { url = "https://files.pythonhosted.org/packages/fa/22/4cc31a62a6c7b74a8730e31a4274c5dc80e005751e277a2ce38e675e4923/numpy-2.4.4-cp314-cp314-win_amd64.whl", hash = "sha256:715d1c092715954784bc79e1174fc2a90093dc4dc84ea15eb14dad8abdcdeb74", size = 12449181, upload-time = "2026-03-29T13:20:59.548Z" },
    { url = "https://files.pythonhosted.org/packages/70/2e/14cda6f4d8e396c612d1bf97f22958e92148801d7e4f110cabebdc0eef4b/numpy-2.4.4-cp314-cp314-win_arm64.whl", hash = "sha256:2c194dd721e54ecad9ad387c1d35e63dce5c4450c6dc7dd5611283dda239aabb", size = 10496035, upload-time = "2026-03-29T13:21:02.524Z" },
    { url = "https://files.pythonhosted.org/packages/b1/e8/8fed8c8d848d7ecea092dc3469643f9d10bc3a134a815a3b033da1d2039b/numpy-2.4.4-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:2aa0613a5177c264ff5921051a5719d20095ea586ca88cc802c5c218d1c67d3e", size = 14824958, upload-time = "2026-03-29T13:21:05.671Z" },
    { url = "https://files.pythonhosted.org/packages/05/1a/d8007a5138c179c2bf33ef44503e83d70434d2642877ee8fbb230e7c0548/numpy-2.4.4-cp314-cp314t-macosx_14_0_arm64.whl", hash = "sha256:42c16925aa5a02362f986765f9ebabf20de75cdefdca827d14315c568dcab113", size = 5330020, upload-time = "2026-03-29T13:21:08.635Z" },
    { url = "https://files.pythonhosted.org/packages/99/64/ffb99ac6ae93faf117bcbd5c7ba48a7f45364a33e8e458545d3633615dda/numpy-2.4.4-cp314-cp314t-macosx_14_0_x86_64.whl", hash = "sha256:874f200b2a981c647340f841730fc3a2b54c9d940566a3c4149099591e2c4c3d", size = 6650758, upload-time = "2026-03-29T13:21:10.949Z" },
    { url = "https://files.pythonhosted.org/packages/6e/6e/795cc078b78a384052e73b2f6281ff7a700e9bf53bcce2ee579d4f6dd879/numpy-2.4.4-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c9b39d38a9bd2ae1becd7eac1303d031c5c110ad31f2b319c6e7d98b135c934d", size = 15729948, upload-time = "2026-03-29T13:21:14.047Z" },
    { url = "https://files.pythonhosted.org/packages/5f/86/2acbda8cc2af5f3d7bfc791192863b9e3e19674da7b5e533fded124d1299/numpy-2.4.4-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b268594bccac7d7cf5844c7732e3f20c50921d94e36d7ec9b79e9857694b1b2f", size = 16679325, upload-time = "2026-03-29T13:21:17.561Z" },
    { url = "https://files.pythonhosted.org/packages/bc/59/cafd83018f4aa55e0ac6fa92aa066c0a1877b77a615ceff1711c260ffae8/numpy-2.4.4-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:ac6b31e35612a26483e20750126d30d0941f949426974cace8e6b5c58a3657b0", size = 17084883, upload-time = "2026-03-29T13:21:21.106Z" },
    { url = "https://files.pythonhosted.org/packages/f0/85/a42548db84e65ece46ab2caea3d3f78b416a47af387fcbb47ec28e660dc2/numpy-2.4.4-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:8e3ed142f2728df44263aaf5fb1f5b0b99f4070c553a0d7f033be65338329150", size = 18403474, upload-time = "2026-03-29T13:21:24.828Z" },
    { url = "https://files.pythonhosted.org/packages/ed/ad/483d9e262f4b831000062e5d8a45e342166ec8aaa1195264982bca267e62/numpy-2.4.4-cp314-cp314t-win32.whl", hash = "sha256:dddbbd259598d7240b18c9d87c56a9d2fb3b02fe266f49a7c101532e78c1d871", size = 6155500, upload-time = "2026-03-29T13:21:28.205Z" },
    { url = "https://files.pythonhosted.org/packages/c7/03/2fc4e14c7bd4ff2964b74ba90ecb8552540b6315f201df70f137faa5c589/numpy-2.4.4-cp314-cp314t-win_amd64.whl", hash = "sha256:a7164afb23be6e37ad90b2f10426149fd75aee07ca55653d2aa41e66c4ef697e", size = 12637755, upload-time = "2026-03-29T13:21:31.107Z" },
    { url = "https://files.pythonhosted.org/packages/58/78/548fb8e07b1a341746bfbecb32f2c268470f45fa028aacdbd10d9bc73aab/numpy-2.4.4-cp314-cp314t-win_arm64.whl", hash = "sha256:ba203255017337d39f89bdd58417f03c4426f12beed0440cfd933cb15f8669c7", size = 10566643, upload-time = "2026-03-29T13:21:34.339Z" },
]

[[package]]
name = "opencv-python"
version = "4.13.0.92"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "numpy" },
]
wheels = [
    { url = "https://files.pythonhosted.org/packages/fc/6f/5a28fef4c4a382be06afe3938c64cc168223016fa520c5abaf37e8862aa5/opencv_python-4.13.0.92-cp37-abi3-macosx_13_0_arm64.whl", hash = "sha256:caf60c071ec391ba51ed00a4a920f996d0b64e3e46068aac1f646b5de0326a19", size = 46247052, upload-time = "2026-02-05T07:01:25.046Z" },
    { url = "https://files.pythonhosted.org/packages/08/ac/6c98c44c650b8114a0fb901691351cfb3956d502e8e9b5cd27f4ee7fbf2f/opencv_python-4.13.0.92-cp37-abi3-macosx_14_0_x86_64.whl", hash = "sha256:5868a8c028a0b37561579bfb8ac1875babdc69546d236249fff296a8c010ccf9", size = 32568781, upload-time = "2026-02-05T07:01:41.379Z" },
    { url = "https://files.pythonhosted.org/packages/3e/51/82fed528b45173bf629fa44effb76dff8bc9f4eeaee759038362dfa60237/opencv_python-4.13.0.92-cp37-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0bc2596e68f972ca452d80f444bc404e08807d021fbba40df26b61b18e01838a", size = 47685527, upload-time = "2026-02-05T06:59:11.24Z" },
    { url = "https://files.pythonhosted.org/packages/db/07/90b34a8e2cf9c50fe8ed25cac9011cde0676b4d9d9c973751ac7616223a2/opencv_python-4.13.0.92-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:402033cddf9d294693094de5ef532339f14ce821da3ad7df7c9f6e8316da32cf", size = 70460872, upload-time = "2026-02-05T06:59:19.162Z" },
    { url = "https://files.pythonhosted.org/packages/02/6d/7a9cc719b3eaf4377b9c2e3edeb7ed3a81de41f96421510c0a169ca3cfd4/opencv_python-4.13.0.92-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:bccaabf9eb7f897ca61880ce2869dcd9b25b72129c28478e7f2a5e8dee945616", size = 46708208, upload-time = "2026-02-05T06:59:15.419Z" },
    { url = "https://files.pythonhosted.org/packages/fd/55/b3b49a1b97aabcfbbd6c7326df9cb0b6fa0c0aefa8e89d500939e04aa229/opencv_python-4.13.0.92-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:620d602b8f7d8b8dab5f4b99c6eb353e78d3fb8b0f53db1bd258bb1aa001c1d5", size = 72927042, upload-time = "2026-02-05T06:59:23.389Z" },
    { url = "https://files.pythonhosted.org/packages/fb/17/de5458312bcb07ddf434d7bfcb24bb52c59635ad58c6e7c751b48949b009/opencv_python-4.13.0.92-cp37-abi3-win32.whl", hash = "sha256:372fe164a3148ac1ca51e5f3ad0541a4a276452273f503441d718fab9c5e5f59", size = 30932638, upload-time = "2026-02-05T07:02:14.98Z" },
    { url = "https://files.pythonhosted.org/packages/e9/a5/1be1516390333ff9be3a9cb648c9f33df79d5096e5884b5df71a588af463/opencv_python-4.13.0.92-cp37-abi3-win_amd64.whl", hash = "sha256:423d934c9fafb91aad38edf26efb46da91ffbc05f3f59c4b0c72e699720706f5", size = 40212062, upload-time = "2026-02-05T07:02:12.724Z" },
]

[[package]]
name = "shape-model-2d"
version = "0.1.0"
source = { virtual = "." }
dependencies = [
    { name = "numpy" },
    { name = "opencv-python" },
]

[package.metadata]
requires-dist = [
    { name = "numpy", specifier = ">=1.24" },
    { name = "opencv-python", specifier = ">=4.8" },
]