## Genealog Face Service

FastAPI-based face embedding and matching microservice using InsightFace + ONNX Runtime GPU. This service is designed to be called from the `genealog-api` backend via HTTP.

### Endpoints

- `GET /healthz` – basic health check and model info.
- `POST /embed-avatar` – JSON body: `{ "image_url": "https://..." }`, returns a single best face embedding for an avatar image.
- `POST /embed-image` – JSON body: `{ "image_url": "https://..." }`, returns all detected faces and embeddings.
- `POST /test-avatar` – multipart form with fields:
  - `tag`: string tag for logging / correlation
  - `avatar`: avatar image file (face to match)
  - `image`: target image file (search space)

All embeddings are normalized float vectors suitable for cosine-similarity comparison.
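
For quick manual testing, the endpoints can be exercised with `curl`. The host/port below assume the defaults described later in this README, and the image URLs/filenames are placeholders; the exact response shape is not documented here.

```bash
# Health check and model info.
curl -s http://localhost:18081/healthz

# Single best face embedding for an avatar image (JSON body with an image URL).
curl -s -X POST http://localhost:18081/embed-avatar \
  -H "Content-Type: application/json" \
  -d '{"image_url": "https://example.com/avatar.jpg"}'

# All faces detected in an arbitrary image.
curl -s -X POST http://localhost:18081/embed-image \
  -H "Content-Type: application/json" \
  -d '{"image_url": "https://example.com/group-photo.jpg"}'

# Match an avatar against a target image via multipart form upload.
curl -s -X POST http://localhost:18081/test-avatar \
  -F "tag=manual-test" \
  -F "avatar=@avatar.jpg" \
  -F "image=@group-photo.jpg"
```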
`/embed-avatar` notes:

- Images are decoded with Pillow, and EXIF orientation is applied (e.g. for iPhone photos) before detection runs.
- If no face is detected, the service falls back to a center square crop and runs the recognition model directly so that an embedding is still produced. In this case the `score` field is `0.0` and `bbox` describes the crop that was used.
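
A minimal way to spot that fallback from the shell, assuming `jq` is installed and `score` is a top-level field of the `/embed-avatar` response (not confirmed here):

```bash
# A score of 0.0 indicates no face was detected and the center-crop fallback was used.
curl -s -X POST http://localhost:18081/embed-avatar \
  -H "Content-Type: application/json" \
  -d '{"image_url": "https://example.com/avatar.jpg"}' | jq '.score'
```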
### Installation (WSL2, Python venv)

From `/home/hung/genealog-face`:

```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

GPU support assumes:

- WSL2 with GPU enabled.
- NVIDIA drivers installed on Windows.
- `nvidia-smi` works inside WSL.

The service uses `insightface` with `CUDAExecutionProvider` first, falling back to CPU if needed.
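
As a sanity check before starting the service, you can confirm that the GPU is visible inside WSL and that ONNX Runtime actually exposes the CUDA provider (this assumes the GPU-enabled ONNX Runtime wheel is what `requirements.txt` installs):

```bash
# GPU visible to the driver inside WSL?
nvidia-smi

# Does the installed ONNX Runtime expose the CUDA provider?
source .venv/bin/activate
python -c "import onnxruntime as ort; print(ort.get_available_providers())"
# Expect 'CUDAExecutionProvider' in the list; if only 'CPUExecutionProvider'
# appears, the service will fall back to CPU.
```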
### Running the service

Use the helper script (recommended):

```bash
cd /home/hung/genealog-face
./run_face_service.sh
```

Defaults:

- Host: `0.0.0.0`
- Port: `18081`
- Model: `buffalo_l`
- Detection size: `1024`
- Workers: `nproc` (all detected CPU cores)

You can override these via environment variables:

```bash
PORT=18081 \
FACE_MODEL_NAME=buffalo_l \
FACE_DET_SIZE=1024 \
UVICORN_WORKERS=20 \
./run_face_service.sh
```

To run in the background:

```bash
nohup ./run_face_service.sh > face_service.log 2>&1 &
```

Logs are written to `face_service.log` in the repo root.
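
Once the script is running (foreground or background), a quick way to confirm it came up and to watch its output, assuming the default port:

```bash
# Should return the health/model info payload once the model has loaded.
curl -s http://localhost:18081/healthz

# Follow the service log (when started with the nohup command above).
tail -f face_service.log
```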
### Integration with genealog-api (Docker)

The `genealog-api` service expects this face service to be reachable at:

- `FACE_SERVICE_URL: http://host.docker.internal:18081`

You only need to ensure the service is running in WSL on port `18081` before starting the Docker stack.
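
If the backend reports connection errors, it can help to verify reachability from inside the Docker network. The container name below is only an example (use the real name from `docker ps`), and this assumes `curl` is available in the image:

```bash
docker exec genealog-api curl -s http://host.docker.internal:18081/healthz
```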
### Autostart on Windows reboot (via WSL2)

You can have Windows start this service automatically at logon using Task Scheduler:

1. Open **Task Scheduler** → **Create Task…**.
2. **General** tab:
   - Name: `GenealogFaceService`.
   - Configure to run for your Windows user.
3. **Triggers** tab:
   - New → Begin the task: **At log on**.
4. **Actions** tab:
   - Program/script: `wsl.exe`
   - Arguments:

     ```text
     -d Ubuntu -- bash -lc "cd /home/hung/genealog-face && nohup ./run_face_service.sh >> face_service.log 2>&1"
     ```

5. Save the task (provide credentials if prompted).

After this, logging into Windows will start WSL and launch the face service in the background, ready to be used by `genealog-api`.
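
After the next logon you can confirm the task worked from a Windows terminal (PowerShell or cmd), using the same distro name as in the task action and the default port:

```text
wsl.exe -d Ubuntu -- bash -lc "curl -s http://localhost:18081/healthz"
```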