Immich Machine Learning

  • Image classification
  • CLIP embeddings
  • Facial recognition

Setup

This project uses Poetry, so be sure to install it first. Running poetry install --no-root --with dev will install everything you need in an isolated virtual environment.

To add or remove dependencies, you can use the commands poetry add $PACKAGE_NAME and poetry remove $PACKAGE_NAME, respectively. Be sure to commit the poetry.lock and pyproject.toml files to reflect any changes in dependencies.

Load Testing

To measure inference throughput and latency, you can use Locust with the provided locustfile.py. Locust works by querying the model endpoints and aggregating their statistics, so the app must already be deployed and reachable. You can change the models or adjust options like score thresholds through the Locust UI.
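At its core, a load test like this boils down to firing concurrent requests and aggregating per-request latency. A minimal stdlib-only sketch of that idea (this is not Locust itself, and the query callable here is a stand-in for a real HTTP request to a model endpoint):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def measure(query, n_requests: int, concurrency: int) -> dict:
    """Call `query` n_requests times at the given concurrency and
    aggregate per-request latency, roughly what Locust reports."""
    def timed_call(_):
        start = time.perf_counter()
        query()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_call, range(n_requests)))

    return {
        "requests": len(latencies),
        "mean_s": statistics.mean(latencies),
        # 95th percentile: last of the 19 cut points for n=20
        "p95_s": statistics.quantiles(latencies, n=20)[-1],
    }

# Dummy "endpoint" that just sleeps, standing in for an inference call:
stats = measure(lambda: time.sleep(0.01), n_requests=40, concurrency=8)
```

Locust adds the web UI, user scheduling, and live charts on top of the same basic loop.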

To get started, run locust --web-host 127.0.0.1 and open localhost:8089 in a browser to access the UI. See the Locust documentation for more information on running Locust.

Note that in Locust's terminology, concurrency is measured in users, and each user runs one task at a time. To achieve a particular per-endpoint concurrency, multiply that number by the number of endpoints being queried. For example, if there are 3 endpoints and you want each of them to receive 8 concurrent requests, set the number of users to 24.
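The arithmetic above is simple enough to express directly (using the same numbers as the example: 3 endpoints, 8 concurrent requests each):

```python
endpoints = 3                  # number of model endpoints under test
per_endpoint_concurrency = 8   # simultaneous requests per endpoint

# Each Locust user runs one task at a time, so the total user count
# must cover every endpoint's share of the concurrency.
users = endpoints * per_endpoint_concurrency
```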