* basic refactor and styling
* removed batching
* module entrypoint
* removed unused imports
* model superclass, model cache now in app state
* fixed cache dir and enforced abstract method (see the sketch below)
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
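For the model-superclass and cache-dir changes above, a minimal sketch of what such a base class could look like; the `InferenceModel` and `ClipEncoder` names, the constructor signature, and the `/cache` default are illustrative assumptions, not the project's actual API:

```python
from abc import ABC, abstractmethod
from pathlib import Path


class InferenceModel(ABC):
    """Hypothetical base class; every concrete model must implement load()."""

    def __init__(self, model_name: str, cache_dir: Path | None = None) -> None:
        self.model_name = model_name
        # Fall back to a shared cache directory when none is given (assumed default).
        self.cache_dir = cache_dir or Path("/cache") / model_name

    @abstractmethod
    def load(self) -> None:
        """Download (if needed) and load the model weights."""


class ClipEncoder(InferenceModel):
    def load(self) -> None:
        # Placeholder: a real subclass would load weights from self.cache_dir here.
        print(f"loading {self.model_name} from {self.cache_dir}")
```

Keeping the already-loaded instances on the FastAPI `app.state` (for example a dict keyed by model name) then gives each worker process a single shared model cache.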
* using pydantic BaseSettings
* ML API takes image file as input
* keeping image in memory
* reducing duplicate code
* using bytes instead of UploadFile & other small code improvements (see the sketch below)
* removed form-multipart, using HTTP body
* format code
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
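For the settings and raw-body changes in this block, a rough sketch of how pydantic `BaseSettings` and a bytes endpoint could fit together; the field names, env prefix, and route are illustrative, not the actual immich API:

```python
from fastapi import Body, FastAPI
from pydantic import BaseSettings  # pydantic v1 style; in v2 this lives in pydantic-settings


class Settings(BaseSettings):
    # Values are read from environment variables, with the defaults below.
    cache_folder: str = "/cache"
    eager_startup: bool = False

    class Config:
        env_prefix = "MACHINE_LEARNING_"  # assumed prefix, for illustration only


settings = Settings()
app = FastAPI()


@app.post("/image-classifier/tag-image")
async def tag_image(image: bytes = Body(...)) -> dict:
    # The raw HTTP body is the image itself (e.g. application/octet-stream):
    # no multipart form, no UploadFile, and no temp file on disk.
    # A real handler would run the classifier here; returning the size keeps the sketch runnable.
    return {"received_bytes": len(image)}
```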
Manifest list digests can be found with:
```sh
docker buildx imagetools inspect python:3.11.4-bullseye
docker buildx imagetools inspect python:3.11.4-slim-bullseye
docker buildx imagetools inspect ghcr.io/nginxinc/nginx-unprivileged:1.25.0-alpine3.17
```
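The digest reported by `imagetools inspect` is what a Dockerfile can then pin; roughly (the digest below is a placeholder, not the real value):

```Dockerfile
# Pin by manifest-list digest instead of a mutable tag.
FROM python:3.11.4-bullseye@sha256:<digest-from-imagetools-inspect>
```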
The node images are pinned in #2736.
Fixes #2751
Partially fixes #2752
* updated dockerfile, added typing, packaging
apply env change
* added arm64 support
* added ml version bump, second try for arm64
* added linting config to pyproject.toml
* renamed ml input field
* fixed linter config
* fixed dev docker compose
* env variables for tags, faces and eager startup
* chore(server,ml): remove object detection job and endpoint (#2627)
* removed object detection job
* removed object detection endpoint
* env variables for tags, faces and eager startup
* download without caching models if not eager
* simplified `get_cached_model` (see the sketch below)
* re-added env for clip text model
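For the eager-startup and `get_cached_model` changes above, a rough sketch of the intended behaviour; the environment variable names, the `loader` callable, and the exact signature are assumptions for illustration only:

```python
import os
from pathlib import Path

# Assumed environment variables; the real names and defaults may differ.
EAGER_STARTUP = os.getenv("MACHINE_LEARNING_EAGER_STARTUP", "true").lower() == "true"
CACHE_FOLDER = Path(os.getenv("MACHINE_LEARNING_CACHE_FOLDER", "/cache"))

_models: dict[str, object] = {}


def get_cached_model(name: str, loader):
    """Return a model by name, loading it on first use.

    With eager startup the download is persisted under the cache folder; otherwise
    the model is fetched without writing it to the cache (hypothetical `loader`).
    """
    if name not in _models:
        cache_dir = CACHE_FOLDER / name if EAGER_STARTUP else None
        _models[name] = loader(name, cache_dir=cache_dir)
    return _models[name]
```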
* default NODE_ENV to production for server image
* update node image to use 3.17 alpine in server
* update web docker image to use alpine 3.17
* remove NODE_ENV from production docker-compose
* NODE_ENV also needs a default in machine-learning
* Use multi stage build to slim down ML image size
* Use gunicorn as WSGI server in ML image
* Configure gunicorn server for ML use case
* Use requirements.txt file to install python dependencies in ML image
* Make ML listen IP configurable
* Revert "Use requirements.txt file to install python dependencies in ML image"
This reverts commit 32e706c7f3.
* Separate out pip installs in ML builder image
* Use multi stage build to slim down ML image size
* Use gunicorn as WSGI server in ML image
* Configure gunicorn server for ML use case
* Use requirements.txt file to install python dependencies in ML image
* Make ML listen IP configurable
* Install nightly release of pytorch to enable ML support for arm CPUs
* Remove linux/arm/v7 from ML docker builds
* Add --no-cache-dir to torch installation command in ML image build
* Use PIP_NO_CACHE_DIR option in ML build to further decrease image size
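Taken together, the image-size and server commits above suggest a build along these lines; the paths, the CPU wheel index, the environment variable names, and the gunicorn invocation are illustrative rather than the exact immich build:

```Dockerfile
# --- builder: install Python dependencies into a virtualenv ---
FROM python:3.11.4-slim-bullseye AS builder
ENV PIP_NO_CACHE_DIR=1
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Torch installed separately so the large layer can be tuned per architecture (CPU/arm wheels).
RUN pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# --- runtime: copy only the virtualenv and the application code ---
FROM python:3.11.4-slim-bullseye
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /usr/src/app
COPY . .
# Listen address and port kept configurable via environment variables (names assumed).
ENV MACHINE_LEARNING_HOST=0.0.0.0 MACHINE_LEARNING_PORT=3003
CMD gunicorn main:app -k uvicorn.workers.UvicornWorker \
    --bind "${MACHINE_LEARNING_HOST}:${MACHINE_LEARNING_PORT}"
```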
* fix(machine-learning): Add the command to execute at startup
Previously it wasn't set in the Docker container but it should be.
* fix(docker): remove machine-learning command arg
* fix(docker): machine-learning CMD argument
BREAKING CHANGES
* Users have to update the machine-learning portion of the docker-compose file.
* Temporary dropping machine-learning support for Arm64 and Armv7
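After this breaking change, the machine-learning service in docker-compose would look roughly like the snippet below, with no `command:` override because the image's own CMD now starts the service (image tag and volume name are placeholders):

```yaml
  immich-machine-learning:
    # No "command:" override any more; the image's CMD starts the service itself.
    image: ghcr.io/immich-app/immich-machine-learning:release
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
```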
* chore: remove @tensorflow/tfjs-node-gpu as it is unused
* chore: remove ffmpeg from machine-learning docker image
* chore: remove unneeded dependencies + move dev dependencies in server
* chore: reduce server image size
* chore: machine-learning remove extraneous dependencies
* chore: web remove extraneous dependencies
* chore: web Dockerfile reduce production image size
* chore: add exiftool-vendored.pl as a dependency
* feat(docker) revert ubuntu base image
This PR reverts the base image for immich-server back to alpine
Adds LICENSE to all images
Quiets apt-get commands when building
Ensures write permission for the root group on app folders (sketched below)
Signed-off-by: PixelJonas <5434875+PixelJonas@users.noreply.github.com>
* Test build old Docker content
* Revert and retry
Signed-off-by: PixelJonas <5434875+PixelJonas@users.noreply.github.com>
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
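The quiet apt-get and root-group permission commits above map to Dockerfile lines along these lines; the package name and application path are examples only:

```Dockerfile
# Keep apt-get quiet during the build and clean the package lists afterwards.
RUN apt-get -qq update && \
    apt-get -qq install -y --no-install-recommends tini && \
    rm -rf /var/lib/apt/lists/*

# Ship the LICENSE in the image and make the app folder group-writable for the root group,
# so the container can also run under an arbitrary non-root UID.
COPY LICENSE /licenses/LICENSE
RUN chgrp -R 0 /usr/src/app && chmod -R g+w /usr/src/app
```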
* infra: switch port to 3003 for machine learning container
fixes #289
* Changed port of machine-learning endpoint to match the new port (see the compose sketch below)
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
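With the port switched to 3003, both the container and its consumers need updating; a rough compose fragment (the service names and the URL variable are assumptions):

```yaml
  immich-machine-learning:
    ports:
      - "3003:3003"

  immich-microservices:
    environment:
      # Point consumers at the new port; the variable name here is an assumption.
      - MACHINE_LEARNING_URL=http://immich-machine-learning:3003
```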
* refactor microservices to machine-learning
* Update GitHub issue template with correct task syntax
* Added microservices container
* Communicate between services based on a queue system
* added dependency
* Fixed problem with having to import BullQueue into the individual service
* Added todo
* refactor server into monorepo with microservices
* refactor database and entity to library
* added simple migration
* Move migrations and database config to library
* Migration works in library
* Cosmetic change in logging message
* added user dto
* Fixed issue with tests not being able to find the shared library
* Clean up library mapping path
* Added webp generator to microservices
* Update Github Action build latest
* Fixed issue where NPM cannot install due to conflict with Bull Queue
* format project with prettier
* Modified docker-compose file
* Add GH Action for staging build
* Fixed GH action job name
* Modified GH Action to only build & push latest when pushing to main
* Added e2e test GitHub Action
* Implemented microservice to extract exif
* Added cronjob to scan and generate webp thumbnail at midnight
* Refactor to reduce database hits when running microservices
* Added error handling to asset services that read files from disk
* Added video transcoding queue to process one video at a time
* Fixed loading spinner on web covering the info panel while loading
* Add mechanism to show new release announcement to web and mobile app (#209)
* Added changelog page
* Fixed issues based on PR comments
* Fixed issue with video transcoding being run on the server
* Change entry point content for backward compatibility when starting up the server
* Added announcement box
* Added error handling to fail silently when the app version check cannot reach GitHub
* Added new version announcement overlay
* Update message
* Added messages
* Added logic to check and show announcement
* Add method to handle saving new version
* Added button to dismiss the acknowledgement message
* Up version for deployment to the app store