Your ML model cache volume is getting blown away on restart, so the model has to be re-downloaded during the first search after a restart. Either point it at a path somewhere on your storage, or make sure you’re not destroying the dynamic volume on restart.

In my case I changed this:

  immich-machine-learning:
    ...
    volumes:
      - model-cache:/cache

To this:

  immich-machine-learning:
    ...
    volumes:
      - ./cache:/cache

I no longer have to wait uncomfortably long when I’m trying to show off Smart Search to a friend, or just need a meme pronto.

That’ll be all.

  • MangoPenguin@lemmy.blahaj.zone · 1 day ago

    Doing a volume like the default Immich docker-compose uses should work fine, even through restarts. I’m not sure why your setup is blowing up the volume.

    Normally, volumes are only removed if there is no running container associated with them and you manually run docker volume prune.
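    For reference, a named volume like model-cache is declared at the top level of the compose file, which is what makes it survive container removal. A minimal sketch reconstructed from the mount shown above, not copied from the Immich repo:

      services:
        immich-machine-learning:
          volumes:
            - model-cache:/cache

      volumes:
        model-cache: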

    • Avid Amoeba@lemmy.ca (OP) · 1 day ago

      Because on restart I clean up everything that isn’t explicitly persisted on disk:

      [Unit]
      Description=Immich in Docker
      After=docker.service 
      Requires=docker.service
      
      [Service]
      TimeoutStartSec=0
      
      WorkingDirectory=/opt/immich-docker
      
      ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
      ExecStartPre=-/usr/bin/docker compose down --remove-orphans
      ExecStartPre=-/usr/bin/docker compose rm -f -s -v
      ExecStartPre=-/usr/bin/docker compose pull
      ExecStart=/usr/bin/docker compose up
      
      Restart=always
      RestartSec=30
      
      [Install]
      WantedBy=multi-user.target
      
      • MangoPenguin@lemmy.blahaj.zone · 14 hours ago

        That’s wild! What advantage do you get from it, or is it just for fun because you can?

        Also I’ve never seen a service created for each docker stack like that before…

        • Avid Amoeba@lemmy.ca (OP) · 13 hours ago

          Well, you gotta start it somehow. You could rely on Compose’s built-in service management, which will restart containers after a system reboot if they were started with -d and have the right restart policy. But you still have to start them at least once. How would you do that? Unless you plan to start it manually, you need some service startup mechanism, and that leads us to a systemd unit. I could write a systemd unit that does docker compose up -d. But then I’m splitting service lifecycle management across two systems: if I want to stop the service, I can no longer do it via systemd; I have to go find where the compose file is and issue docker compose down. Not great. Instead I’d write a stop line in my systemd unit so I can start/stop from a single place. But wait 🫷 that’s kinda what I’m doing, isn’t it? Except that if I start it with docker compose up without -d, I don’t need a separate stop line and systemd can directly monitor the process.

          As a result I get logs in journald too, and I can use systemd’s restart policies. Having the service managed by systemd also means I can use systemd dependencies such as fs mounts, network availability, you name it. That’s way more powerful than Compose’s restart policy.

          Finally, I like to clean up any data I haven’t explicitly intended to persist across service restarts, so I don’t end up debugging an issue that only manifests because of some persisted piece of data I’m completely unaware of.
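          For example, a unit can be made to wait for the network and for the filesystem backing the bind mounts. A minimal sketch, not my actual unit; the mount path and stack name are illustrative:

            [Unit]
            Description=Some Compose stack
            Requires=docker.service
            After=docker.service network-online.target
            Wants=network-online.target
            # don't start until the backing storage is mounted
            RequiresMountsFor=/mnt/storage

            [Service]
            WorkingDirectory=/opt/some-stack
            ExecStart=/usr/bin/docker compose up
            Restart=always
            RestartSec=30

            [Install]
            WantedBy=multi-user.target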

          • MangoPenguin@lemmy.blahaj.zone · 12 hours ago

            Interesting, waiting on network mounts could be useful!

            I deploy everything through Komodo so it’s handling the initial start of the stack, updates, logs, etc…

      • PieMePlenty@lemmy.world · 1 day ago

        Wow, you pull new images every time you boot up? Coming from a mindset of rock-solid stability, I find this scary. You’re living life on the edge, my friend. I wish I could do that.

        • Avid Amoeba@lemmy.ca (OP) · 1 day ago

          I use a fixed tag. 😂 It’s more of a simple way to update: change the tag in SaltStack, apply the config, the service is restarted, and the new tag is pulled. If the tag doesn’t change, the pull is a no-op.
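          Roughly like this in the compose file (the exact image tag here is just an example of pinning, not necessarily what I run):

            services:
              immich-server:
                # bump this tag in SaltStack to roll out an update
                image: ghcr.io/immich-app/immich-server:v1.119.0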

      • waitmarks@lemmy.world · 1 day ago

        But why?

        Why not just down/up normally and have a cleanup job on a schedule to get rid of any orphans?
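        For instance, a scheduled prune could handle the leftovers (a hypothetical sketch; adjust flags to taste):

          #!/bin/sh
          # /etc/cron.weekly/docker-cleanup (hypothetical path)
          # removes stopped containers, unused networks, dangling images and build cache
          docker system prune -f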

        • corsicanguppy@lemmy.ca · 1 day ago

          But why?

          In a world where we can’t really be sure what’s in an upgrade, a super-clean start that burns any ephemeral data is about the best way to ensure a consistent start.

          And consistency gives reliability, as much as we can get without validation (validation is “compare to what’s correct”, but consistency is “try to repeat whatever it was”).