• 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: July 1, 2023


  • Finally managed to carve out some time since the birth of my daughter two months ago to tinker around a bit. Decided to tackle a long-standing gripe and semi-automate updating my services whenever there is a new release.

    Now I have Renovate running on my self-hosted Forgejo instance using Forgejo Actions and a “Podman in Podman” image for its runners. Don’t ask me why I wanted to do PinP instead of DinD - I guess I like to punish myself. But at least this means everything I deploy is running with Podman 😄
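
    In case it helps anyone, the runner itself is just another quadlet. Roughly something like this (image tag and paths are placeholders, and this shows the simpler socket-passthrough variant rather than my actual nested Podman-in-Podman image):

    [Unit]
    Description=Forgejo Actions Runner Container
    
    [Container]
    ContainerName=forgejo-runner
    # Official runner image; pin whatever tag is current
    Image=code.forgejo.org/forgejo/runner:6
    # Register the runner first with `forgejo-runner register`; the resulting
    # .runner file and the config live in the data volume
    Exec=forgejo-runner daemon --config /data/config.yml
    Volume=%h/container_volumes/forgejo-runner:/data:Z,U
    # Socket passthrough: hand the rootless Podman API socket to the runner so
    # job containers are started via Podman on the host
    # (needs `systemctl --user enable --now podman.socket`)
    Volume=%t/podman/podman.sock:/var/run/docker.sock
    Environment=DOCKER_HOST=unix:///var/run/docker.sock
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target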


  • Did the switch from Docker to Podman a couple of months ago. Now I host all my services (arr-stack, Forgejo, Nextcloud, Authelia, Traefik, Immich… to name a few) on my VPS and mini PC/home server with Podman.

    I recently set up headscale to connect my VPS running the Traefik proxy to my home lab, to make some of the services running there accessible from the internet (rough sketch of the server-side quadlet below). It was quite the journey, to say the least, as networking is not my forte either.

    But feel free to drop me a pm if you need some inspiration or support, maybe I can help.
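
    For anyone trying the same thing, the headscale server side boils down to a pretty small quadlet. Something roughly like this (image tag and paths are placeholders; you still need your config.yaml in the config volume and a tailscale client on every node):

    [Unit]
    Description=Headscale Container
    
    [Container]
    ContainerName=headscale
    # Official image; pin a specific release in practice
    Image=docker.io/headscale/headscale:latest
    Exec=serve
    # Default listen_addr in config.yaml is 0.0.0.0:8080
    PublishPort=8080:8080
    Volume=%h/container_volumes/headscale/config:/etc/headscale:Z,U
    Volume=%h/container_volumes/headscale/data:/var/lib/headscale:Z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target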


  • That’s where UserNS=keep-id:uid=1000,gid=1000 comes into play. It “maps” the container’s user to your local user on the host, to some extent; there is a deeper explanation of what it actually does in this GitHub issue: https://github.com/containers/podman/issues/24934

    Well, the pod only links the containers together; it’s not one systemd file. Every container has its own file, as do the pod and the network (separated by ‘---’ in my code block above). You can still start and stop each container as a service separately, or bring up the whole pod with all containers linked to it (examples below). Pods have the advantage that the containers in them can talk to each other more easily.

    I created the network just to separate my services from each other. Come to think of it, that was the old setup; since I started using gluetun and running it as a privileged container, it’s using the host network anyway. I edited my post above and removed the network unit file.
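
    To make the start/stop part concrete: every quadlet file becomes its own systemd user unit, so both of these work (if I remember right, quadlet appends -pod to the pod’s service name):

    # start or stop a single container on its own
    systemctl --user start qbittorrent.service
    systemctl --user stop qbittorrent.service
    
    # or bring the whole pod up/down, containers included
    systemctl --user start arr-stack-pod.service
    systemctl --user stop arr-stack-pod.service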


  • Just wanted to leave a comment to say thank you for sharing your findings and taking the time to write your post here. I am sure you spent a lot of time and effort researching all of this. These kinds of posts are why I love Lemmy and its communities.

    I’m not currently in need of a new VPN; I’m pretty happy with Proton and my plan is active for another 1.5 years. But I still wanted to say I appreciate your post.


  • Nice, I did the move from Docker to Podman a couple of months ago myself. Now running the arr stack, Nextcloud, Immich and some other services as quadlets. File permissions due to Podman’s rootless nature were usually the culprit if something wasn’t working properly.

    I can share the quadlet systemd unit files I use for the arr stack. I deployed it as a pod:

    [Unit]
    Description=Arr-stack pod
    
    [Pod]
    PodName=arr-stack
    # Jellyseerr Port Mapping
    PublishPort=8055:5055
    # Sonarr Port Mapping
    PublishPort=8089:8989
    # Radarr Port Mapping
    PublishPort=8078:7878
    # Prowlarr Port Mapping
    PublishPort=8096:9696
    # Flaresolverr Port Mapping
    PublishPort=8091:8191
    # qBittorrent Port Mapping
    PublishPort=8080:8080
    ---
    [Unit]
    Description=Gluetun Container
    
    [Container]
    ContainerName=gluetun
    EnvironmentFile=global.env
    EnvironmentFile=gluetun.env
    Environment=FIREWALL_INPUT_PORTS=8080
    Image=docker.io/qmcgaw/gluetun:v3.40.0
    Pod=arr-stack.pod
    AutoUpdate=registry
    PodmanArgs=--privileged
    AddCapability=NET_ADMIN
    AddDevice=/dev/net/tun:/dev/net/tun
    
    Volume=%h/container_volumes/gluetun/conf:/gluetun:Z,U
    
    Secret=openvpn_user,type=env,target=OPENVPN_USER
    Secret=openvpn_password,type=env,target=OPENVPN_PASSWORD
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=qBittorrent Container
    Requires=gluetun.service
    After=gluetun.service
    
    [Container]
    ContainerName=qbittorrent
    EnvironmentFile=global.env
    Environment=WEBUI_PORT=8080
    Image=lscr.io/linuxserver/qbittorrent:5.1.2
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
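    # Route this container's traffic through the gluetun VPN container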
    Network=container:gluetun
    
    Volume=%h/container_volumes/qbittorrent/conf:/config:Z,U
    Volume=%h/Downloads/completed:/downloads:z,U
    Volume=%h/Downloads/incomplete:/incomplete:z,U
    Volume=%h/Downloads/torrents:/torrents:z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=Prowlarr Container
    Requires=gluetun.service
    After=gluetun.service
    
    [Container]
    ContainerName=prowlarr
    EnvironmentFile=global.env
    Image=lscr.io/linuxserver/prowlarr:2.0.5
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    HealthCmd=["curl","--fail","http://127.0.0.1:9696/prowlarr/ping"]
    HealthInterval=30s
    HealthRetries=10
    
    Volume=%h/container_volumes/prowlarr/conf:/config:Z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=Flaresolverr Container
    
    [Container]
    ContainerName=flaresolverr
    EnvironmentFile=global.env
    Image=ghcr.io/flaresolverr/flaresolverr:v3.4.0
    AutoUpdate=registry
    Pod=arr-stack.pod
    Network=container:gluetun
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=Radarr Container
    
    [Container]
    ContainerName=radarr
    EnvironmentFile=global.env
    Image=lscr.io/linuxserver/radarr:5.27.5
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    HealthCmd=["curl","--fail","http://127.0.0.1:7878/radarr/ping"]
    HealthInterval=30s
    HealthRetries=10
    
    # Disable SecurityLabels due to SMB share
    SecurityLabelDisable=true
    Volume=%h/container_volumes/radarr/conf:/config:Z,U
    Volume=/mnt/movies:/movies
    Volume=%h/Downloads/completed/radarr:/downloads:z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=Sonarr Container
    
    [Container]
    ContainerName=sonarr
    EnvironmentFile=global.env
    Image=lscr.io/linuxserver/sonarr:4.0.15
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    HealthCmd=["curl","--fail","http://127.0.0.1:8989/sonarr/ping"]
    HealthInterval=30s
    HealthRetries=10
    
    # Disable SecurityLabels due to SMB share
    SecurityLabelDisable=true
    Volume=%h/container_volumes/sonarr/conf:/config:Z,U
    Volume=/mnt/tv:/tv
    Volume=%h/Downloads/completed/sonarr:/downloads:z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=Jellyseerr Container
    
    [Container]
    ContainerName=jellyseerr
    EnvironmentFile=global.env
    Image=docker.io/fallenbagel/jellyseerr:2.7.3
    AutoUpdate=registry
    Pod=arr-stack.pod
    Network=container:gluetun
    
    Volume=%h/container_volumes/jellyseerr/conf:/app/config:Z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    

    I run my Podman containers in a VM running AlmaLinux. Works pretty great so far.

    Had the same issue when debugging systemctl errors; journalctl wasn’t very helpful. At one point I just ran podman logs -f <container> in another terminal in a while loop, just to catch the application’s logs. Not the most sophisticated approach, but it works 😄
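
    For reference, the crude version is literally just something like:

    # the "proper" way: quadlet units log to the user journal
    journalctl --user -u qbittorrent.service -f
    
    # the crude-but-effective way: tail the container itself and restart
    # the tail whenever the container gets recreated
    while true; do podman logs -f qbittorrent 2>&1; sleep 2; done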






  • Nice, I did the same for some services I run at home. Now I’m in the process of migrating my stuff on my VPS from Docker Compose to quadlets. It’s a bit more involved, but worth it for the QoL features quadlets bring with them, like automatic updates and systemd integration. I’m curious: which Linux distro do you run your Podman quadlets on?
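
    The automatic-updates part is basically just the AutoUpdate=registry line in the .container files plus Podman’s built-in timer, something like:

    # let the shipped timer pull new images and restart the marked units
    systemctl --user enable --now podman-auto-update.timer
    
    # dry run to see what would be updated
    podman auto-update --dry-run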