There is a post about someone getting overwhelmed by 15 containers, and people not wanting to turn it into a container measuring contest.

But now I am curious: what are your counts? I would guess those of you running k*s would win out through pod scaling alone.

docker ps -q | wc -l

For those wanting a quick count.
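Plain docker ps | wc -l also counts the header row, so it reports one more than the real number; printing only container IDs with -q avoids that. A few equivalents for other runtimes, assuming the respective CLI is installed:

```shell
# Print one container ID per line, then count the lines:
#   docker ps -q | wc -l                        # Docker
#   podman ps -q | wc -l                        # Podman
#   kubectl get pods -A --no-headers | wc -l    # Kubernetes pods
#
# The counting itself is just "one line per container":
printf 'c1\nc2\nc3\n' | wc -l   # → 3
```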

  • ℍ𝕂-𝟞𝟝@sopuli.xyz · 6 days ago

    I know using work as an example is cheating, but around 1400-1500 to 5000-6000 depending on load throughout the day.

    At home it’s 12.

    • slazer2au@lemmy.world (OP) · 6 days ago

      I was watching a video yesterday where an org was churning through 30K containers a day because they didn’t profile their application correctly and scaled their containers based on a misunderstanding of how Linux handles CPU scheduling.

      • ℍ𝕂-𝟞𝟝@sopuli.xyz · 6 days ago

        Yeah that shit is more common than people think.

        A big part of the business of cloud providers is that most orgs have no idea how to do shit. Their enterprise consultants are also wildly variable in competence.

        There was also a large amount of useless bullshit I’ve had to cut down since being hired at my current spot, but the number of containers is actually warranted. We do have that traffic, which is both happy and sad: business is booming, but I have to deal with this.

  • mogethin0@discuss.online · 5 days ago

    I have 43 running, and this was a great reminder to do some cleanup. I can probably reduce my count by 5-10.

      • ToTheGraveMyLove@sh.itjust.works · 6 days ago

        I’m using Docker. Tried to set up Jellyfin in one, but I couldn’t for the life of me figure out how to get it to work, even following the official documentation. Ended up just running the jellyfin package from my distro’s repo, which worked fine for me. Also tried running a Tor Snowflake, which worked, but there was some issue with the NAS being restricted and I couldn’t figure out how to fix that. I kinda gave up at that point and saved the whole container thing to figure out another day. I only switched to Linux and started self-hosting last year, so I’m still pretty new to all of this.

        • kylian0087@lemmy.dbzer0.com · 6 days ago

          If you do decide to look into containers again and get stuck, please make a post; we are glad to help out. One tip when asking for help: say which setup you are using and how (Docker with Compose files, Portainer, or something else). If you are using Compose, also include the YAML file.

          • ToTheGraveMyLove@sh.itjust.works · 6 days ago

            I will definitely try again at some point in the next year, so I will keep that in mind! I appreciate the kind words. A lot of what you said is over my head at the moment though, so I’ve got my work cut out for me. 😅

            • F04118F@feddit.nl · 5 days ago

              Docker Compose is really the easiest way to self-host.

              Copy a file, usually provided by the developers of the app you want to run, change some values if instructed by the # comments, run docker compose up and it “just works”.

              And I say that as someone who has done everything from distro-provided packages to compiling from source, Nix, podman systemd, and currently running a full-blown multi-node distributed storage Kubernetes cluster at home.

              Just use docker compose.
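              As a sketch of what such a file tends to look like (the service name, image, port, and volume here are made-up placeholders, not from any real project’s docs):

              ```yaml
              # Hypothetical minimal docker-compose.yml; every value here is an
              # illustrative placeholder for what the app's own docs would provide.
              services:
                myapp:
                  image: example/myapp:latest   # the image the project's docs name
                  ports:
                    - "8080:8080"               # host:container port mapping
                  volumes:
                    - ./data:/data              # keep app state next to the compose file
                  restart: unless-stopped
              ```

              With that saved as docker-compose.yml, docker compose up -d starts it in the background and docker compose down stops it.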

        • Chewy@discuss.tchncs.de · 6 days ago

          I’m pretty sure I was at the same point years ago. The good thing is, next time you look into containers it’ll likely be really easy and you’ll wonder where you got stuck a year or two ago.

          At least that’s what has happened to me more times than I can remember.

  • smiletolerantly@awful.systems · 7 days ago

    Zero.

    About 35 NixOS VMs though, each running either a single service (e.g. Paperless) or a suite (Sonarr and so on plus NZBGet, VPN,…).

    There are additionally a couple of client VMs. All of these are distributed over 3 Proxmox hosts accessing the same iSCSI target for VM storage.

    SSL and WireGuard are terminated at a physical firewall box running OpnSense, so with very few exceptions, the VMs do not handle any complicated network setup.

    A lot of those VMs have zero state; those that do have just that state backed up automatically to the NAS (simply via rsync), and from there everything is backed up again through borg to an external storage box.

    In the stateless case, deploying a new VM is a single command; in the stateful case, same command, wait for it to come up, SSH in (keys are part of the VM images), run restore-<whatever>.

    On an average day, I spend 0 minutes managing the homelab.

    • BCsven@lemmy.ca · 7 days ago

      Why VMs instead of containers? Seems like way more processing overhead.

      • smiletolerantly@awful.systems · 6 days ago

        Eh… not really. QEMU does a really good job with VM virtualization.

        I believe I could easily build containers instead of VMs from the Nix config, but I actually do like having a full VM: since it’s running a full OS instead of just an app, all the usual Nix tooling just works on it.

        Also: In my day job, I actually have to deal quite a bit with containers (and kubernetes), and I just… don’t like it.

        • BCsven@lemmy.ca · 6 days ago

          Yeah, just wondered because containers hook into the kernel in a way that adds almost no overhead, whereas a VM has to run an entire guest OS on virtualized hardware. But hey, I get it: fixing stuff inside the container can be a pain.

    • corsicanguppy@lemmy.ca · 6 days ago

      On an average day, I spend 0 minutes managing the homelab.

      0 is the goal. Well done!

      Edit: Ha! Some masochist down-voted that.

  • Culf@feddit.dk · 7 days ago

    Not using Docker yet. Currently I just have one Proxmox LXC, but I am planning on self-hosting a lot more in the near future…

  • kaedon@slrpnk.net · 6 days ago

    12 LXCs and 2 VMs on Proxmox. Big fan of managing all the backups with the web UI (it’s very easy to back up to my NAS), and the helper scripts are pretty nice too. Nothing on Docker right now, although I used to have a couple in a Portainer LXC.

  • Nico198X@europe.pub · 6 days ago

    13 with podman on openSUSE MicroOS.

    I used to have a few more but wasn’t using them enough, so I cut them.