Many of the posts I read here are about Docker. Is anybody using Kubernetes to manage their self hosted stuff? For those who’ve tried it and went back to Docker, why?
I’m doing my 3rd rebuild of a K8s cluster after learning things that I’ve done wrong and wanted to start fresh, but when enhancing my Docker setup and deciding between K8s and Docker Swarm, I decided on K8s for the learning opportunities and how it could help me at work.
What’s your story?
Went Swarm instead. I don't need a department of k8s consultants.
I do AKS. I can't say love is the right word for it. Lol
AKS is a shame. Most of Azure, actually. I do my best to find ways around the insanity, but it always seems to leak back in with something insane they chose to do for whatever Microsoft reason they have.
I run a 2-node k3s cluster. There are a few small advantages over Docker Swarm, built-in network policies to lock down my VPN/torrent pod being the main one (there's a sketch of what I mean after this comment).
Other than that, writing Kubernetes YAML files is a lot more verbose than docker-compose. Helm does make it bearable, though.
Due to real life, my migration to the cluster is really slow, but the goal is to move all my services over.
It’s not “better” than compose but I like it and it’s nice to have worked with it.
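For anyone curious, a minimal sketch of the kind of NetworkPolicy I mean is below. The namespace, labels, and gateway address are placeholders, not my actual setup; the idea is just that the torrent pod can only reach DNS and the VPN gateway, and nothing else gets out. (Enforcement depends on your CNI supporting network policies.)

# Hypothetical example: restrict egress from the torrent pod.
# Namespace, labels and the CIDR are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: torrent-egress-lockdown
  namespace: media              # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: torrent              # assumed pod label
  policyTypes:
    - Egress
  egress:
    # allow DNS lookups
    - ports:
        - protocol: UDP
          port: 53
    # allow traffic to the VPN gateway only (placeholder address)
    - to:
        - ipBlock:
            cidr: 10.2.0.1/32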
I have a K3OS cluster built out of a bunch of raspberry pis, it works well.
The big reason I like kubernetes is that once it is up and running with git ops style management, adding another service becomes trivial.
I just copy-paste one of my existing services, tweak the names/namespaces, and then change the specifics for the pods to match what their Docker configuration needs, i.e. what folders need mounting and any other secrets or configs (roughly like the sketch below).
I then just commit the changes to GitHub and apply them to the cluster.
The process of being able to roll back changes via git is awesome
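To make the copy-and-tweak part concrete, the kind of manifest I mean looks roughly like the sketch below. The names, image, and mount path are placeholders; in practice there's also a Service, PVC, ingress, etc. per service.

# Hypothetical Deployment copied from an existing service and renamed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-service                 # renamed from the service it was copied from
  namespace: new-service            # tweak the namespace too
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-service
  template:
    metadata:
      labels:
        app: new-service
    spec:
      containers:
        - name: new-service
          image: example/new-service:latest    # whatever image the app's docker docs give
          volumeMounts:
            - name: config
              mountPath: /config               # the folders the app needs mounted
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: new-service-config      # placeholder claim

Commit that, apply it (kubectl apply -f works, or whatever tooling is watching the repo), and rolling back is just reverting the commit.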
I’d love to hear more about your GitHub to K8s setup. I’ve been thinking about doing something similar, but I’m not sure how to keep my public stuff public while injecting my personalized (private) configuration during deployment.
K3s seems pretty cool
K3s? What did you do to the other 5 letters??
It's k3s because it's fast and light. It can run on a single core with 512 MB of RAM.
I like the Kubes
Seems a bit overkill for a personal-use self-hosting setup.
Personally, I don't need anything that requires multiple replicas and load balancers.
Do people who have homelabs actually need them? Or is it just for learning?
A lot of people thought this was the case for VMs and docker as well, and now it seems to be the norm.
Yes, but Docker does provide features that are useful at the level of a hobbyist self-hosting a few services for personal use (e.g. reproducibility). I like using Docker and Ansible to set up my systems, as I can painlessly reproduce everything or migrate to a different VPS in a few minutes (a rough sketch of what I mean is below).
But kubernetes seems overkill. None of my services have enough traffic to justify replicas, I’m the only user.
Besides learning (which is a valid reason), I don’t see why one would bother setting it up at home. Unless there’s a very specific use-case I’m missing.
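For what it's worth, the rough shape of the playbook I mean is below. This is a stripped-down sketch, not my actual setup: the host group, paths, and package name are placeholders (docker.io is the Debian/Ubuntu package), and it assumes the compose plugin is available.

# Stripped-down sketch; hostnames, paths and package name are placeholders.
- hosts: vps
  become: true
  tasks:
    - name: Install Docker
      ansible.builtin.package:
        name: docker.io
        state: present

    - name: Copy the compose file with all my services
      ansible.builtin.copy:
        src: files/docker-compose.yml
        dest: /opt/selfhosted/docker-compose.yml

    - name: Bring everything up          # assumes the compose plugin is present
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: /opt/selfhosted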
I find mine useful as both a learning process and as a thing I need. I don't like using cloud services where avoidable, so I set things up to replace having to rely on those, such as Nextcloud for storage, Plex and some *arr servers for media, etc. And I think once you put the hardware and power costs against what I'd pay for all the subs (particularly cloud storage costs), it comes out cheaper, at least with the hardware I'm using.
Yes, those are all great uses of it. But could all still be achieved with docker containers running on some machines at home, right?
Have you ever had a situation where features provided by kubernetes (like replicas, load balancers, etc) came in handy?
I’m not criticizing, I’m genuinely curious if there’s a use-case for kubernetes for personal self-hosting (besides learning).
I was a big proponent of k3s in the homelab, but I’m starting to think otherwise these days. I still expel choice words towards Docker’s networking, but it starts becoming more of a philosophical issue with what the company is doing and whoever decided this kind of networking is nice.
Is the networking on Podman any better? I understand using k8s at home to learn, but what if you don’t care about learning? I have never seen a point to k8s in homelabs other than in home-datacentres, and I’m starting to veer away from k3s too, since I don’t need extreme HA over 3 machines for my services (I would have used Proxmox if I wanted that).
Yeah, could someone give me a primer on how Podman is better than Docker? I’m adamant that I don’t want to use anything with the name “Docker” in my lab.
Not sure if it’s exactly what you’re looking for, but here’s a “primer” on Podman for Docker users: https://lemmy.world/post/213870
Thank you!
For me, I find that I learn more effectively when I have a goal. Sure, it’s great to follow somebody’s “Hello World” web site tutorial, but the real learning comes when I start to extend it to include CI/CD for example.
As far as a use case, I’d say that learning IS the use case.
The Lemmy instance I’m speaking from right now is running in my k8s cluster.
I manage like 200 servers in Google Cloud k8s, but I don't think I'd do that for home use. The core purpose is to manage multiple servers and assign processes between them, auto-scaling, cluster-internal networking - running docker containers for single-instance apps for personal use doesn't require this kind of complexity.
My NAS software has a docker thing just built into it. I can upload or specify a package and it just runs it on the local hardware. If you have a Linux shell, I guess all you really have to do is run
dockerd
to start the daemon, make sure your network config allows connections, and upload your docker containers to it for running.

My thinking is the same. I see lots of k8s mentions on here and from coworkers at home, and all I use is Docker and VMs because I don't want all that complexity I have to deal with at work.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
DNS: Domain Name Service/System
Git: Popular version control system, primarily for code
HA: Home Assistant automation software ~ High Availability
HTTP: Hypertext Transfer Protocol, the Web
LXC: Linux Containers
NAS: Network-Attached Storage
SSD: Solid State Drive mass storage
SSH: Secure Shell for remote terminal access
VPN: Virtual Private Network
VPS: Virtual Private Server (opposed to shared hosting)
k8s: Kubernetes container management package
nginx: Popular HTTP server
11 acronyms in this thread; the most compressed thread commented on today has 11 acronyms.
HA is high availability. Home Assistant is usually shortened to HASS.
It does list that as another possible meaning, if I’m reading the table correctly
Good catch! I didn’t notice that.
Kubernetes is great if you run lots of services and/or already use kubernetes at work. I use it all the time and I’ve learned a lot on my personal cluster that I’ve taken to work to improve their systems. If you’re used to managing infra already then it’s not that much more work, and it’s great to be able to shutdown a server for maintenance and not have to worry about more than a brief blip on your home services.
I love kubernetes. At the start of the year I installed k3s on my VPS and moved over all my services. It was a great learning opportunity that also helped immensely for my job.
It works just as well as my old docker compose setup, and I love how everything is contained in one place in the manifests. I don’t need to log in to the server and issue docker commands anymore (or write scripts / CI stages that do so for me).
Are most of your services just a single pod? Or do you actually have them scaled? How do you then handle non-cloud-native software?
Love is a strong word, but Kubernetes is definitely interesting. I'm finishing up a migration of my homelab from a Docker host running in a VM managed with Portainer to one smaller VM and three refurbished Lenovo mini PCs running Rancher. It hasn't been an easy road, but I chose to go with Rancher and k3s since it seemed to handle my use case better than Portainer and Docker Swarm could. I can't pass up those cheap mini PCs.
Does Rancher connect the PCs together? I have like 3 mini PCs sitting around, and I've always wanted to kinda combine them somehow.
Like being able to combine CPU power or something. Idk if this is possible without getting a mobo with multiple CPU slots, but if it is, I'd love to learn!
Yeah, Kubernetes is designed to run in a cluster so you can pool processing power and memory from multiple devices. I banged my head against the wall for hours trying to figure out how to set up a cluster by hand, but then discovered if you install Rancher in a regular docker container it can handle all that for you
No shit. So you're saying I can hook up like three mini PCs and make a mega at-home server!? I gotta look into this. Did you follow a guide or anything you think is good enough, or is it as easy as a Google?
My recommendation is to look into k3sup and Rancher. I had a lot of trouble trying to install rancher in a docker container and migrating to a cluster after, and k3sup makes it really easy to set up a k3s cluster without having to configure everything manually
You can accomplish the same task with docker swarm, but I figured it would be better to learn something that wasn’t abandonware
I haven’t dug into the storage side yet since I have a separate NAS, but it will probably be beneficial to set up something like Ceph, GlusterFS, or Longhorn if you don’t have one
Oh, I just realized this is for Kubernetes. Unraid is all Docker. Can a Docker swarm also pool resources?
Yep, similar concept. Not sure how well unraid will handle the swarm behavior but I can imagine there’s someone out there who has tried it before
I like the concept, but hate the configuration schema and tooling, which are all needlessly obtuse (e.g. Helm).
Helm is one of the reasons I became interested in Kubernetes. I really like the idea of a package where all I have to do is provide my preferences in a values file. Before swarm was mature, I was managing my containers with complicated shell scripts to bring stuff up in the right order and it became fragile and unmaintainable.
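As a concrete (made-up) example, the local config for a chart can be as small as a values file like the one below; the actual keys depend entirely on the chart, so treat these as placeholders.

# Hypothetical values.yaml; every chart defines its own keys.
replicaCount: 1

image:
  tag: "1.2.3"

ingress:
  enabled: true
  hosts:
    - host: app.example.home     # placeholder hostname

persistence:
  enabled: true
  size: 10Gi

Then it's just helm install myapp somerepo/somechart -f values.yaml, and upgrades and rollbacks are one-liners too, which beats the old shell-script ordering dance.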
Running an RKE cluster as VMs on my ceph+proxmox cluster. Using Rook and external ceph as my storage backend and loving it. I haven’t fully migrated all of my services, but thus far it’s working well enough for me!