My k3s Commands

I’m using k3s a lot on my home network. Although I use k9s, I like to script up the deployment of all of my apps. I also keep all of my .yaml files in a self-hosted git repository on my local network. I keep those .yaml files in a hierarchy of folders, and I prefix them with numbers to denote the order in which to deploy them.

So, with that in mind, I’d like to explain the handful of commands that I’ve been working on.

The first command is ka (for “k3s apply”). Its job is to traverse that folder structure looking for .yaml files and execute kubectl apply -f on them. It can also look for URLs in .url files, and recently I’ve added the ability to turn a folder of files into a ConfigMap.
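
As a rough illustration only (this is not the actual script), a ka-style helper could look something like the Python sketch below; the .url handling and the folder-to-ConfigMap step are assumptions about one way it could work, and only the non-recursive case is shown:

```python
#!/usr/bin/env python3
# Hypothetical sketch of a ka-style helper: apply number-prefixed .yaml
# files in sorted order, plus any manifests referenced from .url files.
import subprocess
import sys
from pathlib import Path

def apply_dir(root: Path) -> None:
    # Prefixes like 10-namespace.yaml keep sorted() in deployment order.
    for path in sorted(root.iterdir()):
        if path.suffix == ".yaml":
            subprocess.run(["kubectl", "apply", "-f", str(path)], check=True)
        elif path.suffix == ".url":
            # Assumption: a .url file holds one manifest URL per line.
            for url in path.read_text().split():
                subprocess.run(["kubectl", "apply", "-f", url], check=True)

def dir_to_configmap(name: str, namespace: str, folder: Path) -> None:
    # One way to turn a folder of files into a ConfigMap: render the
    # manifest client-side, then pipe it through kubectl apply.
    manifest = subprocess.run(
        ["kubectl", "create", "configmap", name, "-n", namespace,
         f"--from-file={folder}", "--dry-run=client", "-o", "yaml"],
        check=True, capture_output=True, text=True).stdout
    subprocess.run(["kubectl", "apply", "-f", "-"],
                   input=manifest, check=True, text=True)

if __name__ == "__main__":
    apply_dir(Path(sys.argv[1] if len(sys.argv) > 1 else "."))
```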

In contrast to ka, there’s also kd, which traverses the folder hierarchy in reverse order and executes kubectl delete -f on those files.
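
Again purely as a sketch of the idea (not the real script), the teardown direction might just be the same sorted listing reversed, with delete instead of apply:

```python
# Companion sketch for a kd-style teardown: same sorted listing, reversed.
import subprocess
from pathlib import Path

def delete_dir(root: Path) -> None:
    for path in reversed(sorted(root.iterdir())):
        if path.suffix == ".yaml":
            # --ignore-not-found keeps repeated teardowns from erroring out.
            subprocess.run(
                ["kubectl", "delete", "-f", str(path), "--ignore-not-found"],
                check=True)
```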

Both of those commands also honour a -r switch to recurse into the hierarchy. With them I can tear down or build up my whole collection of applications. Until recently, my approach was to create my own container images, push them into my self-hosted repository, and then apply a .yaml file that used that image. On @Entropic’s advice I’ve been shifting to ConfigMaps and deployments that use generic images, which has been going brilliantly.

There have also been a few recent additions to those two main commands. ks is my version of ps (or docker ps), in that it lists deployments and pods along with their states. kl is “k3s log”, which honours the -f switch for following the logs of a container. Finally, ke runs kubectl exec -ti against a container, which is handy for spawning a bash shell in a running container.
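
As a minimal sketch of what such wrappers could look like (the argument order and defaults here are assumptions for illustration, not the actual scripts):

```python
# Hypothetical sketches of ks/kl/ke-style wrappers around kubectl.
import subprocess

def ks(namespace: str = "default") -> None:
    # "k3s ps": list deployments and pods with their current state.
    subprocess.run(["kubectl", "get", "deployments,pods",
                    "-n", namespace, "-o", "wide"], check=True)

def kl(namespace: str, pod: str, follow: bool = False) -> None:
    # "k3s log": print a pod's logs, optionally following with -f.
    cmd = ["kubectl", "logs", "-n", namespace, pod]
    if follow:
        cmd.append("-f")
    subprocess.run(cmd, check=True)

def ke(namespace: str, pod: str, shell: str = "bash") -> None:
    # "k3s exec": spawn an interactive shell inside a running container.
    subprocess.run(["kubectl", "exec", "-ti", "-n", namespace, pod,
                    "--", shell], check=True)
```

With that assumed argument order, kl("sys", "backup", follow=True) would line up with the kl sys backup example mentioned below.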

Of course, all of these results can be accomplished with either kubectl or k9s, but while I’ve been transitioning to ConfigMaps it’s been handy to be able to run kd -r, ka -r and kl sys backup, for example. In that example I have a pod called backup in my sys namespace.

I’m pretty happy with these little scripts for now, but I wouldn’t be surprised if something more comprehensive and mature already exists. Please school me if I’ve reinvented the wheel.

@jdownie, @Entropic are these bash scripts?

I’ve got back to dabbling in k8s since being confined by the cyclone, and I hear that Terraform and Ansible are both used for k8s deployment, admittedly with different foci.

Also, have you used Helm charts? Anything that makes getting a cluster up easier is a “good thing”™.

Nah, python.

I might put what I’m doing in a GitHub repo.

I finally succeeded in getting rolling with kubernetes.

I followed Jim Turland’s guides from about 18 months ago. These allow you to (eventually) run k3s with Rancher and Longhorn. I also managed to get the deployment working with the config files in a git repository via Fleet. This worked well, at least for a simple app.

I am far from confident as a homelab κυβερνήτης (helmsman), but I am hoping the current configuration will never let me down.

You got me! :rofl:

I would like to get Rancher running again. That was a nice UI. I’ve been using k9s.

Thanks James for the description above of how your scripts work.

The ones I have used from Jim’s Garage have Ubuntu Server 24.04 from a cloud-init image as the base. The nodes are then upgraded with a few more scripted commands and set up to sync SSH keys via ssh-copy-id. They then use k3sup to install k3s. Later components are added through manifests.
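
Assuming the standard k3sup flow described in its README (the IP addresses and user name below are made-up placeholders, not anyone’s actual hosts), that kind of bootstrap can be scripted along these lines:

```python
#!/usr/bin/env python3
# Sketch of the bootstrap described above: copy an SSH key to each fresh
# Ubuntu node, install a k3s server with k3sup, then join the other nodes.
# All hosts and the user name are illustrative placeholders.
import subprocess

SERVER = "192.168.1.10"              # hypothetical first node
AGENTS = ["192.168.1.11", "192.168.1.12"]
USER = "ubuntu"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Push the local SSH key to every node (prompts for a password once).
for host in [SERVER, *AGENTS]:
    run(["ssh-copy-id", f"{USER}@{host}"])

# 2. Install the k3s server; k3sup writes a kubeconfig to the local machine.
run(["k3sup", "install", "--ip", SERVER, "--user", USER])

# 3. Join the remaining nodes as agents.
for host in AGENTS:
    run(["k3sup", "join", "--ip", host, "--server-ip", SERVER, "--user", USER])
```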

I have also set up a k8s cluster as a learning experience. This was a much more manual process using the same Ubuntu Server 24.04 image and adding in the various components via scripts. However, I did note that you can install some things via apt (kubectl and containerd, and a few other kube* commands). That said, adding in the correct version of Calico via a script did involve a lot of perplexifying (that’s the modern version of googling), and a simpler approach would have been very welcome.

I’ll keep playing around in kubernetes. By making every mistake in the book I hope to learn how it all works.

I’ve been pretty lax in preparing a get-together lately, so I’ve really gotta do something. I don’t think I can pull it together for an in-person meetup, but would anybody like a k3s-themed workshop? I’m putting my homelab back together at the new house and I’m doing more k3s stuff again. My k3s cluster is my main container hosting platform at home and I love it. I’d love to hear in person how other people’s journeys have been going and maybe solve some problems together if we can agree on a time. Anybody interested in a k3s-themed online get-together?

I’m in. This was the basis of my last kubernetes adventure.

@zeeclor, are you planning a routine online catch-up tomorrow night anyway?

I looked at the calendar for the next “first Thursday”, and that’s the day we have removalists booked. Maybe I can do my online k3s meetup on Thursday 10 April instead of the in-person meetup that I can’t do on Thursday 17 April.

Sounds good.

I have updated the calendar with “The meeting is delayed by one week this month. James is presenting a k3s workshop for the homelab environment.”.

See you on 10 April at 7.30 pm AEST at Jitsi Meet

A question for the brains trust!

I’ve been doing a bit of reading about Kubernetes, k3s, RKE2, Talos, Managed/Hosted services, etc. since James’ excellent demo last Thursday. I’m still not 100% sure it’ll fit my use case, but I am curious about the high availability potential. I’m getting the vibe that having duplicated control planes is a lot of extra work, but perhaps there’s already a distro or implementation that has solved this issue.

@jdownie, from what I could gather in your demo, the h0/h1/h2 machines each had their own master/control plane + 2x worker nodes - is that correct? Or was the master being hosted on your development machine? If there are 3x masters, how seamless is the “hand-off” between them if one of those masters fails? (I’m almost certainly using the wrong terminology here, sorry.)

I’m really interested in a hybrid setup using a VPS + two home machines. I gather from the documentation that if the control plane fails, the individual nodes will keep running, but obviously the scaling and so on will not be occurring. Do you know what happens in the case where there are multiple masters and the network gets severed between the individual machines hosting the nodes? For example, one VPS with 1x master and 2x worker nodes, and two local machines with 1x master and 2x worker nodes each. If the two sites lose network connectivity, will the remote site continue to run with its own master? Will things come back “cleanly” when that network is restored (e.g., the two masters at home will “outvote” the remote master when the network comes back), or will the cluster be left in an undesired state with two masters?

Still wrapping my head around how this all fits together, but I’m enticed by the possibility of having a power outage/hardware failure/NBN outage and still having some self-hosted services available via a self-managed VPS.

Cheers,

Belfry

Hi Belfry, like you I am getting my head around it, so here’s my take.

In your scenario, the two active master nodes have quorum, and one of them will be elected leader (or one already is). When the link comes back, the remote master and worker will resync with the cluster. (Some pods that were initially on the remote worker node may have been recreated on the local nodes, and this may also readjust over time.)

I think the difficulty will be, at least for a homelabber, that if the link goes down you have probably also lost internet connection and hence there is no path to the cluster.

If you were to redirect the DNS records to somewhere else, connections would probably still fail because the remote, sole master node is not active. (It cannot elect itself leader due to an insufficient quorum.)
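
A toy bit of arithmetic makes the quorum point concrete; this assumes the usual etcd-style majority rule that k3s relies on when running an HA control plane with embedded etcd:

```python
# Majority quorum: a partition can elect a leader and accept writes only
# if it holds more than half of the control-plane members.
def quorum(members: int) -> int:
    return members // 2 + 1

total_masters = 3                      # e.g. two at home + one on a VPS
print(quorum(total_masters))           # 2
print(2 >= quorum(total_masters))      # True  -> home partition keeps working
print(1 >= quorum(total_masters))      # False -> lone VPS master cannot lead
```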

As a general observation, my impression from watching YouTube is that Kubernetes is a lot of work and may be beyond most users. However, if you get everything working it is magic and then you don’t want to go back.

I wish I had clever answers for you guys, but frankly I managed to get it to work, so I left it at that. I should probably do some testing on exactly what you’re talking about, but I’ve never gotten around to it. Maybe we could have another Jitsi meeting to torture my setup by pulling out power cables.

I also think that I could/should be doing more with the way that I’m setting up my deployments. I could/should have multiple replicas running across my various nodes so that if a node goes down, there are already containers running on the others.
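
For what it’s worth, a hypothetical manifest along these lines (the names and image are placeholders, not anyone’s actual deployment) is one way to ask the scheduler to spread replicas across nodes:

```yaml
# Hypothetical example only: run 3 replicas and spread them across nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: sys
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname   # spread by node
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: example-app
      containers:
        - name: app
          image: nginx        # stand-in for a generic image
```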

These are all questions that I have too, so please say the word if you’d like to have a more in-depth look at my setup.

Kubernetes is big, and I am currently just skirting around the edges, but during my wanderings I’ve had a couple of thoughts about the issues above.

I’ve seen some talk of using BGP with Tailscale to achieve what Belfry is aiming for. It would presumably require two connections to the net, so the backup connection could be 4G/5G. (Guessing LoRaWAN won’t do it. :melting_face:)

“The network is the computer” was Sun’s slogan back in the day. These days, with Kubernetes, you could update that to “the cluster is the network”. Bringing up and tearing down a cluster is at the core of Kubernetes management. David McKone says to just back up your data; you don’t need to back up infrastructure. With git and Ansible you’ll be back up and running with a minimum of administrator input. (Admittedly that might take a while, but a properly configured cluster is designed to mostly not go down.)

McKone and Jeff Geerling are strong proponents of Ansible. They suggest it is more reliable than just using scripts and relatively simple to learn. “Relatively” may be a relative term, but I’ll give it a go.