Given that a large portion of the group run Home Assistant in some form, I’m curious as to what our methods of running HA are and the hardware we’re using.
My initial install was Home Assistant Operating System on an older Atom machine with 2GB RAM and 8GB storage. Current install is HAOS in a VM with 1 core, 4GB RAM, and a 32GB disk.
Is anyone running a container? What sort of resources does it use? Any older Supervised/Core installs out there?
I’m running mine in my k3s cluster. It’s performing fine, I guess. Because my k3s nodes are VMs, I have some flexibility around allocating resources: if I ever need to scale up a specific node, I can pin my HA deployment to that node. I haven’t hit any problems that have called for that course of action… yet.
@jdownie Possibly a daft question from a non-k3s user - are you able to see how much RAM and other resources HA is using?
I’m trying to squeeze my install as a bit of an experiment. It’s fine running on one thread, and so far an 8GB disk seems to be the practical minimum. Anecdotally, it also seems to use about 90% of whatever RAM is available (i.e., if I give it 4GB it’ll use about 3.6GB, if I give it 2GB it’ll use about 1.8GB, and if I give it 1GB, it’ll use about 900MB). I think I’ll be pushing my luck moving any lower than 1GB. (Edit: Coming back to this a day later with a clearer head, this is probably just Linux caching doing its thing. If anyone knows how I can get into the HAOS base Linux to run free, please let me know!)
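For what it’s worth, HAOS documents a developer SSH into the host OS (separate from the SSH add-on) in its OS debugging docs; I haven’t set it up myself. Once you do have a host shell, reading free is the easy part. A generic Linux sketch (illustrative numbers, not from my install):

```shell
# 'free' separates real usage from reclaimable page cache.
# The "available" column is the one that matters: it estimates how much
# memory could be handed to new workloads without swapping.
free -m
# Illustrative output on a 1GB machine:
#               total   used   free   shared  buff/cache  available
# Mem:            976    310     90       12         575        520
```

A high buff/cache figure with a healthy "available" figure would support the caching theory above: Linux fills spare RAM with cache and gives it back on demand.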
Have gone down to 1GB this afternoon and haven’t noticed any loss of functionality or slowdowns at all. Will leave it be for a few days and see how things go. There seems to be some limited discussion online, with a few people pointing out that a 1GB Pi is officially supported. My end goal is to better allocate existing hardware resources, and it’d be nice to fit HA into something a bit smaller, but RAM doesn’t seem to have as clearly defined an “edge” as the other resources do.
To answer this question, I had to ask ChatGPT. I basically said “I’ve been asked XXX, how should I answer that question?” So feel free to ask more questions if I haven’t really addressed your concerns.
This command…
kubectl top pods -n application -l app=home-assistant
…gives me this output…
NAME                             CPU(cores)   MEMORY(bytes)
home-assistant-c45fbc7cf-xw6dr   50m          1068Mi
…where…
CPU is in millicores (m)
Memory is typically reported in Mi or Ki
That output is a snapshot of “right now”. I watch that with, ahem, watch, and CPU hovers at about 50m.
My Home Assistant “deployment” only contains one “pod” (container). It happens to be scheduled on my h1k0 “node” along with 18 other pods. That “millicore” measure is used in k9s too, and that node is idling at about 300 of 6000 millicores, i.e., 30% of one of its six cores, or 5% of the node’s total capacity. That node is also only using about 4G of its 16G of memory. The only other deployment on that node right now is my “servarr” deployment, which contains eight pods (sonarr, radarr, prowlarr, etc).
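A quick sanity check on those millicore figures (numbers taken from the post above, where 1 core = 1000m):

```shell
used_m=300        # node idle usage reported by k9s
capacity_m=6000   # 6 cores x 1000 millicores
core_m=1000       # a single core

# Share of the whole node: 300/6000 = 5%
echo $(( used_m * 100 / capacity_m ))   # prints 5
# The same usage as a share of one core: 30%
echo $(( used_m * 100 / core_m ))       # prints 30
```

So “30% of one sixth of the total capacity” and “5% of the node” are the same number viewed two ways.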
I think my cluster is crazy over-spec’ed. I need to deploy some more containers!
I’m not constraining HA’s memory or CPU usage in my deployment definition. Apparently 1G of RAM is satisfying its needs for now.
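If I ever did want to constrain it, a requests/limits stanza in the Deployment’s container spec would look roughly like this. The values here are illustrative guesses based on the usage numbers above, not tuned recommendations:

```yaml
# Hypothetical resource stanza for the home-assistant container spec.
spec:
  containers:
    - name: home-assistant
      resources:
        requests:
          cpu: 100m        # scheduler reserves a tenth of a core
          memory: 512Mi    # scheduler reserves this much RAM
        limits:
          cpu: "1"         # throttled above one core
          memory: 1Gi      # container is OOM-killed above this
```

Requests affect scheduling only; limits are enforced at runtime, and a memory limit in particular is a hard ceiling, so it’d want some headroom over that ~1068Mi reading.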