Upgrading from initial homelab setup

Hey all,

I’m pretty new to the homelab space (and posting in forums!) and looking to step things up a bit. Right now I’m running an old (about 13 years old) PC I originally used for gaming, with 4 x 6TB drives in it. It’s been a good intro setup, but I’m starting to hit limitations and want to move to something more serious and upgradeable. The current setup has a cheap wall fan pointing into the case to keep it cool because all the original fans have died, which isn’t great… I didn’t realise how hot the HDDs get when running ZFS.

I’m kind of split between two directions and would love to hear what others would do instead as well:

  1. Move everything (drives, maybe CPU/mobo too) into better hardware with room for more drives and better airflow — something more suited to NAS use and future upgrades.

  2. Leave the current machine alone, and build a dedicated NAS just for storage (moving the drives over), maybe with hot-swap support and more focus on efficiency.

I’m planning to stick with TrueNAS or use Ubuntu Server. I’m not really into Synology/QNAP (I have a small QNAP that’s currently used for secondary backups of important data, and it’s limited to just that). I’d love any recommendations on cases (especially ones that can take 7+ 3.5” drives, to really future-proof) or just general setup advice.

My current homelab setup is used for media storage (Plex library, audiobooks) along with many Docker containers, which I test and host for others to access as well. At a later stage I’d love to self-host an AI setup, but that’s not for a while, and I expect to build a dedicated machine for it, recycling my current PC.

Appreciate any help!

I love these sorts of questions because you ask 10 people and you’re going to get 25 different answers :joy:!

For what it’s worth, I’d take the following approach:

  • Build a new (not necessarily new hardware, but physically separate) TrueNAS server, using brand new SSDs rather than hard drives
  • Configure it the way you want it to be configured
  • Stress test the hell out of it
  • Test test test
  • Move the data and the docker containers to the new machine; there’s a rough sketch of the copy step after this list. (I assume you’re using TrueNAS Scale and running docker that way, but please correct me if that’s not the case)
  • Test test test
  • Decommission the old machine
  • Test test test
  • Then fix up any hardware issues (fans, etc.), test the hardware (run diagnostics on the HDDs, especially if they’ve been hot) and use that machine for future projects (AI, etc.)
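
On the “move the data” step, here’s a rough sketch of what the copy might look like, assuming both boxes run ZFS (the pool, dataset and hostnames are just placeholders; a checksummed rsync works fine if one side isn’t ZFS):

```
# Take a recursive snapshot so the copy comes from a consistent point in time
zfs snapshot -r tank/media@migrate

# Stream it to the new machine over SSH (pool/dataset names are examples)
zfs send -R tank/media@migrate | ssh new-nas zfs recv -u newtank/media

# Or, if only one side is ZFS, a re-runnable checksummed rsync does the job
rsync -aHAX --checksum --info=progress2 /mnt/tank/media/ new-nas:/mnt/newtank/media/
```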

It’s a bit of a mix of your two options (well, essentially option two but without moving the drives). I think 13 year old hardware that’s already showing signs of problems is a good base for an experimental play machine (e.g., AI, Proxmox VMs, etc.) but not a good starting point for a new NAS/docker host, which I assume you want to operate problem-free for many years to come.

The answer will change depending on what you’re optimising for as well. If the constraint is cost, then a different approach might be better. If you’re optimising for power, then maybe spending a bit more for something super efficient may be better. If you’re optimising for reliability, then maybe building two cheaper computers to run in parallel may be better… etc.

You mentioned efficiency - I think some of the Asustor NASes will run TrueNAS Scale and they seem to be reasonably well regarded. I’ve never owned or used one, but a quick bit of searching turned up several models that are x86-64 based and can take RAM upgrades, such as the AS5404T (a model I wasn’t even aware of until 10 minutes ago). These kinds of units that can run TrueNAS may be a good starting point for the money (vs. new motherboard, new CPU, new RAM, etc.) and still leave you the 13 year old machine to restore for other projects. I’ve seen various bits and pieces online about running TrueNAS on Asustor NASes rather than the included firmware, so that may be worth looking into further.

If you really wanted to reuse the 6TB drives (which is absolutely fair enough), my approach would be to whittle down the media as much as possible to reduce the amount of data being transferred, buy/borrow some other storage with that much free space and/or free up space on your QNAP, copy it all across, and hash it to make sure the data integrity is good. Then I’d run each of those 6TB drives through the respective manufacturer’s diagnostic tool. I’m likely being paranoid here, but if I were transferring those drives into a new build, I’d want to do so with absolute confidence that they’re in 100% working condition, rather than get a month into the new build and start having to replace drives. Especially so if they’ve been knocked around a bit or maybe overheated in a Brisbane summer in a case with fans that are playing up.
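
To make the “hash it” and “manufacturer diagnostic” steps concrete, here’s a minimal sketch (the paths and /dev/sdX are placeholders, and smartctl’s extended self-test is a generic stand-in for the vendor tools):

```
# Checksum everything before the copy, then verify on the destination
cd /mnt/tank/media && find . -type f -exec sha256sum {} + > /tmp/media.sha256
# ...after copying, from the equivalent directory on the new storage:
sha256sum -c /tmp/media.sha256 | grep -v ': OK$'    # prints only mismatches

# Then give each 6TB drive a thorough once-over before trusting it again
smartctl -t long /dev/sdX    # extended self-test, takes hours on a 6TB drive
smartctl -a /dev/sdX         # afterwards: check reallocated/pending sector counts
```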

For what it’s worth, I’ve been throwing these cheap Orico SSDs at everything lately. I have a 1TB in an infrequently used desktop machine, and I’m running 4x 256GB ones in my Proxmox server (also ZFS). The 256GB ones were $23 each back in Feb. They’re almost certainly not going to outrun the Corsair or Samsung SSDs I’m using elsewhere, but it’s really hard to argue with that sort of pricing, and they haven’t missed a beat.

Over the last few decades I’ve used second hand SCSI-2 drives, SAS drives, ex-server dedicated RAID controllers with their own onboard RAM and CPUs, onboard softRAID, JBOD, the legendary Promise ATA66 modified into a FastTrack66, IDE, SATA, various brands of commercial and home built NAS based on Windows/Linux/BSD, and now SATA SSDs and M.2 NVMe drives. I honestly think throwing several “good enough” drives at a server is the way to go for home use, based on how cheap SSDs have become. I’m also curious about the hot-swap comment, as I’ve only ever had two drives fail, and the 20 minutes to shut down the machine, swap out the drive, and then let the RAID etc. rebuild for a few hours wasn’t a huge drama. Hot-swap capable hardware is going to add a level of cost you may not want, to achieve a level of reliability you may not need. Personally, I’d be spending that cash on a cheap UPS instead, or in my case have an extra $23 SSD sitting in its original box next to the server. If one of the drives fails, I’ll shut the server down, swap the SSD out with my cold spare, boot it up and move on. Your needs may be different, and I also recognise the very real desire to have the extra cool hot-swappable setup though :grin:.

It’s my usual sort of non-answer (sorry :joy:), but to summarise my ramblings: ponder the constraints and what’s really important to you (physical space, running cost, reliability, the love of the project to build something yourself, uptime, energy efficiency, upfront hardware cost, etc.), and do your best to build and test something new and separate, then copy data across and do a “cut over” rather than trying to mess around too much with your 13 year old “production” machine and risk data loss.

I’m also curious about what others think (see above about loving the “ask 10 people and get 25 different answers” type discussions!)

If you’re coming to Chermside tomorrow evening, let me know. I’ve got a perfect candidate for you which I’ll throw in the car and you can have for free if it’ll suit. PM me for more details if you’re interested.

My father was a ‘depression child’. He grew up during the Great Depression of the 1930s, when no one had any jobs or money, and as a result we never had anything new. But we never went without, because Dad would recondition everything like new; in fact it was so good we thought it was new (until we were old enough to know better).

When I was about twelve years old, my dad said to me “Terry, always buy the best tools you can afford, because you’ll buy cheap junk tools again and again but they will always break in the end and you’ll be forever replacing them. Good tools will last a lifetime and allow you to do your best work while enjoying using the highest quality tools.”

I grew up mad about electronics and at 72 I’m a retired electronics technician and have been fixing and making gear all my life.

So my advice will be different again:

  • Always buy the (almost) latest and fastest computing gear you can afford
  • Budget to replace it every five years, especially now as mankind is on an exponential technology improvement cycle and five years is a very long time in tech.
  • PC gear is made for the absolute lowest price; basically it’s junk. In 1984 I had to borrow $10,000 to buy an Olivetti PC with an Intel 80286 (XT) and an EGA screen. They had to use quality components back then as they didn’t have cheap (junk) alternatives. In the 1970s a Burroughs tape drive was the size of a large refrigerator and cost a million dollars; again, all components were mil spec because that was all they had. So don’t assume anything beyond the chipsets is the best we can do, because everything else is made for the lowest cost … and it wears out fast.
  • Even the best hard drives you can buy last about 5 years
  • When SSDs fail, bzzt, all the data is instantly gone forever, but hard drives fail gradually, so use hard drives with ZFS in a mirror config for data you absolutely can’t afford to lose (there’s a minimal mirror example sketched after this list). Plus SSDs become very slow when nearly full.
  • NVMe drives are super fast and about $110 for 1TB. The really super fast ones are expensive but worth it as system drives; your PC will thank you. Only use them for system drives, as you can always reinstall if one dies.
    https://www.pccasegear.com/products/67447/team-t-force-z44a7q-m-2-pcie-gen4-nvme-ssd-1tb
  • Power supply capacitors have a limited life of so many thousand hours; they’re a consumable. Sure, you can get “Sprague Extralytics” which seem to last a lifetime; they were used in DEC VAX computers where every repair, no matter the fault, was $1500. I never knew the price of those PSUs, which were about twice the size of a PC PSU, but I bet it was $10,000. PC PSUs suck in dust, and that dust gets damp and shorts out. That’s why they go BANG! unless they’re in a climate and dust controlled clean room, like all proper server rooms. If your PC isn’t in such a room and you’re not taking it apart and cleaning it every 6 months, replace it after 5 years.
  • Motherboards: get one with dual GPU slots for lots of monitors and/or local AI capability, and read the fine print, as the manufacturers cheat and use the PCIe lanes for more than one device, i.e. a second NVMe steals all the lanes from the second PCIe GPU slot. B550-A Pro, I mean you.
  • PC cases: always get one with removable DUST FILTERS; that way you can easily vacuum the filters and the innards stay dust free (mostly). Something like this one has excellent filtering, but the fans are noisy and the harmonics beat, so get a silent one. Look for ‘silent’ and ‘removable dust filters’ in the reviews. https://www.pccasegear.com/products/65532/asus-gt302-tuf-gaming-argb-mid-tower-e-atx-case-white
  • CPU: Intel is dead, their power hungry slow tech is over. Get AMD Ryzen instead. I used to have an Intel i7 Haswell in an Intel mobo and I thought it was the greatest thing, but even the briefest brownout would reset the PC :frowning: When it died I switched to an AMD Ryzen 5500 ($119) with the same PSU; it’s faster, quieter, and even long brownouts don’t affect it. This thing is a power miser, fast, and saved me $150 over the equivalent Intel setup.
    https://www.pccasegear.com/products/57586/amd-ryzen-5-5500-with-wraith-stealth
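
On the mirror point above, here’s roughly what a simple two-drive ZFS mirror looks like (the pool name and by-id paths are only placeholders, use your own):

```
# Build a two-way mirror from whole disks, referenced by their stable IDs
zpool create tank mirror \
  /dev/disk/by-id/ata-EXAMPLE_DRIVE_1 \
  /dev/disk/by-id/ata-EXAMPLE_DRIVE_2

# Sensible defaults for bulk data
zfs set compression=lz4 tank
zfs set atime=off tank

# Check both halves of the mirror are ONLINE
zpool status tank
```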

A computer is a big investment in money and time, yet essential for today’s skillset.

You have to do everything online these days, so don’t skimp on the one essential tool you’ll always need to do that. Plus AI is like an unpaid tutor, one who really knows his stuff and will tutor you on any subject at 2am; it’s a godsend for upskilling!

Wow, @techman and @Belfry have blown me away with their answers. All great advice, but @Belfry nailed it with his opening sentiment, so here I come with my third perspective.

I like headless, unseen machines. Around my house there are laptops everywhere, and depending on which room I’m in, I’m lying, sitting or standing at one of them. The laptops themselves are for my desktop experience, and hidden in the house somewhere is where I do my “hosting” for the real work.

…and the “real work” is the thing you’re interested in investing in, so welcome to the party! Persistent storage and reliable backups are going to form the foundation of whatever you do. In one of my earlier posts I talked about my transition from a two bay, to a four bay, to a six bay NAS. The six bay is a QNAP and I’m very frustrated with the cheap, crappy power supplies that keep dying on me. I was happiest with my HP ProLiant N40L, which offered four bays and booted TrueNAS from a USB drive.

Upgrading NASes did provide a backup solution for me. I’d boot up my old NASes, run backup jobs, and then shut them down again.

So, if you can get that storage foundation laid, you are then free to change your mind endlessly with the other homelab hosting stuff you want to explore on another machine. I was a Proxmox guy for a while, but I had a very unique network card error on multiple hosts that nobody else seemed to get, so now I’m hosting everything on Fedora Server.

I gave a presentation a few weeks ago on source control management with git, and @techman complemented my presentation with his own on Fossil as an alternative. The reason I mention this is that if your important work is in a repository that you fundamentally push/pull to a remote host (maybe that NAS), you can “distro hop” and your work will move with you. Of course you can use any remote storage solution; I just think it helps to force yourself to master tools like git because those skills are useful elsewhere.
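
If you want to try the “repository on the NAS” idea, it’s only a few commands; this is a rough sketch with made-up hostnames and paths (any box you can SSH into will do):

```
# On the NAS: create a bare repository to push to
ssh nas 'git init --bare /volume1/repos/homelab.git'

# On a workstation: point an existing project at it and push
git remote add nas ssh://nas/volume1/repos/homelab.git
git push -u nas main

# Any other machine can then clone it and carry on
git clone ssh://nas/volume1/repos/homelab.git
```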

@techman’s advice on buying the best tool is great, but I’m still somebody that will buy two crap tools from Bunnings before springing for the thing that I should have bought in the first place. I reckon the fun of self hosting is in testing as you go, so I think @Belfry might be bringing his work ethic home with him a bit. Having said that, if Plex is down it’s priority #1 in our house.

Anyway, good to hear from you on this forum @matthew919. We’d love to hear how any of this broad range of advice might have swayed your plans.

Maybe we’ll see you tonight?

This video might interest you @matthew919

I’m also interested to see what everybody else thinks about the NAS that he used…

@matthew919

The above is all excellent advice. In brief:

  1. What are your aims?
  2. How are you assessing the tradeoff between price and functionality?
  3. Have you got your data secure?

With regard to data security, I would like to say that I follow the old advice of having three copies of your data on two different media at two different sites. I would like to say it but I mostly cannot. I achieve it for only some very limited data but mostly fail.

At the last homelab meeting in Chermside @Kangie noted the current kerfuffle on the LKML where it looks like bcachefs will be pulled from the 6.17 kernel. There are personalities involved in the issue but this is hardly surprising since there are always personalities involved.

However I have some sympathy for the sentiment often expressed in kernel development circles about the importance of stability in file systems. Most recently Kent Overstreet, the developer of bcachefs, echoed this saying “That’s an easy rule for the rest of the kernel, where all your mistakes are erased at a reboot. Filesystems don’t have that luxury.” As someone who back in the day had all their data eaten by reiserfs, I am not very adventurous when it comes to new filesystems.

How do you keep your data safe? I suspect that @jdownie has got this pretty well sorted. You can defend against hardware failure with real time syncing. James demonstrated syncthing at the last online meeting. It looks pretty cool but I have yet to take it for a drive.

Instantaneous syncing is good, but when you stuff up your config, programs etc. it’s nice to be able to roll back to a particular point in time. There are multiple solutions, including git and Fossil, which James and @techman discussed at the last meeting, and that gives you point-in-time restoration for when you have really screwed things up. Again, I need to be better at this.

I know you are a keen docker user and docker is a revolution in application management and dissemination. Being able to reproduce the environment for everything except the data is a godsend.
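
That separation mostly comes down to keeping the state on bind mounts or named volumes, so the container itself stays disposable. A rough sketch with made-up paths, using Plex as the example (the image name and mount points are illustrative; adjust to whatever containers you run):

```
# Config and media live on the host (or a NAS mount); the container is throwaway
docker run -d --name plex \
  --restart unless-stopped \
  -v /tank/appdata/plex:/config \
  -v /tank/media:/media:ro \
  -p 32400:32400 \
  plexinc/pms-docker

# Recreating the environment is then just: remove, pull, run again
docker rm -f plex && docker pull plexinc/pms-docker
```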

Thanks all, appreciate all the input, which helped me come to a decision. I ended up talking to others outside of HLB as well (whom I’m also trying to get involved here!). Long term, I want to get a server rack. So rather than my initial plan of a regular desktop case, I have gone with a rack-mountable case.

I initially wanted to use ZFS, but the downsides (halved usable capacity with mirroring, all drives always in use, higher temperatures, and the required downtime if a drive fails) have always pushed me towards managing the data myself. For the vast majority of my data, I am just using rsync between the drives on a periodic basis, once a week. More critical files are synced daily and also backed up to a second NAS.
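
In practice the syncs are just a couple of cron entries along these lines (paths and times are examples, and --delete mirrors deletions, so it’s worth a dry run with -n first):

```
# /etc/cron.d/drive-sync
# Weekly bulk media sync, Sundays at 3am
0 3 * * 0  root  rsync -a --delete /mnt/data1/media/ /mnt/data2/media/ >> /var/log/media-sync.log 2>&1
# Daily sync of critical files, plus a copy pushed to the second NAS
0 2 * * *  root  rsync -a /mnt/data1/critical/ /mnt/data2/critical/
0 2 * * *  root  rsync -a /mnt/data1/critical/ backup-nas:/backups/critical/
```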

The only issue I have encountered was with one of the 10TB drives: after being wiped, when I attempted to remount it, it got stuck in a cycle of mounting on “loop1” and showing only 80MB of space available, which for the life of me I have not been able to fix. The last attempt this morning was worse; the drive is not even recognised by lsblk, so I’m storing it for the moment in the hope I can fix it another day.

I have moved over all the media, and I’m using the old PC, with a graphics card I forgot I had, to start running Frigate NVR for camera monitoring.

I completely went over my budget of $2K; it came to a grand total of $2.7K, but I’m happy with the end result. Full parts list below, and an image of its temporary location until the dream day of having a server rack.

  • Seagate IronWolf 10TB x 5
  • Gigabyte 850W Aorus Elite P850W 80+ Platinum
  • Noctua Multi Socket CPU Cooler (NH-D9L)
  • Silverstone RM41-H08 4U Rackmount Server Case
  • AMD Ryzen 7 7700X 8 Core AM5 5.4 GHz CPU
  • Corsair Vengeance 64GB (2x32GB) C40 6000MHz DDR5
  • Noctua 120mm PWM Fan - Round Frame (NF-A12X25R-PWM)
  • Kingston 1TB Gen4 M.2 NVMe SSD
  • Kingston 2TB Gen4 M.2 NVMe SSD
  • Gigabyte B850 Eagle WiFi6E AM5 ATX Motherboard
  • LSI 9211-8i (IT Mode) Fujitsu D2607 Storage Expansion Kit
  • WAVLINK 2.5 Gigabit Ethernet PCIE Network

Thanks to the AI gods, I have the fifth drive up and running now. I do believe it had to do with the drive previously being used in ZFS. But I am just happy ~$400 didn’t go down the drain.
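
For anyone who runs into the same thing: the usual remedy for a drive still carrying old ZFS metadata is to clear the labels and partition table before reusing it, something along these lines (the device name is a placeholder, so triple-check which disk you’re pointing at first):

```
# Identify the disk carefully before touching anything
lsblk -o NAME,SIZE,MODEL,SERIAL

# Clear leftover ZFS labels, then wipe filesystem signatures and the partition table
zpool labelclear -f /dev/sdX1    # or the whole disk if ZFS used it directly
wipefs -a /dev/sdX
sgdisk --zap-all /dev/sdX
```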

That’s a really nice build, @matthew919! Glad you got the 10TB drive issues sorted out.

Did you end up going with TrueNAS or Ubuntu server?

I hadn’t seen that RM41-H08 case before either and will have to keep that in mind for my next upgrade. I do love a short depth rack mount case.

I decided to go with Ubuntu and manage it myself, which is going well so far. But time will tell…
And the case is great! I did try adding the image, but it gets stuck on “processing upload”…

Hi Matt, we are just using the Discourse defaults of jpg, jpeg, png, gif, heic, heif, webp, and avif, with a 10 MB size limit. If the files are bigger than that, just GIMP them down.

All good stuff. I only have two IronWolf 8TB drives and they’re monsters of data storage, plus 4x IronWolf 4TB drives, all on ZFS. $2700 is pretty cheap for all that stuff :slight_smile: imho.

The CPU is TSMC 6nm FinFET and has integrated graphics. Only the one GPU? :wink: