I love these sorts of questions because you ask 10 people and you’re going to get 25 different answers!
For what it’s worth, I’d take the following approach:
- Build a new (not necessarily new hardware, but physically separate) TrueNAS server, using brand new SSDs rather than hard drives
- Configure it the way you want it to be configured
- Stress test the hell out of it
- Test test test
- Move the data and the docker containers to the new machine; there’s a rough sketch of the ZFS way to do this after the list. (I assume you’re using TrueNAS Scale and running docker that way, but please correct me if that’s not the case)
- Test test test
- Decommission the old machine
- Test test test
- Then fix up any hardware issues (fans, etc.), test the hardware (run diagnostics on the HDDs, especially if they’ve been hot) and use that machine for future projects (AI, etc.)
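Purely for illustration, here’s roughly what the “move the data” step could look like at the ZFS level (both boxes being TrueNAS means both ends are ZFS). This is a sketch of the idea only - the pool/dataset names, snapshot name, and SSH alias are all made up, and TrueNAS Scale’s built-in Replication Tasks will do the same job from the UI without any scripting:

```python
#!/usr/bin/env python3
# Sketch only: replicate datasets from the old TrueNAS box to the new one
# over SSH using zfs send/receive. All names below are placeholders.
import subprocess

DATASETS = ["tank/media", "tank/apps"]   # hypothetical dataset names
SNAPSHOT = "migrate"                     # one-off snapshot for the move
TARGET = "newnas"                        # SSH alias for the new machine

for ds in DATASETS:
    # Snapshot first so we send a consistent point-in-time copy
    subprocess.run(["zfs", "snapshot", "-r", f"{ds}@{SNAPSHOT}"], check=True)

    # Stream the snapshot to the new box; shell=True because of the pipe
    subprocess.run(
        f"zfs send -R {ds}@{SNAPSHOT} | ssh {TARGET} zfs receive -F {ds}",
        shell=True,
        check=True,
    )
```

The nice thing about doing it at this level (or via the UI replication tasks) is that datasets, snapshots and properties come across intact, and you can follow up with an incremental send to catch anything that changed before the final cut over.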
It’s a bit of a mix of your two options (well, essentially option two but without moving the drives). I think 13-year-old hardware that’s already showing signs of problems is a good base for an experimental play machine (e.g., AI, Proxmox VMs, etc.) but not a good starting point for a new NAS/docker host, which I assume you’ll want to operate problem-free for many years to come.
The answer will also change depending on what you’re optimising for. If the constraint is cost, then a different approach might be better. If you’re optimising for power, then spending a bit more on something super efficient may be better. If you’re optimising for reliability, then building two cheaper computers to run in parallel may be better… etc.
You mentioned efficiency - I think some of the Asustor NASes will run TrueNAS Scale and they seem to be reasonably well regarded. I’ve never owned or used one, but a quick bit of searching threw up several models that are x86-64 based, can take RAM upgrades, etc., such as the AS5404T. I wasn’t even aware of that specific model until 10 minutes ago, but these kinds of units that can run TrueNAS may be a good starting point for the money (vs. new motherboard, new CPU, new RAM, etc.) and still give you the 13-year-old machine to restore as a machine for other projects. I’ve seen various bits and pieces online about running TrueNAS on Asustor NASes rather than the included firmware, so that may be worth looking into further.
If you really wanted to reuse the 6TB drives (which is absolutely fair enough), my approach would be to whittle down the media as much as possible to reduce the amount of data being transferred, buy/borrow some other storage with that much free space and/or free up space on your QNAP, copy it all across to the free space, hash it to make sure the data integrity is good (rough sketch below), and then run each of those 6TB drives through the respective manufacturer’s diagnostic tool. I’m likely being paranoid here, but if I were transferring those drives into a new build, I’d want to do so with absolute confidence that they’re in 100% working condition, rather than get a month into the new build and start having to replace drives. Especially so if they’ve been knocked around a bit or maybe overheated in a Brisbane summer, in a case with fans that are playing up.
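For the hashing step, something like this is all I mean - walk the original tree, hash everything, and compare against the copy. The two paths are placeholders, and tools like `rsync -c` or hashdeep will do the same job if you’d rather not script it yourself:

```python
#!/usr/bin/env python3
# Sketch only: verify a copy by comparing SHA-256 hashes file-by-file.
# SRC/DST are placeholders for the original media and the copy of it.
import hashlib
from pathlib import Path

SRC = Path("/mnt/tank/media")      # hypothetical original location
DST = Path("/mnt/backup/media")    # hypothetical copy

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

bad = 0
for src_file in SRC.rglob("*"):
    if not src_file.is_file():
        continue
    dst_file = DST / src_file.relative_to(SRC)
    if not dst_file.is_file() or sha256(src_file) != sha256(dst_file):
        print(f"MISMATCH: {src_file}")
        bad += 1

print(f"Done - {bad} file(s) failed verification")
```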
For what it’s worth, I’ve been throwing these cheap Orico SSDs at everything lately. I have a 1TB one in an infrequently used desktop machine, and I’m running 4x 256GB ones in my Proxmox server (also ZFS). The 256GB ones were $23 each back in Feb. They’re almost certainly not going to outrun the Corsair or Samsung SSDs I’m using elsewhere, but it’s really hard to argue with that sort of pricing and they haven’t missed a beat.
Over the last few decades I’ve used second-hand SCSI-2 drives, SAS drives, ex-server dedicated RAID controllers with their own onboard RAM and CPUs, onboard softRAID, JBOD, the legendary Promise ATA66 modified into a FastTrack66, IDE, SATA, various brands of commercial and home-built NAS based on Windows/Linux/BSD, and now SATA SSDs and M.2 NVMe drives. I honestly think throwing several “good enough” drives at a server is the way to go for home use, based on how cheap SSDs have become.

I’m also curious about the hot-swap comment, as I’ve only ever had two drives fail, and the 20 minutes to shut down the machine, swap out the drive, and then let the RAID etc. rebuild for a few hours wasn’t a huge drama. Hot-swap capable hardware is going to add a level of cost you may not want, to achieve a level of reliability that you may not need. Personally, I’d be spending that cash on a cheap UPS instead, or in my case have an extra $23 SSD sitting in its original box next to the server. If one of the drives fails, I’ll shut the server down, swap the SSD out with my cold spare, boot it up and move on. Your needs may be different, and I also recognise the very real desire to have the extra cool hot-swappable setup though.
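On the cold-spare front, TrueNAS will already alert you when a pool degrades, but if you want a belt-and-braces check you can run from cron, this is the sort of thing I mean (sketch only - the “all pools are healthy” wording is from memory, so check what `zpool status -x` actually prints on your system):

```python
#!/usr/bin/env python3
# Sketch only: cron-able check that flags when a pool isn't healthy,
# i.e. when it's time to shut down and swap in the cold spare.
import subprocess

result = subprocess.run(["zpool", "status", "-x"], capture_output=True, text=True)
output = result.stdout.strip()

# 'zpool status -x' only prints detail for pools with problems;
# otherwise it reports that all pools are healthy (wording from memory).
if "all pools are healthy" not in output:
    # Swap this print for email/Telegram/whatever notification you prefer
    print("ZFS pool problem detected:\n" + output)
```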
It’s my usual sort of non-answer (sorry), but to summarise my ramblings: ponder the constraints and what’s really important to you (physical space, running cost, reliability, the love of the project to build something yourself, uptime, energy efficiency, upfront hardware cost, etc.), and do your best to build and test something new and separate, then copy data across and do a “cut over” rather than trying to mess around too much with your 13-year-old “production” machine and risk data loss.
I’m also curious about what others think (see above about loving the “ask 10 people and get 25 different answers” type discussions!)
If you’re coming to Chermside tomorrow evening, let me know. I’ve got a perfect candidate for you which I’ll throw in the car and you can have for free if it’ll suit. PM me for more details if you’re interested.