NAS Hard Drives

I've worked in IT for 25+ years now, and I do not trust NAS units for data storage; I've seen data corruption far too many times when controllers act up. If you are going to use a NAS, far more important than cycling drives is simply having a good backup of your data. External USB drives are cheap, and they already go as big as 18 TB, if not more. Cloud backup is also an option, but depending on how much data is involved, it can take a very long time to transfer.
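
For what it's worth, a minimal sketch of that kind of backup, assuming a NAS share mounted at /mnt/nas/data and a USB drive at /mnt/usb_backup/data (both paths hypothetical; rsync does this job better, this just shows the idea):

    # One-way mirror from a NAS share to an external USB drive.
    # Copies files that are new or newer than the existing copy.
    import shutil
    from pathlib import Path

    SRC = Path("/mnt/nas/data")         # NAS share (assumed mount point)
    DST = Path("/mnt/usb_backup/data")  # USB drive (assumed mount point)

    def mirror(src: Path, dst: Path) -> None:
        for item in src.rglob("*"):
            target = dst / item.relative_to(src)
            if item.is_dir():
                target.mkdir(parents=True, exist_ok=True)
            elif not target.exists() or item.stat().st_mtime > target.stat().st_mtime:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(item, target)  # copy2 preserves timestamps

    if __name__ == "__main__":
        mirror(SRC, DST)

Run it on a schedule and the USB copy stays current without re-copying everything each time.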
 

NAS isn't for backup. It's for availability. I'd recommend anyone with a NAS have backups of anything important.

I have a ghetto Linux-based NAS running in a tower with random drives in it. I was planning on going with something requiring less hands-on work, like an 8-bay Synology, but the Chia crowd made getting new drives impossible. So, I wait. I'm not doing anything mission-critical with mine; it's primarily used as a media server.
 
Agreed. Frankly, I don't trust the black-box NAS appliances for anything. But that's just me.

My storage server is mostly for minimizing downtime in the case of a minor failure and because I trust neither OS X nor Windows to actually store things well, at least compared to FreeBSD/ZFS.

Backups are off-site and automatic. Though, I'm very close to switching to a different one that's a better match as soon as I can justify the expense.

As a side-effect, it's also fast enough that I don't notice the difference between network storage and local storage, and that's with only an NVMe drive in my desktop.
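
A rough sketch of the snapshot side of a setup like that, assuming a hypothetical dataset named tank/data; the off-site part would be zfs send/receive on top of these snapshots:

    # Take a dated ZFS snapshot and prune ones older than KEEP_DAYS.
    # Dataset name is hypothetical; run this from cron on the storage box.
    import subprocess
    from datetime import datetime, timedelta

    DATASET = "tank/data"  # assumed dataset name
    KEEP_DAYS = 30

    def snapshot() -> None:
        tag = datetime.now().strftime("%Y%m%d-%H%M")
        subprocess.run(["zfs", "snapshot", f"{DATASET}@auto-{tag}"], check=True)

    def prune() -> None:
        names = subprocess.run(
            ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", DATASET],
            check=True, capture_output=True, text=True,
        ).stdout.splitlines()
        cutoff = datetime.now() - timedelta(days=KEEP_DAYS)
        for name in names:
            if "@auto-" not in name:
                continue  # leave manual snapshots alone
            stamp = datetime.strptime(name.split("@auto-")[1], "%Y%m%d-%H%M")
            if stamp < cutoff:
                subprocess.run(["zfs", "destroy", name], check=True)

    if __name__ == "__main__":
        snapshot()
        prune()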
 
1. Do you cycle drives in your RAID devices to try to stem the tide of unexpected failures? If so, how often? How long do you trust a modern spinny disk under moderate use?

2. Is the above disk still a good choice in 2022?

1. Not necessary. The original point of a redundant array was to maximize volume size while still surviving the failure of a single disk. That said, hard disk drives should no longer be used, IMHO, due to the energy required to keep them powered and their susceptibility to hardware failure. A typical HDD lasts anywhere from 5-12 years in my experience with storage arrays (small data centers, small business, church, etc.).

2. No. To me, it depends on what type of files you're storing. Are these multimedia archives, or active files that you occasionally recall? There are several options for something like this: Polar Backup, Google Drive, OneDrive, and, as others mentioned, Backblaze and AWS Glacier are both solid solutions depending on the type of data and frequency of use.
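
For the Glacier option specifically, the upload is just S3 with a colder storage class. A minimal sketch with boto3; the bucket name and file path are placeholders:

    # Push an archive into S3 under the Glacier storage class.
    # Requires boto3 and configured AWS credentials.
    import boto3

    s3 = boto3.client("s3")

    with open("photos-2021.tar.gz", "rb") as f:  # placeholder archive
        s3.put_object(
            Bucket="my-archive-bucket",          # placeholder bucket
            Key="archives/photos-2021.tar.gz",
            Body=f,
            StorageClass="GLACIER",              # cheap to keep, slow to retrieve
        )

Glacier-class storage is cheap to hold but slow and not free to pull back, so it fits archives you rarely touch; active files are better off on the sync-style services.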

Whatcha got M@-man?
 
Of course, I never suggested a NAS should be used for backups. I suggested that a USB drive is used for backups of the NAS in the event that the NAS fails or corrupts data.
 
I have lost all faith in RAID systems. I've had a bunch, including an 8-drive 15 TB QNAP RAID 6 (which is now a paperweight), and every other kind, in servers and standalone. Every one has failed at some point, and recovery success was bleak. Swapping out bad drives is OK to a point, but they all need to be the same drive size and model, which will eventually become unavailable.

I now use single external USB drives (the latest ones I got are 5 TB). Buy a few (they're pretty cheap these days) and make copies on all of them (see the verification sketch below). When they fail, which they will, just replace the failed one with whatever the latest and greatest is, and carry on. They won't all fail at the same time.

Get SSDs (which may be the only kind you can get now): no moving parts and faster transfer rates. And don't leave them powered up 24/7; just connect them when you need them.
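
If you keep several independent copies like that, it's worth checking them against each other occasionally so silent corruption on one drive doesn't go unnoticed. A crude sketch, with hypothetical mount points:

    # Compare two copies of the same data by hashing every file.
    import hashlib
    from pathlib import Path

    def file_hash(path: Path) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def compare(a: Path, b: Path) -> None:
        for item in a.rglob("*"):
            if not item.is_file():
                continue
            twin = b / item.relative_to(a)
            if not twin.exists():
                print(f"MISSING from {b}: {twin}")
            elif file_hash(item) != file_hash(twin):
                print(f"MISMATCH: {item}")

    if __name__ == "__main__":
        compare(Path("/mnt/usb_a"), Path("/mnt/usb_b"))  # hypothetical mounts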
 
Hey Matt,
I don't personally subscribe to replacing drives just based on age. I have several at 10+ years with 0 reallocated blocks. I do agree with initially buying drives from different vendors to try to ensure they are from separate manufacturing batches.

IMO, it would be more cost-effective to upgrade to a NAS with more hot spares installed, move to RAID 10, or add a second NAS that can back up the entire storage requirement.
Check into the options your NAS has for checking and alerting/reporting on drive health. There are usually warning signs before drives completely fail.
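
If the NAS doesn't report this for you, anything that can run smartmontools can do a crude version of the same check. A sketch; device names are hypothetical, and the smartctl -A table format varies a bit between drives:

    # Warn when a drive starts reallocating sectors.
    # Requires smartmontools; typically needs root.
    import subprocess

    DRIVES = ["/dev/sda", "/dev/sdb"]  # assumed device names

    for dev in DRIVES:
        out = subprocess.run(
            ["smartctl", "-A", dev], capture_output=True, text=True
        ).stdout
        for line in out.splitlines():
            if "Reallocated_Sector_Ct" in line:
                raw = int(line.split()[-1])  # RAW_VALUE is the last column
                if raw > 0:
                    print(f"WARNING: {dev} has {raw} reallocated sectors")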

I hate Seagate because they screwed me across several server installs years ago. I'm sure they are fine now, but still can't bring myself to buy them. :)
I've had good luck with WD, but there's a chance of lemons with any of them, so purchasing from different batches when installing a RAID is a good idea.
In my expensive experience, Seagate HDs had some heating problems in the past. I only had one burn out, but what a pain when it happens overseas...
 
My RAID is mirrored, and backed up to Backblaze. The HDs in the RAID (a Synology) were purchased on the same date. I guess it's possible that they are going to hit their wonky point at about the same time, which would be annoying. It's also likely that I'll swap in some larger drives before then.
 
That is because parity RAID is dead.

Standard practice has been RAID 10 for a long time, for good reason.

RAID 1 mathematically cannot be worse than a single drive. RAID 10 still works out better in practice.
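
The arithmetic behind that claim, with a made-up per-drive failure probability p over some fixed period:

    # Chance of data loss, assuming independent drive failures with
    # probability p each (p is illustrative, not a real failure rate).
    p = 0.05

    single = p            # one drive: data gone if it fails
    raid1 = p * p         # 2-way mirror: both copies must fail
    # RAID 10 with two 2-way mirror pairs: data is lost only if both
    # drives in the SAME pair fail.
    raid10 = 1 - (1 - p * p) ** 2

    print(f"single drive:        {single:.4f}")
    print(f"RAID 1 (2 drives):   {raid1:.4f}")
    print(f"RAID 10 (4 drives):  {raid10:.4f}")

Since p * p <= p for any probability, the mirror can't do worse than the single drive.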
 
RAID 10 also has better performance.
 
Get a NAS that has predictive disk-failure detection; that way you don't need to replace disks that haven't failed. RAID 10 is super reliable, and if you can have a hot spare or two, even better. If that's overkill, then RAID 5 or RAID 1, or the cloud. Either way, have a copy of your data stored offsite, i.e., so you can recover from a fire or disaster at your business/residence.
 
RAID 1 is fine if you don't need much storage and don't need the performance advantage of striping reads.

RAID 5 is dead unless you're using tiny disks. The reason is uncorrectable read errors, which happen occasionally. The tipping point is around 2 TB drives: if you have three 2 TB drives in a RAID 5, the read error rate across all drives is high enough that a rebuild has only about a 50% chance of completing under ideal circumstances. It doesn't flip like a switch, though; that's just where a failure becomes more likely than a success. I wouldn't trust RAID 5 with anything over about 500 GB disks.
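
The standard back-of-envelope version of that calculation, assuming the common consumer spec of one URE per 1e14 bits read; the exact percentages swing a lot depending on the URE figure you plug in (enterprise drives are often rated 1 per 1e15):

    # Chance of a RAID 5 rebuild finishing without hitting a URE,
    # treating errors as independent at the rated rate. A rough
    # model, not a guarantee.
    URE_PER_BIT = 1e-14  # assumed consumer-class spec

    def rebuild_success(surviving_drives: int, drive_tb: float) -> float:
        bits_read = surviving_drives * drive_tb * 1e12 * 8
        return (1 - URE_PER_BIT) ** bits_read

    for tb in (0.5, 2, 8):
        print(f"3x {tb} TB RAID 5 rebuild: {rebuild_success(2, tb):.0%} success")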

There's a similar critical size for RAID 6, though I don't know it off the top of my head. I believe it's higher than the 8 TB I would naively guess, but not by a lot. By the time you're talking about huge modern hard drives, RAID 6 is the equivalent of RAID 0 15 years ago.

The RAID 1/10 rebuild is also simpler: there's no parity calculation. If you're using something that doesn't do parity but does do checksums (ZFS, BTRFS), an uncorrectable read error shows up in the checksum and the resilver just tries reading that block again. Plus, it's individual blocks that tend to die a lot more often than whole drives. So, a lot of the time, you can actually do the rebuild by adding a third disk to the mirror group, resilvering, and then removing the defective one.
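
On ZFS, that attach-resilver-detach dance looks roughly like this; pool and device names are hypothetical:

    # Replace a flaky mirror member by attaching a third disk first,
    # letting the resilver finish, then detaching the bad one.
    import subprocess

    POOL, GOOD, BAD, NEW = "tank", "ada1", "ada2", "ada3"  # hypothetical names

    # Attach NEW alongside GOOD in the same mirror vdev (kicks off a resilver).
    subprocess.run(["zpool", "attach", POOL, GOOD, NEW], check=True)

    # ...wait until `zpool status` shows the resilver complete, then:
    subprocess.run(["zpool", "detach", POOL, BAD], check=True)

Done in that order, redundancy never drops below two good copies, unlike pulling the bad disk first.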

Yes, it's expensive. You have to buy twice as many drives as you need instead of just a couple more. But, it actually works.
 