I’m installing 3x2TB HDDs into my desktop PC. The drives are like-new. They’ll replace an ancient 2TB drive that is failing. The primary purpose will basically be data storage: media, torrents, and some installed games. Losing the drives to failure would not be catastrophic, just annoying.
So now I’m faced with how to set up these drives. I think I’d like to do a RAID to present the drives as one big volume. Here are my thoughts, and hopefully someone can help me make the right choice:
- RAID0: I’d have been fine with the risk on 2 drives, but striping across 3 seems like tempting fate. Then again, it might be fine.
- RAID1: Lose half the capacity, but pretty braindead setup. Left wondering why pick this over RAID10?
- RAID10: Lose half the capacity… left wondering why pick this over RAID1?
- RAID5: Write hole problem in the event of a sudden power loss, but I’m not running a data center that needs high reliability. I should probably buy a UPS to mitigate power outages anyway. Would the parity calculation and all that make this option slow?
I’ve also rejected considering things like ZFS or mdadm, because I don’t want to complicate my setup. Straight btrfs is straightforward.
I found this page where someone analyzed the performance of different RAID levels, but not with btrfs: https://larryjordan.com/articles/real-world-speed-tests-for-different-hdd-raid-levels/ (there’s a PDF link with harder numbers in the post). So I’m not even sure whether his analysis is helpful to me.
If anyone has thoughts on what RAID level is appropriate given my use-case, I’d love to hear it! Particularly if anyone knows about RAID1 vs RAID10 on btrfs.
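For what it’s worth, btrfs “RAID1” isn’t classic pairwise mirroring: it keeps two copies of every chunk on two different devices, so an odd number of drives works fine, and 3x2TB comes out to roughly 3TB usable rather than losing a whole drive to a fixed mirror. A minimal setup sketch, assuming the three drives show up as /dev/sdb, /dev/sdc, and /dev/sdd (hypothetical names, check `lsblk` first — this wipes them):

```shell
# Hypothetical device names -- verify with lsblk before running.
# -d raid1: every data chunk is stored as two copies on two different devices
# -m raid1: metadata is likewise mirrored
sudo mkfs.btrfs -L storage -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# Mounting any one member device mounts the whole multi-device filesystem.
sudo mkdir -p /mnt/storage
sudo mount /dev/sdb /mnt/storage

# Shows raw capacity vs. estimated usable space for the raid1 profile.
sudo btrfs filesystem usage /mnt/storage
```

That per-chunk mirroring is also the practical answer to RAID1 vs RAID10 on btrfs: with only three equal drives, both profiles give you two copies of everything, so the usable capacity is the same.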


Sorry but I find this claim irreconcilable with how SLES and Fedora default to btrfs with their installations, or how a company like Meta uses it across their entire fleet.
I don’t know if Meta uses btrfs’s built-in RAID directly or if they use, as you suggested, mdraid with btrfs on top. I know that’s what Synology does.
They most likely run smaller pools and have their redundancy and replication provided by the application layers on top, replicating everything globally. The larger you go in scale, the further up in the stack you can move your redundancy and the less you need to care about resilience at the lower levels of abstraction.
ZFS is fairly slow on SSDs and BTRFS will probably beat it in a drag race. But ZFS won’t lose your data. Basically, if you want between a handful of TB and a few PB stored with high reliability on a single system, along with "modest" performance requirements, ZFS is king.
As for the defaults - BTRFS isn’t license-encumbered like ZFS, so BTRFS can be more easily integrated. Additionally, ZFS performs best when it can use a fairly large hunk of RAM for caching - not ideal for most people. One GB of RAM per TB of usable disk is the usual recommendation here, but less usually works fine. It also doesn’t use the "normal" page cache, so the cache doesn’t behave in a manner people are used to.
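If the cache’s memory appetite is the main worry, it can be capped. A sketch assuming OpenZFS on Linux (the 4 GiB figure is just an example value, not a recommendation):

```shell
# /etc/modprobe.d/zfs.conf -- cap the ARC at 4 GiB (zfs_arc_max is in bytes).
# zfs_arc_max is an OpenZFS module parameter; pick a size that fits your
# machine and workload, then reboot or reload the zfs module to apply.
options zfs zfs_arc_max=4294967296
```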
ZFS is a filesystem for when you actually care about your data, not something you use as a boot drive, so something else makes sense as a default. Most ZFS deployments I’ve seen just boot from any old ext4 drive. As I said, BTRFS plays in the same league as ext4 and XFS - boot drives and small deployments. ZFS meanwhile will happily swallow a few enclosures of SAS drives into a single filesystem and never lose a bit.
tl;dr If you want reasonable data resilience and want RAID1 - BTRFS should work fine. You get checksumming and other modern niceties. As soon as you go above two drives and want to run RAID5/6, you really want to use ZFS.
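One concrete payoff of that checksumming either way: you can periodically have the filesystem read every block, verify it, and repair bad copies from the surviving mirror. A hedged example for a btrfs RAID1 setup (the mount point is hypothetical):

```shell
# Read everything, verify checksums, and rewrite corrupted copies from the
# good mirror (-B runs in the foreground; assumes /mnt/storage is btrfs raid1).
sudo btrfs scrub start -B /mnt/storage

# Per-device error counters -- non-zero numbers mean a drive is acting up.
sudo btrfs device stats /mnt/storage
```

Running a scrub monthly (e.g. from a systemd timer or cron job) is a common way to catch a dying drive before the second copy goes too.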