In this series:
Understanding RAID: How performance scales from one disk to eight
ZFS 101 - Understanding ZFS storage and performance
ZFS versus RAID: Eight Ironwolf disks, two filesystems, one winner
Return to RAID: The Ars readers "What If?" edition
OpenZFS 2.1 is out - let's talk about its brand-new dRAID vdevs

This has been a long while in the making: it's test results time. To truly understand the fundamentals of computer storage, it's important to explore the impact of various conventional RAID (Redundant Array of Inexpensive Disks) topologies on performance. It's also important to understand what ZFS is and how it works. But at some point, people (particularly computer enthusiasts on the Internet) want numbers.

First, a quick note: this testing, naturally, builds on those fundamentals. We're going to draw heavily on lessons learned as we explore ZFS topologies here. If you aren't yet entirely solid on the difference between pools and vdevs, or on what ashift and recordsize mean, we strongly recommend you revisit those explainers before diving into testing and results.

And although everybody loves to see raw numbers, we urge an additional focus on how these figures relate to one another. All of our charts relate the performance of ZFS pool topologies at sizes from two to eight disks to the performance of a single disk. If you change the model of disk, your raw numbers will change accordingly, but for the most part their relation to a single disk's performance will not.

Equipment as tested

Further Reading: I updated my crusty old Pentium G-based server; the results are worth sharing.

We used the eight empty bays in our Summer 2019 Storage Hot Rod for this test. It's got oodles of RAM and more than enough CPU horsepower to chew through these storage tests without breaking a sweat.

Specs at a glance: Summer 2019 Storage Hot Rod, as tested
- Rosewill RSV-L4112 chassis: typically $260, currently unavailable due to COVID-19
- LSI-9300-8i 8-port Host Bus Adapter: $148 at Amazon
- 8x 12TB Seagate Ironwolf: $320 each at Amazon
- EVGA 850GQ Semi Modular PSU: $140 at Adorama

The Storage Hot Rod's also got a dedicated LSI-9300-8i Host Bus Adapter (HBA), which isn't used for anything but the disks under test. The first four bays of the chassis have our own backup data on them, but they were idle during all tests here and are attached to the motherboard's SATA controller, entirely isolated from our test arrays.

Further Reading: How fast are your disks? Find out the open source way, with fio.

As always, we used fio to perform all of our storage tests. We ran them locally on the Hot Rod, and we used three basic random-access test types: read, write, and sync write. Each of the tests was run with both 4K and 1M blocksizes, and I ran the tests both with a single process and iodepth=1 as well as with eight processes with iodepth=8.
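To make that workload concrete, here is the general shape such fio invocations take. This is a minimal sketch assuming the posixaio ioengine and 60-second time-based runs; the job names, file size, and target path are illustrative assumptions, not our published job definitions.

    cd /path/to/filesystem/under/test   # hypothetical mountpoint

    # 4K random write, single process, iodepth=1
    fio --name=write-4k-1p --ioengine=posixaio --rw=randwrite \
        --bs=4k --size=4g --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --end_fsync=1

    # 1M random read, eight processes, iodepth=8, aggregated reporting
    fio --name=read-1m-8p --ioengine=posixaio --rw=randread \
        --bs=1M --size=4g --numjobs=8 --iodepth=8 \
        --runtime=60 --time_based --group_reporting

    # Sync write: fsync after every write, so each operation must land
    # on stable storage before the next one is issued
    fio --name=syncwrite-4k-1p --ioengine=posixaio --rw=randwrite \
        --bs=4k --size=4g --numjobs=1 --iodepth=1 \
        --fsync=1 --runtime=60 --time_based

The sync write variant is the punishing one: fsync after each write defeats in-memory write caching and exposes how quickly the underlying topology can actually commit data.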
For all tests, we're using ZFS on Linux 0.7.5, as found in the main repositories for Ubuntu 18.04 LTS. It's worth noting that ZFS on Linux 0.7.5 is two years old now; there are features and performance improvements in newer versions of OpenZFS that weren't available in 0.7.5. We tested with 0.7.5 anyway (much to the annoyance of at least one very senior OpenZFS developer) because when we ran the tests, 18.04 was the most current Ubuntu LTS and one of the most current stable distributions in general. In the next article in this series, on ZFS tuning and optimization, we'll update to the brand-new Ubuntu 20.04 LTS and a much newer ZFS on Linux 0.8.3.
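Although our exact provisioning commands aren't shown here, pools like the ones these charts compare are created along these lines. The pool name and device names are hypothetical, and ashift=12 is an assumption appropriate for 4K-sector disks per the earlier explainers; treat this as a sketch rather than our literal setup.

    # Eight disks as four 2-wide mirror vdevs (device names hypothetical)
    zpool create -o ashift=12 tank \
        mirror sda sdb  mirror sdc sdd  mirror sde sdf  mirror sdg sdh

    # ...or the same eight disks as a single RAIDz2 vdev, the rough
    # ZFS analog of conventional RAID6
    zpool destroy tank
    zpool create -o ashift=12 tank \
        raidz2 sda sdb sdc sdd sde sdf sdg sdh

    # recordsize is a per-dataset property; it can be matched to the
    # blocksize a given fio run will use
    zfs set recordsize=4K tank

Destroying and re-creating the pool between topologies means every configuration starts from the same empty, freshly initialized state.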
When we tested mdadm and ext4, we didn't really use the entire disk: we created a 1TiB partition at the head of each disk and used those 1TiB partitions. We also had to invoke arcane arguments (mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0) to keep ext4's preallocation from contaminating our results. Using these relatively small partitions instead of the entire disks was a practical necessity, since ext4 needs to grovel over the entire created filesystem and disperse preallocated metadata blocks throughout. If we had used the full disks, the usable space on the eight-disk RAID6 topology would have been roughly 65TiB, and it would have taken several hours to format, with similar agonizing waits for every topology tested.
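Assembled into one sequence, the mdadm side of the setup looks roughly like the following. The device names, array name, and use of sgdisk for partitioning are illustrative assumptions; only the mkfs.ext4 arguments are quoted verbatim from above.

    # Carve a 1TiB partition at the head of each test disk
    # (hypothetical device names)
    for disk in /dev/sd{b..i}; do
        sgdisk --new=1:0:+1T "$disk"
    done

    # Build an eight-disk RAID6 array from those partitions
    mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]1

    # Format with metadata preallocated up front; otherwise ext4 defers
    # inode-table and journal initialization and keeps writing in the
    # background, polluting the benchmark numbers
    mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0

Paying the full initialization cost at mkfs time, on a 1TiB-per-disk array rather than the full-capacity one, is exactly the compromise described above.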