SSD Parity vs HDD Parity vs SSD Array in Unraid

We are once again performing a simple test to observe some potential performance differences between different array configurations. Please note that we are not trying to achieve the fastest possible network performance; we already know how to do that. If speed is your goal, you need NVMe cache drives of 1TB or greater, especially if you plan to hit 1GB/s.

I highly recommend checking out these two past posts, which cover similar ground. I will be folding some of the information from those previous posts into this one.

Not Universally Applicable

First and foremost, these tests only give us a very rough idea of what to expect. What you see here will not directly translate to all flash storage devices, because not all storage devices share the same architecture. The variance in architecture alone can account for large speed increases or drastically slower performance. Keep that in mind as you read on.

Hardware

We are now ready to start setting the stage. I always like to start with hardware so we can get an idea of what all is being used to make these tests work. For this test I landed on the somewhat lackluster 2TB Samsung 860 Evo TLC MZ-76E2T0B/AM SSD. Here is the list of drives used for the current tests.

Our servers are both rocking 10GbE network adapters and we have a UniFi 10Gb switch in the middle to reduce any network bottlenecks. I have set the MTU to 9000, so we really shouldn’t have to worry about the network at this point. Here is a very basic diagram to help build a mental image for you.

Data Transmitter (sender)

Data Receiver (client)

topology.png

Here is a physical representation of the servers. The top one is our Receiver and the bottom server is our Transmitter.

Untitled 2.001.jpeg

Lastly, the sender is transferring the 40GB video from a share that lives on the cache NVMe drive, so the read side of the transfer should never be the bottleneck.
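Since we are leaning on jumbo frames, it is worth a quick sanity check that MTU 9000 actually holds end to end. Here is a minimal check from the sender; the interface name eth0 is an assumption, and 172.16.1.10 is the receiver address used in the script later on.

# Confirm the interface itself is running at MTU 9000 (interface name is an assumption).
ip link show eth0 | grep -o 'mtu [0-9]*'

# Send a non-fragmentable ping sized for a 9000 byte MTU:
# 9000 bytes minus 28 bytes of IP/ICMP headers leaves an 8972 byte payload.
ping -M do -s 8972 -c 4 172.16.1.10

If the pings come back without fragmentation errors, the whole path, both NICs and the UniFi switch, is honoring the 9000 MTU.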


The Array

Alright, our stage is getting pretty well set. Let’s list off a couple more things before we jump right to the results. None of the following test cases uses a cache drive, because our goal isn’t the absolute fastest possible transfer speed; it is to observe whether there is anything to be gained by having a non-standard configuration. A cache would also artificially inflate our results, which is another reason to leave cache drives out.

Test 1

  • No cache drives

  • 3 hard drives as data drives

  • 1 hard drive as parity

Test 2

  • No cache drives

  • 3 hard drives as data drives

  • Samsung SSD as Parity

Test 3

  • No cache drives

  • All SSD Array

Finally, here is a picture of all the drives used, minus the Toshiba drive carried over from previous tests.

Screen Shot 2020-11-27 at 3.23.17 PM.png

Test Setup

The test setup is remarkably similar to the previous testing we did in “All NVMe SSD Array in Unraid”; I will literally be borrowing the same script to do all of our write testing. The only difference here is that we will not be trying to fill the array to the brim; instead, we will perform 41 transfers of the same video file in order to capture the total time and speed.

Execution Steps

The steps to execute this test are simple.

  1. Run the script for each of the tests listed above

  2. After each test is done, delete all of the transferred files from the array on PNAS

  3. Rename or edit the script as necessary so it does not write over the CSV file that gets created (a rough sketch of steps 2 and 3 follows below)
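On my setup, steps 2 and 3 boil down to a couple of commands. The share path and the renamed CSV file here are just illustrative; adjust them to match your own shares and test names.

# Step 2, on the receiver (PNAS): clear the copied test files out of the share.
rm -f /mnt/user/downloads/*-5700XT.mov

# Step 3, on the sender: set this run's CSV aside so the next test starts fresh
# (the script below appends to /tmp/resultsNVME.csv).
mv /tmp/resultsNVME.csv /tmp/results-test1.csv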

The Script

This is the script I am using to execute the test. It simply copies our 45GB video file 41 times with a 30-second break between each secure copy. Each time a transfer takes place, the results get written to a CSV file that we can later analyze. You should know that secure copy is not a great benchmarking tool, but for our purposes it works fine.

#!/bin/bash
# Copy the same video file 41 times over scp with a 30 second pause between runs.
# `script` provides a pseudo-terminal so scp prints its transfer statistics,
# which get appended to a CSV file for later analysis.
i=0
while [ "$i" -lt 41 ]
do
  script -q -c "echo run-$i;scp -r /mnt/user/Videos/Completed/2020/5700XT.mov 172.16.1.10:/mnt/user/downloads/$i-5700XT.mov" >> /tmp/resultsNVME.csv
  i=$((i+1))
  sleep 30
done

Sample output

Screen Shot 2020-11-27 at 2.32.37 PM.png
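You do not need a spreadsheet to get a quick read on the numbers. Because the log is full of scp’s carriage-return progress updates, a rough approach is to split on carriage returns, keep only the completed (100%) lines, and average the MB/s figures. This is just a sketch that assumes scp’s usual summary format; tweak the patterns if your output looks different.

# Rough average of the scp transfer speeds captured in the log.
# Assumes each finished run leaves a line containing " 100% " and an "MB/s" figure.
tr '\r' '\n' < /tmp/resultsNVME.csv \
  | grep ' 100% ' \
  | grep -o '[0-9.]\+MB/s' \
  | sed 's|MB/s||' \
  | awk '{ total += $1; n++ } END { if (n) printf "%.1f MB/s average over %d runs\n", total / n, n }'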


Test Results

Here you go; you have been teased long enough. Here are the results!

First up is the Average Transfer Speed in MBps.

Screen Shot 2020-11-27 at 4.51.50 PM.png
Screen Shot 2020-11-27 at 4.53.40 PM.png

Next up is the all-important time factor!

Screen Shot 2020-11-27 at 8.03.04 PM.png
Screen Shot 2020-11-27 at 8.04.56 PM.png

What the heck happened?

Why does the Samsung SSD parity configuration perform so slowly? Well, it turns out the 860 Evo doesn’t have the best IOPS in the world, and parity duty is exactly where poor IOPS hurts: every write to the array also has to read and update the parity drive. The Toshiba and LITEON NVMe drives both have random read and write IOPS well above the 300K/200K mark, while the Samsung 860 Evo sits at or below 90K.
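If you want to sanity check a drive’s random-write behavior before trusting it with parity duty, a quick fio run against a scratch file will give you a rough IOPS figure. This is only a sketch, not how the numbers above were produced; the file path and sizes are placeholders, so point it at a disposable file on the drive you care about.

# Rough 4K random-write IOPS check against a scratch file (non-destructive,
# since it targets a file rather than the raw device). Path and size are placeholders.
fio --name=parity-candidate \
    --filename=/mnt/disks/testdrive/fio-scratch \
    --size=4G --bs=4k --rw=randwrite --ioengine=libaio \
    --iodepth=32 --direct=1 --runtime=60 --time_based --group_reporting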

The all-Samsung SSD configuration is very promising. I have read in many different places on the internet that an all-SSD array is the way to go, giving the best overall read and write performance. However, you should know that this is currently not recommended by Unraid, though that is subject to change with future releases.

Now again, if you are looking for ultimate speed, you should instead be using NVMe drives or SSDs as cache, preferably 1TB at a minimum because larger drives typically have more on-board cache. You can also set up a RAID 0 or RAID 10 cache pool in Unraid if you are dead set on maximizing every drop of performance.
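For what it is worth, a multi-device cache pool in Unraid is btrfs underneath, and the RAID profile change is normally driven from the GUI’s balance options. Roughly speaking, the conversion amounts to a btrfs balance like the one below; treat this as a sketch of what happens under the hood rather than a command you need to run by hand.

# Convert an existing btrfs cache pool's data profile to RAID0 while keeping
# metadata at RAID1. Unraid's GUI balance performs the equivalent of this.
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache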

Conclusion

You probably still have a lot of questions, and honestly, I’ve probably already answered a bunch of them in previous posts. You should definitely at least start there (hint: scroll to the bottom for the links).

However, even with that being said, there is one question that I must answer.

Does the 2TB Samsung 860 Evo MZ-76E2T0B/AM support TRIM?

Yes, yes they do. You are looking for sdb and sdc; you can verify this by looking at the Unraid drive list given above, or just take my word for it. Either way is fine.

Screen Shot 2020-11-27 at 2.49.58 PM.png
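If you would rather check your own drives than squint at a screenshot, lsblk can report discard support directly. Nonzero DISC-GRAN and DISC-MAX values mean the device advertises TRIM.

# Nonzero DISC-GRAN / DISC-MAX values indicate the device supports discard (TRIM).
lsblk --discard /dev/sdb /dev/sdc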