I must admit it was nice to see a 10TB volume in Disk Management, given this is just a standard desktop case and not some rack-mounted, SAN-attached monster. It's an obscene amount of disk for a desktop really. It made me feel all giddy with excitement.
So I ran this bad boy through its paces and was a little disappointed to see little to no improvement in disk performance at all.
So it looks like I'm hitting the maximum throughput of the RAID card... or the PCIe bus... at a guess, anyway.
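To put some rough numbers on that guess, here's a quick back-of-the-envelope sketch (Python, just for the arithmetic) of what different PCIe slots could theoretically carry. I don't actually know which slot or PCIe generation the card is sitting in, so the combinations below are illustrative assumptions rather than measurements.

```python
# Rough check: could the PCIe slot itself be the ceiling?
# Slot width and PCIe generation of the card are assumptions here.

PCIE_LANE_MB_S = {          # approximate theoretical bandwidth per lane, MB/s
    "PCIe 1.x": 250,
    "PCIe 2.0": 500,
}

TARGET_MB_S = 1000          # the 1GB/s goal this whole exercise is chasing

for gen, per_lane in PCIE_LANE_MB_S.items():
    for lanes in (1, 4, 8):
        ceiling = per_lane * lanes
        verdict = "too narrow for 1GB/s" if ceiling < TARGET_MB_S else "wide enough for 1GB/s"
        print(f"{gen} x{lanes}: ~{ceiling} MB/s theoretical -> {verdict}")
```

If the card happens to be in a narrow or older slot, the bus alone could explain a ceiling in the 250 to 500MB/s range; a wider Gen2 slot should leave plenty of headroom, which would point the finger back at the card itself.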
The next step was to have a look at single-drive performance with that controller, so 6 single-disk arrays were created.
The OS was installed to the first drive, leaving the other 5 for software RAID testing. Performance on a single drive looks like this.
107MB/s is pretty damn quick! But why does a three-drive stripe give the same 450MB/s that we got out of a six-drive stripe? I think we're really looking at the performance of the write cache on the RAID controller rather than raw drive speed.
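To sanity check the write-cache theory, here's a rough sketch comparing naive linear RAID0 scaling against those figures. The 107MB/s and 450MB/s numbers come straight from the tests above; the rest is just arithmetic.

```python
# Does 450 MB/s line up with the drives scaling linearly, or is the
# controller's write cache doing the talking? Figures are from the tests.

SINGLE_DRIVE_MB_S = 107       # measured single-drive throughput
OBSERVED = {3: 450, 6: 450}   # stripe size -> measured MB/s

for drives, measured in OBSERVED.items():
    expected = SINGLE_DRIVE_MB_S * drives   # naive linear RAID0 scaling
    if measured > expected:
        note = "faster than the disks themselves -> smells like controller cache"
    elif measured < expected * 0.8:
        note = "well below linear scaling -> a shared bottleneck somewhere"
    else:
        note = "roughly linear scaling"
    print(f"{drives}-drive stripe: expected ~{expected} MB/s, measured {measured} MB/s ({note})")
```

The three-drive stripe comes out faster than three drives should manage on their own, and the six-drive stripe comes out slower than six should, which is about what you'd expect if a cache (or some shared ceiling) is what's actually being measured.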
So here's a quick breakdown of the Software RAID0 testing I ran through.
These tests were with Directory Opus copying a 12GB file from the software RAID to the system drive. Peak speed seems to scale with the number of drives in the software stripe; however, average speeds were consistent across all configurations. The peak was always achieved within the first 5% of the copy, and then throughput slowed down for the rest of the operation, probably keeping pace with the single-drive target of the copy. So while all this is mildly interesting, it's safe to say the whole 1GB/s throughput exercise resulted in an epic fail. Don't worry though, I'll go away and think about it a bit and see if I can't open up that bottleneck and get closer to that important milestone in IO performance.
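As a rough illustration of that last point, here's a simple model of the copy: an initial burst that gets absorbed by cache, followed by a long tail limited by the single destination drive. The cache size is a made-up placeholder; the other numbers come from the tests above.

```python
# Rough model of why the copy peaks early then settles: once any cache
# fills, the copy can only move as fast as the slowest end of the pipe.
# File size and drive speeds are from the tests; CACHE_GB is a placeholder.

FILE_GB = 12
SINGLE_DRIVE_MB_S = 107      # the single system drive receiving the copy
STRIPE_READ_MB_S = 450       # roughly what the source stripe delivers
CACHE_GB = 0.5               # hypothetical cache absorbing the initial burst

burst_gb = min(CACHE_GB, FILE_GB)
steady_gb = FILE_GB - burst_gb

# Burst phase runs at the stripe's speed; the steady phase is capped by
# the single destination drive.
burst_s = burst_gb * 1024 / STRIPE_READ_MB_S
steady_s = steady_gb * 1024 / SINGLE_DRIVE_MB_S
total_s = burst_s + steady_s

print(f"Average over the whole copy: ~{FILE_GB * 1024 / total_s:.0f} MB/s")
print(f"Share of the copy spent at peak speed: ~{burst_s / total_s:.0%}")
```

Under those assumptions the average lands right around single-drive speed no matter how many disks are in the source stripe, which matches the consistent averages seen in the tests.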