
SOFS Throughput Issues


A question very similar to mine exists here.

I have a SOFS cluster (3 hosts). I connected each host without NIC teaming at first and tested with NIC teaming later. I'm using a single 10GbE Netgear M7100-24X switch. The CSV is configured as a 2-way mirror through Storage Spaces on a SAS JBOD with 24 disks. Each host is configured the same way with 32 GB of RAM, of which 6 GB is set aside for CSV cache.
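
For reference, I sized the CSV cache with something like this (assuming Windows Server 2012 R2; on plain 2012 the property is named SharedVolumeBlockCacheSizeInMB and the cache also has to be enabled per CSV):

    # Set the cluster-wide CSV block cache to 6 GB (value is in MB)
    (Get-Cluster).BlockCacheSize = 6144

    # Read it back to confirm
    (Get-Cluster).BlockCacheSize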

I ran an ntttcp test (v5.28) with 8 threads. Sending to the SOFS host, I get over 1100 MB/s of throughput; receiving from it, just under 680 MB/s. So the switch looks to be working fine.
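
The runs looked something like this (the receiver side is started first; the IP address and duration are just examples):

    # On the receiving end (the SOFS host when sending to it), 10.0.0.11 being that host's IP
    ntttcp.exe -r -m 8,*,10.0.0.11 -t 60

    # On the sending end, pointing at the same receiver IP
    ntttcp.exe -s -m 8,*,10.0.0.11 -t 60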

Using LAN Speed Test (Lite) with a 200 MB file, connections directly to the file share folders on each server (\\host#\c$\ClusterStorage\Volume1\Shares\folder) average about 700 Mbps write and 2000 Mbps read. Connecting to the cluster role (\\sofs\folder) results in 90 Mbps write and 2000 Mbps read; in that case the test also takes about a minute to start and then repeatedly starts and pauses while running. I know this doesn't mean much because it isn't testing SMB-to-SMB transfers.
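
In case it matters, while a test against \\sofs\folder is running I can check which node and how many SMB connections the client is actually using with something like:

    # Which server and share the client is really talking to, and over what dialect
    Get-SmbConnection | Select-Object ServerName, ShareName, Dialect, NumOpens

    # Whether SMB Multichannel kicked in and which client NICs it considers usable
    Get-SmbMultichannelConnection
    Get-SmbClientNetworkInterface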

Since I can't set up another SMB server to test SMB-to-SMB transfers, I'm jumping straight to Hyper-V. In VMM, I added the SOFS file share to an existing VM cluster. After that, I migrated a VM to one of the hosts in that cluster with high availability checked and saw that it indeed used \\sofs\folder.
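
To double-check that, I looked at something like the following on the Hyper-V host (the VM name here is a placeholder):

    # Confirm the VM's configuration and virtual disks actually live on the SOFS share
    Get-VM -Name "TestVM" | Select-Object Name, Path
    Get-VMHardDiskDrive -VMName "TestVM" | Select-Object VMName, Path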

Running LAN Speed Test (Lite) from that VM against its host, I'm getting under 90 Mbps write and 340 Mbps read. Compared with the earlier results directly against \\sofs\folder, the write speed is similar to the plain file transfer speed, but the read speed is about 6 times lower. Sending from the VM with ntttcp, I get an average of 11 MB/s, which explains the 90 Mbps write; receiving from the host, I get an average of 42 MB/s, which likewise explains the 340 Mbps read. But another VM hosted by the same server, without SOFS storage, gives me 350 MB/s sending and 360 MB/s receiving to and from that host. Although way faster than the SOFS-backed VM, that still seems a little slower than the host-to-host numbers. To be thorough, I then ran the PassMark network test: the maximum for the VM using SOFS is 100 Mbps sending and 330 Mbps receiving, while the VM without SOFS gets 7500 Mbps sending and 6000 Mbps receiving. I don't know why ntttcp differs from PassMark this much. (Maybe ntttcp isn't as well tuned for 10GbE?)

But even disregarding the discrepancies in the results for the VM without SOFS, it is still clear that the VM using SOFS for storage is far slower. To rule NIC teaming in or out as the solution, I set up NIC teaming (switch independent, Dynamic) on all the SOFS hosts and saw little difference in the results. Since I only have the one switch, I don't think teaming helped with load balancing, and I haven't set up link aggregation (MLAG) on the switch either.
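
For reference, the teams were created roughly like this on each SOFS host (the team and adapter names are placeholders):

    # Switch-independent team with the Dynamic load-balancing algorithm
    New-NetLbfoTeam -Name "SOFS-Team" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Verify the team and member state afterwards
    Get-NetLbfoTeam
    Get-NetLbfoTeamMember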

Are these the speeds I should be getting, or are there other optimizations or configurations you can suggest? To be honest, a single VM on SOFS doesn't lag much, if at all, despite the awful throughput I'm currently seeing. What worries me is what happens when I put 50 VMs on it and run SQL Server off the SOFS.
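
For what it's worth, once more VMs land on the share, the kind of thing I'd watch is the SMB client counters on each Hyper-V host, e.g. (counter names as they appear on 2012 R2):

    # Sample SMB client share throughput and latency every 5 seconds for a minute
    Get-Counter -Counter "\SMB Client Shares(*)\Data Bytes/sec","\SMB Client Shares(*)\Avg. sec/Data Request" -SampleInterval 5 -MaxSamples 12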


