Channel: File Services and Storage forum

Tiered Storage Spaces with LSI RAID Controller 9260-8i (no JBOD) - Performance Drop


Hello

I have a lab server with an LSI RAID controller 9260-8i, 2x 256 GB SSDs, and 6x 600 GB HDDs. First I configured the LSI RAID controller with a RAID 1 (2x 600 GB HDD) and installed Windows Server 2012 R2 with the Hyper-V role on this RAID 1. This works just fine.

Then I configured the LSI RAID controller with six additional "RAID 0 drive groups", where each drive group contains one single physical drive, and created six virtual drives out of these six drive groups. So far so good: my Windows Server 2012 R2 now sees six new hard drives (4x 600 GB HDD and 2x 256 GB SSD). I then created a storage pool out of these six drives (assigning MediaType SSD/HDD with PowerShell) and, on top of the storage pool, a tiered storage space with mirror layout (2x 256 GB SSDs mirrored and 2x 2x 600 GB HDDs mirrored). This gives me a tiered storage space of about 1.3 TB, on which I created a virtual drive of 1.3 TB capacity. Success! It seems to work fine. Even though I do not have a storage controller that supports JBOD directly, I was able to create a tiered storage space!

Now where's the problem? Fine-tuning the LSI RAID controller settings and the resulting disk performance...
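For context, this is roughly how I built the pool and the tiers in PowerShell (pool and tier friendly names, the 300 GB size cutoff, and the tier sizes here are illustrative, not my exact values):

```powershell
# The six controller-exposed virtual drives show up as poolable disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "TierPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# The RAID controller hides the real media type, so assign it manually:
# disks under 300 GB are the SSDs, the rest are HDDs (cutoff is illustrative)
Get-StoragePool "TierPool" | Get-PhysicalDisk |
    Where-Object Size -lt 300GB | Set-PhysicalDisk -MediaType SSD
Get-StoragePool "TierPool" | Get-PhysicalDisk |
    Where-Object Size -gt 300GB | Set-PhysicalDisk -MediaType HDD

# Define the two tiers and create the mirrored, tiered virtual disk
$ssdTier = New-StorageTier -StoragePoolFriendlyName "TierPool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "TierPool" `
    -FriendlyName "HDDTier" -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName "TierPool" -FriendlyName "TieredSpace" `
    -StorageTiers $ssdTier,$hddTier -StorageTierSizes 230GB,1100GB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB
```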

1) LSI RAID controller, virtual drive properties: what should I choose? Read Policy (Read Ahead or No Read Ahead) / Write Policy (Write Back with BBU or Write Through) / IO Policy (Direct IO or Cached IO) / Disk Cache Policy (Enabled, Disabled, or Unchanged) / Stripe Size (256 KB or ??). Do these settings conflict with the Windows Server Storage Space layout?

2) Windows Server Disk Management (under "Disk XY"): Write cache policy? (activate write cache on this device)

3) Windows Server Device Manager (under "Drives" - Microsoft Storage Space Device): Write cache policy? (activate write cache on this device)

4) Performance results with CrystalDiskMark: the initial results after setting up the storage were quite good (Seq R: 550 MB/s, W: 590 MB/s // 512K R: 490 MB/s, W: 618 MB/s // 4K R: 18 MB/s, W: 37 MB/s // 4K QD32 R: 270 MB/s, W: 37 MB/s). But two months later the values dropped to: Seq R: 290 MB/s, W: 170 MB/s // 512K R: 120 MB/s, W: 239 MB/s // 4K R: 1.5 MB/s, W: 31 MB/s // 4K QD32 R: 9 MB/s, W: 71 MB/s. A huge loss of performance - SSD full?
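My own guess so far: either the SSD tier has filled up, or the scheduled tier optimization isn't running. This is how I've been checking (drive letter D: is just my volume, adjust as needed):

```powershell
# Show the configured tier sizes - is the SSD tier effectively full?
Get-StorageTier | Format-Table FriendlyName, MediaType, Size -AutoSize

# Check pool and virtual disk health while at it
Get-StoragePool | Format-Table FriendlyName, HealthStatus -AutoSize
Get-VirtualDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus -AutoSize

# Manually trigger tiered-storage optimization on the volume
# (normally runs as a scheduled task at 1:00 AM)
Optimize-Volume -DriveLetter D -TierOptimize -Verbose
```

One more thought: since the SSDs sit behind single-disk RAID 0 virtual drives, I suspect TRIM/UNMAP commands never reach the physical SSDs through the 9260-8i, which could also explain gradual SSD slowdown - but I can't confirm that.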

5) Since this is a Hyper-V server I put some VMs on it. The performance within the VMs has also dropped accordingly. Are there any best practices for placing VHDX files on a tiered storage space? I could of course assign one or two VHDX files directly to the SSD tier, but I don't really want that because it would use too much SSD space.
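For what it's worth, pinning a single VHDX to the SSD tier would look roughly like this (the file path and the tier friendly name "SSDTier" are illustrative):

```powershell
# Pin one VHDX file entirely to the SSD tier
Set-FileStorageTier -FilePath "D:\VMs\Critical.vhdx" `
    -DesiredStorageTierFriendlyName "SSDTier"

# The placement takes effect at the next tier optimization run
Optimize-Volume -DriveLetter D -TierOptimize

# Verify the pinning/placement state of files on the volume
Get-FileStorageTier -VolumeDriveLetter D
```

But as I said, I'd rather let the heat map move hot blocks to the SSD tier automatically than pin whole VHDX files.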

Any Experts on this Subject?

Mark







