Channel: File Services and Storage forum

Storage Spaces & NVMe performance issues - Windows 2019


I have a test environment which originally had the following configuration:

2 x HP DL380 Gen 10 Servers running Windows 2019
4 x 1.4TB SSD - Cache

8 x 1.6TB HDD - Capacity

Using Storage Spaces and VMFleet as my benchmarking tool, building 20 VMs on each host, I was able to establish a solid baseline on a 4K 100% read test (all data in cache).

We wanted to see if the IOPS could be pushed higher, so we upgraded the servers to the following:
2 x 1.4TB NVMe
4 x 1.4TB SSD
8 x 1.6TB HDD

Building the same VMFleet configuration, but specifying the NVMe drives as cache and the SSDs as the performance tier, the same stress test produces only similar IOPS.

I have destroyed and rebuilt the configuration several times but am still seeing the same results, so I am unsure whether I have a configuration issue or something else.

Firmware is as up to date as it can be on the physical servers (still waiting on a lot of 2019 drivers).

Any pointers as to where I should look to improve this are gratefully received.
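If this is a Storage Spaces Direct configuration, it may be worth confirming how the cluster actually bound the new NVMe devices before digging further. A minimal check, assuming a standard S2D deployment (output shapes may vary by build):

```powershell
# Run on one cluster node. If the NVMe drives show Usage = Capacity here,
# the cache tier was never rebuilt after the hardware change.
Get-PhysicalDisk |
    Select-Object FriendlyName, MediaType, Usage, Size |
    Sort-Object MediaType |
    Format-Table -AutoSize

# Cluster-level cache settings (CacheState should be Enabled)
Get-ClusterStorageSpacesDirect |
    Select-Object CacheState, CacheModeSSD, CacheModeHDD
```

Also bear in mind that a 4K 100% read test served entirely from cache is often CPU- or network-bound rather than media-bound, which would explain similar IOPS before and after the NVMe upgrade.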


Storage Spaces Single Parity Column Count


I am seeking help to define the correct PowerShell syntax for creating a single-parity Storage Space utilizing four disk drives of equal capacity (i.e. 4 x 2TB drives) in a single storage pool. In theory this should result in approximately 75% of the pool for data and 25% of the pool for parity information.

Furthermore, it is unclear to me whether the Windows 10 Storage Spaces GUI will create the requisite four columns when presented with four physical disks in a storage pool.

From my research, it seems that PowerShell will create a more space-efficient Storage Space than the Windows 10 GUI will.

Any help or guidance will be appreciated.
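A minimal sketch of the New-VirtualDisk call, assuming the four disks are already in a pool; the pool and friendly names ("Pool1", "ParitySpace") are placeholders:

```powershell
# Single parity across four disks: -NumberOfColumns 4 stripes each row of
# data plus its parity across all four drives, giving roughly 75% usable space.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "ParitySpace" `
    -ResiliencySettingName Parity `
    -PhysicalDiskRedundancy 1 `
    -NumberOfColumns 4 `
    -ProvisioningType Fixed `
    -UseMaximumSize
```

The Windows 10 GUI tends to pick a default column count on its own, which is why PowerShell is generally the safer route when you need exactly four columns.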

Slow to access exe from clustered share


Two-node virtual Server 2016 cluster.

Share created on clustered shared volume E.

When I try to run an executable from a Windows 10 machine or another 2016 server, it takes up to 20 minutes to actually run.

Text files open OK.

Windows 7 PCs are not having this issue.

I've tried different executables and recreated the share, still no luck.

Storage Replica fail over testing


I am trying to replace our current DFS with Storage Replica. I've set up a prod file server and a DR file server, and created an SR partnership using the following command:

New-SRPartnership -SourceComputerName "prodserver" -SourceRGName rg01 -SourceVolumeName "d:" -SourceLogVolumeName "e:" -DestinationComputerName "drserver" -DestinationRGName rg02 -DestinationVolumeName "d:" -DestinationLogVolumeName "e:"

I see that data is replicating fine. 

I am trying to test failover by shutting down prodserver. On drserver, it shows that D:\ is ReFS and is not accessible. I tried running the following command to make drserver primary:

Set-SRPartnership -NewSourceComputerName "drserver" -SourceRGName "rg02" -DestinationComputerName "prodserver" -DestinationRGName "rg01"

I get the following error:

Set-SRPartnership : There is no partnership between source replication group rg02 and destination replication group
rg01. Please check that partnership has been removed from all computers.
At line:1 char:1
+ Set-SRPartnership -NewSourceComputerName "drserver" -SourceRGName "r ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (MSFT_WvrAdminTasks:root/Microsoft/...T_WvrAdminTasks) [Set-SRPartnershi
   p], CimException
    + FullyQualifiedErrorId : MI RESULT 6,Set-SRPartnership

How do I use drserver as my new file server with prodserver offline?

Thanks
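One thing worth checking before the direction switch: confirm the replication group and partnership names exactly as drserver sees them, since the error suggests the partnership lookup failed on that node. A hedged sketch:

```powershell
# On drserver: list the replication groups and the partnership as this node
# knows them, and verify the RG names match what you pass to Set-SRPartnership.
Get-SRGroup | Select-Object Name, ComputerName, Replicas
Get-SRPartnership
```

Also note that in some builds the direction switch requires both ends of the partnership to be reachable; it may be worth testing a planned failover with both servers online first, to confirm the partnership itself is sound before simulating a hard outage.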

Unable to copy and paste any file from one destination to another


I am using Windows 10. When I try to copy and paste my files from one place to another, the pasted file turns out to be an erroneous file that I had copied four days earlier. Kindly suggest the necessary action through which I can get out of this problem.

AMAR MIHIR DASH


How do I eliminate EDB files


How do I eliminate EDB files in Windows 10?

Upgrade to Server 2019 storage spaces problem - not migrated due to partial or ambiguous match


I upgraded a server from 2016 to 2019. After the upgrade, one of my Storage Spaces drives went missing. The drives show in Device Manager but not in Disk Management, and the Storage Pool is gone. At first I thought the problem was that a cheap controller (a 4-port SATA card in a home test server) wasn't supported in Server 2019, so I swapped it for an LSI SATA/SAS card with certified drivers. I am still having the same problem. All four disks show this error in Device Manager:

Device SCSI\Disk&Ven_ATA&Prod_INTEL_SSDSA2MH08\5&10774fde&0&000400 was not migrated due to partial or ambiguous match.

Last Device Instance Id: SCSI\Disk&Ven_Msft&Prod_Virtual_Disk\2&1f4adffe&0&000003
Class Guid: {4d36e967-e325-11ce-bfc1-08002be10318}
Location Path:
Migration Rank: 0xF000FC000000F120
Present: false
Status: 0xC0000719

This doesn't seem to be directly a Storage Spaces issue, but since the disks were configured that way in 2016 I figure it might be related. Anyone have any suggestions?

Windows Server Deduplication: Best way to verify post-Unoptimisation integrity? (ie. inflating optimised chunkstore->normal files)


We have a single disk file server that we have needed to disable/convert from deduplicated state back to basic file structure.

We have run the unoptimization command, which has inflated our folders back again and reduced the chunk store by half, but has not gotten rid of it completely (which is strange). We have also run out of room to expand the disk and need to reclaim that space somehow.

We have run scrubbing and garbage collection, but the chunk store stays at 6TB (a third of the capacity). The next step is to just Shift-Delete the chunk store blobs after removing dedup completely.

How can I be sure those blobs aren't real data? I want to run a file verifier across the folder structure to create hashes, then snapshot the server, delete the chunk store, and run the verification again to confirm that all the data in our folders is actually there, with nothing pointing at the now-deleted blobs. We are working under the assumption that Windows dedup has borked its algorithm and gone AWOL (there are many reports of similar experiences), creating blobs that it then forgets about. This would make sense, because based on pre-dedup growth projections there should never have been this much data on the server.

Will a usual hash tool work on deduped files? Any other quicker ways?
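Deduplicated files are exposed through reparse points that are transparent to normal reads, so an ordinary hash tool does work on them. A sketch of the before/after comparison, with placeholder paths:

```powershell
# Pass 1: hash everything while the chunk store is still present.
Get-ChildItem -Path "D:\Data" -Recurse -File |
    Get-FileHash -Algorithm SHA256 |
    Export-Csv "C:\Temp\hashes-before.csv" -NoTypeInformation

# ...snapshot the server and delete the chunk store, then run pass 2 into
# hashes-after.csv and compare the two lists:
Compare-Object -ReferenceObject (Import-Csv "C:\Temp\hashes-before.csv") `
    -DifferenceObject (Import-Csv "C:\Temp\hashes-after.csv") `
    -Property Path, Hash
```

Any file still backed by a deleted chunk should either fail to read on pass 2 or surface in the comparison output; empty comparison output means the two passes matched.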


Robocopy Version XP010 - Excluding Multiple Directories using /XD


I'm attempting to use Robocopy to routinely copy data between two servers. The file structure being copied contains several folders and their associated subfolders, e.g. DfsrPrivate and Projects\Archived in this example, that I don't want to copy.

 

I've attempted to use the switches /XD DfsrPrivate /XD Projects\Archived

/XF is also used to exclude all .bak files (referenced after the two /XD switches).

 

This results in the log file header below:


-------------------------------------------------------------------------------
   ROBOCOPY     ::     Robust File Copy for Windows     ::     Version XP010
-------------------------------------------------------------------------------

  Started : Wed Dec 05 00:51:28 2007

   Source : \\[Servername]\Data\
     Dest : D:\Data\

    Files : *.*
    
Exc Files : *.bak
    
 Exc Dirs :  DfsrPrivate
     Projects\Archived
     
     
  Options : *.* /S /E /COPYALL /ZB /MAXAGE:1 /R:10 /W:30

------------------------------------------------------------------------------

 

This results in DfsrPrivate being excluded, but Projects\Archived and all the subfolders below it are not.

 

Has anyone had experience with this, and had success?
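One thing to try: Robocopy reportedly matches /XD entries against bare directory names or fully qualified paths, so a relative "Projects\Archived" never matches anything, while "DfsrPrivate" matches by name alone. A sketch using the fully qualified source path (server name kept as the placeholder from the log above):

```powershell
# Exclude DfsrPrivate anywhere by name, and Projects\Archived by its full
# source path; *.bak still excluded via /XF.
robocopy "\\[Servername]\Data" "D:\Data" *.* /S /E /COPYALL /ZB /MAXAGE:1 /R:10 /W:30 `
    /XD DfsrPrivate "\\[Servername]\Data\Projects\Archived" `
    /XF *.bak
```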

Windows Server 2016 and PowerPath

Hello,

It's the first time I have worked with PowerPath, so I'm a bit out of my depth.

Our environment:

VMware vCenter 6.5

Storage: two VNX5400 arrays and an EMC VPLEX Metro.

PowerPath Version 6.3

We have some Windows 2008 and 2012 servers; both versions run in separate clusters and everything works fine.

The installation was done before my time.
Now we need a new Windows 2016 cluster. I have installed the OS and I can see the raw LUNs presented from the VPLEX. My problem is that the Microsoft cluster validation test reports problems with the LUNs.

My question is:

Do I have to install additional software (PowerPath) on the Windows Server for PowerPath to work correctly? This document (https://www.emc.com/collateral/TechnicalDocument/docu89663.pdf) talks about a PowerPath installer for Windows. Or is that only for installations on physical hardware?

Thank you...

Storage Replica - Log service encountered a corrupted metadata file


Hi,

I have a WS2019 stretch cluster lab running Storage Replica Async and I have managed to break the replication, hoping someone can suggest how best to recover from a scenario like this.

It was working fine, and I actually enabled Deduplication on the cluster file server and tested that out. It appeared to be ok, but then I attempted to move the cluster group to another node and at this point Storage Replica failed with this error:

Log service encountered a corrupted metadata file.

I assume that the cluster may not have liked the fact that there were writes going on at the time the async disk replication attempted failover -- whether standard filesystem writes or Deduplication optimisation I'm not sure.

Now that it is in this state, how do I recover? If I attempt to online the disk resource it comes online for a few seconds then repeats the above error. Is there a way to repair or reset the log without starting from scratch? Or do I just need to disable replication and recreate it?

Thanks,
Simon.
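If the log cannot be brought back online, the usual recovery path is to remove and recreate the replication rather than repair the log in place; removing the partnership leaves the data volumes intact, but recreating it triggers a full initial sync. A hedged sketch with placeholder node and group names (check Get-SRGroup / Get-SRPartnership for the real ones first):

```powershell
# Tear down the broken partnership and its replication groups...
Remove-SRPartnership -SourceComputerName "node1" -SourceRGName "rg01" `
    -DestinationComputerName "node2" -DestinationRGName "rg02"
Remove-SRGroup -ComputerName "node1" -Name "rg01"
Remove-SRGroup -ComputerName "node2" -Name "rg02"

# ...then recreate with New-SRPartnership once the disk resources are online.
```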

Event viewer reports bad blocks, is the disk dying?


I have an SSD, about five years old, a 500GB drive. Last week I had to format the PC twice because of registry errors (I was not doing anything in the registry).

I suspect the drive is dying, but I want some help interpreting the Event Viewer data (it reports multiple disk errors every time I turn on the PC). Maybe the problem lies somewhere else.

The device, \Device\Harddisk0\DR0, has a bad block.

0000: 00800003 00000001 00000000 C0040007 
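Alongside the event log, the drive's own reliability counters can help decide whether it is failing. A minimal sketch (not every drive reports every counter):

```powershell
# Wear, read errors and temperature straight from the drive; a rising
# ReadErrorsUncorrected count would back up the "bad block" events.
Get-PhysicalDisk | Get-StorageReliabilityCounter |
    Select-Object DeviceId, Wear, PowerOnHours,
        ReadErrorsCorrected, ReadErrorsUncorrected, Temperature
```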

File server scale-out clustering across multiple sites


Dear Forum,

My company is planning to deploy a multi-site Scale-Out File Server using two nodes: node-1 at the head office and node-2 at the DR site. We want to separate client access so that users at the DR site access node-2 and clients at HQ access node-1.

Please check my draft diagram; I would appreciate any best-practice advice for this deployment.
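For steering clients to their local node, site-aware failover clustering is the usual building block. A hedged sketch of the fault-domain setup, assuming WS2016 or later and placeholder site names:

```powershell
# Define the two sites and place each node in its site; the cluster then
# prefers local nodes for placement and failover.
New-ClusterFaultDomain -Name "HQ" -Type Site
New-ClusterFaultDomain -Name "DR" -Type Site
Set-ClusterFaultDomain -Name "node-1" -Parent "HQ"
Set-ClusterFaultDomain -Name "node-2" -Parent "DR"
```

Bear in mind that a two-node multi-site cluster also needs a witness (a cloud witness works well across sites), and that a Scale-Out File Server requires shared or replicated storage underneath the two nodes.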

Disk Queue Length Average is High


Dear Support,

We have file server on Windows Server 2012 R2. 

Shared folder has been kept on Storage LUN mapped to file server.

We have been facing a performance issue: the average disk queue length is high, at approximately 50. The LUN is 3 TB in size with 500 GB free.

Please let us know how to fix this.

Thanks,

Ritesh


R!t@$#
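A queue length of ~50 on its own doesn't say whether the LUN is slow or merely busy; the latency counters alongside it do. A sketch for sampling both (counter names assume an English-language OS):

```powershell
# One minute of samples: queue length plus per-IO latency. Sustained
# sec/Read or sec/Write above roughly 20-30 ms points at the storage itself.
Get-Counter -Counter @(
    '\PhysicalDisk(*)\Avg. Disk Queue Length',
    '\PhysicalDisk(*)\Avg. Disk sec/Read',
    '\PhysicalDisk(*)\Avg. Disk sec/Write'
) -SampleInterval 5 -MaxSamples 12
```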

Windows 10 Storage Space, installed Server 2016, can't access

My desktop was running Windows 10 Home, and I had two 3TB HDDs set up in Storage Spaces. I switched jobs, got a Server 2016 Standard license, and installed it on my desktop. Now I can't see or find the drives anywhere other than in the BIOS. I don't see the disks or the volume; any thoughts on how to get them recognized and connected? If not, and I reinstall Windows 10, will the volume show up again?
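Before reinstalling, it may be worth checking whether Server 2016 actually sees the pool but simply hasn't attached the virtual disk. A hedged sketch:

```powershell
# Does the OS see the pool and virtual disk at all?
Get-StoragePool -IsPrimordial $false
Get-VirtualDisk

# If the pool shows read-only and the virtual disk is detached, try:
Get-StoragePool -IsPrimordial $false | Set-StoragePool -IsReadOnly $false
Get-VirtualDisk | Connect-VirtualDisk
```

One caveat: a pool created or upgraded on a newer Windows 10 build can carry a pool version that Server 2016 cannot read, in which case only an OS that understands that version will mount it.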

Error 0x80070057 - The Parameter is Incorrect (FSRM)


Whenever I try to schedule a new report task in File Server Resource Manager, I get the following error:

"0x80070057, The parameter is incorrect."

Details:
Invalid parameter

When I press "OK" on this error, it moves on to the next error:
An unexpected error has occurred. Please check the application event log for more information.

Pressing "OK" on this error closes the Storage Reports Task Properties without saving.

After that I checked the Application logs in Event Viewer and it comes up with the following:

An unexpected error occurred in the File Server Resource Manager MMC snap-in

Invalid parameter 
   at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
   at System.Management.ManagementObject.Put(PutOptions options)
   at Microsoft.Storage.SrmMmc.StorageReportWMI.Commit(Boolean refresh)
   at Microsoft.Storage.SrmMmc.ReportTask.CommitWMI()
   at Microsoft.Storage.SrmMmc.ReportTask.Commit()
   at Microsoft.Storage.SrmMmc.StorageReportTaskPropertySheet.CommitChanges()

The reports I try to generate are Duplicate Files, Files by File Group and Large Files.

Files by File Group has "Audio and Video Files" and "Image Files" set. Duplicate Files has no parameters set and Large Files has the minimum file size set to 50MB

Under Scope I have selected all of the options and added the folder that has to be included in the scope. In our case it's E:\Data\Plants

Under Delivery I have added my e-mail address to receive the reports, and the schedule is set to 5 AM, Monday through Friday.

Googling this turns up no useful answers. Hopefully someone can help me find a solution.

S2D added SSDs as capacity disks; how do I force them to Journal (cache)?


We added two new SSDs to each node of a hyper-converged S2D cluster. Two servers see the two new SSDs as Journal along with the original two SSDs; the other two nodes added the new SSDs as Capacity.

I have updated the HP firmware and applied the latest Windows updates (February 2019).

I tried retiring the SSDs marked as Capacity, rebooting the server, and unretiring them, but they still show as Capacity.

Any help much appreciated.
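When retire/unretire alone doesn't flip the usage, one approach sometimes suggested is to set the usage explicitly, or to pull the mis-bound SSDs from the pool so S2D can re-claim them as cache devices. A hedged sketch with placeholder serial numbers and pool name:

```powershell
# Identify the mis-bound SSDs (placeholder serials)...
$disks = Get-PhysicalDisk | Where-Object SerialNumber -In @('SN1','SN2')

# ...try setting the usage directly first...
$disks | Set-PhysicalDisk -Usage Journal

# ...or remove them from the pool so S2D re-adds them automatically
# ("S2D Pool" is a placeholder -- check Get-StoragePool for the real name):
Remove-PhysicalDisk -PhysicalDisks $disks -StoragePoolFriendlyName "S2D Pool"
```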

SharePoint Permissions

Dear All,

I have a folder on which many users have permissions, and it has subfolders as well. Instead of deleting a particular user, I mistakenly pressed "Delete unique permissions". How do I roll this back to normal? Permissions are now being inherited from the site.

DFSR Private folder huge and doesn't match DFS management console.

We have inherited this DFS situation, and neither my coworker nor I have ever used DFS other than for AD. I am seeing all of my user folders in the DFSR\Private folder. We had a comms failure at one of the locations, and then we saw this. I don't see these folders or their location listed in the DFS Management console, but I do see them in DFSR\Private with TreeSize. Is it safe to delete these folders? I have a good backup of the data. We are also planning to remove DFS for the file servers at our two locations.

Migrate from work folders to OneDrive


Hi all,

We are migrating users to Office 365 OneDrive and need a way to automatically stop Work Folders and delete the data on the client.

Is there a way to do this without user interaction?

Rahamim.


