File Services and Storage forum

My documents/Favorites/Desktop backup


We are looking to deploy Work Folders to a pilot group of users. As part of this, we are wondering whether there is a way to have the My Documents, Favorites and Desktop folders show up in the Work Folder. If a user adds or deletes a file on the desktop, will it automatically be added to or deleted from the Work Folder?

We are looking to potentially use this as a backup solution, so users' files back up to the server automatically instead of requiring a third-party program.


Running Get-SRGroup from 2016 to remotely monitor a 2019 replica server seems bugged.

I have two 2019 servers with groups/partnerships replicating successfully between them.

I want to monitor their status from a 2016 server, so I installed the Storage Replica module through Server Manager features. For some reason, 'Get-SRGroup -ComputerName replicaservername' does not return anything when run remotely from 2016 targeting 2019. Get-SRPartnership works correctly when run remotely against the same source/destination. Also, the same command works from 2019 to 2019. It's just the 2016 -> 2019 direction, and only the Get-SRGroup command.

I'm guessing this is a bug, but I can't find anything about it. Has anyone else experienced this or have info on it?
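A possible workaround I intend to try is running the cmdlet on the 2019 node itself via PowerShell remoting, rather than relying on -ComputerName from the 2016 module; a sketch only (replicaservername stands in for the actual replica server):

# Run Get-SRGroup locally on the 2019 node instead of querying it remotely
Invoke-Command -ComputerName replicaservername -ScriptBlock { Get-SRGroup }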

Windows 10 Sync Center - Conflict office files creates a [filename]~xxxxxxx.tmp


Hi,

I've been told to post here. Below are links to the threads I posted:

https://social.technet.microsoft.com/Forums/windows/en-US/662be346-4d19-4e1a-9d34-ae7cac075345/windows-10-sync-center-conflict-files-creates-a-filenamexxxxxxxtmp?forum=win10itpronetworking

https://social.technet.microsoft.com/Forums/office/en-US/ee466c3b-8d0f-4418-900d-41dc30d68ce2/windows-10-sync-center-conflict-office-files-creates-a-filenamexxxxxxxtmp?forum=officeitproprevious

I am in the midst of migrating all workstations from Windows 7 Pro to Windows 10 Pro. Our domain controller is also a file server running Windows Server 2012 R2. All staff Desktop and My Documents folders are redirected to the file server at \\servername\home$\%username%. Roaming profiles and Offline Files are enabled for all users. So far we have had no issues with this setup for Windows 7 users. However, when tested on a Windows 10 Pro machine, a weird behavior occurs.

Whenever a user edits and saves an Office file offline (at home) and connects back to the network (the following day), they always get a file sync conflict, and a .tmp file is created in that folder. See picture below.

When I check the file server, the original file has been renamed with a ~RFxxxxxxx.TMP suffix (or maybe the original file was deleted and the .tmp file created in its place).

Because of this behavior, I am given 2 options to resolve the conflict: 1) keep this version and copy it to the other location, or 2) delete the version in both locations. See below:

Of course I choose option 1, keeping this version and copying it to the other location. Doing so copies the latest version of the file to the file server, BUT the .tmp file is still there, so I have to delete it manually.

Imagine doing this every day for notebook users. It is quite annoying.
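For now I clean the leftovers up by hand; here is a cleanup sketch only, assuming the artifacts always match the ~RF*.TMP pattern described above and the home shares live under \\servername\home$ (run with -WhatIf first, and drop it once the matches look right):

# Find and remove leftover offline-file conflict artifacts
Get-ChildItem -Path '\\servername\home$' -Recurse -Force -Filter '~RF*.TMP' |
    Remove-Item -Force -WhatIf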

Any help will be appreciated. 

Server 2019 Storage Spaces - a lot of confusion


Hi there!

I'm trying to use Storage Spaces, partly to learn how it works and partly for actual use. But I ran into some problems; maybe it's my lack of understanding.

So I have seven 2TB drives in a pool. I created a virtual disk with "simple" and "thin". It took some time to understand that even "simple" ALWAYS means striping (or is there a way around this? Setting columns to 1, perhaps? See the sketch below), which is no good, because ONE drive failure means all data is gone. The data I'm planning to store there isn't that important, but losing all of it because of one drive when I don't even need "striped" performance? Nope.
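This is the single-column variant I mean to try; syntax per New-VirtualDisk, but it is untested whether this really avoids the one-disk-kills-everything failure mode:

# Simple space constrained to one column, i.e. no striping across disks
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName NoStripe `
    -ProvisioningType Thin -ResiliencySettingName Simple -NumberOfColumns 1 -Size 2TB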

So I tried to set up a "parity" disk with the GUI in Server Manager, again with all seven physical disks. But the GUI only allows me to select "auto" or "3" for columns (where "auto" sets it to 3 anyway), which means a lot of wasted space, as I found out. Why is that? Why can't I select 7?

I also tried to set up a "parity" disk with dual redundancy in the GUI for my most important data. It said I need seven physical disks. I have seven physical disks. What I don't have is that dual-redundancy disk, as I always get an error along the lines of "not enough resources for this". Why? Seven disks are present and selected.

Alright, trying to get the single-parity disk to work with PowerShell. PowerShell doesn't even let me create a parity disk with three columns, while at the same time this works via the GUI. I don't get it. This is the line I'm using:

> New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName Data1Red -ProvisioningType Thin -NumberOfColumns 7 -ResiliencySettingName Parity -Size 5TB -PhysicalDiskRedundancy 1 

and this is the response I get

> New-VirtualDisk : Not Supported
>
> Extended information:
> The storage pool does not have sufficient eligible resources for the creation of the specified virtual disk.
>
> Recommended Actions:
> - Choose a combination of FaultDomainAwareness and NumberOfDataCopies (or PhysicalDiskRedundancy) supported by the storage pool.
> - Choose a value for NumberOfColumns that is less than or equal to the number of physical disks in the storage fault domain selected for the virtual disk.
>
> Activity ID: {566fc3b1-d19c-4b15-8bda-2a7621192f52}
> At line:1 char:1
> + New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName Daten1Re ...
> + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>     + CategoryInfo          : InvalidOperation: (StorageWMI:ROOT/Microsoft/...SFT_StoragePool) [New-VirtualDisk], CimException
>     + FullyQualifiedErrorId : StorageWMI 1,New-VirtualDisk

I tried all options for FaultDomainAwareness as suggested by the error, but none worked. Am I supposed to tell New-VirtualDisk which physical drives from the pool to use via -PhysicalDisksToUse? If so, how do I do that? I found nothing about what name I have to type for the physical drives.
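If -PhysicalDisksToUse is the answer, this is my best guess at how it would look; a sketch only, passing the pooled disks as objects so no names need typing (whether this fixes the error is exactly my question):

# Collect the disks already in the pool and hand them to New-VirtualDisk as objects
$disks = Get-StoragePool -FriendlyName Pool1 | Get-PhysicalDisk
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName Data1Red `
    -ProvisioningType Thin -ResiliencySettingName Parity `
    -NumberOfColumns $disks.Count -PhysicalDiskRedundancy 1 `
    -Size 5TB -PhysicalDisksToUse $disks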

I've spent hours and hours trying to get this to work; please help.

Also, I think I discovered a bug. If I leave the property window of a virtual disk open with all three options (from the left part of the window) on display, the physical disks in the list start to multiply. Instead of 7 disks, I see 14, the same 7 repeated below the first; after another while, 21, and so on.

DFS installation and connectivity issues


Good day Technet Team,

I am currently setting up DFS with replication on Windows Server 2012 R2. The share was established between two file servers; however, when specifying the namespace server, the following error is prompted: "Cannot connect to the lab.*****.com domain". The two file servers exist on the same domain, and the DFS feature was installed on both application servers. Can anyone assist?
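If it helps the diagnosis, these are the connectivity checks I can run from the namespace servers; a sketch only (DfsDiag ships with the DFS Management tools, nltest with the AD admin tools):

# Verify DFS can see the domain controllers, and that a DC can be located at all
dfsdiag /testdcs /domain:lab.*****.com
nltest /dsgetdc:lab.*****.com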

Server 2016 DFS migration


I am migrating DFS shares on old file servers to Server 2016. I can add the members based on existing memberships using PowerShell, but when I try to set the membership to read-only using PowerShell, I get the following error:

Set-DfsrMembership : The read only property is not supported. The read only property is not supported.
At line:1 char:1
+ Set-DfsrMembership -GroupName $ServerDFSGroup -ComputerName $NewServe ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (DFSR membership...None specified.:String) [Set-DfsrMembership], DfsrException
    + FullyQualifiedErrorId : DfsrCore.ThrowIfInconsistent,Microsoft.DistributedFileSystemReplication.Commands.SetDfsrMembershipCommand

As a workaround, I have been manually setting them to read-only using the DFS Management console. Can anyone explain why I would be getting this error and how to work around it? I have tried setting read-only while setting the other DFS properties, and I've also tried setting it independently in a separate command. I am using the same variables to set the other DFS properties without issue.
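For anyone reproducing this, the state I inspect before and after the failing call is simply (same variables as in the failing command above):

# Show the current membership state, including the ReadOnly flag
Get-DfsrMembership -GroupName $ServerDFSGroup -ComputerName $NewServer |
    Select-Object GroupName, FolderName, ComputerName, ReadOnly, ContentPath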

Issues with Multi-resilient volumes (Mirror-accelerated parity) in a regular (not S2D) Storage Spaces configuration.


Hello,

I'm trying to configure Multi-Resilient Volume (Mirror-Accelerated Parity) using "regular" Clustered Storage Spaces (not S2D) on Windows Server 2019. It is worth noting that I've tried the same scenario on Windows Server 2016 and got the same outcome. Also, it doesn't matter whether I use clustered or standalone Storage Spaces, the outcome is still the same.

My problem is that I'm unable to achieve storage efficiency higher than 50% for Multi-Resilient Volume, no matter what disk proportion I use in Performance and Capacity tiers.

Lab configuration:

2x Hyper-V VMs (OS: Windows Server 2019 DC)

10x VHD Sets (formerly Shared VHDX) placed on a Cluster Shared Volume and attached to both VMs

I wanted to create a two-way mirror (2 disks) + single parity MRV (8 disks). Each disk is ~ 430GB. 

Configuration steps:

#Creating new Pool
Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName MainPool -StorageSubsystemFriendlyName "Clustered Windows Storage*" -PhysicalDisks (Get-PhysicalDisk -CanPool $True)

#Setting up media types for tiering
Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk -CanPool $true |Set-PhysicalDisk -MediaType HDD
Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk -CanPool $true | select -skip 8 |Set-PhysicalDisk -MediaType SSD

#Configuring Capacity tier
New-StorageTier -MediaType HDD -StoragePoolFriendlyName MainPool -FriendlyName CapacityTier -ResiliencySettingName Parity -FaultDomainAwareness PhysicalDisk -PhysicalDiskRedundancy 1 -NumberOfColumns 8 -Interleave 65536 
$CapacityTier = Get-StorageTier -FriendlyName CapacityTier


#Configuring Performance tier
New-StorageTier -MediaType SSD -StoragePoolFriendlyName MainPool -FriendlyName PerformanceTier -ResiliencySettingName Mirror -FaultDomainAwareness PhysicalDisk -PhysicalDiskRedundancy 1 -NumberOfDataCopies 2 -NumberOfColumns 1 -Interleave 65536
$PerformanceTier = Get-StorageTier -FriendlyName PerformanceTier

#Creating Multi-Resilient Volume
New-Volume -StoragePoolFriendlyName MainPool -FriendlyName MRVolume -FileSystem ReFS -AccessPath "X:" -ProvisioningType Fixed -AllocationUnitSize 4KB -StorageTierFriendlyNames PerformanceTier, CapacityTier -StorageTierSizes 400GB, 600GB

Result:

Get-StorageTier reports that both the Performance and Capacity tiers have their resiliency type configured as "Mirror", and both have 50% storage efficiency. I obviously expect the Capacity (Parity) tier to have more than 50% efficiency.
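For reference, this is the check that shows the unexpected result (tier names as created above):

# Inspect the resiliency actually applied to each tier
Get-StorageTier | Select-Object FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy, NumberOfColumns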

It would be great to hear the official response regarding this matter.

Is MRV in a non-S2D configuration allowed for production?

Why does the capacity tier with resiliency type = "parity" report itself as "mirror"?

Is there a "proper" way to create the mirror+parity MRV with efficiency of more than 50%?



Failover cluster / Storage Replica - disks fail when copying large data portions


I'm running a failover cluster with 2 servers. Both servers have 2 disks: one 800GB disk for data and one 20GB disk for logs. Of course these are GPT disks, and I'm able to build a cluster and create a file server role.

I've used Storage Replica to replicate the 800GB disks. This works and I can host the file server role on both nodes in the cluster. I've created a share on the fileserver role and added some data to this share. No problems so far.

Now I want to copy more data to the file server role, but when doing so, the replication fails after some GBs of data have been copied.

The event log of the cluster gives the following error:

Cluster resource '29382f4b-948c-4768-acc6-10fea296553a' of type 'Storage Replica' in clustered role 'SR Group 1' failed.
Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.

I can bring the disks online again and start the Storage Replica role, but I don't want it to fail in the first place.

In case someone wonders: I did run the "Validate Cluster..." wizard and it passed.
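If more detail helps, this is how I collect recent Storage Replica events on each node; a sketch, assuming the standard StorageReplica admin channel name:

# Pull the latest Storage Replica admin-log entries around the failure time
Get-WinEvent -LogName Microsoft-Windows-StorageReplica/Admin -MaxEvents 20 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize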


Windows Server 2016 - DFS-R and DFS-N Roaming Profiles


Hi all, I've been reading the following article: https://support.microsoft.com/en-gb/help/2533009/information-about-microsoft-support-policy-for-a-dfs-r-and-dfs-n-deplo . It outlines a scenario in which MS doesn't support the use of DFS-R and DFS-N for roaming profiles.

Firstly, is this still valid for Server 2016, given that the article is now five years old? If it is still valid, then I need to investigate other options for the following scenario:

1) VDI users in two active site locations.

2) All VDI users have Roaming Profiles and also use Document Redirection.

3) Users profiles are as small as possible, due to policies in place to restrict their ability to do pretty much anything.

4) The Profiles share is separate from the Documents share, and the same goes for namespaces (i.e. there will be two separate shares/namespaces).

There will be around 200 users active in one site and 100 active in the other. The WAN link between sites is 1Gbps, with latency around 1ms. We would like to be able to publish the Roaming Profile namespace and also the Documents namespace, and use these in the updated GPOs for all users.

We need to provide Active/Active site resilience to allow either 'set' of users to fail over to the other site in a DR situation, with their desktops/settings/documents fully available. The VDI environment is using VMware.

Thanks for any help.


Phil

Managing Open files


Hi,

I am just wondering what the codes/numbers in the 'Lock' column mean when using 'Manage Open Files' on Windows Server. When someone has a file open, the column can show 0, 1, 2 or 3, and I have looked elsewhere and can't seem to find an answer.
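For comparison, the same information is visible from PowerShell, where the counter appears as the Locks property:

# List open files with their lock counts
Get-SmbOpenFile | Select-Object ClientUserName, Path, Locks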



Can VSS snapshots recover from a ransomware attack?


Hi everyone,

Well, I want to know whether enabling VSS snapshots on the disk drives can handle a ransomware attack. I mean, if I have multiple healthy VSS snapshots and my data is infected by ransomware, could I recover from one of the healthy snapshots?
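For context, this is how I check which restore points currently exist (built-in command, nothing vendor-specific):

# Enumerate existing shadow copies on the server
vssadmin list shadows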

Thank you.

Storage Replica - Data and Log disk Sizing


Planning to set up new servers and use Storage Replica for DR.

Servers are Hyper-V VMs so I can size my data disks as needed.

I have 120TB of data.

Q1. I've read on unofficial sites that data disks should be no larger than 10TB for SR - is this correct?

Q2. I can scale out the VMs and have multiple servers, each with multiple volumes, if required. I'm trying to gauge the recommended approach: do I have 1 server with 12 x 10TB volumes, 2 servers with 6 x 10TB volumes each, or 1 server with 2 x 60TB volumes, etc.? I.e., is there a hard limit imposed by SR?

Q3. Log volumes should be at least 10% of the data volume. This adds significant capacity. Again, I read this on an unofficial site. If true, is this 10% of EACH data disk, with 1 log disk per data disk, or can 1 log disk cover all data disks?
e.g.
1 server, 12 data disks (10TB each), 12 log disks (1TB each)
OR
1 server, 12 data disks (10TB each), 1 log disk (1TB or 10TB?)
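For context, I do understand the log size itself is set per replication group and can be adjusted after the fact; a sketch (RG01 and 8GB are illustrative values only):

# Resize the log for one replication group
Set-SRGroup -Name 'RG01' -LogSizeInBytes 8GB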
Thanks

Lock down use of files outside Active Directory


Hi,

I don't remember the feature in Windows Server that locks down the use of files, created in an AD domain, outside the domain.

Example:

A file (.docx, .xlsx, etc.) is created in an AD environment, and it cannot be opened by computers outside the domain.

Any info is appreciated
Regards

DFS Event ID 6404: Invalid local path

Hello, I am having an issue with replication. I've run a report and it returned this event ID. After looking at other posts/forums, I have tried deleting and restoring the DFSR database and restarting the service; this did not resolve the issue.
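Since event 6404 complains about an invalid local path, this is the check I ran to confirm each membership's ContentPath actually exists on the server; a diagnostic sketch only:

# Verify every replicated folder's local path on this member
Get-DfsrMembership -GroupName '*' -ComputerName $env:COMPUTERNAME |
    Select-Object FolderName, ContentPath, @{n='PathExists'; e={ Test-Path $_.ContentPath }}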

DFS-N User Folders


If using a Windows Server 2019 file server namespace, is there a limit to the number of DFS namespace folders?

Is it typical to have a top-level shared folder on the file server, e.g. Users, as the folder target? A single DFS namespace link points to the single folder target. All users will be able to see all sub-folders in the share, e.g. UserA, UserB, etc., but NTFS permissions restrict access.

Or can I hide the other users' folders?

Or does it make sense to have thousands of shares (one per user), thousands of folder targets, and thousands of namespace links?
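On the hiding question, Access-Based Enumeration looks like the relevant setting; a sketch of what I would try, where \\contoso.com\Users and the Users share name are placeholders for my own paths:

# Hide folders a user cannot access, at the namespace level...
Set-DfsnRoot -Path '\\contoso.com\Users' -EnableAccessBasedEnumeration $true
# ...and on the underlying SMB share itself
Set-SmbShare -Name 'Users' -FolderEnumerationMode AccessBased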


Windows 2012 R2 File Server folder access fails for a few Windows 7 machines


Hi 

Recently we migrated our file server from Windows Server 2008 R2 to Windows Server 2012 R2. We have a few users on Windows 7 machines.

On a few of these Windows 7 machines, users are not able to connect to their shares. The same user can access the folder from a different Windows 7 machine. Need some input on this.
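If it narrows things down, these are the checks I can run; a sketch (servername, share, domain and user are placeholders):

# From a failing Windows 7 machine, test the mapping directly:
net use \\servername\share /user:domain\user
# From the 2012 R2 server, see what the client sessions negotiate:
Get-SmbSession | Select-Object ClientComputerName, ClientUserName, Dialect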

Search in folder shows results located in "Temporary Burn Folder" (SBS2011)

I came across this issue while searching for a folder that a user had misplaced/deleted.

When I search from the server (SBS2011) inside a particular folder, the search results show the folder paths for the results as being located in a Temporary Burn Folder on the same drive instead of the actual folder path.

Example:
 I search for a folder called "TEST" in D:\Data\Docs\IT Docs\Temp
 The result shows the TEST folder as being located in D:\Data\Temporary Burn Folder\IT Docs\Temp

Other notes:
 - It appears to happen on any search inside a particular shared folder (D:\Data\Docs).
 - I'm not searching via the Share, but via the local folder on the server.
 - This doesn't occur in other Shared folders on the server - the search results correctly display as being located in the respective folder.
 - It doesn't matter if I log on to the server with another user account; this particular share still shows the same issue.
 - If I search the shared folder from another computer, the folders are correctly identified in the network share.

<See attached image>

DFS Cache

I set the DFS cache at the root to 600 seconds and the link folder to 600, then tested shutting down one of the servers (which is both a root and link server), and I notice it takes about 2 minutes until it fails over. Why about two minutes? I would expect it to be close to 600 seconds. The cache shows close to 600 seconds when I start the test.
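For the record, these are the client-side commands I use to watch the referral cache during the test (dfsutil ships with the DFS Management tools):

dfsutil /pktinfo    # show cached referrals and which target is active
dfsutil /pktflush   # flush the referral cache to force a fresh referral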

Storage Replica Partnership Failover Options...Failed to provision partition error


Hi all, has anyone come across a method for re-establishing a new Storage Replica server-to-server partnership following the loss of the primary/source computer? I can fail over to the secondary successfully but face an error when 'failing back' to a new primary.

Is it possible to start a new replication partnership with a disk that has previously been used for replication?

Here is the situation I'm considering:

  1. Establish a new partnership: New-SRPartnership -SourceComputerName COMPUTER1 -SourceRGName "RG01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" -DestinationComputerName COMPUTER2 -DestinationRGName "RG02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"
  2. Total loss of COMPUTER1.
  3. I've considered two separate methods from COMPUTER2 to fail over: a) Set-SRPartnership to change the direction of replication, and b) Remove-SRPartnership and Clear-SRMetadata. Either way, COMPUTER2 can now successfully be used by clients.
  4. A new server, COMPUTER3, is built. I want to create an SR partnership between COMPUTER2 and COMPUTER3.
  5. New-SRPartnership fails with the error below.

New-SRPartnership : Unable to create replication group RG03, detailed reason: Failed to provision partition 5e862dd9-ecb1-49a3-b086-29857cfcf215.

This error relates to COMPUTER2.
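For completeness, this is the teardown I attempt on COMPUTER2 before creating the new partnership; a sketch only, with names mirroring step 1 above, and whether something still lingers on the previously replicated disk afterwards is exactly my question:

# Remove the dead pairing even though COMPUTER1 is unreachable
Remove-SRPartnership -SourceComputerName COMPUTER1 -SourceRGName RG01 `
    -DestinationComputerName COMPUTER2 -DestinationRGName RG02 -IgnoreRemovalFailure
# Drop any remaining local replication groups, then clear leftover SR metadata
Get-SRGroup | ForEach-Object { Remove-SRGroup -Name $_.Name }
Clear-SRMetadata -AllPartitions -AllLogs -AllConfiguration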


Folder Redirection in DFS Namespace


I'm having a hard time figuring this out.

I deployed a DFS Replication environment. I also want to deploy folder redirection using GPO.

I want users' Desktop, Documents and Downloads folders redirected to the DFS path.

My configuration:

1. I created a security group named sgUserProfile. Then added users to the group

2. Under the shared folder (the target folder of the DFS), I created a folder named User Profile. In that folder's security settings, I added sgUserProfile with the following permissions (see the icacls sketch after my question below):

  1. Traverse Folder
  2. List Folder
  3. Read Attributes
  4. Read Extended Attributes
  5. Create folders

   (Folder Structure is : Network_Share > User Profile)

3. I created a gpo for folder redirection.

When I log in to the client PC, the user's username folder is created in the User Profile folder, but Desktop, Downloads and Documents are not redirected.

In which part of the configuration did I make a mistake?
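For comparison, here is the ACL I granted, expressed as an icacls sketch; the path is a placeholder for my User Profile folder, and the rights abbreviations follow icacls (X, RD, RA, REA, AD matching the five permissions listed above). Please correct me if this is where I went wrong:

# Group gets traverse/list/read-attribute/create-folder rights on this folder only
icacls "D:\Shares\User Profile" /grant "sgUserProfile:(X,RD,RA,REA,AD)"
# Each user fully controls only the folder they themselves create
icacls "D:\Shares\User Profile" /grant "CREATOR OWNER:(OI)(CI)(IO)F"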

Thanks in Advance.


