
Windows Server 2016 EVALUATION - Error during install

Hello - 

Apologies, this is probably the wrong forum for this.

I downloaded the ISO for the 180-day evaluation of Windows Server 2016 (Standard). I'm loading it onto a virtual machine (VMware) on an iMac Pro.

I get the following message as setup begins:

Windows cannot find the Microsoft Software License Terms.  Make sure the installation sources are valid and restart the installation.

I'm assuming that's because I didn't provide a key, but none was given. Other threads I read state that the key is "embedded" and the installer just needs to connect to the internet, but that isn't helping.

This is a new machine, running the latest macOS and the latest VMware.


Any help would be appreciated.

Win 10 storage space missing after rebuild


Hi all,

I made a big blunder with a Windows 10 storage space built from three SATA drives and, I think, one SSD.

When I rebuilt my Windows 10 install, Disk Management reported unrecognized metadata and suggested resetting the disk.

So I googled a few articles that said to run Reset-PhysicalDisk.

After doing this, it was apparent that it had wiped the metadata for the storage space.

Is there a way to recover this, or a tool that can fix it? The space holds pretty much a lifetime of pictures and other files.

Cheers

How can I get ownership?


Hi,

I have a Windows Server 2008 R2 AD file server with a lot of shared folders.

There is one folder, shown in the attachment.

I log in as a domain administrator...

I would like to remove the folder, but access is denied.

So I tried to take ownership as administrator.

But that fails too...
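For reference, the usual brute-force way to seize ownership and reset permissions from an elevated prompt looks roughly like this; the path below is just a placeholder for the stuck folder:

# Give ownership to the Administrators group, recursively, answering Yes to any prompts:
takeown /F "D:\Shares\StuckFolder" /A /R /D Y
# Then grant Administrators full control over the whole tree, continuing past errors:
icacls "D:\Shares\StuckFolder" /grant "Administrators:F" /T /C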

Please advise...

S2D and IP bindings


Does S2D require IPv4/IPv6 bindings and IP addresses set on its NICs, or can it work solely at L2?

Unable to protect shared folder

I have a folder called 'scanned' that is shared. Share permissions are full access for Everyone. Folder security is full access for Everyone.

I created user folders under 'scanned'. Each subfolder has inherited permissions disabled. The permissions on each subfolder are SYSTEM, Administrator, and the folder owner's account, all with full access. Any other groups, like Authenticated Users or Domain Users, have been removed.

The problem is that one user can look at the contents of another user's folder. I tried to explicitly deny one user in another user's folder, but they can still browse it.

I discovered that a previous IT company had made all users members of the Domain Admins group. I removed all the users from this group.

I created a new shared folder called 'test' and created two subfolders for two users. I protected the subfolders as above, and even went so far as to deny one user access to the other user's folder, but they can still browse the denied folder.
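For reference, here is roughly what the subfolder protection described above looks like as commands; the path and account names are placeholders:

# Remove inherited ACEs from the user's subfolder:
icacls "D:\scanned\jsmith" /inheritance:r
# Grant full control to SYSTEM, Administrators, and the folder owner only:
icacls "D:\scanned\jsmith" /grant "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F" "CONTOSO\jsmith:(OI)(CI)F"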

I am at a loss. Did access control get broken by everyone being a domain admin? This is a Server 2016 Standard box. Thanks in advance.

Rob Walker

Extreme frustration with Microsoft and Storage Spaces


I need to write this up, so bear with me, because this is important.

I've been working on servers for almost 20 years now, since I was really young in fact. I've been through Windows 9x/NT/2000/Me/XP/03/Vista/08/7/2012/2016/2019, so I do know my way around Windows by now; I also use macOS and Linux extensively.

Back when I was young, my idea of a storage server was finding any Pentium 3 around the house, hooking up whatever leftover hard drives I could find, and setting up a dynamic RAID 1 on Windows. That was great, because you could then have that machine doing things like running an FTP client or downloading files, especially when you had a slow internet connection.

A few years later I had a datacentre business. I now have my own place here at home, with a server room for my personal needs, and I'm looking for a storage solution. By now I would assume Storage Spaces would be that solution, but it seems the more I try to get into ReFS and Storage Spaces, the less happy I become.

So here's my first part of the problem: 

https://social.technet.microsoft.com/Forums/en-US/de40dadc-0363-44ab-b67e-f63f087784d6/dont-trust-storage-spaceswindows-2019-with-your-data?forum=winserverfiles 

TLDR: Windows Server 2019 Datacenter doesn't work for Storage Spaces despite being the current Windows Server iteration.

But how is that possible? Windows Server has always been the better, more stable version of Windows; it has always served businesses and server applications, and people rely on it. So how can we not have a reliable storage solution?

ReFS is no longer available on desktop versions of Windows, and you need Windows 10 for Workstations to manage ReFS; but to use all the redundancy and resilience features you need Windows Server. So, as far as I'm concerned, to get the whole experience you need Windows Server.

I have two storage boxes here in my home, both running Windows Server 2019 Datacenter. I created the storage arrays from Windows Server 2016, because Microsoft still hasn't fixed that problem in the latest update, which makes me wonder how seriously they're taking their problems with Storage Spaces/ReFS and how concerned they are about our data integrity. But now, on my second server: I literally got four 5 TB hard drives yesterday, put them in parity, left a network file transfer running and went to sleep; when I woke up, it showed two of the drives as failed. I've seen this problem before: whenever the access time/latency gets too high, the whole array just gives up and crashes. So I turned the computer off and on again to fix it, which usually works, but now it's just stuck as unhealthy.

So you go to the interface, right-click, Repair, only to find the disks aren't reading or writing anything and the job is 'running' even though nothing is happening:

================

PS C:\Windows\system32> get-storagejob

Name                               IsBackgroundTask ElapsedTime JobState PercentComplete BytesProcessed BytesTotal
----                               ---------------- ----------- -------- --------------- -------------- ----------
DASH-H-SV4-STD1-H-SPP-Repair       False            00:28:08    Running  0                          0 B      32 GB
DASH-H-SV4-STD1-H-SPP-Regeneration True             00:28:04    New      0                          0 B      32 GB


PS C:\Windows\system32>

================
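For anyone who wants to cross-check this state, the pool, physical disk, and virtual disk health can be queried like so:

# Cross-check health of the pool, the physical disks, and the virtual disk:
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, HealthStatus, OperationalStatus
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus, DetachedReason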

Now think of it this way: I have a lot of SAS controllers here that are RAID-only, and I'm actually going out of my way to buy HBAs that support drive passthrough, just so that I can use a modern filesystem like ReFS/ZFS to store my data with more peace of mind.

But does this look more reliable to you than an old-fashioned RAID 6? I've had hardware RAID arrays for more than a decade and never had anything remotely like what's happening here; I never had to go through all this pain just to get my data to even 'show up', and rebuilding was as simple as popping a new drive into the same slot, fully automatic.

Am I missing something? And of course there are dynamic disks, Disk Management's software RAID, which, if your server crashes, has to resync for hours for no reason. So should we be asking whether I'm doing something wrong, or whether Microsoft hasn't really done storage right for as long as I can remember?

I would love to hear some feedback on this. Should I start moving my data to FreeNAS or Linux? Or will there ever be a solution to this?

Enabling Storage Tiers After Pool Creation in Windows Server Storage Pools


I've been messing around with Windows Server and storage tiers for a little while now. I ended up building an 8-drive, RAID 6-equivalent storage pool (two drives' worth of parity), and all was, for the most part, good, minus the write performance of the pool. Recently I've wanted to remedy that, so I purchased a pair of SSDs that I want to use as a cache via storage tiers.

However, I'm not sure if I can add them after the fact without rebuilding the entire array.

I added the SSDs to the pool through Server Manager, and they are visible in the physical disks window; however, when I check the properties of the virtual disk, it shows that storage tiers are disabled.

Performance on the array is the same as it was before the SSDs were added, so I'm pretty sure the storage tiers are not set up properly.

I'm not sure if there's something I'm missing that needs to be done at this point, but is it possible to add the SSDs as a cache without losing all of the data that's currently in the pool?
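For what it's worth, the only PowerShell route I've found so far is defining tier templates on the pool; the names below are examples, and as far as I can tell tiers only bind to a virtual disk when that disk is created:

# Define tier templates on the existing pool (names are examples):
New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
# Tiers are attached at creation time (New-VirtualDisk -StorageTiers ...),
# which would explain why the existing virtual disk still shows tiers as disabled.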

Thanks!

Windows Server Deduplication: Best way to verify post-Unoptimisation integrity? (ie. inflating optimised chunkstore->normal files)


We have a single-disk file server that we needed to convert from its deduplicated state back to a basic file structure.

We have run the unoptimisation job, which has inflated our folders again and reduced the chunk store by half, but has not removed it completely (which is strange). We have also run out of room to expand the disk and need to reclaim space somehow.

We have run scrubbing and garbage collection, but the chunk store still sits at 6 TB (a third of the volume's capacity). The next step is to just Shift+Delete the chunk store blobs after removing dedup completely.

How can I be sure those blobs aren't real data? I want to run a file verifier across the folder structure to create hashes, then snapshot the server, delete the chunk store, and run the verification again to confirm that all the data in our folders is actually there, and that nothing is still pointing at the now-deleted blobs. We are working under the assumption that Windows dedup has borked its algorithm and gone AWOL (there are many reports of similar experiences), creating blobs that it forgets about. This would make sense, because there should never have been this much data on this server based on pre-dedup growth projections.

Will a usual hash tool work on deduped files? Any other, quicker ways?
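For the verification step, something like the following is what I have in mind; the paths are placeholders. My understanding is that hashing reads files through the dedup filter driver, so optimised files should hash the same as fully rehydrated ones:

# Hash every file before touching the chunk store:
Get-ChildItem -Path "D:\Shares" -Recurse -File |
    Get-FileHash -Algorithm SHA256 |
    Export-Csv -Path "C:\Temp\hashes-before.csv" -NoTypeInformation

# After removing the chunk store, re-run into hashes-after.csv, then compare;
# Compare-Object outputs only the files whose path/hash pairs differ:
Compare-Object -ReferenceObject (Import-Csv "C:\Temp\hashes-before.csv") `
               -DifferenceObject (Import-Csv "C:\Temp\hashes-after.csv") `
               -Property Path, Hash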


Anticipating NVMe SSD failure in a storage server with no spare NVMe slots


Hi,

I'm building a storage server with a 12-port SAS3 12Gbps single-expander backplane, which supports up to 8x 3.5-inch SAS3/SATA3 HDD/SSD and 4x NVMe/SAS3/SATA3 storage devices.

The operating system will be Windows Server 2016.

I plan to create a storage pool and a two-way mirror virtual disk with tiered storage, consisting of 4 NVMe SSDs and 6 SAS3 HDDs.

If an NVMe drive were to fail, I would have no way to first add a new one and afterwards remove the faulty one, because all four NVMe slots are occupied.

Could I proceed as follows?

Set-PhysicalDisk -FriendlyName <diskname> -Usage Retired
Repair-VirtualDisk -FriendlyName <virtualdiskname>

Wait for the repair job to finish; follow with:

Get-StorageJob
$PDToRemove = Get-PhysicalDisk -FriendlyName <diskname>
Remove-PhysicalDisk -PhysicalDisks $PDToRemove -StoragePoolFriendlyName <poolname>

Physically remove the faulty NVMe SSD and replace with a new NVMe SSD.

Get the FriendlyName of the new NVMe SSD:

Get-PhysicalDisk -CanPool $True

Add the new NVMe SSD to the storage pool:

$PDToAdd = Get-PhysicalDisk -FriendlyName <diskname>
Add-PhysicalDisk -PhysicalDisks $PDToAdd -StoragePoolFriendlyName <poolname> -Usage AutoSelect

Repair the virtual disk:

Repair-VirtualDisk -FriendlyName <virtualdiskname>

If the SSD is recognised correctly (MediaType is SSD), will it automatically be part of the fast tier?
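A quick way to sanity-check that (keeping the <diskname> placeholder style from above):

# Confirm the new disk reports the expected media type and usage:
Get-PhysicalDisk -FriendlyName <diskname> | Select-Object FriendlyName, MediaType, Usage
# List the tier definitions and their media types:
Get-StorageTier | Select-Object FriendlyName, MediaType
# If the media type is misdetected, it can be overridden manually:
Set-PhysicalDisk -FriendlyName <diskname> -MediaType SSD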

Let's say that, after some time, I wish to replace the 4 NVMe drives with a newer generation of NVMe drives. Could I replace them one by one as described above, or would that mess up the virtual disk?

Thanks,

AG


Storage Pools on Windows Server 2019 Dual Parity


Good Morning,

We are in a project to replace some old NAS systems with a central new system.

I was reading about Storage Spaces and storage pools in Windows Server, so I tried setting up a test Server 2019 machine.

The hardware is SDDC-certified; 10 x 300 GB SAS HDDs are installed, and I was able to set up a pool of all 10 HDDs.

If I try to set up a virtual disk with dual parity, of any size, I get an error:

"The storage pool does not have sufficient eligible resources for the creation of the specified virtual disk."

I can't find an obvious reason for that. We need RAID 6-like resilience for sensitive data, where 2 drives could fail, but we do not need a Storage Spaces Direct-type setup; it would be way overkill for us.
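In case it helps, this is the shape of the command I would expect to need in PowerShell, since dual parity requires at least seven columns (and therefore at least seven disks); the pool and disk names below are examples:

# Dual parity: PhysicalDiskRedundancy 2 and a minimum of 7 columns:
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "DualParity1" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 `
    -NumberOfColumns 7 -ProvisioningType Fixed -Size 1TB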

How to connect to Windows Server from my PC using FileZilla?


Hello,

Prior to my question I want to clarify that I'm a beginner in this field, so please forgive me if my questions appear stupid.

This is what I'm trying to do:

I have a local server where we store data, which I currently access through Remote Desktop (using IP, username, and password). I want to use FileZilla Client to communicate with this server so that I can create users and configure their permissions to read from and write to a server location. I tried to connect to it through FileZilla Client using FTP, but failed. Am I doing this wrong? Is FileZilla Server needed in this case? How do you think I can do this?

My server has a Windows Server 2012 R2 operating system.

Thanks in advance!

Why DFS Installation at a DC


Guys,

I see many tutorials installing DFS on the DC, but why is that? The purpose of DFS is sharing folders, so my thought is that it should be installed on the member server (file server).

How do I show the size of a mapped drive from a DFS share with an FSRM quota?


Hi there,

I've got the following setup.

2 QNAPs, each mounted as a volume via iSCSI on a different Windows Server 2016 machine (FS1 and FS2).

FS1 replicates to FS2.

On the volume there are several directories shared with DFS.

Each directory has a soft quota set by FSRM and gets mapped to clients as a drive by the DC's GPO.

Sadly, each drive shows the capacity of the whole volume that is mounted on FS1/FS2.

The settings on both file servers are identical with regard to DFS and FSRM.

My question now is: is it possible to show only the size of the quota for the mapped drive, and if yes, how is it done?
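My understanding is that Windows only reports a quota back to SMB clients as the drive's capacity when it is a hard quota, so this sketch is what I would try (the path is a placeholder):

# Inspect the existing FSRM quota on a shared directory:
Get-FsrmQuota -Path "E:\Shares\Dept1"
# Convert the soft quota to a hard one; hard quotas are what clients
# see as the mapped drive's total size:
Set-FsrmQuota -Path "E:\Shares\Dept1" -SoftLimit:$false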

Thank you.

Martin

I get an error when managing ServerFolders


I created a 100 GB partition for the OS and a 400 GB D: drive.

I thought that 100 GB was enough, until I realized that the server was backing up the clients, which left me with 0 bytes free on C:.

I went to Storage management and tried to manipulate the folders from within ServerFolders, and got errors with all tasks.

I tried creating a new folder: I got an error. I tried moving folders: another error, sometimes complaining about missing folders even though I checked and they were present.

Can you tell me why this section is completely unmanageable...??? It goes without saying that I am logged in as Admin.

My main goal is to move the client backup folder to D:.

Thanks 

Christopher

Work Folders - Sync Failed (0x80c80003) server is currently busy


Hello,

I have implemented Work Folders in an environment with about 1,500 users for now. The Work Folders server is a physical machine with 2 x Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20 GHz (8 cores) and 128 GB RAM, running Server 2016 1607 Datacenter Edition (needed for Storage Replica). NIC teaming is enabled with 2 x 10 Gbps network adapters in LACP Dynamic mode.
There are 3 Work Folders sync shares of approx. 7 TB each, located on different volumes.

I actually started with 2 sync shares, but once each of them reached a total of approx. 500 users, I got sync issues (error code 0x80c80003, server is currently busy) and it took a while to find data on the Work Folders server. That's why I created a third one, and everything went back to normal.

Today, the third sync share is close to 450 users, and I have the same sync issues again, even with a new sync share on a dedicated volume...

Here are the errors that I have on some Windows 10 (Enterprise) 1709 clients:

Event 2100 : Sync failed. Work Folders path: C:\Users\xxxx\Work Folders; Error: (0x80c80003) There was a problem syncing because the server is currently busy. Sync will retry later.

Event 2100 : Sync failed. Work Folders path: C:\Users\xxxx\Work Folders; Error: (0x80072ee2)

However, there is nothing on the Work Folders server: I checked for errors in the Microsoft-Windows-SyncShare/Operational event log and in the System and Application event logs, but found nothing.

CPU and disk usage is around 30%, and Percent Bandwidth Used in Total is around 10% for both network adapters... everything seems to be OK on the server.

Shadow copies run twice a day on each Work Folders volume (shadow storage is located on 2 different volumes). Storage Replica has been implemented to allow synchronous replication of Work Folders data at the block level.

The -MinimumChangeDetectionMins parameter shown by Get-SyncServerSetting is still at the default of 5 minutes; I didn't change it, since the load on the Work Folders server seems to be OK.
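For completeness, this is how that setting can be inspected and raised; bumping it is only a guess at reducing enumeration load on the server:

# Show the current sync server settings, including MinimumChangeDetectionMins:
Get-SyncServerSetting
# Raising change detection from 5 to e.g. 15 minutes reduces how often the
# server has to re-scan sync shares for changes (an untested guess here):
Set-SyncServerSetting -MinimumChangeDetectionMins 15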

Any help on how to troubleshoot this sync issue will be much appreciated. 

Thanks in advance.

Regards,

Vince



Creating a virtual disk for SOFS


I am struggling to create a virtual disk for SOFS on Server 2016.

I have 3 JBODs, each with 3 SSDs, and I want to create a 3-column, 2-data-copy mirror.

However, every time I try to create this it fails: it tells me I have the wrong disk setup for the resiliency type I want. But why wouldn't 3 SSDs per JBOD work?

I can create a mirror with 1, 2, 3 or 4 columns as long as I don't set enclosure awareness. But as soon as I try to enable enclosure awareness, it gives me the error message.

Any help with how this could be set up, or with what I am doing wrong, is appreciated.
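For reference, this is the shape of the command I've been trying (the names are examples). My understanding is that with enclosure awareness each data copy has to be placed so that the loss of any one enclosure still leaves a complete copy, which constrains the column count:

# Two-way mirror with enclosure awareness; with 3 JBODs of 3 SSDs each,
# a lower column count may be needed before a valid placement exists:
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "SOFS-VD1" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -NumberOfColumns 1 `
    -IsEnclosureAware $true -Size 500GB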

Searching Remote Indexed 2008 R2 Server FROM a Server 2008 R2 Server not working


Searching with Windows Explorer FROM a Server 2008 R2 server against a UNC path on a remote, indexed 2008 R2 server is not working: the search results do not include matches from file contents (i.e. searching for 'instructions' in the contents yields no results). Any ideas?

More info & background:

  • the 2008 R2 server being searched has the File Services role installed, plus the Windows Search Service feature
  • the UNC path is shared properly and included in the index
  • searching from the file server itself yields results from inside file contents (so I know indexing is working)
  • from a Windows 7 PC I can add this share to my Documents library (so I also know the remote index is working)
  • from a Windows 7 PC, with OR without the share in the Documents library, the search results DO include file-content matches
  • from another 2008 R2 server used as a Citrix or Remote Desktop Services server, I do NOT get content search results, NOR can I add the share to the Documents library

What am I missing? Any suggestions?


Whit F.

Storage Spaces on Single Server with different disk sizes


Hi Team,

Thanks in advance. I have a query related to Storage Spaces on a single server.

I'm not considering Storage Spaces Direct.

Here is my scenario: I created a storage pool in Windows Server 2016 from 5 x 4 TB disks on an Azure VM, and then a single storage space from it with resiliency type "simple". The remaining capacity is now very low and the space is about to fill up, so I have to expand the storage pool.

I added one 3.91 TB disk to the storage pool, and found that there is no use in adding just that one disk, because the column count of my storage space is 5. Hence I've been advised to add 4 more disks, so that I'd have 5 disks for the columns to work and stripe across properly.

I'm now considering adding 4 x 1 TB disks, as I've already added the one 3.91 TB drive.

Going by the Storage Spaces concepts, I have 3 questions about the above scenario:

Q1) With 4 x 1 TB disks added alongside the 3.91 TB disk, will I be utilizing the 3.91 TB drive to the fullest? Or would only 1 TB of the 3.91 TB be utilized, with 2.91 TB going to waste?

Q2) If I add 4 x 1 TB disks now, will I be able to add 5 x 4 TB disks in the future when I'm running out of space in the same storage pool and storage space? Or would there be a limitation of adding only 5 disks of 1 TB or less each?

Q3) When my storage pool was 5 x 4 TB, my data was striped across each of the 4 TB drives. Let's assume a single 4 TB disk delivers 5,000 IOPS; then I'd get 5 x 5,000 = 25,000 IOPS from my setup (this part is simple, no question so far). My question is: once my 5 x 4 TB disks are fully utilized and I add 5 x 1 TB disks to the same storage pool and storage space, if each 1 TB drive delivers only 500 IOPS, will I then get only 5 x 500 = 2,500 IOPS?
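For anyone reasoning about these questions, the column count and interleave that drive the striping behaviour can be read off the virtual disk (the friendly name below is an example):

# The column count determines how many disks each stripe must span:
Get-VirtualDisk -FriendlyName "Space1" |
    Select-Object FriendlyName, NumberOfColumns, NumberOfDataCopies, Interleave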


S2D 2019: Change cache drain aggressiveness?

$
0
0

Is it possible to change how long writes remain in the cache before they're de-staged? Or at least to make the cache drain yield to incoming capacity-tier reads?

I'm hitting an issue where the cache starts draining almost immediately after a large sequential file copy starts. When this happens, reads that come from the capacity tier (i.e. cache misses) appear to be choked by the drain.

From the top, I realize this isn't an officially supported configuration. It's a personal project and I don't expect stellar results, but hear me out.

Evidence that leads me to this conclusion:

  • Capacity tier drives are SMR HDDs
  • Cache tier drives are 960 GB Intel S4600 SSDs
  • When the cache starts to drain after a few GB, write latency spikes on the HDDs, as expected with SMR
  • Because of this, read latency on the HDDs also increases.
  • The copy throughput tanks (< 5-10 MB/s)
  • If I cancel and restart the copy, it races through, since it can read from the cache: several hundred MB/s.
  • When it gets to the point where I canceled it and has to read from capacity again, it drops to a crawl
  • The cache tier is less than 10% full. I've previously seen it stage several hundred GB of writes when ingesting from outside the cluster.

My expectation was that the cache would absorb writes and only de-stage when full, or when the level of IOPS to the capacity tier goes down.

Is there anything else that could cause something like this? Is there anything I can do?
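For what it's worth, the only cache-related settings I've found to inspect are on the cluster's S2D object; I haven't found a documented knob for drain aggressiveness:

# Show the Storage Spaces Direct cache configuration for the cluster:
Get-ClusterStorageSpacesDirect | Format-List Cache*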

