Hi, I plan to build an S2D two-node cluster with 4x 800 GB 12Gb SAS write-intensive SSDs for cache, 2x 960 GB SATA mixed-use SSDs for the performance tier, and 10x 2.4 TB 10k SAS HDDs for capacity. Is such a scenario possible? Is it supported?
Thank you OOSOO
Hi There,
Massive shot in the dark here, but I am struggling with a pretty major issue at the moment. We have a production file server that is hosted on the following:
Dell MD 3220i -> iSCSI -> Server 2008 R2 Hyper-v Cluster -> Passthrough Disk -> Server 2012 R2 File Server VM
Essentially, three times now, roughly a month or so apart, the file server has stopped accepting connections. During this time the server is perfectly accessible through RDP or with a simple ping, and I can browse the files on the server directly, but no one appears to be able to access the shares over SMB. A reboot of the server fixes the issue.
As per a KB article, I removed NOD antivirus from the server after the second fault to rule out a conflicting filter-mode driver. Sadly, yesterday it happened again.
The only relevant errors in the server's log files are:
SMB Server Event ID 551
SMB Session Authentication Failure
Client Name: \\192.168.105.79
Client Address: 192.168.105.79:50774
User Name: HHS\H6-08$
Session ID: 0xFFFFFFFFFFFFFFFF
Status: Insufficient server resources exist to complete the request. (0xC0000205)
Guidance: You should expect this error when attempting to connect to shares using incorrect credentials. This error does not always indicate a problem with authorization, but mainly authentication. It is more common with non-Windows clients. This error can occur when using incorrect usernames and passwords with NTLM, mismatched LmCompatibility settings between client and server, duplicate Kerberos service principal names, incorrect Kerberos ticket-granting service tickets, or Guest accounts without Guest access enabled.
and
SMB Server Event ID 1020
File system operation has taken longer than expected.
Client Name: \\192.168.105.97
Client Address: 192.168.105.97:49571
User Name: HHS\12J.Champion
Session ID: 0x2C07B40004A5
Share Name: \\*\Subjects
File Name:
Command: 5
Duration (in milliseconds): 176784
Warning Threshold (in milliseconds): 120000
Guidance: The underlying file system has taken too long to respond to an operation. This typically indicates a problem with the storage and not SMB.
I have checked the underlying disks, iSCSI, network, and Hyper-V cluster for any other errors or issues, but as far as I can tell everything is fine.
Is it possible that something else is left over from the NOD antivirus installation?
Looking for suggestions on how to troubleshoot this further.
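For what it's worth, one quick check for whether any third-party file system filter driver is still loaded after the antivirus removal is to list the minifilters from an elevated prompt (a minimal sketch; any non-Microsoft entry left behind by the AV would show up here):
# List all loaded file system minifilter drivers.
fltmc filters
# Show which volumes each filter instance is attached to.
fltmc instances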
Thanks
When I run the Data Integrity Scan manually within the Task Scheduler, it runs and completes immediately without doing anything even though I know there are corrupt files in the volume. The following are the event log entries. Notice that most of my files are skipped. Why doesn't the scan attempt to fix corrupted files?
Integrity is both enabled and enforced for all of the several hundred files. The ReFS storage space is a two-way mirror. Both drives are attached via SATA and the motherboard BIOS SATA Configuration is AHCI.
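For reference, a minimal way to confirm the enabled/enforced state per file on the ReFS volume (the drive letter matches the log below; the file path in the second command is just a placeholder):
# Show integrity stream status (Enabled / Enforced) for the items at the root of I:
Get-Item -Path 'I:\*' | Get-FileIntegrity
# Re-enable and enforce integrity on a single file, if needed.
Set-FileIntegrity -FileName 'I:\SomeFile.dat' -Enable $True -Enforce $True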
Information:
Started checking data integrity.
Information:
Disk scan started on \\?\PhysicalDrive9 (\\?\Disk{ecb98218-784e-47d5-b316-941ae9595eb4})
Error:
Volume metadata scrub operation failed.
Volume name: I:
Metadata reference: 0x204
Range offset: 0x0
Range length (in bytes): 0x0
Bytes repaired: 0x0
Bytes not repaired: 0x3000
Status: The specified copy of the requested data could not be read.
Error:
Files were skipped during the volume scan.
Files skipped: 310
Volume name: I:\ (\??\Volume{53d99c4e-9ad6-11e8-8448-0cc47ad896dd}\)
First skipped file name: I:
HResult: The specified copy of the requested data could not be read.
Information:
Volume scan completed on I:\ (\??\Volume{53d99c4e-9ad6-11e8-8448-0cc47ad896dd}\)
Bytes repaired: 0x0
Bytes not repaired: 0x3000
HResult: The operation completed successfully.
Information:
Disk scan completed on \\?\PhysicalDrive9 (\\?\Disk{ecb98218-784e-47d5-b316-941ae9595eb4})
Information:
Completed data integrity check.
Hello Experts,
I am facing an issue with a file share over the internet. I have a Server 2016 instance in AWS, and I created a few shared folders on that server.
I am able to access those shared folders over the public IP (\\PublicIP\Share) from Server 2016 instances hosted in AWS, Azure, and Google Cloud, but I am unable to access them from Windows 10. I have enclosed the Windows diagnostics log here.
While troubleshooting the issue, I tried the steps below, but no luck. Please advise...
Allowed all traffic in the AWS Security Group
Disabled Windows Firewall on Windows 2016 as well as Windows 10
Enabled SMB 1.0/CIFS Client on Windows 10
Tried to telnet to port 445, but it failed (see the reachability sketch below)
Disabled StrictNameChecking and SMB2Protocol on Server 2016 using the commands below:
Set-SmbServerConfiguration -EnableStrictNameChecking $False
Set-SmbServerConfiguration -EnableSMB2Protocol $False
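For completeness, the port 445 reachability test from the Windows 10 machine can also be done without telnet; a minimal sketch (the IP is a placeholder, and note that many consumer ISPs block outbound SMB on TCP 445, which would explain why only the cloud-hosted clients can connect):
# Test whether TCP 445 to the AWS server is reachable from the Windows 10 client.
Test-NetConnection -ComputerName 203.0.113.10 -Port 445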
Log Name: System
Thanks & Regards, Prosenjit Sen.
Hi!
I have two Windows Server 2016 servers with 10 Gb/s network adapters (with RSS support, but without RDMA). If I run 4-5 file operations in parallel (copying a few very large files), the total throughput is 8-9 Gb/s. If I run a single operation, it is about 2 Gb/s. Question: what limits the speed of a single operation? Where should I look, and what can I try to tune?
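Not an answer, but a hedged sketch of where one might look first: whether SMB Multichannel is actually spreading a single copy across several RSS queues and TCP connections (standard Server 2016 cmdlets; the interpretation is my assumption):
# On the client, check that multichannel is on and how many connections per RSS-capable NIC are allowed.
Get-SmbClientConfiguration | Select-Object EnableMultiChannel, ConnectionCountPerRssNetworkInterface
# While a single large copy is running, see how many TCP connections SMB actually opened.
Get-SmbMultichannelConnection
# Confirm RSS is enabled and has multiple queues on the 10 GbE adapter.
Get-NetAdapterRss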
Thanks
Alex
Microsoft-Windows-WorkFolders/Operational
Event ID 2100 "Error while connecting to server" error: (0x80072ee2)
That basically means a timeout.
It seems to be a temporary problem, and it also occurs when the machine has a perfectly good LAN connection to the server.
The server's SyncShare/Operational log records nothing at this time.
Is there another log on the server where we can see whether the request reached the server or not, such as the W3SVC log?
Thanks
Eckhard
Hi,
I understand how you set up server-to-server storage replication, but can you do it in a many-to-one relationship?
For example, if I have 10 file servers at "Company A", and "Company A" has a DR site with loads of storage, can I set up one 2016 server at my DR site and get the file servers to replicate to this one server? Or is it a one-to-one relationship?
We're using DFS at the moment, but it's not working that well (3 support calls with Microsoft in 1 year as it keeps failing).
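If Storage Replica is what ends up being evaluated here, a single server-to-server partnership is created per volume pair, roughly like this (all names are hypothetical; whether several sources can target one DR server comes down to giving each source its own destination data and log volumes):
# Sketch only: replicate F: (data) and G: (log) from one file server to the DR server.
New-SRPartnership -SourceComputerName "FS01" -SourceRGName "rg-fs01" -SourceVolumeName "F:" -SourceLogVolumeName "G:" -DestinationComputerName "DR-SR01" -DestinationRGName "rg-fs01-dr" -DestinationVolumeName "F:" -DestinationLogVolumeName "G:"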
Cheers
Richard
Hi all,
I have one FTP server, and on that server I have to give FTP permissions to one of our teams, which has 10 members. I have already installed and configured IIS on Windows Server, which we are using to provide access to the FTP folders. I have given FTP folder access to all the members of that group, but now all members are facing one issue.
They are able to access the main FTP folder, but when they try to open a subfolder they get the error "You need permission to access this folder. Error 550". All members are using Windows 10 Enterprise.
Need help in resolving this.
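As a point of reference, error 550 only on subfolders usually points at permissions that did not propagate; a quick way to check and, if needed, re-apply the grant with inheritance might look like this (path and group name are placeholders):
# Show the effective NTFS ACL on a subfolder that returns error 550.
icacls "D:\FTPRoot\SubFolder"
# If the team group is missing there, re-apply the grant so it inherits to subfolders and files.
icacls "D:\FTPRoot" /grant "DOMAIN\FTP-Team:(OI)(CI)RX" /T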
Thanks...
Hi,
I can't seem to find a way to do this setup.
The creator of the object is always the owner of the file.
The only way I can change the ownership of an object created by a normal user is to go to Properties > Security > Advanced and change it to the domain administrator.
Is there a way to do this automatically whenever a new object is created in that folder?
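As far as I know there is no NTFS setting that changes the owner at creation time for regular users, so a hedged workaround sketch is to re-stamp ownership in bulk after the fact, for example from a scheduled task (path and account are placeholders):
# Recursively set the owner of everything under the folder to the domain admins group.
icacls "D:\Shares\TeamFolder" /setowner "DOMAIN\Domain Admins" /T /C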
Thanks,
Stephen
Hello everyone,
I do have a very strange issue with my file server. Let me first describe the infrastructure.
OS: Windows Server 2016
Roles: File and Storage Services
Type: Member of a 2016 Domain
On the file server I do have to following structure/permission:
The NTFS permissions to those folders is like that:
So far I think this is nothing special, now here is my issue:
When I delete the "Wagenbuch" or the "DIMS" folder, this removes the group "L_NTFS_J_ZZZ_R" from the "ZZZ" folder AND removes the group "L_NTFS_J_R" from the "ANWDTest" folder... and I have absolutely no idea why this is happening.
Does anyone see an error in the setup, or has anyone faced a similar issue? I am totally lost here and have no idea where to start searching; Google did not help at all.
Thanks for the support!
UPDATE 1: To be sure that this is not an issue with our file server, I set up the same structure on another 2016 server and faced the same issue.
UPDATE 2: In the meantime I did the same setup on a 2012 R2 server and there is no issue at all, so this seems to be related to Server 2016.
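In case it helps anyone reproduce this, one way to pin down exactly which ACEs disappear is to snapshot the ACLs before and after deleting the subfolder and compare the two dumps (paths are placeholders for the structure described above):
# Capture the full ACL tree before deleting the subfolder, then again afterwards, and diff.
icacls "D:\Shares\ANWDTest" /save "C:\Temp\acl-before.txt" /T
icacls "D:\Shares\ANWDTest" /save "C:\Temp\acl-after.txt" /T
Compare-Object (Get-Content "C:\Temp\acl-before.txt") (Get-Content "C:\Temp\acl-after.txt")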
Getting an error when running Set-DfsnFolderTarget to change a folder target to use the FQDN:
Set-DfsnFolderTarget : The requested object could not be found.
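For context, Set-DfsnFolderTarget changes settings on an existing target but, as far as I can tell, cannot rewrite the target path itself, which is why it reports "The requested object could not be found" when given the FQDN. A hedged sketch of the usual workaround is to add the FQDN target and then remove the short-name one (namespace and server names below are placeholders):
# Add the same share again as a folder target, this time using the FQDN.
New-DfsnFolderTarget -Path "\\contoso.com\Public\Docs" -TargetPath "\\fileserver01.contoso.com\Docs"
# Once the new target is confirmed working, remove the old NetBIOS-name target.
Remove-DfsnFolderTarget -Path "\\contoso.com\Public\Docs" -TargetPath "\\fileserver01\Docs"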
Current Setup:
The current setup works great, so I am not troubleshooting anything; however, I would like to replace the LUNs with a VHDX file for each VM. In case you're wondering why I want to do this: I am using Veeam to back up the servers, and the backup can only protect data stored on virtual disks, so data stored on those LUNs is not being backed up right now.
Of course, I could unnecessarily spend the money and get a Windows host license for each VM, and that would take care of the problem, but I would rather use the license applied at the Hyper-V host level instead.
I am not entirely sure of the best way to do this, so I am seeking experts' advice, as these web servers are critical and run mission-critical web applications and services.
I look forward to hearing from you all on the best way to do this.
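Not a recommendation, just a sketch of the usual pattern, assuming the LUNs are currently attached as pass-through or in-guest iSCSI disks: create a VHDX on host storage, hot-add it to the VM, copy the data inside the guest, then retire the LUN (all names and sizes are placeholders):
# On the Hyper-V host: create a dynamically expanding VHDX sized for the data on the LUN.
New-VHD -Path "D:\VHDs\Web01-Data.vhdx" -SizeBytes 500GB -Dynamic
# Attach it to the VM on the SCSI controller.
Add-VMHardDiskDrive -VMName "Web01" -ControllerType SCSI -Path "D:\VHDs\Web01-Data.vhdx"
# Inside the guest: bring the new disk online, format it, robocopy the data across, repoint the applications, then detach the old LUN.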
Thanks in advance
HelpNeed
I have several quota templates configured for different users' home folders, so users can't exceed the limits of the quota template that applies to them. Recently, I have found more and more users with space issues after they reach their quota template limit.
For example, userA belongs to a 1 GB template for his folder. He ran out of space, so he deleted all his files, but the client side still shows no free space. Also, under the File Server Resource Manager console, I still see userA using 100% of his space. To give userA his space back, I have to delete userA's quota and re-apply it. Can someone tell me why this happens? Thanks!
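As a data point, the usage figure FSRM shows can be rescanned without deleting and re-applying the quota; a minimal sketch (the path is a placeholder, and dirquota is the legacy CLI, so treat the exact syntax as an assumption):
# Show what FSRM currently thinks userA has used on his home folder.
Get-FsrmQuota -Path "D:\Home\UserA"
# Force FSRM to rescan the folder and recalculate the actual usage.
dirquota quota scan /path:"D:\Home\UserA"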
The problem is this:
We had a user who, about 6 months ago, decided to rename 4 folders in a data directory used to store documents for a database. As this was done 6 months ago, many new records have since been created using the renamed folders in their document paths.
However, any records created before she renamed those folders have the original folder names in the paths of the documents that belong to those records. Needless to say, this is something that NEVER should have been done, but I have to deal with the current situation if I can find a way to fix things.
My first thought was to recreate the original folder names and put in them a copy of every document that exists in the renamed folders. This way, when the database is used, it won't come up with "document not found" errors due to those documents now existing under a path with just a single folder name changed.
However, there are thousands of documents that would have to be copied, as these are huge folders, and there is no easy way to sort out which were stored with the original folder names in their paths and which were stored with the new names.
I was wondering if there is a way to create a virtual junction of sorts, or a path "alias", where Windows would treat a path containing either the old name or the new name as if they were the same.
Example: R:\Folder\A\filename.docx = R:\#1 Folder\A\filename.docx, with all the documents actually existing only in the new path.
The person who did this created the problem in exactly that way: she renamed four of the already-in-use main data folders by adding #1, #2, #3, #4 to the front of those folder names.
No one even noticed, as no one had tried to look up any of the filed documents until now. And of course, by now there are hundreds if not thousands of documents that have their paths stored as R:\#1 Folder\A\filename.docx, as well as the original thousands that had their paths stored as R:\Folder\A\filename.docx.
I would like to find a way (if one exists) to make Windows treat calls to files at either path as going to the folder as it is named now.
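A directory junction may be exactly the "alias" described above: the old folder name becomes a junction that points at the renamed folder, so both paths resolve to the same files. A minimal sketch using the example paths from earlier (the old name must not still exist as a real folder):
# Create a junction so that R:\Folder transparently redirects to R:\#1 Folder.
New-Item -ItemType Junction -Path "R:\Folder" -Target "R:\#1 Folder"
# The equivalent from an elevated cmd prompt would be: mklink /J "R:\Folder" "R:\#1 Folder"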
I have already contacted the database software company to see if we can mass edit the records in the database to change those paths as that would probably be the most logical solution but all of their software is proprietary. Unless we can get them to do it, it is unlikely that an open-source tool is available to work on their database structure.
I am also open to any other ideas anyone can offer. Keeping two copies of every document in the database just so they exist at the end of both paths would be extremely wasteful of space, but it may be our only alternative if we cannot find another way.
Hi,
I have been asked whether it's possible to set up OneDrive for Business for a company. This company is based at an airfield and has a terrible internet connection; it's shockingly slow.
I have uploaded their documents to OneDrive for Business and set the relevant permissions, but it seems that for them to access the files, users need to log in over the internet, and because of how slow their connection is, that is not workable. I was hoping the OneDrive app in Windows 10 would let me point it at the shared folder on the server, but it won't let me.
So I then thought maybe I could give internal users NTFS share access to the OneDrive folder on the server and just have the remote workers log into OneDrive, so they can all see documents in real time.
Is this possible, or what would be the best way forward?
Thank you
Under Security, the administrator and all other admins show access denied, and the same for all users. If you go to Advanced, where you can take ownership, the owner and all users and admins show full control, with "Include inheritable permissions from this object's parent" checked. All other folders under this parent have full control. I attempted to use icacls * /t /q /c /reset; for 99% of the files, access is denied.
This folder holds a year of data: 3 folders and about 500 files. One can take ownership of an individual file, then grant full control and gain access to that file; the same procedure works on the folders but does not allow access to any of the files within them. I am looking for a workaround to avoid having to perform a five-step process on each file.
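A hedged sketch of doing the same thing in bulk rather than per file, from an elevated prompt (the path is a placeholder; take ownership first, then reset or re-grant):
# Take ownership of the folder, all subfolders and files, answering Yes to prompts.
takeown /F "D:\Data\ProblemFolder" /R /D Y
# Then either reset the ACLs back to inherited defaults...
icacls "D:\Data\ProblemFolder" /reset /T /C /Q
# ...or explicitly grant the admins group full control down the tree.
icacls "D:\Data\ProblemFolder" /grant "BUILTIN\Administrators:(OI)(CI)F" /T /C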
Any help/support suggestions appreciated.
Thanks Bob
The short story: when a user has traverse-only rights to the root of a shared folder but full rights to a subfolder, the user cannot browse to the subfolder (correct) but can open the folder using the explicit path and change files in that subfolder (also correct), whether via mapped drive or UNC path. However, the user cannot add, delete, or copy files to or from that folder when using the mapped drive (incorrect), only via the UNC path. On the other hand, the user can create and delete files in that same folder, accessed via the mapped drive, from a command prompt (inconsistent with the above behavior).
I already know I can add List folder contents at the root of the share to overcome this, but that is a right I am trying to avoid, and I am trying to understand the nuance of file security here.
The long story:
In order to keep curious users from simply browsing to certain key shared application folders on my Windows 2012 R2 server (same thing happens on 2008 R2, though), I do several things:
1. Give Authenticated Users only these rights at the root of the drive hosting the share: Traverse folder / execute file, Read attributes, Read extended attributes, and Read permissions.
2. Create the share as a hidden share. That is, share D:\Apps as Apps$. It is accessible as \\MyServer\Apps$.
3. Give Everyone Change & Read share (not NTFS) permissions. There is no need for Full Control, since I do not want users to be able to change the share permissions.
4. Add group-based enhanced rights to specific application subfolders to which each group needs access. The additive rights are these: Modify, Read & execute, List folder contents, Read, and Write; in short, the basic rights required to create, modify, and delete folder contents (a sketch of the equivalent icacls grants follows below).
5. Map a drive to the Apps$ share for those user groups that require access to it. For example, my login script maps J: to \\MyServer\Apps$ via a non-persistent "net use..." statement.
6. Create a shortcut to the particular application and distribute that to user groups that require it.
The reason for all of this is simple security: these application folders do not house files that users should ever interact with directly; the files are back-end data for which all interaction should be through a front-end application, for example, QuickBooks company files and MS Access front/back ends.
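For anyone wanting to reproduce this, the grants in steps 1 and 4 translate roughly into the following icacls calls (a sketch with placeholder paths and group names; the specific-rights letters are X = traverse/execute, RA = read attributes, REA = read extended attributes, RC = read permissions):
# Step 1: traverse-only style rights for Authenticated Users at the root of the shared tree.
icacls "D:\Apps" /grant "Authenticated Users:(X,RA,REA,RC)"
# Step 4: Modify rights for a specific group on its application subfolder, inherited by subfolders and files.
icacls "D:\Apps\App1" /grant "DOMAIN\App1-Users:(OI)(CI)M"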
All of this ensures that:
1. No curious user or malware can even find the shared folder. The hidden share ($) takes care of that.
2. No over-curious user with just enough knowledge to be aware of the default root administrative hidden shares (C$, D$, etc.) but with too much time on his hands can browse the drive from the root up by navigating to \\MyServer\D$. Traverse-only at the root ensures this.
3. No curious user can browse for the application folders downline from the share. This is important because the drive mapping exposes the share name to the user, and a user could just double-click the drive letter, then browse and copy or delete a back-end data file, accidentally or otherwise.
This all works perfectly:
1. Front end applications can appropriately add/remove data from the files based on the logged-on user's enhanced rights to the particular application folder.
2. The application shortcuts I distribute point directly to the files in the downline folders, for example to J:\App1\MyApp.accdb (an MS Access application), where J: is mapped by the domain login script to \\MyServer\Apps$. The user can open the application and use it normally.
3. Access to the downline folders is arcane enough that no one except a very knowledgeable user would ever realize that browsing must begin, not at the root of the mapped drive or share, but with the specific folder. That is, an end user cannot browse to \\MyServer\Apps$ or the J: drive mapped there, because the user has traverse-only rights there, but the user can get to \\MyServer\Apps$\App1 or J:\App1, where the user has read/write access, so long as the user enters the full path, bypassing the root of the share. I am pretty sure nobody has ever figured that out.
4. In all of this, I have no problem browsing to and managing everything when logged on with a domain admin account.
But there is one thing I do not understand: there is a difference in users' file-management rights, not affecting the ability to modify files, depending on whether they access the folder via UNC path or mapped drive. I have one user now who is a developer, and I created this shortcut for him: J:\HisAppsFolder, so he can push applications into that folder. While he can open files there (e.g. open the Access file there and use it, including changes that modify the file), he cannot copy anything to or from the folder or make changes. On every attempt, he gets a "J: is not accessible" message accompanied by "Access is denied." However, if he instead goes to the same folder using the path \\MyServer\Apps$\HisAppsFolder, he has (correctly) full access.
In fact, he can open a text file already in that folder, make changes, and save the changes. This, and the fact that this all works when using UNC-based access to the same folder make it obvious that he has full rights to the folder. However, he cannot copy anything to or from the folder, even using XCopy. On the other hand, in a command prompt, he can go to J:, then cd to the folder and delete files there via command or create a file by doing something like this: dir > dir.txt.
I have found that only after I add "List folder contents" access to the folder to which I map the drive (or above) can the user fully interact with the downline folder to which he has full rights. But I really do not want to expose the folder list to users at this level, and the user can already, without this right, list and modify files in the downline folder; he just cannot add, copy, or delete from the downline folder when connected via mapped drive. How is it that List folder contents must exist at the root of the share in order for a user to have the full benefit of full rights already set on downline folders? And how is it that the command prompt can completely bypass this restriction?
After spending so much time working out all these details, it would be a shame to walk away without understanding why this particular limitation exists, why it does not exist within the command prompt, and how this all may affect me in the future. I have tested this on multiple domains and with Server 2008 R2 & 2012 R2.