Channel: File Services and Storage forum

File modified timestamp not updating...


Hi,

Our business application writes to a log file every minute, but the Date Modified timestamp does not get updated automatically.

If I refresh the view, the timestamp does not update.

If I right-click on the Date Modified column, the timestamp then updates.

We have already set NtfsDisableLastAccessUpdate to 0, but the timestamp is still not updated.

We have scripted the touch utility to update the timestamp and it works fine, but it updates the timestamp regardless of whether the file content has actually changed.
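For reference, this is roughly the workaround script we have in mind: it only bumps LastWriteTime when the file size has changed since the last check. The paths below are placeholders, not our real names.

# Sketch: only touch the log's LastWriteTime when its size has changed.
# $logPath and $statePath are placeholder paths.
$logPath   = 'D:\App\app.log'
$statePath = 'D:\App\app.log.lastsize'

$log = Get-Item -LiteralPath $logPath
$lastSize = if (Test-Path $statePath) { [int64](Get-Content $statePath) } else { -1 }

if ($log.Length -ne $lastSize) {
    # The content grew (or was truncated), so refresh the modified timestamp.
    $log.LastWriteTime = Get-Date
    Set-Content -LiteralPath $statePath -Value $log.Length
}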

Any advice on how to fix this issue?


Karuna


File copying is very slow over network


Data copying over the network is very slow (~200 KB/s) from one particular server running Server 2008 to desktops and other machines on the same network; in fact the systems are connected to the same switch.

Can somebody help me with things to check? I have tried updating and reinstalling the network card driver and changing various settings in the network card properties.
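A few of the other things I was planning to check from an elevated prompt, in case it helps narrow this down; these are generic Server 2008-era TCP tuning checks, not a confirmed fix:

# Show current TCP global settings (auto-tuning, chimney offload, RSS).
netsh int tcp show global

# As a test, disable receive window auto-tuning and TCP chimney offload,
# then retry the copy; revert the settings if throughput does not change.
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global chimney=disabled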

NTFS Permissions


I want to configure permissions on a folder so that users can't delete files. However, if I clear the Delete permission, users can no longer rename files or use Save As into this folder. Is there any way to stop users from deleting files only? Thanks.
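For what it's worth, this is the kind of granular grant I have been experimenting with (D:\Data is a placeholder path). Note that renaming a file still requires the Delete right on that file, which seems to be the core of the problem:

# Grant read/execute plus create/write rights, but not Delete, inheriting to files and subfolders.
# Specific rights used: RD/REA/RA/X/RC = read data, read EA, read attributes, execute, read permissions;
# WD/AD/WA/WEA = create files, create folders, write attributes, write EA.
icacls "D:\Data" /grant "Users:(OI)(CI)(RD,REA,RA,X,RC,WD,AD,WA,WEA)"

# Review the resulting ACL.
icacls "D:\Data"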

Connecting to UNC path with specific hostname failing when FQDN / IP address works

I first noticed this issue in https://social.technet.microsoft.com/Forums/en-US/cb4ccc69-d153-4781-9984-fb227d887baf/dcdiag-errors-failing-advertising-and-more-tests-appear-to-be-due-to-flatname-issues?forum=winserverDS



I'm using a server called ACMEDC1, trying to connect to an SMB share on ACMESRV1, which I've called "testshare".

Connecting to \\x.x.x.x\testshare (IP address) works 

Connecting to \\ACMESRV1.acme.local\testshare works

Connecting to \\ACMESRV1\testshare fails with the error "Windows can't find '\\ACMEDC2\FILESHARE'.  Check the spelling and try again."

I can connect to many other SMB shares (not all) from ACMEDC1 using the hostname, and most other clients have no issues connecting to ACMESRV1 SMB shares via the hostname.

From ACMEDC1, I can do a lookup of ACMEDC2 with nbtstat - it will resolve to an IP address and then it appears in the cache (so "nbtstat -a ACMEDC2" and "nbtstat -c" work).  Likewise, netlookup works, and so does nmblookup (or whichever tool it was that does WINS lookups)

I've tried using Message Analyzer and I've found that no SMB packets leave the PC when using the failing "\\ACMESRV1\testshare", so it looks like something to do with the SMB client itself. When I try to connect to the share, nothing ordinarily appears in the SMB client logs. However, if I hammer it (i.e. press Enter around 20 times), I run into a few of these:



Log Name:      Microsoft-Windows-SmbClient/Connectivity
Source:        Microsoft-Windows-SMBClient
Date:          28/10/2015 12:44:00 p.m.
Event ID:      30800
Task Category: None
Level:         Error
Keywords:      (64)
User:          SYSTEM
Computer:      ACMEDC1.ACME.LOCAL
Description:
The server name cannot be resolved.

Error: The object was not found.

Server name: ACMESRV1.

Guidance:

The client cannot resolve the server address in DNS or WINS. This issue often manifests immediately after joining a computer to the domain, when the client's DNS registration may not yet have propagated to all DNS servers. You should also expect this event at system startup on a DNS server (such as a domain controller) that points to itself for the primary DNS. You should validate the DNS client settings on this computer using IPCONFIG /ALL and NSLOOKUP.





The server name in the above log has also appeared as "ACMESRV1.A", "ACMESRV1.AC", "ACMESRV1.ACM"... none of which I've entered myself. So the server name above is the hostname followed by a trailing period, which is presumably why it isn't resolving. I'm not even sure whether that is the actual cause, or simply a result of my attempts to investigate; when trying to replicate the error, I'm not able to reproduce it by hammering the Enter key again.

If I intentionally put in a wrong server name (such as \\ACMESRV1234234), it immediately creates one of the 30800 events like the one above.





If I use Process Monitor, I get a number of TCP Accept / TCP Receive entries, despite the fact that I can't pick anything up in Message Analyzer:

TCP Accept ACMEDC1.acme.local.microsoft-ds -> ACMESRV1.acme.local:51096 SUCCESS

TCP Receive ACMEDC1.acme.local.microsoft-ds -> ACMESRV1.acme.local:51096 SUCCESS

TCP Send ACMEDC1.acme.local.microsoft-ds -> ACMESRV1.acme.local:51096 SUCCESS



I then get four CreateFile events for EXPLORER.EXE:

\\ACMESRV1\TESTSHARE       DISCONNECTED

\\ACMESRV1\PIPE\SRVSVC  DISCONNECTED

\\ACMESRV1\PIPE\SRVSVC  DISCONNECTED

\\;RdpDR\;:2\ACMEDSRV1\PIPE\SRVSVC  BAD NETWORK NAME





Initially I thought this could have been the other DNS server's fault (ACMEDC2, which ACMEDC1 is using for lookups), but the DNS lookups are fine when required (normally it seems to resolve from the cache).
I think the key problem is this one:

\\;RdpDR\;:2\ACMEDSRV1\PIPE\SRVSVC  BAD NETWORK NAME

From searching, it appears that this refers to the RDP device redirection provider or something to that effect. I have no idea why it comes up at all. The network providers are the same on both ACMEDC1 and ACMEDC2, so it's not a configuration issue there. I just don't get it, and I'm not sure where to go next.
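In case it helps anyone reproduce this, here is the short checklist I have been running on ACMEDC1 to compare the working and failing name forms; nothing here is a fix, just the diagnostics I'm using (Z: is simply a free drive letter, and Get-SmbConnection needs Windows 8 / Server 2012 or later):

# Compare DNS resolution of the short name vs. the FQDN (the suffix search list should cover acme.local).
Resolve-DnsName ACMESRV1
Resolve-DnsName ACMESRV1.acme.local

# Check the NetBIOS cache and a direct NetBIOS lookup.
nbtstat -a ACMESRV1
nbtstat -c

# Bypass Explorer entirely and let the SMB redirector map the share itself.
net use Z: \\ACMESRV1\testshare
net use Z: /delete

# Dump the SMB client's view of current connections.
Get-SmbConnection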

check consistency of deduplicated drive


Hi,

I decreased the size of a deduplicated drive (Server 2012) of a virtual machine via VMware Converter. Now I want to be sure that everything is fine.

Is there a tool to check whether the deduplicated files are OK? I ran chkdsk and Get-DedupStatus, and both report no problems. I'm also able to copy a large file to another drive.

But I'm not sure whether all files can still find their deduplicated chunks. Is chkdsk the right way to check the consistency of a deduplicated volume?
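What I was considering running next is a full deduplication scrubbing job, which as far as I understand validates the data against the chunk store; this is only a sketch, with D: standing in for my data volume:

# Run a full integrity scrub of the dedup store on D: and watch its progress.
Start-DedupJob -Volume D: -Type Scrubbing -Full
Get-DedupJob

# Afterwards, review the chunk store metadata and check the deduplication event channels.
Get-DedupMetadata -Volume D:
Get-WinEvent -ListLog *Deduplication*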

Thanks! Wolfgang


Unexplained loss of capacity


Good day, 

I am using Windows Server Standard Service Pack 2, 32-bit, and I am seeing a significant discrepancy between the amount of free space the operating system reports as available and the amount I expect based on disk usage.

Windows reports my D: drive as 140 GB total but lists only 10.9 GB free. Measuring the actual content of the D: drive tells me that only ~80 GB is used. That should leave about 60 GB free, yet only 11 GB is listed, which means roughly 49 GB has evaporated!

Granted, there will always be a few GB here and there taken up by the overhead of formatting, indexing and so on, but this loss of capacity seems unreasonable to me.

I have attempted the following approaches with little to no success:

1- I have changed the folder settings to show all hidden files and measured disk usage with WinDirStat: still only 80 GB used.

2- I have monitored the disk space used by shadow copies, and only 8 GB is being used across all drives on the server.

3- I have emptied the recycle bin (not that it should matter for the D: drive).

This is a network-mapped data drive; the OS is not installed here, so hidden system files are unlikely to be the source of the issue.
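For completeness, this is how I am comparing the numbers on the server itself, assuming PowerShell is available there (the 49 GB figure is just 140 - 80 - 11):

# Free and used space as the OS sees it on D:.
Get-PSDrive D | Select-Object Used, Free

# Total size of all files on the volume, including hidden and system files, in GB.
(Get-ChildItem D:\ -Recurse -Force -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum).Sum / 1GB

# Space reserved for shadow copies associated with D: (run from an elevated prompt).
vssadmin list shadowstorage /For=D: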

Any help would be appreciated.

DFS Doesn't Work For Disaster Test


We are using Windows Server 2012 Domain Controllers running a domain-based DFS namespace. We have six domain controllers at four sites. Our disaster site has one domain controller, and it is one of three DFS namespace servers.

When we run a disaster test, we disconnect the disaster site from the rest of our network, so the disaster site DC cannot connect to any other DCs in the domain. When we try to open DFS management on the disaster site DC, we get the error:

   “The namespace server cannot be queried. The RPC server is unavailable."

Many of our programs use DFS to map file shares and will not work without it. Is there a way to make our domain-based DFS namespace work in this type of test scenario? We tried seizing the FSMO roles, but that did not help either.
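For reference, this is roughly how we check which namespace servers the isolated DC thinks it can use during the test; \\acme.local\Files is a placeholder for our real namespace path, and this only shows the referral state rather than fixing it:

# Show the domain-based namespace root and its registered root targets.
dfsutil root \\acme.local\Files

# Show the referral and trusted-domain caches the isolated DC is actually working from.
dfsutil cache referral
dfsutil cache domain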

Thank you.


Encrypt unmounted volume (vhdx) with Bitlocker? (Offsite-Backup Purpose)


To increase our data redundancy we are planning to extend our current backup policy with an offsite backup stored in the cloud.

We have therefore acquired an account with a cloud hoster that supports block-level synchronization. The goal is to synchronize an encrypted VHDX with that hoster; with the backup being incremental and the client supporting block-level synchronization, the daily upload should not be an issue.

However, we have problems encrypting the file using BitLocker:

- The VHDX file is placed on a certain VM, which uses our local storage pool.
- This VM takes care of synchronizing the VHDX file with the cloud.
- This VM exposes the VHDX as an iSCSI disk to our file server.
- The file server uses Windows Server Backup with a hard disk (also on the pool) that is dedicated to backups, and we want to add a second backup target: the VHDX connected through iSCSI.

If we mount the iSCSI disk on the file server and give it a drive letter, we can easily enable BitLocker. However, we then cannot use it as a second backup target, because that only works with disks dedicated to backups.

If we add the iSCSI target as a dedicated backup disk, Windows Server Backup formats the volume, which removes BitLocker.

We have tried to enable BitLocker afterwards by using the volume GUID instead of the (non-existent) drive letter, but BitLocker refuses to work that way:

manage-bde -Status \\?\Volume{9a9330de-c326-11e3-80c1-aaaaaa007409}\


yields the error message

BitLocker Drive Encryption: Configuration Tool version 6.3.9600
Copyright (C) 2013 Microsoft Corporation. All rights reserved.
ERROR: The volume
\\?\Volume{9a9330de-c326-11e3-80c1-aaaaaa007409}\ could not be opened by BitLocker.
This may be because the volume does not exist, or because it is not a valid
BitLocker volume.



So currently, what we can (theoretically) do to achieve our goal is:

- Mount the iSCSI disk on the file server, assign it a letter, and encrypt it (so this container is synchronized to the cloud in encrypted form).
- Create another VHDX inside the disk located on the iSCSI target, and connect that inner disk as a disk dedicated to backups.

Windows Server Backup would then write unencrypted data to the inner VHDX; that data is written encrypted to the iSCSI target (the outer VHDX), which is then synchronized with the cloud.

While this could work, isn't there a better way to do it? For example, telling Windows Server Backup to write encrypted data, or encrypting the *content* of the virtual disk on the iSCSI target rather than on the iSCSI initiator?
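One variation we have not fully tested yet: temporarily mount the volume by GUID to a spare drive letter just long enough to turn BitLocker on, then remove the letter again. The GUID below is the one from the error message and X: is an arbitrary free letter; whether Windows Server Backup will still accept the volume afterwards is exactly what we are unsure about.

# Give the volume a temporary drive letter so manage-bde can address it.
mountvol X: "\\?\Volume{9a9330de-c326-11e3-80c1-aaaaaa007409}\"

# Enable BitLocker with a recovery password protector, then check progress.
manage-bde -on X: -RecoveryPassword
manage-bde -status X:

# Remove the temporary drive letter again once encryption is under way.
mountvol X: /D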

Bypass traverse checking


Is bypass traverse checking enabled by default for domain users accessing files on a file server?

What server OS did this start on?
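A quick way to see whether a given account actually holds the underlying user right (SeChangeNotifyPrivilege) on a file server is to check its token and the local security policy, for example:

# Run in a session for the account in question on the file server.
# Bypass traverse checking corresponds to the SeChangeNotifyPrivilege user right.
whoami /priv | findstr /i SeChangeNotifyPrivilege

# Or export the local security policy and look at the assignment
# (C:\temp is a placeholder folder that must already exist).
secedit /export /cfg C:\temp\secpol.inf
findstr /i SeChangeNotifyPrivilege C:\temp\secpol.inf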

Can I set up a domain PC as a file server?


hi,

I have a domain controller running Server 2012, and now I want to make a domain PC running Windows 7 a file server. Is it possible to add that Windows 7 PC as a server to my server pool in Server Manager, or must I use a PC running Windows Server for file serving?
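As far as I know the Server Manager server pool only accepts machines running Windows Server, but if sharing from the Windows 7 machine itself is acceptable, a share can be created locally without Server Manager; the path and group below are only examples:

# Create a share on the Windows 7 machine from an elevated prompt.
net share Data=C:\Data /GRANT:"Domain Users",CHANGE /REMARK:"Team data"

# List the existing shares to confirm.
net share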

Thanks. 

File server path not accessible


Dear All,

I have configured a file server, with the path provided by a Windows 2012 failover cluster, and it has been running fine for almost 2 years. But for the last few days the file server path has not been accessible from Windows XP machines, while Windows 7 and later versions work fine. From Windows XP the file server path can be accessed by IP address but not by hostname.

The error is: "computer is not accessible. You might not have permission to use this network resource".
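Since the IP path works but the hostname path does not, the things I would compare on the 2012 cluster nodes are the SMB1/signing settings (Windows XP only speaks SMB1) and the DNS/Kerberos registration of the clustered file server name; FILECLUSTER below is a placeholder for the client access point name:

# On each 2012 cluster node: is SMB1 still enabled, and is signing being forced?
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol, RequireSecuritySignature

# Check the DNS record and the SPNs registered for the clustered file server name.
Resolve-DnsName FILECLUSTER
setspn -L FILECLUSTER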

DFS ROOT


We have two domain controllers, both running Windows Server 2008 R2.

I want to create a domain-based DFS root for fault tolerance. When I create the DFS root on one server, DFS works, but when that server is down clients can't access the DFS root. How can I solve this issue?

What I need is for the second server to provide the DFS root namespace when the first server fails.
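What is needed, as far as I understand it, is a second namespace server (root target) for the same domain-based root. A sketch using the DFSN PowerShell module, which requires a Windows 8 / Server 2012 or later machine with the DFS Management RSAT tools (the namespace and server names below are placeholders):

# Add the second server as an additional namespace server for the existing domain-based root.
New-DfsnRootTarget -Path "\\domain.local\Root" -TargetPath "\\SERVER2\Root"

# Confirm that both namespace servers are now registered as root targets.
Get-DfsnRootTarget -Path "\\domain.local\Root"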


Unauthorized network


Hi,

I have a Windows Server 2012 R2 member server. Sometimes the network shows as an unidentified/unauthenticated network; after a restart it shows as the domain network again. Please help with this.
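This usually points at Network Location Awareness classifying the adapter before a domain controller is reachable. Rather than rebooting, something like the following is worth trying when it happens; it is a workaround sketch, not a root-cause fix:

# See how the adapter is currently categorised (Public / Private / DomainAuthenticated).
Get-NetConnectionProfile

# Force Network Location Awareness to re-detect the network without a reboot.
Restart-Service NlaSvc -Force

# Check the category again after a few seconds.
Get-NetConnectionProfile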

Large number of open file handles on file server (Server 2012 R2) after user's RDS session has long been terminated through log off


Hi,

We have an RDS farm with roaming profiles (not profile disks) consisting of 6 terminal servers ("srvts001" to "srv006", 2012 R2), 2 domain controllers ("srvdc01", "srvdc02", 2012 R2) and 1 SQL/file server ("srvsql01", 2012 R2).

Roaming profiles are stored on srvsql01 in the share RDS. Oplocks are disabled and directory caching is disabled.

Everything ran without problems for 1 or 2 years. Recently (about 2 weeks ago) users began complaining about being logged on with a temporary profile. Investigation showed a lot of open file handles from previous (no longer existing) sessions on the file server.

E.g. user1 logged on to RDS one day and logged off in the evening (not just disconnected, really logged off), and when he logs on to the RDS farm the next day he gets a temporary profile because the profile service cannot access the roaming profile, which is (per the event log) "in use by another process".

Looking at the file server's open files, we can see that a lot of files of that user's profile (and of other users, too!) are shown as open, all with the read option and no apparent locks.

When user1 logs off, these handles stay! Manually closing the open files on the file server allows the user to log on to the RDS servers normally. Subsequent log-offs may or may not create those stale open file handles again.
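For reference, this is roughly how we are clearing the stale handles by hand on srvsql01 at the moment (user1 and the \RDS\ path filter are examples); it works, but it is obviously only a workaround:

# List open files on the profile share that belong to a given user.
Get-SmbOpenFile | Where-Object { $_.ClientUserName -like "*user1" -and $_.Path -like "*\RDS\*" } |
    Select-Object ClientUserName, ClientComputerName, Path

# Force-close those handles so the profile service can take over the profile again.
Get-SmbOpenFile | Where-Object { $_.ClientUserName -like "*user1" -and $_.Path -like "*\RDS\*" } |
    Close-SmbOpenFile -Force -Confirm:$false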

We have no clue as to when these enormous numbers of open file handles (we are talking about hundreds of files per user) appear. It seems to happen at random and does not hit every user.

Has anyone ever seen a similar problem? Or at least an idea of how to prevent logged-off users from still having open files on the file server?

Any answer pointing in the direction of a possible solution or source of this problem is greatly appreciated!

Share Permissions and NTFS security permissions- inheritance


Hello,

I am new to Windows Server 2012 and am trying to set up a small network for a non-profit charity. Basically I have a book and I am trying to teach myself. I set a folder share permission and gave "Everyone" read/write permissions. I then went to the Security tab to set the NTFS permissions and fine-tune things. The "Everyone" group appears here as well. I cannot delete it; the message says inheritance is enabled. I understand that the share permissions would be inherited by any files within the original folder. I do not understand why the NTFS security permissions are being inherited from the share permissions; I thought the two were completely different entities. Can anyone give me any insight into the mistake I am making?
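In case it helps to see where that "Everyone" NTFS entry is actually coming from, here is the kind of check that can be run against the folder (D:\CharityShare is a placeholder); entries flagged (I) are inherited from the parent folder's NTFS ACL rather than from the share permissions:

# Show the NTFS ACL; entries marked (I) are inherited from the parent folder.
icacls "D:\CharityShare"

# Stop inheriting and copy the current entries as explicit ones, so they can then be edited.
icacls "D:\CharityShare" /inheritance:d

# Remove the Everyone entry once inheritance is disabled, if that is what is wanted.
icacls "D:\CharityShare" /remove "Everyone"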

Thank you in advance,

Andrea


Issue with accessing a shared folder


Hi All

I have a folder named Share with the share permission Full Control for Everyone.

This folder contains another folder called Picture, which inherits its share permissions from the Share folder.

I have a global security group called ggPicture, which has another global security group called ggMem as a member.

Modify NTFS permission has been given to ggPicture.

When a user who is a member of ggMem tries to open the Picture folder, they get the error message "you don't have permission to open this folder". I also checked the effective permissions for this user, and he has Modify access to the folder in question.

One thing I noticed: we have two networks, and the user who was not able to access the folder was on the other network; he can access other folders in the same directory.

But when I give one of the ggMem members individual access to the Picture folder, they can open and modify it.
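The checks I would run next, to confirm whether the nested membership is actually ending up in the user's access token (group and user names as above):

# On an admin box with the AD module: confirm the nesting resolves down to the user.
Get-ADGroupMember ggPicture -Recursive | Select-Object SamAccountName

# In the affected user's own session: is ggPicture present in the access token?
# (Group membership changes only appear after a fresh logon.)
whoami /groups | findstr /i ggPicture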

Why is this happening?

regards 


Extending existing CSV (Cluster Shared Volume) with Hyper-V production VMs

We have a Dell MD3600i storage array and want to add 4 new disks to it. The array is configured with RAID 6. We have 2 Hyper-V hosts using this storage array with a Cluster Shared Volume configured. The question is whether we can safely extend the RAID 6 group with these 4 new disks and then extend the CSV while Hyper-V VMs are running/stored on it. Can we do it online, or should we, for example, migrate all VMs first?
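Assuming the underlying LUN can be grown online on the MD3600i side and the hosts are Server 2012 or later, the Windows side of the extension would look roughly like this on one of the cluster nodes; the disk and partition numbers are examples, and this is a sketch rather than a tested procedure:

# Make the cluster node notice the larger LUN.
Update-HostStorageCache

# Identify the CSV partition (disk/partition numbers below are examples).
Get-Partition | Where-Object { $_.AccessPaths -like "*ClusterStorage*" }
$max = (Get-PartitionSupportedSize -DiskNumber 3 -PartitionNumber 2).SizeMax

# Grow the partition to its maximum supported size; the CSV can stay online during this.
Resize-Partition -DiskNumber 3 -PartitionNumber 2 -Size $max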

Export Windows Server 2008 R2 FSRM quotas to new volume

Good evening. Is it possible to export FSRM quotas to a new volume? We have a file share in our company, but free space on the volume used for the file share has nearly run out. That's why we decided to copy the existing file share to a new volume on the same server, which has enough space for future needs. We tried it on some test folders and successfully exported the share and NTFS settings, but how do we export the quotas too? It would take a lot of time to create the quotas (over 500 of them) manually again, which is why I want to ask for help. Maybe someone has already done this before? Every answer I found was only about exporting templates, not the quotas themselves.

I know that there is an SRM folder in System Volume Information, which has the quota and quota.md files in it. These files are used for storing quota settings, if I'm right. If I just copy them to the new volume and then move the drive letter over, will it work? Thanks in advance. :)
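In case a supported route is preferable to copying files out of System Volume Information, the approach I would sketch is a scripted re-creation with dirquota.exe (the FSRM command-line tool on 2008 R2); the paths and template name here are illustrative only, and the exact switches should be checked against dirquota's built-in help:

# Dump all existing quotas (with their limits and source templates) for reference.
dirquota quota list > C:\temp\old-quotas.txt

# Re-create each quota on the new volume, e.g. from the same template.
dirquota quota add /path:E:\Shares\UserA /sourcetemplate:"200 MB Limit Reports to User"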

srv2.sys+0x27400 causing 100% CPU usage

Hi All

We're experiencing some seriously high CPU usage on our file server, which stores our roaming profile data and a few other shared drives on the LAN. When the issue isn't happening, the CPU hits a maximum of 20% but normally sits at 5%. I ran Process Explorer on the machine and saw that the System process was the culprit; looking into its threads, I saw that it is srv2.sys+0x27400 causing this spike, and there are multiple threads of that name running at the same time. I have applied the hotfixes mentioned in some of the forums, but that hasn't fixed the issue. The OS is Server 2008 R2 with SP1 installed.
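Since srv2.sys is the SMB2 server driver, the next thing I plan to capture while the spike is happening is which clients and files are being hit; on 2008 R2 the built-in tools are about all there is, so this is just a diagnostic sketch (C:\temp is a placeholder):

# Which clients currently have SMB sessions open against this server.
net session

# Which files are open over the network, with the accessing user, in CSV form for later comparison.
openfiles /query /v /fo csv > C:\temp\openfiles.csv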

Storage spaces virtual disk not showing physical disk members


I have an SOFS cluster with two servers and two volumes on Server 2012 R2. One of the two volumes does not show any physical disks associated with it via either PowerShell or Server Manager, although its status shows as healthy and it seems to be working fine in all regards.

This volume shows what I would expect:

PS C:\> get-virtualdisk -friendlyname "volume 1" | get-physicaldisk

FriendlyName        CanPool             OperationalStatus   HealthStatus        Usage                              Size
------------        -------             -----------------   ------------        -----                              ----
PhysicalDisk10      False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk2       False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk3       False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk16      False               OK                  Healthy             Auto-Select                     1.82 TB
PhysicalDisk7       False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk14      False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk1       False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk6       False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk11      False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk9       False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk13      False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk4       False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk12      False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk8       False               OK                  Healthy             Auto-Select                   930.75 GB
PhysicalDisk5       False               OK                  Healthy             Auto-Select                   930.75 GB

PS C:\>

But this one, not so. It should be pretty much the same drives as above:

PS C:\> get-virtualdisk -friendlyname "volume 2" | get-physicaldisk
PS C:\>

However, this is interesting, as the association seems to be known from the other direction:

PS C:\> get-physicaldisk physicaldisk1 | get-virtualdisk

FriendlyName        ResiliencySettingNa OperationalStatus   HealthStatus        IsManualAttach                     Size
                    me
------------        ------------------- -----------------   ------------        --------------                     ----
Volume 2            Mirror              OK                  Healthy             True                               4 TB
Quorum Disk         Mirror              OK                  Healthy             True                               4 GB
Volume 1            Mirror              OK                  Healthy             True                             2.5 TB

PS C:\>

I don't know if this is just a problem with the tools, and I don't know how to diagnose it further. I don't see any event messages that seem relevant to this. I have tried rebooting each SOFS cluster node (one at a time) with no change. The cluster and volumes failed over and continued to work across the reboots as expected.
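One more query I have been meaning to try: walking the association through the storage pool, and, if Volume 2 happens to be a tiered virtual disk, through its storage tiers (I am not certain the tier pipeline binds on 2012 R2, so treat this as a sketch):

PS C:\> get-virtualdisk -friendlyname "volume 2" | get-storagepool | get-physicaldisk

PS C:\> get-virtualdisk -friendlyname "volume 2" | get-storagetier | get-physicaldisk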

Any ideas?
