Channel: File Services and Storage forum
Viewing all 10672 articles

How to read the NTFS $Volume info?


How do you read the $Volume file? Is a tool such as a hex editor required?

We're running Windows Server 2003 Standard. The server is attached to an HP SAN, and we have 22 volumes. I'm trying to determine which volume this error applies to by mapping the GUID to the volume name.

From the link below I've learned that \\?\Volume{627673ea-2d99-11e5-a486-001635c41328} is the volume GUID, and that the $Volume metadata file should reveal the volume label, i.e. "Volume 7".

http://blogs.technet.com/b/askcore/archive/2009/12/30/ntfs-metafiles.aspx

Event Type:    Error
Event Source:    Ntfs
Event Category:    Disk
Event ID:    55
Date:        11/2/2015
Time:        10:06:42 AM
User:        N/A
Computer:    Beta
Description:
The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume \\?\Volume{627673ea-2d99-11e5-a486-001635c41328}.
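Before resorting to a hex editor on $Volume, note that running mountvol.exe with no arguments already prints every volume GUID together with its mount points, which may be enough to map the GUID in the event above to a drive letter. A minimal Python sketch of parsing that kind of listing (the sample text below is hypothetical, and the exact mountvol layout can vary by OS version):

```python
import re

def parse_mountvol(output):
    """Map each volume GUID path to the mount points listed under it."""
    mapping = {}
    current = None
    for line in output.splitlines():
        line = line.strip()
        m = re.match(r'(\\\\\?\\Volume\{[0-9a-fA-F-]+\}\\)', line)
        if m:
            current = m.group(1)
            mapping[current] = []
        elif current and line and not line.startswith('***'):
            # A drive letter or folder mount point for the current volume.
            mapping[current].append(line)
    return mapping

# Hypothetical sample of mountvol-style output, for illustration only:
sample = r"""
\\?\Volume{627673ea-2d99-11e5-a486-001635c41328}\
    E:\

\\?\Volume{11111111-2222-3333-4444-555555555555}\
    *** NO MOUNT POINTS ***
""".strip()

if __name__ == "__main__":
    for guid, points in parse_mountvol(sample).items():
        print(guid, "->", points or ["(no mount point)"])
```

If the GUID from Event 55 shows up with no mount point, the volume is likely one of the SAN LUNs mounted without a drive letter.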


Resolving DFS Issues


Hi,

I'm trying to get a set of data currently spread across an array of servers onto a single, new Server 2012 file server, a VM on a hyper-converged system, with the intention of then turning off all the old servers. I've created the server, installed DFS and added the appropriate shares to the new server. The shares have mostly worked, but there are a lot of bugs to iron out, and unfortunately in some cases the initial replication stage never completed, so some are more befuddled than others.

The real trouble is that the data is spread across an array of legacy servers, so there is no single trusted, most up-to-date copy that could simply be copied to the new server before the old ones are turned off. DFS has to be made to work if there is to be any confidence in the integrity of the data.

To that end I've started by running a health check against one DFS share (probably the most important one) and intend to troubleshoot my way out. I've solved a few issues already, but I'm hitting my head against the wall trying to understand what to do with the following error, which I see on all the member servers APART from the new one:

The DFS Replication service detected invalid msDFSR-Subscriber object data while polling for configuration information. Additional information includes Object DN: CN=6529a4fe-b847-4070-89f1-02a843074acd,CN=DFSR-LocalSettings,CN=PPSERVER10,OU=Member Servers,DC=domain,DC=net Attribute Name: msDFSR-MemberReference and domain controller: PPSERVER30.domain.net. Event ID: 6002

There are 3 servers; on 2 of them the error is "invalid msDFSR-Subscriber" while on the third it is an "invalid msDFSR-Member" object. Is that significant?

I'd be very grateful if someone might be able to point me in the direction of where things are going wrong.  Many thanks.  Servers are:

  1. Server 2003 x2
  2. Server 2008 x1
  3. Server 2012 x1

Regards,

Robert

File permissions issues after moving files (2008 R2)

We have a file server (Windows 2008 R2) that our Windows 7 machines access for their files. Each office has a folder that accumulates scanned documents; the office then moves each document to the department that should process it. For example, Office1 scans to the folder Office1scan and then moves the document to the Operations folder. Although we have the folders set to inherit permissions, the documents are not viewable by the receiving department. We have tried various settings, but nothing works.

Rebecca Palmer


Mapping a drive to Linux: "not enough server storage to complete this operation"


We are running a script that maps a drive to a Linux share and then copies files.

Lately, every week or two we end up unable to map the drive due to "not enough server storage to complete this operation".

We are still able to map drives to other servers, and everything else runs with no issue.

The only way to resolve it seems to be restarting the server; after that, mapping to the Linux share works again.

Anyone else see this before, and a possible solution?

Thanks.

Frequent iSCSI errors


Hi. On a server running Windows Storage Server 2008 Standard I have iSCSI Target v3.2 installed; the Exchange databases live on it. On another server, running Windows 2003 R2, I have iSCSI Initiator v2.8 and Exchange Server installed. When transferring mailboxes from one database to another, the iSCSI connection sometimes drops, and sometimes it doesn't; in other words, the connection is unstable. In the Windows 2008 logs I see this error:

ISCSI Target has received an invalid digest data from the initiator iqn.1991-05.com.microsoft:<initiator name>.

The network adapter on the initiator server is a Broadcom NetXtreme Gigabit Ethernet, and Flow Control is disabled. What could be the problem?
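For context on the message itself: when header or data digests are negotiated, iSCSI protects each PDU with a CRC-32C checksum, so "invalid digest" means the checksum the target computed did not match what the initiator sent, i.e. the PDU was corrupted in transit. That usually points at the NIC, its offload features, or cabling rather than at Exchange. A bit-by-bit sketch of CRC-32C (illustrative only; real stacks use table-driven or hardware implementations):

```python
def crc32c(data: bytes) -> int:
    """CRC-32C (Castagnoli polynomial), the checksum iSCSI digests use."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reflected form: shift right, XOR in the polynomial if the low bit was set.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for this polynomial:
print(hex(crc32c(b"123456789")))  # 0xe3069283
```

Since the checksum is computed independently on both ends, persistent mismatches under load are a strong sign of data corruption in the path; disabling checksum/offload features on the Broadcom adapter is a common first experiment.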

Use DFS to centralize Backup Branch Office files


We have 150 remote offices plus 1 central office. Users save documents (not profiles) on their office's server, and they also have PSTs on these servers. We want to create 150 DFS replication groups (point to point) and would like to use 2 or 3 servers in the central office to centralize the documents. For example:

CentralServer1: 50 DFS Replication Groups. Each Replication group contains CentralServer1 plus one Remote Office Server

CentralServer2: 50 DFS Replication Groups. Each Replication group contains CentralServer2 plus one Remote Office Server

CentralServer3: 50 DFS Replication Groups. Each Replication group contains CentralServer3 plus one Remote Office Server

Afterwards, we would use HP Data Protector to back up these 3 central-site servers (3x50 directories).

I'd like to know about possible limitations, issues, etc. with this infrastructure.

DFSR replication database recovery


We have three volumes participating in DFSR that suffered unexpected shutdowns (event 2212). Two of the volumes have taken days to recover (as described by events 2218 and 2220).

I was wondering if there was a way to monitor in real-time or near real-time the progress of the recovery? It is a bit nerve-racking not knowing what is going on or if it is making progress. 

Also, is there a way to prioritize or expedite the repairs? The biggest volume is also the most critical and I would prefer to allocate more resources to that volume than the others.

Thanks

Derek

 

Cannot defragment/optimize C drive on Server 2012


I have a Server 2012 server that I manage, and I cannot get the C drive to defragment/optimize. I run the GUI and it runs for a few seconds, then says "Needs Optimization (26% fragmented)". I have tried running it from the command line with every switch I could find and it still will not complete. There are no errors in the event viewer (System or Application), and I am at a loss as to why this is happening.

The C drive is on a RAID 1 array of 2x 300GB 10K SAS drives in a Dell PowerEdge M520 (inside a Dell PE VRTX chassis). When I first set the server up a year ago, the defragmenter/optimizer ran fine, but for the last couple of months it runs, then stops without defragmenting/optimizing. I can optimize the D drive on the same array with no issues, just not the C drive. I also do not get any errors in the VRTX's control panel, and the drives report as operating normally.

Here is what I get when I run defrag c: -a -v

C:\Windows\System32>defrag c: -a -v
Microsoft Drive Optimizer
Copyright (c) 2013 Microsoft Corp.

Invoking analysis on OS (C:)...


The operation completed successfully.

Post Defragmentation Report:

        Volume Information:
                Volume size                 = 99.65 GB
                Cluster size                = 4 KB
                Used space                  = 31.02 GB
                Free space                  = 68.62 GB

        Fragmentation:
                Total fragmented space      = 26%
                Average fragments per file  = 1.00

                Movable files and folders   = 98048
                Unmovable files and folders = 48

        Files:
                Fragmented files            = 60
                Total file fragments        = 330

        Folders:
                Total folders               = 7052
                Fragmented folders          = 0
                Total folder fragments      = 0

        Free space:
                Free space count            = 3873
                Average free space size     = 18.13 MB
                Largest free space size     = 34.57 GB

        Master File Table (MFT):
                MFT size                    = 179.75 MB
                MFT record count            = 184063
                MFT usage                   = 100%
                Total MFT fragments         = 1

        Note: File fragments larger than 64MB are not included in the fragmentation statistics.

        It is recommended that you defragment this volume.

Any help would be greatly appreciated!!!




Device Manager took the DVD/CD drive (usually labelled E:) off my computer

The DVD/CD drive in my Windows 10 OS was removed from Device Manager and is now hidden. I had been playing a DVD intermittently; it was playing, then gone. It took me many looks at Device Manager to realize it was gone, and four more hours to realize it was now hidden. You have to be a tech to find anything on TechNet, and my familiar MMC library is now gone too in Windows 10. Why did they take the answer to so many problems off my computer? I do not like using command prompts or any of that other stuff without a good legend to follow.

2012 R2 iSCSI target to RHEL 5 or 6 initiator issue


I've created an iSCSI target on my Windows 2012 R2 server and my RHEL 5/6 servers can see the LUN, but I need the stanza, or storage type definition, on the Linux side before it will recognize the LUN properly.

From my Linux admin:

We have multiple sessions per initiator, so we end up seeing multiple devices. I've never seen a storage vendor not have a recommended stanza; I just can't find it on Microsoft's site. It is only 5 or 10 lines, but I can't use this properly in a production setting until I have it.

Currently, since we don't have a stanza for storage of this identity, it defaults to ALUA, which returns an error as not supported on the storage side.
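Not an official Microsoft answer, but for what it's worth: the "stanza" in question is normally a device block in /etc/multipath.conf, keyed on the vendor/product strings the target reports in its SCSI inquiry data; the Microsoft iSCSI target typically reports vendor "MSFT" and product "Virtual HD". A hypothetical starting point (every policy value here is an assumption to validate with your Linux admin, not vendor guidance):

```
# /etc/multipath.conf -- hypothetical device stanza, not from Microsoft documentation
devices {
    device {
        vendor               "MSFT"
        product              "Virtual HD"
        path_grouping_policy multibus
        path_checker         tur
        no_path_retry        queue
    }
}
```

Running `multipath -v3` against one of the devices shows the vendor/product strings actually reported, which is the safest way to confirm what the stanza should match.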

Any help would be appreciated. 

Deduplication: Get-DedupVolume shows 0% savings rate


Hi

We have a 2012 R2 server that we use for backup. We have created a 31TB volume where we store some of our incremental and full backups. The lifecycle for those backups is 30 days, so the data on the disk is constantly changing: new backups are written every day and backups older than 30 days are removed. We use NetBackup as the backup software, with no built-in deduplication features active.

We have been using this setup for about a year and a half without much problem, but in the last few months we have started seeing issues. The free space on the volume has been reduced to only a few TB; under normal operations we typically have 10-15TB free with a deduplication ratio of 50-75%.

The size of the volume is 31TB and there is only one folder on it. Checking the folder's properties shows Size: 23.5TB and Size on disk: 11.2TB. Still, there is only 5.75TB free on the volume; it should be around 20TB.
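As a quick sanity check on the numbers above (a back-of-the-envelope sketch that ignores filesystem overhead):

```python
volume_tb  = 31.0   # total volume size
logical_tb = 23.5   # folder "Size" (logical data)
on_disk_tb = 11.2   # folder "Size on disk" (after dedup)

savings_rate     = 1 - on_disk_tb / logical_tb
expected_free_tb = volume_tb - on_disk_tb

print(f"savings rate  ~ {savings_rate:.0%}")         # ~ 52%
print(f"expected free ~ {expected_free_tb:.1f} TB")  # ~ 19.8 TB, vs 5.75 TB observed
```

The roughly 14TB gap points at data not visible in the folder listing; note that the dedup chunk store lives under System Volume Information and is not counted in the folder's "Size on disk", so unreclaimed chunk-store data (for example, if garbage collection never gets a chance to run) could plausibly account for it.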

When running Get-DedupVolume, the values shown are often wrong. We didn't think too much about it until one day Get-DedupVolume reported a 0% savings rate, even though the Size and Size on disk figures on the folder we deduplicate show that deduplication is active. We ran Update-DedupStatus to get it fixed, and after a day or two things started to look OK again in Get-DedupVolume.

Yesterday we got the same problem again: Get-DedupVolume shows a 0% savings rate, and this time Update-DedupStatus doesn't seem to help.

Have we set up deduplication in a wrong way? Below are the jobs we run:

WeeklyScrubbing - we haven't changed this job at all from the default settings: enqueue /scrub /scheduled /vol * /priority normal /throttle none /memory 50 /backoff

WeeklyGarbageCollection - same here, we haven't changed it: enqueue /gc /scheduled /vol * /priority normal /throttle none /memory 50 /backoff

ThroughputOptimization - this job we have changed: enqueue /opt /scheduled /vol * /priority high /throttle none /memory 50. Since we want this job to run almost all the time, we have set it to repeat every hour for an indefinite duration, stopping the task if it runs longer than 23 hours.

We changed ThroughputOptimization to run almost all the time since the data on the disk is constantly changing. Could this be the problem? Perhaps there is no time for the Scrubbing and GarbageCollection jobs to run?

Do we need to change the priority of the GarbageCollection and Scrubbing jobs?

Are there any known bugs regarding this? Any hotfixes we can apply?

Thanks. /Olof



Store Shadow Copies on Different Volume


Hi,


I want to store shadow copies of volume U: on volume S:, but the Shadow Copies settings menu only lists the selected volume (in this case U:) as a storage area location.

I've tried enabling and disabling shadow copies to clear any past restore points, and I also formatted volume S:.


Both Volumes are part of the same disk, and the disk is assigned to a File server failover cluster.

Servers running as virtual machines Windows 2012 R2 (VMware 5.5), and the Disk is added as an RDM disk.


Combining the volumes would be a last resort; any other suggestions?

Thanks.

DFS Consolidation root doesn't support Windows server 2008 R2 / 2012 DFS root.


I just tried the DFS consolidation root, and it doesn't support a Windows Server 2008 R2 / 2012 DFS root; it only supports up to Windows Server 2008.

Is there an updated version that supports Windows Server 2008 R2 / 2012?

I want to consolidate and migrate an existing Windows 2003 file server while maintaining the UNC paths.

Any workaround?

Many Thanks!


File Server Migration


Hello:

I am moving a file server in a failover cluster to a new standalone server. I have already copied the files with their user permissions.

The current server is named server1 and the new server is named server2. Both servers are registered in DNS and both are members of an AD domain.

When I replaced server1's IP address with server2's IP address in the DNS server, I can't access \\server1\shared, but I can access \\server2\shared.

Ping is working.

What can I do? 
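One possibility worth checking (hedged: this is the commonly cited fix for reaching a 2008-era SMB server through an alias, not something confirmed for this exact setup): when server2 answers to server1's old name, strict name checking can reject SMB connections to a name the server doesn't consider its own. The usual remedy is a registry value on server2:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"DisableStrictNameChecking"=dword:00000001
```

followed by registering the alias's service principal names (for example `setspn -A host/server1 server2`, plus the FQDN form) and restarting the Server service. A DNS CNAME from server1 to server2 is also generally preferred over swapping IP addresses.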

Thank you in advance.


Windows Server 2012 R2: File write strangeness


Hi

We have come across some strange behaviour in Server 2012 R2 regarding file write operations. When an application writes data to a file, the data seems to be held in memory until the application has finished processing, and only then written to the file. For example:

On Server 2008 / Windows 7 the application runs and you can visibly see the output file growing in size in File Explorer when refreshing. On Server 2012 this is not the case: the file sits at 0 KB until the application has finished, and only then is the data written. This has had a big memory impact and decreased performance in one of our LOB applications, which we are migrating from 2003.

To rule out the application itself, I wrote a small C# app that continuously writes a number of random characters to a file, with the following behaviour:

Server 2008 R2 VM - Works as expected, output file can be seen growing and no performance issues.

Server 2012 R2 VM - Utilized memory constantly increases, but the file size does not grow until the application has finished.

Server 2012 R2 Physical - Utilized memory constantly increases, but the file size does not grow until the application has finished.

The strange thing is, using the dir command while the app is running you can see the file size staying at 0 KB while the total available space on the disk is clearly depleting.

Is there some new NTFS feature in 2012 that would make this behaviour expected or is this a bug? This is what we have tried so far...

  • Disabled write cache on the disks
  • Turned off paging
  • Added a separate storage disk and run the tests on this
  • Tested both physical and virtual 2012 R2 servers
  • Duplicated GP settings across working 2008 and non-working 2012
  • Attempted to run the application in compatibility mode
  • Played with write buffer settings in test application itself.
  • Taken a file count reading with Treesize before and during the running of the app. (to see if it was buffering to tmp file anywhere).
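One cheap experiment before blaming NTFS (a generic Python sketch, not the LOB application): plain user-space buffering produces exactly this symptom, because the size visible in a directory listing only changes once the runtime flushes its buffer to the OS:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "out.dat")

# Default open() is buffered: small writes sit in the user-space buffer,
# so the directory-visible size stays at 0 bytes.
f = open(path, "w")
f.write("x" * 1024)                 # 1 KiB, well under the default buffer size
size_before_flush = os.path.getsize(path)

f.flush()                           # push the buffer to the OS...
os.fsync(f.fileno())                # ...and ask the OS to commit it to disk
size_after_flush = os.path.getsize(path)
f.close()

print(size_before_flush, size_after_flush)  # 0 1024
```

If the C# test app shows the same difference even when it flushes explicitly after each write, something OS-side (cache manager behaviour, filter drivers) becomes the more plausible suspect; if not, the runtime's buffering defaults may simply differ between the environments.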

We have now come to a decision to revert the OS to 2008 R2 until we can find the reason behind this behaviour. 

Is this something anyone has come across before?

Thanks

Steve



File disappeared from ReFS volume


I have a RAID 0 Storage Space which stores some stuff which isn't really that important (hence the RAID 0 config).  The volume is formatted ReFS (6TB in size) has very few files on it, but one of those is a 3TB Hyper-V VHDX.  The rest are a few kilobytes in size.

Last night there was a power failure in the office. On reboot the volume appeared as normal, and the virtual machine that owns the 3TB VHDX spun up fine. But at some point during the day I noticed that the VM had turned off, and I couldn't start it back up again. The VHDX is gone; it isn't in the folder where I keep it. The volume, however, still shows 3TB free out of 6TB, indicating that the file system knows there's a file there even though Windows won't show it.

Has anyone had a similar experience? Is the file hidden because the file system is healing, or something else? Can I check the status of the file-system repair somehow?

Event 6, SMBWitnessClient critical event through DirectAccess


I have a Server 2012 R2 failover cluster with 2 nodes and a few clustered disk volumes for shares. I also use a Disk Witness for quorum. Everything has been working fine, except that when I connect through DirectAccess my event log gets hammered with the Event 6, SMBWitnessClient critical event:

Witness Client failed to find a Witness Server for NetName "my cluster role dns record" with Error (element not found).

Any idea why I see this through DA? It doesn't seem to cause issues and I can still connect; it's just the never-ending events.

File and Folder monitoring in Windows


Hi

I was asked by one of my developers if I can monitor a file on a Windows server for changes. I have lots of tools that alert when a file has changed, but he would like a notification if the file did NOT change.

An application writes logs to this file, and if the writes stop he knows he has a problem. FolderSpy is great if you want to be notified that a file has changed, but I am not sure how to get notified when a file has not changed for a certain period of time.
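For the specific "has NOT changed in N seconds" case, a small scheduled script that compares the file's last-write time against a threshold may be all that's needed. A minimal sketch (the path and threshold are placeholders):

```python
import os
import tempfile
import time

def is_stale(path: str, max_age_seconds: float) -> bool:
    """True if the file has not been modified within the last max_age_seconds."""
    return (time.time() - os.path.getmtime(path)) > max_age_seconds

# Demo with a temporary file standing in for the real log:
with tempfile.NamedTemporaryFile(delete=False) as log:
    log.write(b"log line\n")

print(is_stale(log.name, 600))  # just written -> False
```

Run from Task Scheduler every few minutes and raise the alert whenever is_stale() returns True.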


Dalibor Bosic

When I try to open an Excel file already opened by another user, randomly it shows the username or not.


Hi all,

Hope you can help us with it.

- We have tested it from many computers and versions of Office (2007 and 2010).

- The file server is a Windows Server R2 updated with the latest patches.

- We have tested with the antivirus deactivated, both on the desktop machines and on the server.

- We can reproduce the problem easily: User 1 opens the Excel file, then User 2 opens the same file and gets the alert showing User 1's username. Both users close the file. Then User 2 opens the file first and User 1 opens it second; now the alert says "another user" instead of the username. If we swap the order, the same thing happens.

Any idea?

Thanks!

File Server Auditing - Track Copy Files


Hi,

I'm not sure if this is the appropriate forum, but please let me know.

I have enabled the audit policy and added auditing on the folders I want to track. It all works fine, but does anyone know how I can tell whether someone has copied a file?
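For what it's worth, Windows auditing has no single "file was copied" event: a copy appears as a read (event 4663 with ReadData access) on the source, plus a create/write on the destination if that is audited too, so detecting a copy means correlating the two. A toy sketch of that correlation (the records below are hypothetical, simplified stand-ins for parsed 4663 entries):

```python
from datetime import datetime, timedelta

# Hypothetical, pre-parsed audit records: a read of a file followed shortly
# by a write of a same-named file suggests (but does not prove) a copy.
events = [
    {"time": datetime(2015, 11, 2, 10, 0, 0), "op": "read",  "name": "report.xlsx"},
    {"time": datetime(2015, 11, 2, 10, 0, 2), "op": "write", "name": "report.xlsx"},
]

def likely_copies(events, window=timedelta(seconds=5)):
    """Pair each read with a same-named write that follows within `window`."""
    reads  = [e for e in events if e["op"] == "read"]
    writes = [e for e in events if e["op"] == "write"]
    return [(r["name"], w["time"])
            for r in reads for w in writes
            if r["name"] == w["name"] and timedelta() < w["time"] - r["time"] <= window]

print(likely_copies(events))
```

This is only a heuristic; renames, application saves and backups produce the same read-then-write pattern, which is why no auditing setup can flag copies with certainty.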


