Channel: File Services and Storage forum

Replacing a failed disk in Storage Spaces: unable to remove a retired disk


Hi Folks

Not really asking for much! We are using Windows 2012 R2 Storage Spaces; one disk failed
and was marked as Retired.

Today we received the replacement disk and added it to the storage pool - fine.
Then I repaired all the virtual disks - the process was very quick and shot to 100% in a flash.
Then I tried to remove the faulty disk using the following commands:

$DeadDisk=Get-PhysicalDisk -FriendlyName PhysicalDisk-1 

FriendlyName        CanPool             OperationalStatus   HealthStatus        Usage                             Size
------------        -------             -----------------   ------------        -----                             ----
PhysicalDisk-1      False               Lost Communication  Warning             Retired                        185.5 GB

Remove-PhysicalDisk -PhysicalDisks $deaddisk -StoragePoolFriendlyName storagePool

This errored out, basically saying that the above disk is still in use.

Any suggestions? Does the repair job take some time? It looked very fast; when I ran Get-StorageJob it showed 100 percent completed.
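
For reference, this is the order I understand the cleanup has to happen in, as a minimal sketch (same pool and disk names as above, only standard Storage module cmdlets; treat it as a starting point rather than a confirmed fix):

# List the virtual disks in the pool and check they report Healthy again
Get-StoragePool -FriendlyName storagePool | Get-VirtualDisk |
    Select-Object FriendlyName, HealthStatus, OperationalStatus

# Kick the repair off explicitly and watch the background jobs until none are left running
Get-StoragePool -FriendlyName storagePool | Get-VirtualDisk | Repair-VirtualDisk
Get-StorageJob

# Only once the virtual disks are Healthy and no jobs remain should the retired disk come out
$DeadDisk = Get-PhysicalDisk -FriendlyName PhysicalDisk-1
Remove-PhysicalDisk -PhysicalDisks $DeadDisk -StoragePoolFriendlyName storagePool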



Set Default Permissions for Folder when Created

We're looking to set up a drive that everyone on the domain can access, where each user can create a folder to store some temporary data. At the top level of the drive we would like to restrict users to only creating a folder; the permissions on that folder would then need to give the creator read/write access, as well as Domain Admins.

We're going through a refresh and would like to keep things as simple as possible when transferring data to the new machines. Standing up a new temporary file server with some cheap disk would do nicely. The problem we're running into is that managing that server could be a job unto itself - so we thought of having users create their own folder and move their data ahead of time. Where can we set default permissions for a folder when it is created?
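
To make it concrete, this is the sort of ACL layout we are after, leaning on NTFS inheritance with CREATOR OWNER so whoever creates a top-level folder only gets rights inside their own folder. A rough icacls sketch, assuming E:\TempData as the share root and MYDOMAIN as a placeholder domain:

# Break inheritance at the share root and start from a clean slate
icacls E:\TempData /inheritance:r

# Domain Admins and SYSTEM get full control everywhere
icacls E:\TempData /grant "MYDOMAIN\Domain Admins:(OI)(CI)F" "SYSTEM:(OI)(CI)F"

# Authenticated users may only list the root and create subfolders there
icacls E:\TempData /grant "NT AUTHORITY\Authenticated Users:(RD,AD,X)"

# Whoever creates a folder inherits Modify rights inside it (and nowhere else)
icacls E:\TempData /grant "CREATOR OWNER:(OI)(CI)(IO)M"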

DFSR Replication 2012R2


Hello,

I have two file servers working with DFS Replication.

Server A - the home server
Server B - receives a replica of Server A's files

Users access only Server A to add and change files.
Users do not have access to Server B.

However, I receive messages in the Event Viewer saying that files are being changed on both servers:

"The DFS Replication service detected that a file was changed on multiple servers.

A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict and Deleted folder."

In this scenario I do not understand how this can happen.

Can anyone help me?
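
The only detail I have so far comes from the event log. A quick way to pull the conflict entries and see exactly which files and which member won (4412 is, as far as I can tell, the "file was changed on multiple servers" event, so verify the ID before relying on it):

# Most recent DFSR conflict events, with the file and winner details in the message body
Get-WinEvent -LogName "DFS Replication" -MaxEvents 200 |
    Where-Object { $_.Id -eq 4412 } |
    Select-Object TimeCreated, Message | Format-List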


MCP - MCTS - MCTS AD


NTFS permission, local users and share permissions confusion


For years now, where I work, my file servers have had shares with only "domain\authenticated users" given Change permissions on them. NTFS permissions are set per security group (and in some cases for a specific user if need be).
This works great, so that only specific groups or users can access a share (and do whatever they are allowed to do according to the NTFS permissions).

Here at home I'm playing with the free Hyper-V Server 2016 TP3, and created a share for storing ISO files.
Wanting to use more PowerShell, I used the New-SmbShare cmdlet to create the share.
Microsoft still uses Everyone for share permissions instead of Authenticated Users, which is more secure IMO, but to my surprise I was able to access the share using my domain user. I couldn't write, just read.

After some digging around, I found out that the server's local Users group has Domain Users in its member list.
Reading some old links about why still doesn't make it clear to me why Domain Users should be part of the local Users group of a server. It just sounds very silly to me.

The file servers at work aren't old. Currently they are Server 2012, but I've administered them the same way on 2008 R2 and 2003. I can't remember ever seeing the local Users group populated with the Domain Users security group.

Have I missed anything in the past years, or are we really supposed to let go of NTFS permissions and use share permissions instead?
The latter sounds very dangerous to me: if a share is gone, for whatever reason, I have no clue what the permissions were for users/groups. That's why I like it to be on the NTFS side.

Hope someone can shed some light on this one, and on what I'm missing here.
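
For what it's worth, this is how I would create the share next time, swapping Everyone out for Authenticated Users at the share level and leaving the real restrictions to NTFS. The share name, path and the ISO-Readers group are just example values:

# Share-level: Authenticated Users get Change, nothing granted to Everyone
New-SmbShare -Name ISO -Path D:\ISO -ChangeAccess "NT AUTHORITY\Authenticated Users"

# NTFS-level: restrict who can actually read the content
icacls D:\ISO /inheritance:r /grant "NT AUTHORITY\SYSTEM:(OI)(CI)F" "BUILTIN\Administrators:(OI)(CI)F" "MYDOMAIN\ISO-Readers:(OI)(CI)RX"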

Defragmenting Files With DFS

Hello,

I have 2 file servers running 2012 R2 with DFS Replication working.
However, Microsoft recommends taking care of file defragmentation.

Can I defragment files on servers running DFS Replication, or not?
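
To be concrete, this is roughly what I would run - the normal defrag on the volume holding the replicated folder, then a backlog check afterwards to see whether it caused any replication work (drive letter, replication group, folder and server names below are placeholders):

# Defragment the data volume as usual
Optimize-Volume -DriveLetter D -Defrag -Verbose

# Check the DFSR backlog afterwards
dfsrdiag backlog /rgname:MyReplicationGroup /rfname:MyFolder /smem:SERVER-A /rmem:SERVER-B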

Problem bringing physically moved Storage Pool online


A recent HDD failure forced me to rebuild a very simple Windows 2012 server. The server did nothing but host a storage pool on a network; it had AD set up purely to manage user access to the shared areas on this storage pool.

The failed OS HDD has been removed, a new one installed and the OS (Windows Server 2012 x64) installed again, so for all intents and purposes I have moved the storage pool drives to a new server.

In Server Manager all of the drives that used to make up the storage pool are listed as "Status Online". I am attempting to follow this process http://blogs.technet.com/b/askpfeplat/archive/2012/12/24/windows-server-2012-how-to-import-a-storage-pool-on-another-server.aspx which details how to import a storage pool physically moved from a different server. However, I am having an issue taking the drives offline to make them "available" (see attached image); the drives in question are numbers 0, 2, 3 and 4.

Selecting "Take Offline" brings up a confirmation prompt, but clicking Yes does NOT change the status of the disk; it remains Online. I have also attempted changing the status of the disk in diskmgmt.msc, without any success.

EDIT

Because I figured it might be useful, I ran the PowerShell command:

Get-PhysicalDisk

and this was the output
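
Separately, for completeness, this is the PowerShell equivalent of the import steps from that article, in case the GUI route keeps failing - only standard Storage module cmdlets, so treat it as a sketch:

# An imported pool usually arrives read-only; clear that first
Get-StoragePool -IsPrimordial $false | Set-StoragePool -IsReadOnly $false

# Re-attach the virtual disks carried in the pool
Get-StoragePool -IsPrimordial $false | Get-VirtualDisk | Connect-VirtualDisk

# Bring the resulting disks online so the volumes surface normally
Get-VirtualDisk | Get-Disk | Set-Disk -IsOffline $false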


Server 2012 R2 File Server Stops Responding to SMB Connections


Hi There,

Massive shot in the dark here, but I am struggling with a pretty major issue at the moment. We have a production file server that is hosted on the following:

Dell MD 3220i -> iSCSI -> Server 2008 R2 Hyper-v Cluster -> Passthrough Disk -> Server 2012 R2 File Server VM

Essentially, three times now, roughly a month or so apart, the file server has stopped accepting connections. During this time the server is perfectly accessible through RDP or with a simple ping, and I can browse the files on the server directly, but no-one appears to be able to access the shares over SMB. A reboot of the server fixes the issue.

As per a KB article, after the second fault I removed NOD antivirus from the server to rule out a conflicting file system filter driver. Sadly, yesterday it happened again.

The only relevant errors in the server's log files are:

SMB Server Event ID 551

SMB Session Authentication Failure

Client Name: \\192.168.105.79
Client Address: 192.168.105.79:50774
User Name: HHS\H6-08$
Session ID: 0xFFFFFFFFFFFFFFFF
Status: Insufficient server resources exist to complete the request. (0xC0000205)

Guidance: You should expect this error when attempting to connect to shares using incorrect credentials. This error does not always indicate a problem with authorization, but mainly authentication. It is more common with non-Windows clients. This error can occur when using incorrect usernames and passwords with NTLM, mismatched LmCompatibility settings between client and server, duplicate Kerberos service principal names, incorrect Kerberos ticket-granting service tickets, or Guest accounts without Guest access enabled.

and

SMB Server event ID 1020
File system operation has taken longer than expected.

Client Name: \\192.168.105.97
Client Address: 192.168.105.97:49571
User Name: HHS\12J.Champion
Session ID: 0x2C07B40004A5
Share Name: \\*\Subjects
File Name:
Command: 5
Duration (in milliseconds): 176784
Warning Threshold (in milliseconds): 120000

Guidance:

The underlying file system has taken too long to respond to an operation. This typically indicates a problem with the storage and not SMB.

I have checked the underlying disk/iSCSI/network/Hyper-V cluster for any other errors or issues, but as far as I can tell everything is fine.

Is it possible that something else is left over from the NOD antivirus installation?  
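
Next time it happens I plan to capture the following from the file server before rebooting, mainly to see whether SMB sessions are piling up and whether any file system filter driver from the old antivirus is still loaded (nothing here is specific to our setup, so suggestions for better counters are welcome):

# Are client sessions / open handles piling up?
Get-SmbSession | Measure-Object
Get-SmbOpenFile | Measure-Object

# Current SMB server settings, for reference
Get-SmbServerConfiguration

# Any file system filter drivers still registered (e.g. leftovers from the removed antivirus)?
fltmc filters
fltmc instances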

Looking for suggestions on how to troubleshoot this further.

Thanks


DFSR warning "This member is waiting for initial replication for replicated folder GRPData"


Hi,

I have done all the steps to troubleshoot DFSR, including re-creating the replication group, setting the primary member, re-creating the DFSR database, etc., but one folder still will not replicate, and the error in the debug log shows as below:

20151110 15:27:36.392 2116 CONF   900 [WARN] ConflictManifestHandler::endElement Failed to process Resource.
20151110 15:27:36.392 2116 CONF   819 [ERROR] ConflictManifestHandler::endElement (Ignored) Failed. Error:
+ [Error:2(0x2) ConflictWorkerTask::DeleteConflicts::Apply conflictmanifest.cpp:2072 2116 W The system cannot find the file specified.]
+ [Error:2(0x2) ConflictWorkerTask::GetFidInConflict conflictmanifest.cpp:1891 2116 W The system cannot find the file specified.]
+ [Error:2(0x2) FileHandle::OpenPath filehandle.cpp:126 2116 W The system cannot find the file specified.]
+ [Error:2(0x2) FileUtil::FrsCreateFile fileutil.cpp:759 2116 W The system cannot find the file specified.]
+ [Error:2(0x2) FileUtil::FrsCreateFile fileutil.cpp:752 2116 W The system cannot find the file specified.]

Please help advise what I should do as the next step. Thanks!
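
One thing I have not tried yet is forcing DFSR to rebuild the ConflictAndDeleted folder and the conflict manifest it seems to be complaining about. From what I have read, the supported way is the WMI method below, run on the affected member (GRPData is my replicated folder name) - please correct me if that is the wrong next step:

# List the replicated folders and their current state on this member
wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicatedfoldername,state

# Ask DFSR to clean up the ConflictAndDeleted folder and its manifest for the problem folder
wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfoldername='GRPData'" call cleanupconflictdirectory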

Sing


Windows Storage Server 2012 R2 Setup


Hi All,

I have 2 servers, each with 12 x 1.2 TB drives. One is going to be our production file server and the second server will go to a DR location.

 1. What is the best storage pool layout: 2 x 6 drives with RAID 10, or 3 x 4 drives with RAID 10?

 2. How do I replicate data to DR? (A rough sketch of what I have in mind follows below.)

 3. What is the best way to set up failover? A DNS alias for the file server (CropFS.domain.com) pointed to DR in an emergency?

 4. What is the best folder structure for users and company data, with users in 3 locations needing access?
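
For point 2, the direction I'm leaning towards is DFS Replication between the two boxes. A rough sketch with the 2012 R2 DFSR cmdlets - group, folder, server and path names are all placeholders, so adjust to suit:

# Create the replication group and the replicated folder definition
New-DfsReplicationGroup -GroupName "CorpFS-DR"
New-DfsReplicatedFolder -GroupName "CorpFS-DR" -FolderName "CompanyData"

# Add both servers and a connection between them
Add-DfsrMember -GroupName "CorpFS-DR" -ComputerName "FS-PROD","FS-DR"
Add-DfsrConnection -GroupName "CorpFS-DR" -SourceComputerName "FS-PROD" -DestinationComputerName "FS-DR"

# Point each member at its local content path; the production server seeds the initial sync
Set-DfsrMembership -GroupName "CorpFS-DR" -FolderName "CompanyData" -ComputerName "FS-PROD" -ContentPath "D:\CompanyData" -PrimaryMember $true
Set-DfsrMembership -GroupName "CorpFS-DR" -FolderName "CompanyData" -ComputerName "FS-DR" -ContentPath "D:\CompanyData"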

As

Encrypt an unmounted volume (VHDX) with BitLocker? (offsite backup purpose)


To increase our data redundancy we are planning to extend our current backup policy with an offsite backup stored somewhere in the cloud.

We have therefore acquired an account with a cloud hoster that supports block-level synchronization. The goal is now to synchronize an encrypted VHDX with that hoster; by keeping the backups incremental, and with the client supporting block-level synchronization, the daily upload should not be an issue.

However, we have problems encrypting the file using BitLocker:

- the VHDX file is placed on a certain VM, utilizing our local storage pool.
- this VM takes care of synchronizing the VHDX file with the "cloud".
- this VM exposes the VHDX as an iSCSI disk to our file server.
- the file server uses Windows Backup with a hard disk (also on the pool) that is dedicated to backups, and we want to add a second backup target: the VHDX connected through iSCSI.

If we mount the iSCSI disk on the file server and give it a drive letter, we can easily enable BitLocker. However, we then cannot use it as a second backup target, because that only works with disks dedicated to backups.

If we add the iSCSI target as a dedicated backup disk, Windows Backup formats the volume, thereby disabling BitLocker.

We have tried to enable BitLocker afterwards by using the volume GUID instead of the (non-existent) drive letter, but BitLocker refuses to work like that...

manage-bde -Status \\?\Volume{9a9330de-c326-11e3-80c1-aaaaaa007409}\


yields the error message

BitLocker Drive Encryption: Configuration Tool version 6.3.9600
Copyright (C) 2013 Microsoft Corporation. All rights reserved.
ERROR: The volume
\\?\Volume{9a9330de-c326-11e3-80c1-aaaaaa007409}\ could not be opened by BitLocker.
This may be because the volume does not exist, or because it is not a valid
BitLocker volume.



So currently, what we can (theoretically) do to achieve our goal is:

- Mount the iSCSI disk on the file server, assign it a letter and encrypt it (so this container is synchronized to the cloud in encrypted form).
- Create another VHDX inside the disk located on the iSCSI target, and connect this inner disk as a disk dedicated to backups.

Windows Backup would then write unencrypted data to the inner VHDX. This data is written encrypted to the iSCSI target (the outer VHDX), which is then synchronized with the cloud.

While this could work - isn't there a better way to do this? Like telling Windows Backup to write encrypted data, or encrypting the *content* of the virtual disk on the iSCSI target rather than on the iSCSI initiator?
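
In case it clarifies the nested-VHDX workaround we have in mind, roughly this (paths, size and drive letter are only examples; New-VHD needs the Hyper-V module installed, otherwise diskpart's "create vdisk" would do the same job):

# The iSCSI disk is already mounted on the file server as E: and encrypted there
manage-bde -status E:

# Create an inner VHDX on the encrypted volume and attach it
New-VHD -Path E:\backup-inner.vhdx -SizeBytes 500GB -Dynamic
Mount-DiskImage -ImagePath E:\backup-inner.vhdx

# The inner disk then gets initialised/formatted and handed to Windows Server Backup as its
# dedicated backup disk; everything it writes lands inside the encrypted outer volume.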









DFS 2000 mode - Server 2003 R2 to Server 2012 R2, adding replication


I have been tasked with upgrading our environments DFS implementation and I am a bit intimidated.

This is our current DFS environment:

DFS- 2000 mode

4 namespace servers

All Namespace servers are 2003 R2 Std edition

Namespace servers hosting DFS and Non-DFS shares

Total of 12 different servers hosting shares mapped in namespace that are not namespace servers

There is no replication happening now but we want to implement.

There are a few overlapping targets

Domain and forest functional levels set to 2003

We want to move to server 2012 R2 for the DFS upgrade.

I have been half looking at this for about a month now (keep getting pulled in other directions) and have run all of the DFS integrity checks, but I am still a bit overwhelmed.

I have a few questions:

  1. Will upgrading our domain and forest functional levels break the current DFS implementation in any way?
  2. Do I really need to completely tear down the existing namespace (rough migration steps sketched below)? We have some very critical systems that leverage DFS and it scares me having to tear it down for any period of time.

I have read a few different forums on and off TechNet and I am getting so confused that my head is about to explode. Everyone approaches this differently and I have yet to find a scenario that compares to my environment in size and complexity. I see a lot of articles about a single namespace server with a handful of targets, or folks that are not at my functional level, or that are not trying to jump multiple OS levels like me (2003 to 2012 R2).

Any help would be greatly appreciated.
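
On question 2, the part that worries me is the documented migration from Windows 2000 Server mode to Windows Server 2008 mode, which as far as I can tell does require the root to be exported, removed and re-created, so there is an outage window. Roughly (namespace name and paths are placeholders):

# Export the existing namespace, including all folders and targets
dfsutil root export \\contoso.com\Public C:\dfs\Public-export.xml

# Remove the old (Windows 2000 Server mode) root, then re-create it in Windows Server 2008 mode
dfsutil root remove \\contoso.com\Public
dfsutil root adddom \\contoso.com\Public v2

# Import the saved configuration into the new root
dfsutil root import merge C:\dfs\Public-export.xml \\contoso.com\Public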

OneDrive Preview on Win2008 R2 crashes on start


I'm running the OneDrive Preview on Windows 2008 R2 to sync up an existing set of files. I first set up the OneDrive sync folder with nothing inside and then moved all the folders in. It now crashes on start. I don't want to start moving files around, as people are using these files. Hopefully the error below helps. It's so close to all being done but the app just crashes. Argh!

Problem signature:

Problem Event Name: APPCRASH

Application Name: OneDrive.exe

Application Version: 17.3.6259.1102

Application Timestamp: 56383899

Fault Module Name: KERNELBASE.dll

Fault Module Version: 6.1.7601.19018

Fault Module Timestamp: 5609fed4

Exception Code: 80000003

Exception Offset: 0001322c

OS Version: 6.1.7601.2.1.0.272.7

Locale ID: 1033

Additional Information 1: 0a9e

Additional Information 2: 0a9e372d3b4ad19135b953a78882e789

Additional Information 3: 0a9e

Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

Sync Diagnostics

Sync Diagnostics - Sync Progress

SyncProgressState: 0

================================================================================

Diagnostic Report

UtcNow: 2015-11-05T15:25:09.0000000Z

BytesDownloaded = 0

BytesToDownload = 0

BytesToUpload = 0

BytesUploaded = 0

ChangesToProcess = 0

ChangesToSend = 0

DownloadSpeedBytesPerSec = 0

EstTimeRemainingInSec = 0

FilesToDownload = 0

FilesToUpload = 0

OfficeSyncActive = 0

OfficeSyncEnabled = 1

SymLinkCount = 0

UploadSpeedBytesPerSec = 0

appId = 596

bytesAvailableOnDiskDrive = 312202539008

cid = 224d3533-62c8-4a85-a134-9ff4acc2610f

clientType = Win32

clientVersion = 17.3.6259.1102

conflictsFailed = 0

conflictsHandled = 0

cpuTimeSecs = 2

currentPrivateWorkingSetKB = 20312

datVersion = 36

deviceID = 53d38bb5-cd58-54e3-f87f-afbee5a9de98

driveChangesToSend = 0

driveSentChanges = 0

drivesChangeEnumPending = 0

drivesConnected = 1

drivesScanRequested = 0

drivesWaitingForInitialSync = 0

env =

files = 0

filesToDownload = 0

flavor = ship

folders = 2

fullScanCount = 1

instanceID = 0a87c9b0-90be-4e16-ba7d-0987297c9fc7

invalidatedScanCount = 0

isMsftInternal = 0

numAbortedMissingServerChanges = 0

numAbortedReSyncNeeded = 0

numAbortedUnexpectedHttpStatus = 0

numAbortedWatcherEtagDifference = 0

numDeleteConvToUnmap = 0

numDownloadErrorsReported = 0

numDrives = 1

numDrivesNeedingEventualScan = 0

numExternalFileUploads = 0

numFileDownloads = 0

numFileUploads = 0

numHashMismatchErrorsReported = 0

numJumpLinkError = 0

numKnownFolderLocal = 0

numKnownFolderMismatch = 0

numKnownFolderRedirectError = 0

numKnownFolderRedirected = 0

numKnownFolderRedirecting = 0

numKnownFolderRestoring = 0

numLcChangeFile = 0

numLcChangeFolder = 0

numLcCreateFile = 0

numLcCreateFolder = 2

numLcDeleteFile = 0

numLcDeleteFolder = 0

numLcMapKnownFolder = 0

numLcMoveFile = 0

numLcMoveFolder = 0

numLocalChanges = 0

numProcessors = 16

numRealizerErrorsReported = 0

numResyncs = 0

numSelSyncDrives = 0

numUploadErrorsReported = 0

officeVersion = None

officeVersionBuild = 0

officeVersionDot = 0

officeVersionPoint = 0

originator = b16c5f83-d74b-401d-9192-343e0ae57177

partnerFiles = 0

partnerFilesToDownload = 0

partnerFilesToUpload = 0

pid = 3368

preciseScanCount = 0

privateWorkingSetKB = 20312

privateWorkingSetKBIncreaseDuringSyncVerification = 0

scanState = 1

scanStateStallDetected = 0

seOfficeFiles = 0

seOfficeFilesToDownload = 0

seOfficeFilesToUpload = 0

syncProgressState = 0

syncStallDetected = 0

sync_progress_id = 2749b236-4484-474c-a324-9f92580b5a93

threadCount = 33

timeUtc = 2015-11-05T15:25:09.0000000Z

totalDoScanWorkCpuTimeInMs = 0

totalInvalidatedScanCpuTimeInMs = 0

totalScanCpuTimeInMs = 0

totalSizeOfDiskDrive = 960047173632

totalSubScopes = 0

uptimeSecs = 473

userOverriddenConcurrentUploads = 0

version = 504

wasFileDBReset = 0

BiCi sync_progress_id, value 2749b236-4484-474c-a324-9f92580b5a93 to string

BiCi pid, value 3368 to int

BiCi datVersion, value 36 to int

BiCi version, value 504 to int

BiCi scanState, value 1 to int

BiCi numDrives, value 1 to int

BiCi drivesConnected, value 1 to int

BiCi drivesWaitingForInitialSync, value 0 to int

BiCi drivesChangeEnumPending, value 0 to int

BiCi drivesScanRequested, value 0 to int

BiCi files, value 0 to int

BiCi folders, value 2 to int

BiCi wasFileDBReset, value 0 to int

BiCi numResyncs, value 0 to int

BiCi numFileDownloads, value 0 to int

BiCi numFileUploads, value 0 to int

BiCi numExternalFileUploads, value 0 to int

BiCi conflictsHandled, value 0 to int

BiCi conflictsFailed, value 0 to int

BiCi numLcCreateFolder, value 2 to int

BiCi numLcDeleteFolder, value 0 to int

BiCi numLcDeleteFile, value 0 to int

BiCi numLcCreateFile, value 0 to int

BiCi numLcMoveFile, value 0 to int

BiCi numLcChangeFile, value 0 to int

BiCi numLcChangeFolder, value 0 to int

BiCi numLcMoveFolder, value 0 to int

BiCi currentPrivateWorkingSetKB, value 20312 to int

BiCi numProcessors, value 16 to int

BiCi cpuTimeSecs, value 2 to int

BiCi officeVersion, value 0 to int

BiCi uptimeSecs, value 473 to int

BiCi BytesToDownload, value 0 to string

BiCi BytesDownloaded, value 0 to string

BiCi BytesToUpload, value 0 to string

BiCi BytesUploaded, value 0 to string

BiCi ChangesToProcess, value 0 to int

BiCi ChangesToSend, value 0 to int

BiCi FilesToDownload, value 0 to int

BiCi FilesToUpload, value 0 to int

BiCi SymLinkCount, value 0 to int

BiCi DownloadSpeedBytesPerSec, value 0 to int

BiCi UploadSpeedBytesPerSec, value 0 to int

BiCi EstTimeRemainingInSec, value 0 to int

BiCi officeVersionPoint, value 0 to int

BiCi officeVersionBuild, value 0 to int

BiCi officeVersionDot, value 0 to int

BiCi originator, value b16c5f83-d74b-401d-9192-343e0ae57177 to string

BiCi threadCount, value 33 to int

BiCi numSelSyncDrives, value 0 to int

BiCi numDeleteConvToUnmap, value 0 to int

BiCi checksum, value 137fe60b0 to string

BiCi numDrivesNeedingEventualScan, value 0 to int

BiCi totalScanCpuTimeInMs, value 0 to int

BiCi preciseScanCount, value 0 to int

BiCi fullScanCount, value 1 to int

BiCi invalidatedScanCount, value 0 to int

BiCi deviceID, value 53d38bb5-cd58-54e3-f87f-afbee5a9de98 to string

BiCi userOverriddenConcurrentUploads, value 0 to int

BiCi syncProgressState, value 0 to int

BiCi syncStallDetected, value 0 to int

BiCi scanStateStallDetected, value 0 to int

BiCi partnerFiles, value 0 to int

BiCi partnerFilesToDownload, value 0 to int

BiCi partnerFilesToUpload, value 0 to int

BiCi seOfficeFiles, value 0 to int

BiCi seOfficeFilesToDownload, value 0 to int

BiCi seOfficeFilesToUpload, value 0 to int

BiCi totalSubScopes, value 0 to int

BiCi totalSizeOfDiskDrive, value 960047173632 to string

BiCi bytesAvailableOnDiskDrive, value 312202539008 to string

BiCi username, value ***** to string

BiCi device, value ***** to string

BiCi numJumpLinkError, value 0 to int

BiCi numKnownFolderRedirecting, value 0 to int

BiCi numKnownFolderRedirected, value 0 to int

BiCi numKnownFolderRestoring, value 0 to int

BiCi numKnownFolderLocal, value 0 to int

BiCi numKnownFolderRedirectError, value 0 to int

BiCi numKnownFolderMismatch, value 0 to int

BiCi numLcMapKnownFolder, value 0 to int

Adding a new server to DFS - copy data first?


I have 2 servers (2003 R2 + 2012 R2) in DFS. I am adding a new one (2012 R2) later this week.

My question is: should I first xcopy all of the data over to the new server, and then pull that server into the DFS replication group and namespace? Or just add the server to DFS and let it take care of replicating all of the data over?

Keep in mind that some of the shared folders are quite large (<100GB).
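
My current plan, unless someone tells me otherwise, is to pre-seed with robocopy and spot-check a few file hashes before adding the membership, along these lines (paths and server names are just examples):

# Pre-seed the data onto the new member (keep security and timestamps, skip any DfsrPrivate folder)
robocopy \\OLDFS\Share D:\Share /E /B /COPYALL /R:6 /W:5 /MT:64 /XD DfsrPrivate /LOG:C:\preseed.log

# Spot-check that a few files hash identically on source and destination (DFSR module on 2012 R2)
Get-DfsrFileHash -Path \\OLDFS\Share\SomeFolder\somefile.docx
Get-DfsrFileHash -Path D:\Share\SomeFolder\somefile.docx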

Replacing an outdated mirrored RAID Drive

I have a scenario I am unfamiliar with... replacing a failed mirrored RAID 1 SAS drive that is 7 years old and no longer carried by either Dell or the manufacturer.

The drive is a Seagate ST33000655SS 300GB 15K SAS drive.

From what I can tell, it is so old that only remanufactured drives are available, and then only with a 90-day warranty, from sources I don't know, and they are extremely expensive.

My question is this:
a.) Can I replace this drive with a different, larger drive?
b.) Because it's a mirrored RAID set, don't the two drives need to be identical in make, size and model?
c.) What would your recommendation be on how to proceed?

I need to do this on 2 different servers; both have a failed drive in a mirrored RAID configuration.

Work Folders replication to another Work Folders server


Hi,

I have two NAS devices with Windows Storage Server 2012 R2 built in. On the first server I configured the Work Folders role and enabled DFS Replication; on the second server I enabled DFS Replication and the Work Folders data was replicated to it. I now need to configure the Work Folders role on the second server as a redundant copy of the primary. I installed the role and configured the Work Folders service on the second server and shut down the first server, but users are unable to connect to the second server's URL. I installed the SSL certificate on the secondary server and assigned the certificate to port 443 in IIS, but the IIS website is stopped, and when I try to start it a popup message appears saying the website cannot be started because another website is using the same port. I checked in IIS but couldn't find another site using the same port. My assumption is that the Work Folders port on this server may be conflicting with the IIS port.

Kindly suggest what configuration part I am missing to get Work Folders on the secondary server online.
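
For reference, what I was going to check next is what is already bound to 443 on the secondary server, since as far as I understand it Work Folders (the SyncShareSvc service) registers its own HTTPS endpoint through HTTP.SYS and can collide with an IIS site on the same port:

# What certificates are bound to 443 in HTTP.SYS, and which URLs are reserved?
netsh http show sslcert
netsh http show urlacl

# Which process currently owns port 443?
netstat -ano | findstr :443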

Thanks 


iSCSI target connected but no volume present


We are using the Windows Server 2008 R2 iSCSI initiator to connect to an iSCSI target on Sun storage. On some servers the iSCSI target shows as connected, but no volume appears in Disk Management and no disk appears in Device Manager under Disk Drives. In the iSCSI target properties, under Devices, it shows:

Name: Disk -1

Address: Port 1: Bus 0: Target 0: LUN 0

I tried restarting the iSCSI service, disconnecting and reconnecting all targets, and restarting the server, without success.

The only action that allows the target to show up properly under Disk Management is reinstalling Windows Server.

Any ideas how to troubleshoot this?
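
Short of reinstalling Windows again, this is what I can run on an affected server to gather more detail, if the output would help (standard in-box tools only, no vendor utilities assumed):

# Does the initiator actually report a device and a LUN mapping for the target?
iscsicli SessionList
iscsicli ReportTargetMappings

# Force a bus rescan so a newly exposed LUN gets picked up
"rescan" | diskpart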

 

Deduplication corruption, scrub job at 30% for days


One of my dedup volumes on a Server 2012 R2 system has been hung at 30% for four days. I can see in Resource Monitor that fdsmhost is actively reading and writing from the chunk store, so I am hesitant to kill the job or restart the server. All other I/O to the volume has been turned off.

This started with a dedup volume getting behind on dedup and running out of disk space. I moved some larger files to a different volume, and noticed the error below.

PS C:\Windows\system32> Get-DedupMetadata d:
WARNING: MSFT_DedupVolumeMetadata.Volume='D:' - Data deduplication scrubbing job should be run on this volume.


Volume                         : D:
VolumeId                       : \\?\Volume{2e43a20c-fa99-4788-8524-154555aa10c0}\
StoreId                        : {05A2C5CB-30D6-49BE-9DFA-392555D489FC}
DataChunkCount                 : 153062169
DataContainerCount             : 7208
DataChunkAverageSize           : 35.09 KB
DataChunkMedianSize            : 0 B
DataStoreUncompactedFreespace  : 0 B
StreamMapChunkCount            : 51280
StreamMapContainerCount        : 666
StreamMapAverageDataChunkCount :
StreamMapMedianDataChunkCount  :
StreamMapMaxDataChunkCount     :
HotspotChunkCount              : 987336
HotspotContainerCount          : 55
HotspotMedianReferenceCount    :
CorruptionLogEntryCount        : 221
TotalChunkStoreSize            : 5.12 TB

After moving some large files off the volume I had 3 TB free, so I started the scrub job. It has been stuck in this same state, 30%, for almost 4 days.

Scrub job status:

Volume                   : d:
VolumeId                 : \\?\Volume{2e43a20c-fa99-4788-8524-154555aa10c0}\
Type                     : Scrubbing
ScheduleType             : Manual
StartTime                : 11/6/2015 5:12:23 PM
Progress                 : 30 %
State                    : Running
Id                       : {B75433D1-E180-4B62-8665-9E7111A538AF}
StopWhenSystemBusy       : False
Memory                   : 50 %
Priority                 : Normal
InputOutputThrottleLevel : None
ProcessId                : 148696
Full                     : False
ReadOnly                 : False

I have verified the health of the array and drives, and all seems well. Any ideas on what I should try next?
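
In case it points anyone in the right direction, these are the extra things I am watching while the job sits at 30% (standard dedup cmdlets plus the built-in scrubbing event log channel, so nothing exotic):

# Overall dedup state and the running job, refreshed periodically
Get-DedupStatus -Volume D: | Format-List *
Get-DedupJob

# Corruption / scrubbing results reported so far
Get-WinEvent -LogName "Microsoft-Windows-Deduplication/Scrubbing" -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message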

Thanks.

best way to move everything


Hello,

I currently have a Dell PowerEdge T300 running Windows Server 2008 Standard. I want to make a copy of everything on it (apps, roles, data) and move it all so that it boots from my new server, a Dell PowerEdge R710.

I looked into Windows Backup and then restoring, but I am looking for something else.

I look forward to your suggestions.

Thanks,

jordan

How can we disable Customer Experience Improvement Program forever on Server 2008


Hi,

I need to know how we can disable CEIP forever on Server 2008. I looked into the articles on this subject and had already tried the option below:

To disable CEIP using Server Manager

  1. Click Start, click Administrative Tools, and then click Server Manager.

  2. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

  3. Click to expand Resources and Support.

  4. Click Configure CEIP to open the Customer Experience Improvement Program dialog box.

  5. Click No, I don’t want to participate, and then click OK.

It's still active, and it forced a user logoff with System event 7002.

Kindly help me disable it forever, so that my user ID will not get logged off.
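
For completeness, the other two places I know of where CEIP can be switched off, in case the Server Manager setting keeps coming back (the registry value is as I understand it, so please verify before relying on it):

# Machine-wide CEIP opt-out in the registry (0 = do not participate)
reg add "HKLM\SOFTWARE\Microsoft\SQMClient\Windows" /v CEIPEnable /t REG_DWORD /d 0 /f

# The equivalent Group Policy setting lives under:
# Computer Configuration > Administrative Templates > System >
#   Internet Communication Management > Internet Communication settings >
#   "Turn off Windows Customer Experience Improvement Program" = Enabled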

Awaiting your response.

Thanks,

Noufal

Error "The remote device or resource won't accept the connection" when connecting a shared network drive

I get the error "The remote device or resource won't accept the connection" when connecting a shared network drive.