Channel: File Services and Storage forum

Best Practice for General User File Server HA/Failover


Hi All,

Looking for some general advice or documentation on recommended approaches to file storage.  If you were in our position, how would you approach adding more robustness to our setup?

We currently run a single Windows Server 2012 R2 VM with around 6 TB of user files and data.  We deduplicate the volume and use quotas.
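For context, the dedup savings and quota state on that volume can be pulled with the built-in cmdlets (the volume letter E: below is a placeholder for our actual data volume):

```powershell
# Report Data Deduplication savings on the data volume
# (E: is a placeholder for the real volume letter).
Get-DedupVolume -Volume "E:" |
    Format-List Volume, Capacity, FreeSpace, SavedSpace, SavingsRate

# List all FSRM quotas configured on this server.
Get-FsrmQuota |
    Format-Table Path, Size, Usage
```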

We need a solution that provides better redundancy than a single VM.  If that VM goes offline, how do we maintain user access to the files?

We use DFS to publish file shares to users and machines.

Solutions I have researched, with potential drawbacks:

  1. Create a guest VM cluster and use a Continuously Available File Share (not SOFS)
     - This would leave us without support for deduplication (we get around 50% savings at the moment, and space is tight).
  2. Create a second VM, add it as a secondary DFS folder target, and configure replication between the two servers
     - Is this the preferred enterprise approach to share availability?  How will hosting user shares (documents etc...) cope in a replication environment?
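For option 2, this is roughly what I understand the setup would look like with the DFSN/DFSR PowerShell modules (all server, namespace, share, and group names below are placeholders, assuming a domain-based namespace):

```powershell
# Add the second server as an additional folder target in the namespace
# (namespace and server names are placeholders).
New-DfsnFolderTarget -Path "\\contoso.com\Files\Users" -TargetPath "\\FS2\Users"

# Create a DFS Replication group and replicated folder for the share.
New-DfsReplicationGroup -GroupName "UserFiles"
New-DfsReplicatedFolder -GroupName "UserFiles" -FolderName "Users"
Add-DfsrMember -GroupName "UserFiles" -ComputerName "FS1","FS2"

# Add-DfsrConnection creates connections in both directions by default.
Add-DfsrConnection -GroupName "UserFiles" `
    -SourceComputerName "FS1" -DestinationComputerName "FS2"

# Point each member at its local content path; FS1 holds the
# authoritative copy for the initial sync.
Set-DfsrMembership -GroupName "UserFiles" -FolderName "Users" `
    -ComputerName "FS1" -ContentPath "E:\Users" -PrimaryMember $true
Set-DfsrMembership -GroupName "UserFiles" -FolderName "Users" `
    -ComputerName "FS2" -ContentPath "E:\Users"
```

My main worry here is that, as far as I know, DFS-R only replicates files once they are closed and resolves concurrent edits last-writer-wins, which seems risky for live user documents.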

Note: we have run a physical clustered file server in the past with great results, except for the ~5 minutes of downtime when a failover occurs.

Any thoughts on where I should be focusing my efforts?

Thanks

