Going Backup-less with Exchange 2010?

Microsoft Exchange Server 2010 has attracted a lot of attention because of its new database availability group (DAG) feature. DAGs offer a new spin on the standard Exchange database model by letting you maintain multiple, continuously updated copies of mailbox databases on multiple servers without requiring shared storage or the use of SANs.

This new feature isn't without controversy, though, because it offers the possibility of creating a system that's designed to run without making frequent backups for database restoration or recovery. The notion of routine operations without backups has made a lot of people nervous, so I wanted to talk this week about whether backup-less operation is really possible, not to mention safe.

The basic idea is simple: If you maintain enough copies of a particular mailbox database, you don't need to make frequent backups because you'll always have an available copy. The magic number in this case is three: three copies of each protected database is what Microsoft claims to be sufficient. With only two copies, the loss of a single machine would leave you with no redundancy at all, but three independent copies, on three separate physical machines, give you the ability to withstand two simultaneous failures, which seems like it should be enough.
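In practice, getting to three copies is a matter of seeding passive copies onto additional DAG members with the Exchange Management Shell. Here's a minimal sketch, assuming a DAG already exists, that server EX1 holds the active copy, and that the database and server names (DB01, EX2, EX3) are hypothetical placeholders for your own:

```powershell
# Add passive copies of DB01 on two more DAG member servers so that
# three copies exist in total. ActivationPreference controls the order
# in which copies are considered for automatic activation on failover.
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "EX2" -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "EX3" -ActivationPreference 3
```

Each Add-MailboxDatabaseCopy triggers a seeding operation, so expect replication traffic proportional to the database size while the new copies catch up.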

There are two issues that make using DAGs instead of routine backups a challenging proposition to accept. The first problem is cost. Maintaining three copies of a database necessarily implies having three servers to put those copies on. Running three servers means three licenses of Exchange 2010, plus three licenses of Windows Server 2008 R2 Enterprise edition or Server 2008 SP2 Enterprise edition. Enterprise is much more expensive than the Standard edition, but there's no way around it: the DAG feature depends on Windows failover clustering, which is available only in the Enterprise edition. Then there's the hardware cost, which admittedly might be cut down by intelligent use of virtualization—keeping in mind that putting all your DAG copies on separate VMs in the same physical machine or data center takes away much of the benefit of using DAGs in the first place!

The second problem is a little more complicated. Exchange 2010, like its predecessors, creates transaction logs that record every transaction applied to a given database. Putting a mailbox database into a DAG doesn't change that, which means that logs will continue to accumulate until a full backup of the database truncates them. For that reason, Microsoft recommends that you enable circular logging on those databases. The very term circular logging makes many experienced Exchange administrators nervous because they know that without logs, your database recovery options are limited and painful. That lack of logs seems to be a bigger sticking point for many customers than the additional cost of DAG-based deployments. However, the DAG mechanism itself ensures that logs are kept until the transactions in them have been replayed into every copy of the database.
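If you do decide to go this route, enabling circular logging on a DAG database is a one-line change. A sketch, again using a hypothetical database name:

```powershell
# Enable circular logging on a replicated database. For a database with
# multiple DAG copies this uses continuous replication circular logging,
# so logs are truncated only after every copy has replayed them.
Set-MailboxDatabase -Identity "DB01" -CircularLoggingEnabled $true
```

Note that this changes only when logs are discarded, not how replication works; the passive copies still receive and replay every log file.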

What I've found a few sites doing is taking a hybrid approach: deploying DAGs but leaving circular logging turned off, and doing regular full backups, but on a less frequent schedule. This method offers the comfort of regular backups without as much overhead, while at the same time preserving the utility of DAGs. You can change the frequency of your backups as much as you like to find the right balance. Then when you're comfortable with your DAG implementation (and, most importantly, with how you restore data when necessary), flip the circular logging switch for your DAG databases and cut back the backup frequency yet again.
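Whichever point on that spectrum you choose, you should verify copy health before leaning on the DAG for recovery. A quick check, with a hypothetical database name:

```powershell
# Show the state of every copy of DB01. Healthy copies with low copy and
# replay queue lengths are what you want to see before reducing backups.
Get-MailboxDatabaseCopyStatus -Identity "DB01" |
    Format-Table Name, Status, CopyQueueLength, ReplayQueueLength
```

A copy that's persistently Failed, or one with a growing copy queue, means your effective redundancy is lower than the number of copies suggests.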
I like this approach, and it's one that I'll be recommending, but I'm curious: What do you think about the possibilities of going without routine backups? Does it make you nervous? Drop me a line to let me know.
