Uncovering the hidden costs of straight disk backup
Editorial Type: Technology focus
Date: 01-2014
In this article, Bill Andrews, CEO of ExaGrid Systems, explores the hidden costs of settling for a straight disk solution as opposed to selecting a purpose-built, disk-based backup appliance.

Organisations are moving from tape backup to disk-based backup for a variety of reasons, including faster backups (a shorter backup window), faster and more reliable restores, more reliable backups, and reduced time spent managing backups.
HOW IT WORKS
Because deduplication is extremely compute intensive, less aggressive approaches are typically deployed in the backup application or on the media server, with deduplication ratios ranging from 2:1 up to 10:1. In contrast, disk-based backup appliances have dedicated compute resources, so more aggressive algorithms and approaches can be employed, yielding deduplication ratios that average 20:1. The net result is that backup-application deduplication requires more disk space over time than a purpose-built, disk-based backup appliance.

Because backup applications offer data deduplication, it is often assumed that deduplication is covered by the backup application itself and, therefore, that only straight disk is needed to store the deduplicated data. This is not the case: the costs of the straight disk, the rack space, and the bandwidth needed to replicate the data are all much higher with straight disk.

Before the advent of data deduplication, most organisations deployed straight disk behind the backup application to perform backups and to keep some level of retention for fast restores, with all long-term retention written to tape. This approach is called disk staging and, due to its cost, it is rare to see any organisation keep more than one to two weeks of retention on straight disk. A typical backup rotation takes a full copy of the database and email servers each night, but otherwise moves only the files that have changed. This reduces the amount of data backed up each night (typically 25 per cent of the full weekend backup) and results in a short backup window. At weekends, a full copy of all data is backed up.

A disk staging example:
• 20TB of primary data = a 20TB full weekend backup
• Nightly backups over the rest of the week add roughly another 20TB

The net result is that each week of backups requires twice the disk space of the weekly full backup; that is, for 20TB of primary data, 40TB of disk space is required. Keeping two weeks of backups requires 80TB of disk space (to back up and retain just 20TB of data). This becomes very expensive very quickly.

If you keep four weeks of retention, with two weeks of both nightly and full weekend backups plus two additional weeks of full weekend backups only, then you need 80TB of disk space for the first two weeks, 20TB for week three, and another 20TB for week four: a total of 120TB of disk space to back up and retain just 20TB of data. And that still provides only four weeks of retention.

Most studies show that users who delete or overwrite a file do not realise it for at least six weeks, and in many cases not for 13 weeks, so most experts agree that onsite retention should be 90 days, or 13 weeks. The problem with straight disk backup is that you cannot afford either the cost or the rack space for 13 weeks of retention.
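The disk staging arithmetic above can be sketched in a few lines of Python. This is a minimal illustration, not vendor code; the four-night incremental schedule is an assumption, chosen so that the nightly backups total the 20TB per week the article's figures imply.

```python
def weekly_space_tb(full_tb, incremental_fraction=0.25, nights=4):
    """Disk needed for one week of disk staging: one weekend full backup
    plus nightly incrementals at 25 per cent of the full.
    The four-night schedule is an assumption, not stated in the article."""
    return full_tb + nights * incremental_fraction * full_tb

full = 20.0                            # 20TB of primary data
week = weekly_space_tb(full)           # one week of backups: 40.0TB
two_weeks = 2 * week                   # two weeks of full retention: 80.0TB
four_weeks = two_weeks + 2 * full      # plus two weeks of fulls only: 120.0TB

print(week, two_weeks, four_weeks)
```

Extending the same schedule to the recommended 13 weeks of retention makes it easy to see why straight disk becomes unaffordable.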
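The deduplication ratios quoted earlier in this section can be put in concrete terms with a similar sketch. The 260TB figure below (13 weeks of 20TB weekly fulls, ignoring incrementals) is our own assumption for illustration, not a number from the article.

```python
def stored_tb(logical_tb, dedup_ratio):
    """Physical disk consumed after deduplication at a given ratio."""
    return logical_tb / dedup_ratio

# Assumption: 13 weeks of 20TB weekly full backups, incrementals ignored
logical = 13 * 20.0                    # 260.0TB of logical backup data

print(stored_tb(logical, 1))           # straight disk, no dedup: 260.0TB
print(stored_tb(logical, 10))          # backup-app dedup at its best, 10:1: 26.0TB
print(stored_tb(logical, 20))          # appliance dedup average, 20:1: 13.0TB
```

The gap between 26TB and 13TB of physical disk is the recurring cost difference the article attributes to the less aggressive, media-server-side deduplication.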
THE SYNTHETIC APPROACH