A tale of two architectures
Editorial Type: Management | Date: 11-2013 | Key Topics: Storage, Backup, Deduplication, Scale-out Architecture | Key Companies: ExaGrid
Bill Andrews, CEO of ExaGrid Systems Inc., argues that a scale-out approach is essential to safeguard data for the future
If Mark Twain were alive today, he might have updated his famous quotation to: "There are only two things certain in life: death and data growth." Over the years, protecting data has become an ever bigger problem for IT teams as a result of relentless data growth. And, according to feedback from over 400 ExaGrid customers, that's not about to change: 88% of businesses see data growth as having the biggest impact on their backup and restore infrastructure in 2014. Tape storage - once the preferred medium - simply can't keep pace with this growth, and many businesses have seen their backup window stretch to the point where not all of their data is protected on a daily basis. To solve this problem, businesses are moving to disk-based backup with deduplication, which is faster and more reliable for backups, recoveries and restores. But does simply adding deduplication to disk solve the data growth problem?
ON THE BACKUP WISH LIST
What IT teams want from backup is straightforward: a backup window that stays fixed and restores that stay fast, even as data keeps growing. The fact is that disk with deduplication does not necessarily satisfy that wish list, because there are two different approaches to the data protection challenge, each with its own architecture. One approach - scale-up - provides a temporary fix; the other - scale-out - provides a permanent solution.
SPOT THE DIFFERENCE
Scale-out refers to architectures that grow compute performance and capacity in lockstep by adding full appliances, each bringing disk capacity, processor, memory and network bandwidth. As data grows, all four resources are added together. Scale-up, by contrast, puts all capacity behind a single controller: as data grows, only disk is added, while processor, memory and bandwidth stay fixed.

As the research results above show, primary data - and the resulting backup data - grows continuously, which means the amount of data to be deduplicated also keeps growing. The more data there is to deduplicate, the more processor and memory is required. If you don't continually add processor and memory as the data grows, the backup window inevitably gets longer and longer as deduplication takes more time. The only way to keep a fixed-length backup window is to add processor and memory along with the data; simply adding disk capacity is not enough.

The key difference, then, is that scale-up treats backup as just a storage problem, when in fact it is a data movement, data processing and data storage problem. Because scale-up provides only a single resource controller for all processing, performance is limited by the capabilities of that one component - so as data grows, so does the backup window.

Separately, the deduplication architecture also matters for restores. Scale-up architectures deduplicate data on the fly and store only deduplicated, or dehydrated, data. For restores, tape copies, instant recoveries and disaster recoveries, you must wait while the data is put back together, or rehydrated - a time-consuming process that slows down every restore and recovery. Scale-out architectures instead keep a landing zone holding the most recent full backup, with all deduplicated data stored behind it. Restores, tape copies and recoveries are fast because the most recent backups are already in their full, hydrated form, ready to go.
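The backup-window argument above can be sketched with some simple arithmetic. The following is a minimal illustration, not ExaGrid's actual sizing model: all throughput, capacity and growth figures are assumed purely for demonstration. It models a scale-up system whose ingest rate is fixed by one controller against a scale-out system where each added appliance contributes its own throughput.

```python
# Illustrative sketch (assumed figures, not vendor data): how the nightly
# backup window evolves under scale-up (fixed ingest throughput) versus
# scale-out (throughput added alongside capacity).
import math

def backup_window_hours(data_tb, ingest_tb_per_hour):
    """Hours to deduplicate and store one full backup at a given ingest rate."""
    return data_tb / ingest_tb_per_hour

def simulate(years, start_tb=50.0, annual_growth=0.30):
    """Yield (year, data_tb, scale_up_window_h, scale_out_window_h) per year."""
    scale_up_throughput = 10.0   # TB/h: one controller, never upgraded
    per_node_throughput = 10.0   # TB/h: each scale-out appliance adds this much
    tb_per_node = 60.0           # capacity of each scale-out appliance
    data = start_tb
    for year in range(years):
        # Scale-out adds whole appliances as capacity is consumed.
        nodes = max(1, math.ceil(data / tb_per_node))
        scale_out_throughput = nodes * per_node_throughput
        yield (year, round(data, 1),
               round(backup_window_hours(data, scale_up_throughput), 2),
               round(backup_window_hours(data, scale_out_throughput), 2))
        data *= 1 + annual_growth

for year, data, up, out in simulate(6):
    print(f"year {year}: {data:>6} TB  scale-up {up:>6} h  scale-out {out:>5} h")
```

With 30% annual growth, the scale-up window roughly triples over five years while the scale-out window stays bounded, because every added appliance brings processor, memory and bandwidth along with its disk.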