Do you copy?

Editorial Type: Technology focus     Date: 01-2016

Copy data virtualisation can help to free an organisation's data from its legacy physical infrastructure, suggests Ash Ashutosh, CEO, Actifio

When the words 'Big Data' are used, there is much discussion about how to use, manage and store data for strategic advantage. What is often forgotten is that most organisations do not need the specialist Big Data applications promoted under this hype.

However, what in many cases is useful, and a necessary prerequisite for the efficient use and analysis of any company's data, is the virtualisation of that data across the enterprise. The idea rests on the same concept by which virtualised servers and networks have already contributed significantly to business efficiency. By taking the essential step of data virtualisation, businesses are ideally equipped to handle the petabyte-scale data loads that Big Data can be expected to bring.

UNDERSTANDING BIG DATA BETTER
The key to understanding Big Data is to accept that it is not a class or type of data. The term has been used to describe the analysis of large volumes of data of various types, and it has come to cover a range of new approaches and technologies for storing, processing and analysing that data. Such analysis can be useful for businesses looking to understand what people are buying, and when, where and how.

Its popularity is such that many see it as the Holy Grail for businesses today. It promises to let organisations understand what their customers want and target them to drive profitable sales and growth. The Big Data trend has the potential to revolutionise the IT industry by offering businesses insight from previously ignored and underused data.

GOING GLOBAL
The UN predicts that over half the world's population will be connected to the Internet by the end of this year. That means some 3 billion people who may be connected to social networks such as Facebook and Twitter, providing a wealth of potentially valuable data on customer interests and buying behaviour.

This trend has stimulated an intense debate about how Big Data can help organisations improve customer targeting and drive revenue. Amid the excitement, Big Data is often over-hyped and discussed in a way that overlooks the fact that data is meaningless without intelligent insight. The challenge for users is to reach a successful outcome without falling for the hype.

Insight is important. But although organisations now have access to vast amounts of information, they still need to understand and draw conclusions from complex and unwieldy data. Many fall into the trap of believing that a correlation between data sets is all that is needed.

CAUSE AND EFFECT
For instance, if you identified a correlation between a rise in ice cream consumption and an increase in the murder rate during the summer months, you might conclude that one caused the other. However, it is a third variable - that of hotter temperatures - that is the more likely cause of the other two. So it's not just about looking at the trends between data sets. Whatever data you analyse, you still need to understand cause and effect; otherwise you simply end up with a series of false positives.
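
To make that concrete, here is a minimal sketch in Python (an illustration with invented numbers, not analysis from the article): temperature drives both ice cream sales and crime counts, producing a strong raw correlation between the two that largely vanishes once temperature is controlled for via partial correlation.

# Illustrative sketch with invented numbers: a hidden third variable
# (temperature) drives both series, so they correlate without causation.
import random

random.seed(0)
temps = [random.uniform(0, 30) for _ in range(365)]        # daily temperature (C)
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]  # sales rise with heat
crime = [0.5 * t + random.gauss(0, 3) for t in temps]      # so do incidents

def corr(xs, ys):
    # Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r_ic = corr(ice_cream, crime)      # strong, but spurious
r_it, r_ct = corr(ice_cream, temps), corr(crime, temps)
# Partial correlation: what remains after controlling for temperature.
partial = (r_ic - r_it * r_ct) / (((1 - r_it**2) * (1 - r_ct**2)) ** 0.5)
print(f"raw: {r_ic:.2f}  controlling for temperature: {partial:.2f}")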

Above all else, Big Data is about storing, processing and analysing data that was previously discarded as being too expensive to store and process with traditional database technologies. That includes existing data sources such as web, network and server log data, as well as new data sources such as sensor and other machine-generated data, and social media data.

For IT professionals, the opportunity to lead the way in helping organisations store and manage data is key. IDC has estimated that 60% of what is stored in data centres is actually copy data: multiple copies of the same thing, or outdated versions of it. The vast majority of stored data is redundant copies of production data, created by disparate data protection and management tools for purposes such as backup, disaster recovery, development, testing and analytics.
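
As a back-of-the-envelope illustration of that estimate, the Python sketch below works through the arithmetic; the footprint, view count and per-view overhead are assumptions chosen for illustration, not IDC or Actifio figures.

# Back-of-the-envelope arithmetic with assumed figures.
stored_tb = 100.0                               # total data-centre footprint
copy_share = 0.60                               # IDC: ~60% of stored data is copies
production_tb = stored_tb * (1 - copy_share)    # 40 TB of unique production data
copy_tb = stored_tb * copy_share                # 60 TB of redundant copies

# With copy data virtualisation, one 'golden' copy serves backup, DR,
# dev/test and analytics as virtual views; the 2% per-view overhead
# is an assumption for illustration.
n_views = 5
virtual_tb = production_tb * (1 + 0.02 * n_views)   # golden copy + view overhead
print(f"copies today: {copy_tb:.0f} TB -> virtualised: {virtual_tb:.0f} TB "
      f"for the same {n_views} use cases")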

While many IT experts are focused on how to deal with the mountains of data that are produced by this intentional and unintentional copying, far fewer are addressing the root cause of exponential copy data growth. In the same way that prevention is better than cure, reducing this weed-like data proliferation at the core should be a priority for businesses.


