The big issue
Editorial Type: Technology Focus
Date: 07-2015
Key Topics: Document, Big Data, Storage, Data protection, Search, Compliance
Key Companies: Arkivum
Key Industries: Government, Health
Nik Stanbridge, VP Marketing, Arkivum, argues that many enterprises simply do not fully understand what Big Data is, how best to store and utilise it, and just how much data there has to be for it to be called 'big'.

One of the biggest phenomena of modern technology is Big Data. Even seeing the phrase written down makes it look big, and as it is generally capitalised it looks and sounds bigger still. But how many organisations actually know what it means? And how big is big? Over the last few years there have been plenty of definitions doing the rounds. Data was already getting bigger, in Big Data terms, back in 2001, when industry analyst Doug Laney described the "3Vs" - volume, variety and velocity - as the key "data management challenges" for enterprises. These are the same "3Vs" that have since been used by just about anyone attempting to define or describe Big Data.
DIFFICULT TO PROCESS

I recently read two articles that explain how Big Data is having an impact on both life sciences and academia, and both attempt to put some numbers around 'how big is big'. The first, in Inquisitr, talks about how scientists generating data about our genomes will soon face an acute shortage of storage space: raw genome data is being produced at a pace faster than even YouTube's. The article goes on to say that companies like Google, Facebook and Twitter take this all in their stride simply because their businesses are all about helping us share our 'small data' while making it Big in the process. The numbers Google faces are eye-watering - 100 petabytes (PB) of data a year from YouTube alone. That's Big. And it all has to sit on fast, expensive, always-on storage, because Google's job is to serve data up immediately. However, one organisation's Big is another organisation's really quite small.

The article also says that next-generation gene sequencing (NGS) will dwarf what we currently call Big Data and will truly redefine what we mean by the term. By 2025 - just ten years away - NGS activities could be generating anything up to 40 exabytes (EB) a year. That's 400 times the YouTube figure.

The sequencing activity that will generate this huge volume of data will bring massive benefits to scientists, and ultimately to us in terms of our health, because sequencing brings the ability to predict and diagnose illnesses and inherited conditions - so the value of this data is incredibly high. Even more crucially, the cost of performing the sequencing is falling at a rapid rate. So rather than requesting a single test for a specific condition, clinicians are now simply requesting that the whole genome be sequenced, as the benefits are considered to outweigh the cost.
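As a quick sanity check on those figures, a back-of-envelope calculation (assuming decimal SI units, so 1 EB = 1,000 PB) shows how far the projected NGS output outstrips YouTube's annual volume:

```python
# Figures quoted in the article, using decimal SI prefixes (an assumption):
PB = 10**15  # bytes in a petabyte
EB = 10**18  # bytes in an exabyte

youtube_per_year = 100 * PB  # YouTube ingest: ~100 PB a year
ngs_per_year = 40 * EB       # projected NGS output by 2025: up to 40 EB a year

ratio = ngs_per_year / youtube_per_year
print(f"NGS would generate {ratio:.0f}x YouTube's annual data volume")  # prints 400x
```

In other words, 40 EB a year is two full orders of magnitude beyond what the largest consumer platforms handle today, which is what makes the storage question so pressing.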
VALUE ADDED?

So this leads me to ask: what do you do with something that is clearly valuable - something you can't exactly put a value on today, but that you just know is going to be more valuable tomorrow?