A solid argument | Editorial Type: Opinion | Date: 07-2013 | Key Topics: Storage, Memory, SSDs, SAN, I/O | Key Companies: OCZ Technology | Key Products: Z-Drive VXL
When and where do SSDs make sense in a SAN? Joost van Leeuwen of OCZ investigates
With ever-increasing data storage volumes and the need for faster data processing, many companies require better storage resources to meet these demands. To answer the question 'when and where do SSDs make sense in a SAN environment?' we must first review some general background on SSDs. SSDs were designed to read and write data much faster than conventional spinning disks. The key difference between the two is that an HDD's rotating platter and magnetic head must physically seek to a specific location before servicing a request, while flash technology has no moving parts. As a result, an SSD is equally suited to reading and writing data randomly, whereas an HDD's physical limitations when accessing random locations inflict serious system bottlenecks, especially as the number of I/O commands increases.
SEQUENTIAL VS. RANDOM DATA
Modern operating systems process complex data with more and more random reads and writes. An HDD struggles to handle that workload, while an SSD deals easily with both sequential and random data. But there is more: an SSD's access time is much shorter, and its I/O responsiveness can exceed that of an HDD by as much as 1,000x.
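The sequential-versus-random distinction is easy to observe directly. The sketch below, which is illustrative and not from the article, reads the same set of blocks from a file twice, once in order and once shuffled; the file size, block size, and temp-file location are arbitrary choices for the example.

```python
# Illustrative micro-benchmark: sequential vs. random reads of the same blocks.
import os
import random
import tempfile
import time

FILE_SIZE = 16 * 1024 * 1024   # 16 MiB test file (arbitrary)
BLOCK = 4096                   # 4 KiB blocks, a typical I/O unit
N_BLOCKS = FILE_SIZE // BLOCK

def time_reads(path, offsets):
    """Read one block at each offset and return the elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

# Create a throwaway file filled with random bytes.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(FILE_SIZE))
    path = tmp.name

sequential = [i * BLOCK for i in range(N_BLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)       # same blocks, random order

t_seq = time_reads(path, sequential)
t_rand = time_reads(path, shuffled)
os.unlink(path)

print(f"sequential: {t_seq:.4f}s  random: {t_rand:.4f}s")
```

Note that on a modern OS the page cache will absorb much of the difference for a small file like this; against a raw device, the gap between the two orderings is dramatic on an HDD and nearly absent on an SSD, which is the article's point.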
ADDRESSING THE 'I/O BLENDER' EFFECT
SAN arrays have grown dramatically in size over the past few years, not only to accommodate growing database requirements but also to deliver more I/O performance. What used to be a large pool of HDDs in the SAN, each providing low I/O, servicing all user requests in one continuous data stream, has changed radically now that a single SSD can deliver the I/O of hundreds of HDDs. But the entire SAN infrastructure, including the server and all of its connections and access points, is only as fast as its slowest element, so simply replacing HDDs with SSDs is not always the most efficient solution.

Looking at it from the user's perspective, the only factor that really counts is application performance: 'How fast can I get the data for my application?' If the bottleneck is the server accessing the SAN, replacing the HDD array will not be the most efficient fix. A more efficient approach can be to add an SSD to the server itself and let it function as an accelerator by caching the most frequently used ('hot') data. Within almost every application's data-access profile there is a subset of data that is requested frequently. That hot data can be cached on SSDs inside the server, eliminating SAN access bottlenecks as well as server bottlenecks. Adding this level of flash caching to the infrastructure not only lowers the overall investment, since only a few SSD flash devices are required, but also improves performance. From a deployment perspective, the capability is easy to install in most modern servers and is currently one of the most cost-effective and efficient solutions available.
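The hot-data caching idea above can be sketched as a small LRU cache sitting between the application and a slow backing store. This is a minimal illustration only; the names `SanBackend` and `FlashCache` are hypothetical and do not refer to any product's API.

```python
# Minimal sketch of server-side flash caching: a fixed-capacity LRU cache
# ('the SSD') fronts a slow backing store ('the SAN array').
from collections import OrderedDict

class SanBackend:
    """Stand-in for the SAN array: every read is a 'slow' remote fetch."""
    def __init__(self, data):
        self.data = data
        self.reads = 0          # count how often the SAN is actually hit
    def read(self, key):
        self.reads += 1
        return self.data[key]

class FlashCache:
    """LRU cache of 'hot' blocks, as a server-side SSD would hold them."""
    def __init__(self, backend, capacity):
        self.backend = backend
        self.capacity = capacity
        self.cache = OrderedDict()
    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)        # hit: mark as recently used
            return self.cache[key]
        value = self.backend.read(key)          # miss: go to the SAN
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict the coldest block
        return value

san = SanBackend({i: f"block-{i}" for i in range(100)})
cache = FlashCache(san, capacity=8)
# A skewed workload: block 0 is 'hot', the rest are touched once each.
for i in [0, 1, 0, 2, 0, 3, 0, 4]:
    cache.read(i)
print(san.reads)   # prints 5: only the 5 distinct blocks reached the SAN
```

The repeated reads of the hot block never leave the server, which is exactly the SAN-offload effect the article describes: the cache capacity needed is sized to the hot subset, not the whole dataset.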