A solid argument

Editorial Type: Opinion     Date: 07-2013    Views: 3257   






When and where do SSDs make sense in a SAN? Joost van Leeuwen of OCZ investigates

With ever-increasing data storage volumes and the need for faster data processing, many companies require better storage resources to fulfil these requirements. To answer the question, 'when and where do SSDs make sense in a SAN environment?' we must first review some general background regarding SSDs. SSDs were designed to write and read data much faster than conventional spinning disks. The obvious difference between the two is that the HDD's rotating disk and magnetic head must seek to a specific location to process the requested data, while flash, a physically much faster medium, has no moving parts at all.

As such, an SSD is perfectly suited to reading and writing data randomly, whereas an HDD is physically limited when accessing random locations, which creates serious system bottlenecks, especially as the number of I/O commands increases.

SEQUENTIAL VS. RANDOM DATA
The combination of an enterprise SSD with caching software provides the basic ingredients for a successful flash implementation in the data centre. The SSD hardware improves the speed at which an application can reach its critical data thanks to its much lower latency, whereas an HDD has always been designed for handling sequential reads and writes. Moreover, if the data becomes spread out across the physical HDD, defragmentation maintenance is needed to close these gaps and restore normal speed.

Modern operating systems process complex data with more and more random reads and writes. An HDD struggles to handle that data, while an SSD is well equipped to deal with both sequential and random access easily. But there is more: the access time of an SSD is much shorter, and its I/O responsiveness can exceed an HDD's by as much as 1,000x.
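The sequential-versus-random gap described above can be probed with a simple timing sketch. This is a minimal illustration only: the file, block size, and read pattern are arbitrary choices, and on a small freshly written file the OS page cache will absorb most reads, so a real benchmark would use a file far larger than RAM and a cold cache.

```python
import os
import random
import tempfile
import time

def time_reads(path, offsets, block=4096):
    """Time reading `block` bytes at each offset, in the given order."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block)
    return time.perf_counter() - start

# Create a small scratch file (a real test would use a file much
# larger than RAM and drop the OS page cache between runs).
blocks, block_size = 1024, 4096
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(blocks * block_size))
    path = tmp.name

sequential = [i * block_size for i in range(blocks)]
shuffled = sequential[:]
random.shuffle(shuffled)           # same offsets, random order

t_seq = time_reads(path, sequential)
t_rand = time_reads(path, shuffled)
print(f"sequential: {t_seq:.4f}s, random: {t_rand:.4f}s")
os.unlink(path)
```

On an HDD the random pass is dramatically slower because every out-of-order offset costs a head seek; on an SSD the two passes land much closer together.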

ADDRESSING THE 'I/O BLENDER' EFFECT
Many applications run together in a server environment, and IT managers will set up the infrastructure to utilise virtual servers and enable users to run multiple workloads. As a result, the virtualisation layer will consolidate all access requests into one data stream. This phenomenon is known as the 'I/O blender effect': all of the sequential data commands are blended into one big stream of random data vying to access the SAN. For this reason, server virtualisation requires strong random access performance.

SAN arrays have grown in size dramatically over the past few years, not only to facilitate growing database requirements but also to meet the need for increased I/O performance. What used to be a large pool of HDDs in the SAN, each delivering low I/O per drive while servicing all user I/O requests in one continuous data stream, has changed radically: a single SSD can now replace hundreds of HDDs. But as discussed earlier, the entire SAN infrastructure, including the server and all of its connections and access points, is only as fast as its slowest element, so simply replacing HDDs with SSDs is not always the most efficient solution.

Looking at it from the user's perspective, the only factor that really counts is application performance; "How fast can I get the data for my application?" As the bottleneck might be the server accessing the SAN, replacing the HDD array might not be the most efficient solution. A more efficient approach could be to add an SSD into the server and have it function as an accelerator by caching the most frequently used ('hot') data.

Within every application data access profile there is usually a subset of data that is requested frequently. That hot data can be cached on SSDs inside the server, eliminating SAN access bottleneck issues as well as server bottlenecks. Adding this level of flash caching to the infrastructure not only lowers the overall investment, requiring only a few SSD flash devices, but also improves performance. From a deployment perspective, this capability can be easily installed in most modern servers and is currently one of the most cost-effective and efficient solutions available.
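The hot-data caching idea above can be sketched as a small LRU cache sitting in front of the SAN. This is an illustrative model, not any vendor's caching software: `HotDataCache`, its capacity, and the `san_read` backend are all hypothetical stand-ins.

```python
from collections import OrderedDict

class HotDataCache:
    """Tiny LRU cache standing in for a server-side SSD flash cache.
    Reads hit the cache first; misses fall through to the (slow) SAN."""

    def __init__(self, capacity, san_read):
        self.capacity = capacity      # how many blocks the "SSD" holds
        self.san_read = san_read      # backend fetch (hypothetical callable)
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.store:
            self.hits += 1
            self.store.move_to_end(block_id)   # mark as recently used
            return self.store[block_id]
        self.misses += 1
        data = self.san_read(block_id)
        self.store[block_id] = data
        if len(self.store) > self.capacity:    # evict the coldest block
            self.store.popitem(last=False)
        return data

# A skewed workload: most requests touch a small 'hot' set of blocks.
cache = HotDataCache(capacity=4, san_read=lambda b: f"data-{b}")
workload = [1, 2, 1, 3, 1, 2, 9, 1, 2, 3, 1, 2]
for block in workload:
    cache.read(block)
print(cache.hits, cache.misses)  # prints: 8 4
```

Because real application access profiles are similarly skewed, a modest amount of flash in the server absorbs most reads, and only the cold minority ever crosses the SAN.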


