Avoiding

Editorial Type: Management     Date: 01-2013    Views: 3313   
Tired of fixing one data centre bottleneck only to see another one emerge somewhere else? Jeff Richardson, Executive Vice President and Chief Operating Officer at LSI, has some reassuring advice.

It's a curse in any network infrastructure, especially in the data centre: Clear one performance bottleneck and another drag on data or application speed surfaces elsewhere in a never-ending game of "Whack-A-Mole." In today's data centres, the "Whack-A-Mole" mallet is swinging like never before as these bottlenecks pop up with increasing frequency in the face of the data deluge, the exponential growth of digital information worldwide.

Some of these choke points are familiar, such as the timeworn Input/Output (I/O) path between servers and disk storage, whether directly attached or in a SAN, as microprocessor capability and speed have outpaced storage. Other, newer bottlenecks are cropping up with the growing consolidation and virtualisation of servers and storage in data centre clouds, as more organisations deploy cloud architectures to pool storage, processing and networking to increase computing resource efficiency and utilisation, improve resiliency and scalability, and reduce costs.

Improving data centre efficiency has always come down to balancing and optimising these resources, but this calibration is being radically disturbed today by major transitions in the network, such as the move from 1 Gbps Ethernet to 10 Gbps and soon to 40 Gbps, the emergence of multi-core and other ever-faster processors, and the rising deployments of solid state storage. As virtualisation increases server utilisation, and therefore efficiency, it also exacerbates interactive resource conflicts in memory and I/O. And even more resource conflicts are bound to emerge as big data applications evolve to run over ever-growing clusters of tens of thousands of computers that process, manage and store petabytes of data.

With these dynamic changes to the data centre, maintaining acceptable levels of performance is becoming a greater challenge. But there are proven ways to address the most common bottlenecks today - ways that will give IT managers a stronger hand in the high-stakes bottleneck reduction contest.

BRIDGING THE I/O GAP
Hard disk drive (HDD) I/O is a major bottleneck in direct-attached storage (DAS) servers, storage area networks (SANs) and network-attached storage (NAS) arrays. Specifically, I/O to memory in a server takes about 100 nanoseconds, whereas I/O to a Tier 1 HDD takes about 10 milliseconds, a difference of 100,000 times that chokes application performance. Latency in a SAN or NAS is often even higher because of data traffic congestion on the intervening Fibre Channel (FC), FC over Ethernet or iSCSI network.
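The 100,000x figure above follows directly from the two round-number latencies the article cites; a quick sketch makes the arithmetic explicit (the figures are the article's illustrative numbers, not measurements):

```python
# Round-number latencies from the article, illustrating the I/O gap.
MEM_LATENCY_NS = 100              # I/O to server memory: ~100 nanoseconds
HDD_LATENCY_NS = 10 * 1_000_000   # I/O to a Tier 1 HDD: ~10 ms = 10,000,000 ns

# The gap between the two tiers, expressed as a ratio.
ratio = HDD_LATENCY_NS / MEM_LATENCY_NS
print(f"HDD access is {ratio:,.0f}x slower than memory")  # 100,000x
```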

These bottlenecks have grown over the years as increases in drive capacity have outstripped decreases in latency of faster-spinning drives and, in confronting the data deluge, IT managers have needed to add more hard disks and deeper queues just to keep pace. As a result, the performance limitations of most applications have become tied to latency instead of bandwidth or I/Os per second (IOPS), and this problem threatens to worsen as the need for storage capacity continues to grow. Keep in mind that the last three decades have seen only a 30x reduction in latency, while network bandwidth has improved 3000x over the same period. Processor throughput, disk capacity and memory capacity have also seen large gains.

Caching content to memory in a server or in the SAN on a Dynamic RAM (DRAM) cache appliance can help reduce latency, and therefore improve application-level performance. But because the amount of memory possible in a server or cache appliance, measured in gigabytes, is only a small fraction of the capacity of even a single hard disk drive, measured in terabytes, performance gains from caching are often inadequate.
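The reason a small cache delivers inadequate gains can be seen in the standard effective-latency formula: average access time is the hit rate times memory latency plus the miss rate times HDD latency, so even a handful of misses keeps the average HDD-dominated. A minimal sketch, reusing the article's 100 ns and 10 ms figures (the hit rates are illustrative assumptions):

```python
# Effective access latency under a DRAM cache, using the article's
# illustrative figures: ~100 ns to memory, ~10 ms to a Tier 1 HDD.
MEM_NS = 100
HDD_NS = 10_000_000

def effective_latency_ns(hit_rate: float) -> float:
    """Average access time: hits served from memory, misses from HDD."""
    return hit_rate * MEM_NS + (1 - hit_rate) * HDD_NS

# Even high hit rates leave the average dominated by HDD misses:
for hit_rate in (0.5, 0.9, 0.99):
    print(f"hit rate {hit_rate:.0%}: {effective_latency_ns(hit_rate) / 1000:,.0f} µs")
```

At a 90% hit rate the average is still roughly 1,000 µs, four orders of magnitude above pure memory access, which is why a gigabyte-scale cache in front of terabyte-scale drives rarely moves the needle far.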

Solid state storage in the form of NAND flash memory is particularly effective in bridging the significant latency gap between memory and HDDs. In both capacity and latency, flash memory bridges the gap between DRAM caching and HDDs, as shown in the chart. Traditionally, flash has been very expensive to deploy and difficult to integrate into existing storage architectures. Today, decreases in the cost of flash coupled with hardware and software innovations that ease deployment have made the ROI for flash-based storage more compelling.
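Extending the effective-latency sketch above to a three-tier model shows why flash is so effective in that middle position: a flash tier catches most DRAM-cache misses before they reach the HDD. The ~100 µs flash latency and the hit rates below are illustrative assumptions, not figures from the article:

```python
# Three-tier sketch: DRAM cache, NAND flash tier, HDD backing store.
# Flash latency (~100 µs) is an assumed order-of-magnitude figure sitting
# between the article's ~100 ns memory and ~10 ms HDD numbers.
DRAM_NS, FLASH_NS, HDD_NS = 100, 100_000, 10_000_000

def tiered_latency_ns(dram_hit: float, flash_hit: float) -> float:
    """Average latency when a flash tier absorbs most DRAM-cache misses."""
    dram_miss = 1 - dram_hit
    return (dram_hit * DRAM_NS
            + dram_miss * flash_hit * FLASH_NS
            + dram_miss * (1 - flash_hit) * HDD_NS)

# With flash catching 95% of DRAM misses, far fewer accesses pay HDD cost:
print(f"with flash tier: {tiered_latency_ns(0.5, 0.95) / 1000:,.0f} µs")
print(f"without flash:   {tiered_latency_ns(0.5, 0.0) / 1000:,.0f} µs")
```

Under these assumed numbers the flash tier cuts the average from roughly 5,000 µs to under 300 µs, which is the capacity-and-latency bridging role the chart describes.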


