How Flash Memory Could Shrink the Data Center
Enterprises are thinking of the best ways to upgrade their data centers' storage capabilities.

Flash memory has transformed the consumer-tech experience, making it possible for manufacturers to deliver successive waves of affordable smartphones and tablet PCs. But until now, the impact of flash memory on the data center has been comparatively modest: flash has been seen, for the most part, as an expensive way to boost the performance of high-end applications. In fact, one of the great ironies of IT in recent years has been the widespread availability of flash memory in consumer-grade devices even as it remained too expensive for the average IT organization to deploy in the enterprise.

But as the price of flash memory continues to fall, it has become cost-effective for Tier 1, and sometimes even Tier 2, persistent storage. As a result, not only will there be less dependency on magnetic disk drives going forward, but the amount of physical space required (and, with it, the amount of energy consumed by the data center) is about to drop.

"Some data centers you see these days are the size of a football field," said Joe Clabby, president of the IT research firm Clabby Analytics. "Now we're talking about putting 1 petabyte of data in a cabinet." The end result is a savings in the total cost of the data center that transcends the line-item price of flash memory storage.

Startups such as Pure Storage, Kaminario, Whiptail and NexGen Storage (which was just acquired by Fusion-io) pioneered the use of flash memory as a primary storage system. In the past several months, IBM, EMC, Hitachi Data Systems, Hewlett-Packard and NetApp have also signaled their intention to deliver flash storage systems that provide shared access to primary storage.
At the same time, companies such as SAP have been pushing for the adoption of in-memory computing platforms such as the High-Performance Analytic Appliance (HANA), which SAP brought to market as a stand-alone database machine in 2010. In a recent call with financial analysts, SAP co-CEO Jim Hagemann Snabe said that with 1,300 customers and a tripling of HANA sales year over year in the first quarter, it's clear that HANA is gaining mainstream adoption.

What no vendor can seem to agree upon is the degree to which flash memory systems will take over primary storage. IBM, for example, rolled out the IBM FlashSystem 820, an all-flash storage system that can be configured with up to 20TB of flash storage in a single cabinet at a cost of about $10 per GB. In comparison, high-performance disk storage in enterprise-class storage systems costs about $10 to $12 per GB, and takes up as much as four times the space to provide similar capacity. Given the inefficient way hard disk drives store data, however, the total cost of hard disk storage is in reality closer to $30 per GB.

"Magnetic disk drives have served us well, but it's a mechanical device," said Steve Mills, senior vice president and group executive for IBM Software and Systems. "One of the reasons that the storage industry is so big is that it's inefficient. Inefficient markets never last forever."

Magnetic disks are inefficient because developers frequently have to compensate for the way the mechanical device is constructed. To optimize performance, developers often wind up placing data only at the outer edge of each platter, where throughput is highest. The result is utilization rates for magnetic disk storage that are nothing short of abysmal.

But Inhi Suh, IBM's vice president of information management product strategy, suggests that while flash memory has a big role to play, organizations are only willing to place so much data at risk on a flash memory system that may crash at any time.
As a result, she expects flash memory and magnetic disks to be used in combination for quite some time; in the event of a system crash, organizations will want to recover no more than one or two terabytes of data. "You want to make sure you only have to recover data that was relevant to the query being processed," Suh said.
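The cost comparison above can be made concrete with a little arithmetic. As a rough sketch (the utilization figures below are illustrative assumptions, not numbers supplied by the vendors quoted here), the effective price per usable gigabyte is simply the list price divided by the fraction of raw capacity actually used:

```python
def effective_cost_per_gb(list_price_per_gb: float, utilization: float) -> float:
    """Effective cost per *usable* GB when only a fraction of a drive's
    raw capacity is actually used (e.g., short-stroked disks that keep
    data on the outer edge of each platter)."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return list_price_per_gb / utilization

# Enterprise disk at ~$11/GB list but only ~35-40% utilized (assumed)
# lands near the ~$30/GB effective figure cited in the article.
disk_effective = effective_cost_per_gb(11.0, 0.37)

# Flash at ~$10/GB list, assuming near-full utilization.
flash_effective = effective_cost_per_gb(10.0, 0.95)
```

Under these illustrative assumptions, disk's effective cost is roughly three times its list price, which is how a $10-to-$12-per-GB disk system ends up closer to $30 per usable GB while an all-flash array stays near its $10-per-GB sticker price.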

Flash in the Cloud

Cloud service providers are already beginning to tout the benefits of all-SSD clouds. CloudSigma, for example, launched one of the first SSD clouds, which promises not only to outperform rival platforms such as Amazon Web Services, but to do so while exploiting data center economics well enough to match existing AWS pricing. CloudSigma COO Bernino Lind hints that solid-state drives will have major implications for service-level agreements (SLAs): "With the kind of I/O performance we get, we can extend our SLA guarantees."

CloudSigma is not the only cloud service provider headed in this direction. Verizon Terremark CTO John Considine believes that, in the not-too-distant future, his company will also make available a cloud offering based entirely on flash memory. The challenge Considine sees other cloud service providers facing (not to mention internal IT organizations) is that the bigger the cloud platform, the more expensive it becomes to overhaul the data centers that support it. As such, cloud service providers that make the transition to all-SSD data centers are going to have a significant performance advantage for the foreseeable future.

Ultimately, making this transition will prove unavoidable for all IT organizations, not just because of the total cost of magnetic disks, but also because of the need to eliminate latency that winds up negatively affecting application performance. "The object is to always keep the data as close as possible to the processor," Clabby said. "As flash memory pricing comes down, it's starting to turn the whole argument about where data should be stored topsy-turvy."

Image: kubais/