[caption id="attachment_13472" align="aligncenter" width="600"] Oak Ridge National Lab.'s Cray supercomputer Titan[/caption] For most of this decade, the standard of corporate computing has been shifting toward commodity computing – masses of low-powered servers hidden by clouds and anonymized within virtualized IT infrastructures that revolve around datacenters whose job it is to link clouds, legacy- and mobile systems into a fabric that doesn't clash too much with itself. Datacenters, according to the conventional wisdom, don't supercomputer (yes, we just made that a verb). Except for a few giant companies that need supercomputing muscle to model crash analysis and design, research and design molecules for new drugs, or spam everyone on the planet at once, only nuclear weapons designers, quantum physicists and supercomputer architects need power that's measured in gigaflops. That wisdom held true to a certain extent during the early years of this millennium, when all but the highest-end of the high-performance computing (HPC) shrank in response to the recession and rise of commodity hardware, according to a report released Oct. 25 by research firm IDC showing increasing demand for both top-end supercomputers and slightly down-market versions once attractive only to the scientific HPC market. Supercomputers and HPC servers of all kinds continue to do the kind of technical, scientific and visual modeling work they always have, the study found. Two-thirds of them are also being used for big-data analysis or are connected to cloud services through which an increasing number of companies that can't buy their own HPC systems are able to take advantage of those belonging to someone else. Sales of high-end systems are so good that Intel Corp. gave credit to HPC for a big part of the 12.2 percent growth in processor sales, in the financial results it announced Oct. 16. Other than the rapid spread of supercomputer architecture built on ranks of processors paired with GPUs that act as accelerators, the biggest surprise in the 2013 study was "the large proportion of sites that are applying big data technologies and methods to their problems and the steady growth in cloud computing for HPC," according to Earl Joseph, IDC technical computing analyst, in a statement announcing the study. Companies that own HPC servers use approximately 30 percent of their total compute cycles for big-data-analysis workloads – a trend IDC chronicled in a June report that estimated end-user companies spent $739.4 million on HPDA servers in 2012 and would spend $1.4 billion on them in 2017. The increase is partly due to the kind of customer-behavior analysis that appears in big-data projects; but more companies are pushing HPC and supercomputing resources as a way to better analyze security data to detect fraud (PayPal), more realism and shorter time requirements for 3D modeling and design, instant insurance-information analysis and price quotes (Geico), and similar customer-facing issues, according to the report. A lot of companies are buying HPC hardware to get that power, but even those that can't afford supercomputers want the computing muscle: the percentage of sites using cloud-based services to get the resources for HPC workloads nearly doubled between 2011 and 2013, from 13.8 percent to 23.5 percent. 
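For a sense of the pace those figures imply, a quick back-of-the-envelope calculation (ours, not IDC's) converts the report's multi-year numbers into compound annual growth rates. The dollar and percentage figures below are the ones cited above; the arithmetic, which assumes smooth year-over-year growth between the endpoints, is only illustrative.

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start value, an end value,
    and the number of years between them."""
    return (end / start) ** (1.0 / years) - 1.0

# IDC figures cited above; the growth-rate arithmetic is our own.
hpda_spend = cagr(739.4e6, 1.4e9, 5)    # $739.4M in 2012 -> $1.4B in 2017
cloud_sites = cagr(13.8, 23.5, 2)       # 13.8% of sites in 2011 -> 23.5% in 2013

print(f"Implied HPDA server-spend growth: {hpda_spend:.1%} per year")    # ~13.6%
print(f"Implied cloud-HPC adoption growth: {cloud_sites:.1%} per year")  # ~30.5%
```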
Fully half of the sites that get HPC via the cloud get it from public-cloud services such as those of Penguin Computing, Sabalcore Computing or Amazon Web Services, in addition to traditional HPC vendors including Bull, HP and IBM.

The growth in HPC servers, which IDC estimates will hold at seven percent per year for the next five years, will pose increasing problems for datacenters that have to house them, cool them and provide the storage they need. It will also pose problems for developers. Very little software, corporate or scientific, is designed to take advantage of the masses of compute cores that power supercomputers: only 5.2 percent of applications run on more than 1,000 compute cores, and 64 percent run on a single node or less.

IDC expects corporate spending on HPC-enabled applications to grow even faster than hardware sales, reaching a total of $4.8 billion per year by 2017. Systems software will grow more slowly, though still quickly, as middleware and systems-management software is developed to run HPC systems more efficiently, both on their own and in cloud environments.

What is unlikely to happen is a slide back into corporate complacency and acceptance of commodity hardware as the only real option. Commodity hardware may satisfy the bulk of demand even at large companies, but not all of it: ninety-seven percent of companies that adopted supercomputing told IDC they could no longer remain competitive without it.
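To make the gap between single-node code and supercomputer-scale code concrete, here is a minimal sketch (ours, not IDC's or any vendor's) of the kind of restructuring involved: a summation split across MPI ranks, each handling its own slice of the work, with the results combined at the end. It assumes the mpi4py library and an MPI runtime such as Open MPI are installed; real HPC applications are, of course, far more involved.

```python
# Minimal illustration of node-spanning parallelism with MPI (via mpi4py).
# Each rank sums its own slice of a large range; rank 0 combines the results.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes, which can span many nodes

N = 100_000_000
# Split the range [0, N) into roughly equal, contiguous chunks, one per rank.
chunk = N // size
start = rank * chunk
end = N if rank == size - 1 else start + chunk

partial = sum(range(start, end))

# Combine every rank's partial sum on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of 0..{N-1} computed across {size} ranks: {total}")
```

Launched with, say, mpirun -n 1000 python sum_example.py (a hypothetical file name), the same script spreads its work across 1,000 cores; written as an ordinary serial loop, it stays on a single core of a single node, which is where IDC says 64 percent of applications still run.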