[caption id="attachment_4116" align="aligncenter" width="500"] Green Revolution Cooling claims that cooling via oil is the way of the future.[/caption]

Chillers? Dinosaurs of the server-cooling world. Water misting? Tired. Last week, Intel gave its seal of approval to dunking a server full of electronic components in a bath of dielectric oil, lowering the power usage effectiveness (PUE) to an eye-catching 1.02.

GigaOM reported last week that, after a year’s immersion in the oil bath, all the hardware involved in the test (microprocessors, I/O chips, connectors, hard drives, and server housing) withstood the oil just fine. Mike Patterson, senior power and thermal architect at Intel, told the site that Intel sent the servers to an independent testing lab to determine whether any of the components had been adversely affected: “[They] came back with a thumbs up that a year in the oil bath had no ill effects on anything they can see.”

The technology was developed by Green Revolution Cooling, which offers its CarnotJet system on a 90-day evaluation lease ($300, plus shipping and installation). Each 13U rack can handle between 6 kW and 8 kW of heat, depending on whether the heat pulled away from the servers is exchanged via a traditional radiator or a water loop.

Of course, this “new” way of cooling a server isn’t exactly novel. As GR Cooling notes, some transformers and power substations already use liquid cooling to shed heat. Dunking heat-generating microprocessors and graphics cards in an oil bath has served as an alternative cooling solution for PC overclockers for many years. And early Cray supercomputers, including the 1985 Cray-2, the fastest supercomputer in the world at the time, pumped a liquid called Fluorinert over their circuit boards to cool them.

“Ten years ago, no one cared,” Christiaan Best, chief executive of GR Cooling, said in a January interview with the Texas Advanced Computing Center at the University of Texas at Austin:
“Power density was very low, the cost of hardware compared to cooling was much higher, and the number of overall servers was lower. But now, people who run data centers are starting to build dedicated buildings just to move the air through the computers. Where does this madness end? You can’t just keep shoving power into a box and expecting it to be cooled by air, which is fundamentally an insulator.”
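Best’s point about air being a poor heat-transfer medium can be sanity-checked with a back-of-the-envelope comparison of volumetric heat capacity. The densities and specific heats below are generic handbook values for room-temperature air and light mineral oil, chosen here for illustration; they are not GR Cooling’s published figures.

```python
# Rough comparison of how much heat air and mineral oil can each
# carry per cubic meter, per degree of temperature rise.
# Values are typical handbook figures (illustrative assumptions).

AIR_DENSITY = 1.2         # kg/m^3, air near room temperature
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

OIL_DENSITY = 850         # kg/m^3, typical light mineral oil
OIL_SPECIFIC_HEAT = 1700  # J/(kg*K), typical for mineral oil

# Volumetric heat capacity = density * specific heat, in J/(m^3*K)
air_vol_heat = AIR_DENSITY * AIR_SPECIFIC_HEAT   # ~1,200 J/(m^3*K)
oil_vol_heat = OIL_DENSITY * OIL_SPECIFIC_HEAT   # ~1,400,000 J/(m^3*K)

ratio = oil_vol_heat / air_vol_heat
print(f"Oil carries roughly {ratio:.0f}x more heat per unit volume than air")
```

With these illustrative values the ratio works out to roughly 1,200, which is in line with the figure GR Cooling quotes for its coolant, and explains why a slowly circulating oil bath can replace enormous volumes of forced air.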
Each CarnotJet server is mounted vertically and fully submerged in a bath of GreenDEF coolant, which the company’s website describes as a specific (but not proprietary) formulation of nonconductive, nontoxic dielectric oil with 1,200 times more heat capacity by volume than air. In other words, because the oil can absorb so much heat by conduction, it doesn’t have to be chilled nearly as far: GR Cooling states that the oil can cool a server efficiently at 104 degrees F (40 degrees C), whereas air must be chilled to about 75 degrees F (24 degrees C) to be effective. “In fact, our system is so powerful that we can install 100kW or more of compute in each 42U rack,” the company claims.

Naturally, the cooling solution is ideal for dense computing environments where a large number of heat-generating components sit in close proximity to one another. GR Cooling also claims that the technology can cut power use by up to 25 percent by eliminating cooling fans and letting the oil do the job.

That all sounds wonderful, but Intel’s tests showed that the PUE of the oil-cooled servers was between 1.02 and 1.03. Patterson told GigaOM that the average PUE for an air-cooled server is about 1.60; as another data point, consider Facebook, which recently reported its company-wide PUE as 1.08. And that may be the number that causes data center operators to sit up and pay attention.

Image: GRCooling.com
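To see why that PUE gap matters, recall that PUE is total facility power divided by the power delivered to the IT equipment itself; everything above 1.0 is overhead (cooling, power distribution, lighting). The sketch below applies the PUE figures reported above to a hypothetical 100 kW IT load — the load figure is an assumption chosen purely for illustration.

```python
# How the reported PUE figures translate into overhead power
# for an assumed 100 kW IT load.
# PUE = total facility power / IT equipment power,
# so overhead = IT load * PUE - IT load.

def overhead_kw(it_load_kw, pue):
    """Facility power spent on everything except the IT gear itself."""
    return it_load_kw * pue - it_load_kw

it_load = 100.0  # kW of servers; an assumed example load

air_cooled = overhead_kw(it_load, 1.60)  # typical air-cooled PUE
oil_cooled = overhead_kw(it_load, 1.02)  # Intel's oil-bath result

print(f"Air-cooled overhead: {air_cooled:.0f} kW")  # 60 kW
print(f"Oil-cooled overhead: {oil_cooled:.0f} kW")  # 2 kW
```

On those assumptions, moving from a PUE of 1.60 to 1.02 cuts non-IT overhead from 60 kW to about 2 kW for the same compute — the kind of arithmetic that gets a data center operator’s attention.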