Has the Floating Data Center Finally Arrived?

Aug. 20, 2015
Nautilus Data Technologies says it has developed a floating data center that can dramatically slash the cost of cooling servers. Is the market ready for the data barge?

When it comes to California’s water crisis, a data center startup has an innovative solution. Rather than bringing tens of thousands of gallons of water to the data center, the company is bringing the data center to the water.

Nautilus Data Technologies says it has successfully tested a design for a floating data center that can dramatically slash the cost of running IT operations. The company has built a prototype “data barge” moored at Mare Island in Vallejo, Calif., about 20 miles north of San Francisco, and says it expects to have a production data center online by the end of 2015.

Nautilus is the latest company to pursue a water-based data center, a novel concept that has made headlines since Google first floated the idea in a 2007 patent filing. Industry experts have debated whether the idea is sheer brilliance or total madness, but several efforts to launch a commercial data barge have fizzled.

Working Proof-of-Concept

By building a working proof-of-concept, Nautilus has brought the idea closer to reality than previous efforts. The company is convinced that a floating data barge can offer extraordinary economics, slashing infrastructure costs by cooling servers with water from the bay.

“We’ve developed a way to use the natural body of water we’re sitting on to cool the data center,” said Arnold Magcale, the CEO and co-founder of Nautilus Data, who said the design allows it to slash costs by as much as 40 percent by eliminating traditional cooling equipment like CRAC units and chillers.

As the company builds its first production vessel, Magcale and his team are ready to test the big question: Are customers ready to house their mission-critical IT equipment on a barge floating on the water?

Nautilus believes it is offering a timely solution to a particular problem in the California market. After four years of drought, the state has declared a drought State of Emergency and is implementing measures to reduce water usage. Thus far there has been no effort to rein in water use by the state’s data centers, but the issue has drawn media scrutiny in the Wall Street Journal and Forbes.

“In California, we have a big issue with water,” said Ron Suchan, VP of Sales and Marketing for Nautilus. “One of the benefits of our approach is that 100 percent of the water we use is returned to the environment, not through evaporation, but directly back into the body of water. Water is going to become more expensive, both in what you use and what you’re putting back into the environment.”

Futuristic or Crazy?

It’s been eight years since the data center world was captivated by Google’s patent for a water-based data center, stirring visions of a fleet of futuristic offshore data havens powered and cooled by the waves. The company has never built the sea-going “Google Navy” described in its patents (alas, the company’s 2013 “mystery barge” turned out to be a PR initiative), but several other companies have pursued the floating data center concept.

That includes International Data Security (IDS), which spent three years trying to develop data ships that would house modular data centers. IDS was headed by Richard Naughton, a former Navy admiral with extensive experience with on-board IT. But the company struggled to find funding, and filed for bankruptcy after Naughton passed away in 2011.

Running IT operations on ships is not new. The U.S. Navy has maintained sophisticated telecom and IT infrastructure on its fighting ships for decades. Major cruise lines also incorporate advanced technology into their newest mega-ships. Both the Navy and cruise lines are sea-going enterprises, and thus have no choice about whether to operate technology on floating vessels.

The same is not true of enterprise data center companies, which have a wide variety of choices about the type and location of their data centers. For many customers, housing data and equipment aboard a barge – even the most water-tight of barges – introduces a risk of water damage that isn’t found in other commercial data centers.

Magcale, the CEO of Nautilus, worked with Naughton on the IDS initiative, and believed it could work with modifications. He shifted his focus to vessels moored at major ports.

“This has been six years in the making,” said Magcale. “IDS was primarily an infrastructure play, reusing old Navy ships. The most efficient approach is to use a former ocean-worthy barge. It can function as an extension of the pier.”

Nautilus has partnered with a marine shipbuilder to tap an ample supply of barges. Magcale said the company just completed a Series B financing, with primary backing from an individual investor in the green energy field.

How It Works

The primary advantage of a floating data center is the ability to slash costs by using water to power or cool the data center, and avoiding the expense of real estate and property taxes. The original Google patent describes using the motion of ocean surface waves to create electricity, and a cooling system based on sea-powered pumps and seawater-to-freshwater heat exchangers. The concept envisioned floating data centers located 3 to 7 miles out at sea.

An illustration from Google’s patent for a sea-going data center, with water intake below the vessel and power-generating turbines deployed on the surface.

The Nautilus design differs from Google’s approach by mooring the barges at a pier, which eliminates the ability to harness wave power for electricity.

The IT equipment is housed inside modular data halls on the deck, with servers in racks fitted with rear-door cooling units. Mechanical and electrical equipment, including UPS units and cooling distribution units, is located below deck in a water-tight hold.

The cooling system features two separate piping loops and a heat exchanger. Cool water from the bay enters through an intake several feet below the barge, is filtered to remove fish and any other contaminants, and then moves to the heat exchanger. A fresh water cooling loop on the other side of the heat exchanger feeds the water-cooled rear-door systems on the racks.
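The heat-transfer arithmetic behind that bay-water loop is straightforward. The sketch below is illustrative only: it borrows the 8-megawatt per-barge IT load and the roughly 4-degree return-temperature rise cited elsewhere in this article, assumes all of that heat is rejected to the bay-water loop, and is not a Nautilus specification.

```python
# Rough heat-balance sketch for the bay-water side of a two-loop cooling
# system. The 8 MW per-barge IT load and the ~4 degree F return-temperature
# rise come from figures cited elsewhere in this article; everything else
# is a textbook constant or simple arithmetic, not a Nautilus specification.

SPECIFIC_HEAT_WATER = 4.18   # kJ per kg per degree C
LITERS_PER_GALLON = 3.785

it_load_kw = 8_000           # 8 MW of IT load per barge
delta_t_c = 4.0 * 5.0 / 9.0  # a 4 degree F rise, converted to Celsius

# Heat balance: Q = m_dot * c_p * delta_T, solved for the mass flow m_dot
flow_kg_per_s = it_load_kw / (SPECIFIC_HEAT_WATER * delta_t_c)
# One kilogram of water is roughly one liter, so kg/s ~ L/s
flow_gpm = flow_kg_per_s * 60 / LITERS_PER_GALLON

print(f"Bay-water flow needed: {flow_kg_per_s:,.0f} kg/s (~{flow_gpm:,.0f} gal/min)")
```

The two-loop arrangement also keeps the salt water on one side of the heat exchanger, so only treated fresh water circulates through the rear-door units at the racks.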

The intake system uses copper plating and titanium piping to limit the impact of salt water on equipment. Nautilus says it has worked closely with the Navy to address the humidity and condensation issues that arise on a floating vessel.

Proof of Concept

Daniel Kekai, the co-founder and Data Center Architect for Nautilus, says the cooling system worked smoothly in the company’s proof-of-concept at Mare Island, with water returned to the bay at temperatures within 4 degrees of the intake temperature, minimizing environmental impact on San Pablo Bay and the nearby Napa River.

Kekai says the test was performed on board the floating data barge, which featured five racks of IT gear and load banks that simulated power densities of up to 32kW per rack, and yielded a Power Usage Effectiveness (PUE) of 1.045. PUE, the ratio of total facility power to the power drawn by the IT equipment itself, is the leading measure of a facility’s energy efficiency.
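A figure of 1.045 leaves very little room for overhead. The back-of-the-envelope sketch below assumes all five racks ran at the full simulated 32kW density; it is simple arithmetic on the reported numbers, not additional test data.

```python
# What a 1.045 PUE implies for the Mare Island test, assuming all five
# racks ran at the maximum simulated density of 32 kW (an assumption for
# illustration; the article reports densities of "up to" 32 kW per rack).

racks = 5
kw_per_rack = 32
pue = 1.045

it_load_kw = racks * kw_per_rack               # 160 kW of simulated IT load
facility_load_kw = it_load_kw * pue            # ~167 kW total facility draw
overhead_kw = facility_load_kw - it_load_kw    # ~7 kW for cooling, distribution losses, etc.

print(f"IT load: {it_load_kw} kW | total: {facility_load_kw:.1f} kW | "
      f"overhead: {overhead_kw:.1f} kW ({overhead_kw / it_load_kw:.1%} of the IT load)")
```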

Nautilus Data Technologies used this “data barge” as its proof-of-concept for its floating data center. (Photo: Nautilus Data)

“We primarily wanted to prove out the PUE,” said Kekai. “We also wanted to make sure we mimicked an actual data center, and we wanted to make sure the cooling system performed as expected.”

The proof-of-concept featured equipment from the U.S. Navy, Applied Materials and Veolia, and was validated by Jacobs Engineering and Critical Engineering Group.

The Real Test Begins

Magcale says Nautilus will deploy its first production data center on a barge that will be moored at an unidentified port in the Bay Area. He projects that it will be complete by year-end, and says Nautilus will be able to deploy new data barges in about six months. Each barge will support 8 megawatts of IT load, and each location will support up to five data barges. Since the company is using a modular approach, capacity can be deployed in 1 megawatt units.

While Nautilus is distinctive for its use of floating data centers, the company has also developed in-house software for cloud orchestration and data center infrastructure management (DCIM). Customers have the option of using the Nautilus software or tools of their own choosing.

Magcale says Nautilus can build data center capacity at less than $3 million per megawatt. “We’re saving a lot of infrastructure,” he said.
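Taken together, the deployment and cost figures Magcale cites pencil out as follows. This is simple arithmetic on the numbers quoted above, treating “less than $3 million per megawatt” as a ceiling, not additional company guidance.

```python
# Build-out and capex arithmetic from the figures quoted above
# (illustrative only; not additional Nautilus guidance).

mw_per_barge = 8
barges_per_site = 5
module_size_mw = 1
cost_per_mw = 3_000_000        # "less than $3 million per megawatt" -- treated as a ceiling

site_capacity_mw = mw_per_barge * barges_per_site      # 40 MW per location
increments_per_barge = mw_per_barge // module_size_mw  # eight 1 MW modules per barge
cost_per_barge = mw_per_barge * cost_per_mw            # up to ~$24M per barge
cost_per_site = site_capacity_mw * cost_per_mw         # up to ~$120M for a full site

print(f"Site capacity: {site_capacity_mw} MW across {barges_per_site} barges")
print(f"Per barge: {increments_per_barge} x {module_size_mw} MW modules, "
      f"up to ${cost_per_barge / 1e6:.0f}M at ${cost_per_mw / 1e6:.0f}M/MW")
print(f"Full site build-out: up to ${cost_per_site / 1e6:.0f}M")
```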

Suchan says the company has begun its sales process, and has drawn interest from enterprise customers in the San Francisco area.

“We’ve been taking the prospects out to the proof of concept, where we show them exactly what we’ve been able to accomplish,” said Suchan. “Thirty to 35 percent of the cost of a data center is mechanical. We now have removed any CRAC units, water chillers and water coolers. The customers will reap the benefits of these cost savings.”

For now, Nautilus must execute on the construction and delivery of its first data barge, and then seek to fill it with customers. Many customers would probably not consider putting their IT equipment on the water, which narrows the pool of prospects.

But today’s data center landscape features many types of non-traditional facilities, including subterranean data bunkers in caves and missile silos, all manner of modular and micro-modular designs, data halls in shopping malls, and enclosures with servers immersed in cooling fluid. Many of these concepts have found commercial success in their niches.

Will the floating data center join their ranks? We’re about ready to find out.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
