DCF Tours: Flexential Dallas-Plano Data Center, 18 MW Colocation Facility

Nov. 15, 2023
As told by our tour guide, Flexential Regional Vice President Tom Myers, the Dallas-Plano data center's customer base is highly diversified across the industrial, retail, manufacturing, technology, financial services, and healthcare sectors.

In town for the annual 7x24 Exchange Fall Conference, I identify myself by speaking into the security intercom in the building's front driveway. The gate in the K1-rated, crash-resistant fence then slides open, allowing me to drive onto the grounds of the Flexential Dallas-Plano Data Center, an 18 MW colocation facility situated at 3500 East Plano Parkway in Plano, Texas.

Plano is of course a key suburb in the incredibly active Dallas-Ft. Worth data center market. The city is also host to facilities from companies such as Equinix, DataBank, NTT, and CoreWeave.

Security

Flexential Regional Vice President Tom Myers meets me indoors. I compliment him on the security fence.

"Beyond that, you've got a natural berm," he points out. "So if someone wants to do damage, they're going to have to 'Dukes Of Hazzard' it, fly through the fence, fly over the berm, crash into that wall. Let's say they can fly over that wall, then they're going to hit this reinforced wall. That's going to stop them dead in their tracks."

Myers tells me the exterior of the facility is built with ten inches of reinforced concrete screen wall. The structure's roof is rated for an F3 tornado. Speaking of tornadoes, he notes, "The reality is they move more horizontally than they do vertically. I'm more concerned about a telephone pole flying horizontally than I am a roof unit being picked up. This screen wall is rated to handle projectiles at 150 mph."

Bread and Butter

Built to accommodate power densities above 1,500 watts per square foot, Flexential's Generation 4 Dallas-Plano Data Center is supported on a 100% SLA basis for power, cooling, network and bandwidth elements. The facility offers proprietary, highly efficient air cooling across its 261,425 square feet of gross building area.

Upstairs in the data center's expansive, well-lit upper lobby and adjoining conference room for customers, Myers and I stand before an easel holding a detailed chart of the facility. Beyond us through enormous plate glass windows looms 60,000 SF of well-populated, raised-floor data center white space, representing the facility's phase one.

Phase two calls for another 100,000 SF of space, Myers tells me. "Generally we've had a lot of success over the last several years as a company moving up-market and selling these very large multi-megawatt deals, because of what we've done with our design," he adds.

He explains further, "These data centers, like this one and the ones in Portland, our gen fours, are a bigger form factor with more power capacity, which gives us a seat at the table to have a conversation with the customer about doing a 4 MW deployment or a 5 or a 9 MW. Whereas prior to that, in our gen threes, some of our smaller data centers, I couldn't have that conversation because I was maybe a 5 or 6 MW facility, max."

Myers continues, "The larger form factor allows us to go have larger leasing conversations. But our bread and butter is still the medium enterprise customer. For national banks and credit unions, national retailers, I would say the real sweet spot is probably anywhere from five to 40 racks."

Cooling Considerations

Referring to the chart, Myers says, "Here's what we call our super CRACs [computer room air conditioners]." Pointing out to the data hall, he adds, "It's a clean floor, there's no water anywhere, no chilled water loop. We feel like water next to electricity is a bad idea. We don't need it. We can provide power and cooling for high density workloads without any special adjustments."

Myers tells me the facility sits on 13 acres and is purpose-built from the ground up. Looking through the window, I notice the tall ceiling. "It's 3 ft. raised floor, 3 ft. drop ceiling, 24 ft. tile to tile, 30 ft. slab to slab," Myers says. "It looks kind of amazing, but the tall ceiling is actually a bit of the secret sauce too, in terms of the ability to reject the heat and provide cooling."

Referring again to the super CRACs, he describes how the location's often humid air is recycled, cooled, then pushed back under the floor. "We charge under the floor with 0.5 inches of air pressure," he says. Associated cooling units dynamically control the air pressure under the floor, and that pressure is what creates more or less cooling.

"The nice thing about a large volume of air is that it moves very evenly," Myers notes, adding, "I think our ability to keep a large volume of air uniform across a large space, and the ability to dial in the amount of pressure and cold air to a specific tile, is somewhat unique to us. We pop a tile for a high density workload, and crank the cooling unit up to provide more cold air in one specific spot."

Myers then echoes a truism I've heard on other tours: that warm data centers are more efficient than cold data centers. "Cold data centers are inefficient," he affirms. "We want it to be warm or ambient here, because that means that the hot air is rising."

CoreWeave and Nvidia

Last month, CoreWeave announced an expansion of its data center footprint into two new colocation facilities in Hillsboro, Oregon and Douglasville, Georgia, both owned and operated by Flexential. According to a press release, the facilities feature an 18 MW power allocation, divided equally between the Georgia and Oregon markets.

CoreWeave, a specialized cloud provider for large-scale, GPU-accelerated AI workloads and a noted partner of Nvidia, has established a computing framework designed specifically to address the demands of machine learning, AI, VFX, rendering, and pixel streaming projects. Advanced networking capabilities integrated by Flexential include InfiniBand support of up to 3600 Gbps.

Similarly, "we've got a pretty extensive partnership with Nvidia," notes Flexential's Myers. "I think it was about seven years ago when we deployed effectively their first generation of high density chipset workloads up in Oregon, maybe 30 or 40 kW."

He opines further, "I think that was a big 'Aha' moment for the industry, for Nvidia, and for players like us. I think it validated Nvidia to keep manufacturing and pushing the envelope from a technology and compute perspective, because the providers of the world like Flexential could handle it."

Given Flexential's advocacy of its super CRACs, I ask Myers about the provider's possible uptake of liquid cooling technologies in support of AI. He affirms it is happening on an as-needed basis across the company's national footprint.

"We have some liquid cooling footprints in place for some large technology firms up in the Northwest U.S. where they just wanted to have that as a standard," he says. "So we enabled that for them in our fourth generation facility."

Earlier this month, Flexential CEO Chris Downie was a panelist during DCF's Data Center Summit virtual event, in a roundtable session entitled, "Choosing the Right Data Center Cooling Strategy." You can access that session on-demand here.

Power Considerations

Referring again to his chart, Myers describes how Flexential's data center design pushes power elements, including UPS and battery rooms, along with cooling units, to the outside of the structure, the better to accommodate high density workloads. He notes that it's a clean design, with no PDUs [power distribution units] or cooling units on the floor, just RPP [remote power panel] fixtures.

Flexential's facility managers in Oregon, where the operator recently endured a significant outage affecting Cloudflare, will no doubt share Myers' stringent mindset regarding tightly controlled, redundant power distribution and the responsibilities that come with Uptime Institute Tier III certification.

"For facility power distribution, we use a system called four-to-make-three," he explains. "We label it ABC and D; each of these modules is about 1.5 MW. I effectively have 6 for power distribution available; I only need four and a half to run the facility. It gives me lots of different configurations so that, in the event we lose any side of power, everything fails over and there's no interruption of service."

Directing my attention to the facility's mechanical yard, Myers continues, "Each of these generators is a 3.2 megawatt diesel engine. Once all is said and done, we'll have nine of those throughout the facility, providing more generation than you need, because it's an 18 MW facility. Everything's a minimum of N+1 from a redundancy standpoint."
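The capacity math behind "four-to-make-three" and the generator fleet is easy to check. Here is a minimal sketch using only the figures Myers quotes; the checks themselves are standard N+1 tests, not Flexential's actual tooling:

```python
# Sanity-check the redundancy figures quoted on the tour.

# Power distribution: four modules ("four-to-make-three"), ~1.5 MW each.
modules_mw = [1.5, 1.5, 1.5, 1.5]            # A, B, C, D
required_mw = 4.5                            # load the facility actually needs
total_mw = sum(modules_mw)                   # 6.0 MW available
# Losing any one module must leave enough capacity to carry the load.
assert all(total_mw - m >= required_mw for m in modules_mw)

# Generation: nine 3.2 MW diesel generators against an 18 MW facility.
gens_mw = [3.2] * 9                          # 28.8 MW of standby generation
facility_mw = 18.0
# N+1: with the largest single generator failed, the load is still covered.
assert sum(gens_mw) - max(gens_mw) >= facility_mw   # 25.6 MW >= 18 MW

print("Distribution headroom:", total_mw - required_mw, "MW")
print("Generation headroom (N+1):", sum(gens_mw) - max(gens_mw) - facility_mw, "MW")
```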

Myers concludes his easel talk by highlighting the importance of the facility's Uptime Institute Tier III certification for design and construction. "We are fully concurrently maintainable according to Uptime Institute, which is a big deal, because in Texas, there's probably about 200 data centers, and we're one of five that have that distinction."

Customer Base

The Dallas-Ft. Worth market is a great location for data center customers, and a great location for data centers -- if you can establish one. According to JLL's H1 2023 North America Data Center Report, the DFW data center market's "already limited supply continues to dwindle as the remaining swaths of capacity are leased by users seeking power anywhere available."

JLL notes further, "Development of new supply is in progress, with a significant portion preleased to sophisticated users hedging against their future IT infrastructure needs." The JLL report adds that demand in the DFW market "remains above historic levels as national supply continues to be leased." The report states that large users of data center space and power in the DFW data center market are now looking out to 2024, 2025 and 2026 to lease new supply, from developments both under construction and planned.

A useful blog from Flexential's Ryan Mallory contends that, because Texas has its own power grid, companies that additionally locate swaths of IT infrastructure in the DFW market gain grid diversity.

Electricity rates in Texas are lower than the national average, thanks to a multitude of electricity providers: deregulation has led to heavy competition among them, resulting in utility rates attractive to customers. Then too, Dallas is home to more than 50 data center operators, whose competition creates a high degree of supply optionality while lowering prices.

Myers tells me that the Dallas-Plano data center's customer base is highly diversified across the industrial, retail, manufacturing, technology, financial services, and healthcare sectors. He says the facility's customers range in size from a single cabinet to a single tenant taking 100,000 square feet, and "everywhere in between."

Cloud and Network Hub

Then too, Dallas is a major on-ramp, off-ramp and thoroughfare for network connectivity, and an infrastructure destination for latency-sensitive applications. Because of its central U.S. location, the DFW market is a major network hub for telecom connectivity and carriers, replete with low latency paths leading into and out of the state.

"As much as we focus on selling data center space and power, alongside that, we're going to sell interconnection bandwidth connectivity and cloud services," affirms Myers. He goes on to explain how Flexential provides infrastructure as a service and data protection services through cloud nodes across the U.S. operating inside its data centers.

"Customers can consume multi-tenant or private cloud resources through us, that we provision for them," he adds."Maybe they've got a physical colocation footprint for production and need redundancy, but don't want to go spend $2 million in capex to stand up a second site. They'd rather do it as a service and give us the resource counts for RAM, vCPU and storage. We stand that up."

Myers continues, "That gives us a nice diversification story, and it's a good roadmap story for customers who may have a subset of physical infrastructure or applications that aren't quite cloud-ready and need to sit in colo, but another subset of workloads that are cloud-ready. They're already in our data center and can just crossconnect right into our cloud, all leveraging best-in-breed VMware and OEM technology from a server and storage perspective."

One for the Road

Our tour takes us along the facility's fanwalls, past a windy view of its underfloor, through its numerous "tall doors" and long hallways, and back into the lobby area with its large windows overlooking the data hall. Shortly before escorting me back out to the data center's parking lot at the tour's conclusion, Myers shares a parting anecdote:

"When we opened this facility in January of 2017, we very much advertised and touted our high density capabilities. It was funny because people - enterprises, customers - didn't believe it.

We'd show them with math and computational fluid dynamics how we could do it; we'd prove it out on the whiteboard. They'd say, 'The math checks out, but I don't really see it.'

So we built a test bed cage at dead center of the floor. We put 20 racks in about 600 sq. ft., full of load banks that simulated server utilization, and we ran those 20 racks consistently at 30 to 45 kW per rack.

When we had a big customer come in talking about density, we would fire up the test bed. Then we'd go down there where we had temperature sensors all throughout the cage for machine learning, giving us real-time data showing the exact utilization per cabinet, per footprint.

It proved out, and customers then saw it: a megawatt of power and cooling in 600 sq. ft. Six or seven years ago, nobody was thinking about that level of density, and there were those who were skeptical."
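The density math in that anecdote is easy to verify. A quick check using only the figures Myers cites: at the top of the quoted range the cage approaches the megawatt he mentions, and the implied floor density lands right at the facility's stated 1,500-plus watts per square foot design point.

```python
# Sanity-check the test bed figures from Myers' anecdote.
racks = 20
area_sqft = 600
kw_per_rack_low, kw_per_rack_high = 30, 45   # sustained load per rack

load_low_kw = racks * kw_per_rack_low        # 600 kW
load_high_kw = racks * kw_per_rack_high      # 900 kW, approaching 1 MW

# Implied floor density at the top of the range:
watts_per_sqft = load_high_kw * 1000 / area_sqft
print(f"Cage load: {load_low_kw}-{load_high_kw} kW")
print(f"Implied density: {watts_per_sqft:,.0f} W/sq ft")   # 1,500 W/sq ft
```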


About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
