DCF Tours: NTT Global Data Centers Americas' Dallas TX1, Garland, Texas
Arriving for my tour of NTT Global Data Centers Americas' Dallas TX1 Data Center, located at 2008 Lookout Drive in Garland, Texas, I meet with NTT's Nehemiah Emerson, Manager of Critical Facilities Operations, and Jared Provasek, Supervisor of Critical Facilities Operations.
As will come as no surprise to those in the data center industry, both men are former U.S. Navy nuclear machinist's mates who worked on nuclear submarines.
Established in 2016 on the former RagingWire campus and made operational in 2017, NTT's Dallas TX1 data center encompasses approximately 230,000 square feet, offers 16 megawatts (MW) of critical IT load, and is directly connected to a dense fiber network.
In June, NTT announced a widely reported $50 million data center fit-out project in Garland involving nearly 300,000 SF of construction across two existing buildings on the campus.
As recorded by the Texas Department of Licensing and Regulation and noted by Datacenter Dynamics, the "TX2 - Tenant Improvements" and "TX3 - Tenant Improvements" projects, in development on NTT's existing campus at 2108 Lookout Drive and 2080 Lookout Drive, respectively, are both additions to existing buildings, amounting to 36 MW apiece, as Emerson tells me.
The projects include “additional office space and adding equipment to data vaults,” per details provided to the Texas Department of Licensing & Regulation, as reported by The Dallas Morning News.
NTT also operates a separate data center and large business park office in nearby Plano, Texas.
Emerson remarks, "I started in the industry in 2017. Before that, I had never been at a data center or seen a data center. I barely knew what a data center was. I understood power, I understood cooling, and I started here as a level one technician."
He continues, "I did the commissioning here as that technician, and fortunately have been promoted within the company to manager, which has been very exciting, to work at all those different levels. But that's because of this growth. The reason I have this job is because other people in these positions were fortunate enough to also move up within the company."
D-FW Data Center Market Considerations
Cushman & Wakefield's mid-year 2023 DFW Data Center Report states that the Dallas-Fort Worth Metroplex has approximately 865K SF of data center space under construction, amounting to 140 MW of IT capacity.
As recently noted here at DCF, the Dallas market is one of the nation’s primary peering and interconnection hubs, featuring excellent proximity to cloud providers and exchanges. In recent years the region has also become a major destination for hyperscale operators, including major campuses for Meta and Google.
The Dallas market saw 110 MW of data center capacity absorbed in the first half of 2023, according to CBRE, pushing the vacancy rate down to just 4 percent. There were 273 MW of capacity under construction in the first half of 2023, but also 246 MW of pre-leasing, meaning roughly 90 percent of new data center capacity was already under contract during the construction phase.
According to JLL's H1 2023 North America Data Center Report, the DFW data center market's "already limited supply continues to dwindle as the remaining swaths of capacity are leased by users seeking power anywhere available."
JLL notes further, "Development of new supply is in progress, with a significant portion preleased to sophisticated users hedging against their future IT infrastructure needs."
The JLL report adds that demand in the DFW market "remains above historic levels as national supply continues to be leased."
The report states that large users of data center space and power in the DFW data center market are now looking out to 2024, 2025 and 2026 to lease new supply, from developments both under construction and planned.
Impressions of NTT's Dallas TX1 Data Center
The TX1 data center represents Phase 1 of the overall 42-acre megacampus development, which will contain more than a million square feet of data center space and provide 144 MW of critical IT load at full build-out.
Fully compliant with SOC 1 Type II and SOC 2 Type II, ISO 27001, PCI DSS, NIST 800-53 High and HIPAA standards, the two-story TX1 facility is conveniently located off the President George Bush Turnpike (PGBT) with easy access from both Dallas-Fort Worth International Airport (DFW) and Dallas Love Field Airport (DAL).
A large apartment complex sits behind the data center; a Stream Data Centers facility sits across the street, next door to a planned Digital Realty development.
Dallas TX1 was recognized by Datacenter Dynamics as one of the "most beautiful data centers in the world," and it shows inside, where wavy, elegant architecture is illuminated by colored LED track lighting around the floors, which flickers on as we walk by.
"Blue track lighting is what we leave it on, but sometimes orange or purple," says Emerson. The LED lighting also runs over surfaces on top of the racks in the data hall behind TX1's voluminous plate glass observation walls.
Emerson explains that the colored LED lights "aren't exposed energy or anything, but it is helpful for people that have never seen a data center or worked on equipment like this." He adds, "I think this is probably one of the longest ongoing LED run tests ever. Typically, they say they can run 10,000 hours, which would be five years. We've passed that."
Looking through the observation panes down onto the darkened white space crowned by glowing LED tracks, Emerson says, "If we were doing a tour to clients, we would take them down onto the data space on the data floor and show them the actual spaces available. This physical space is what you could buy. We'll deploy any type of way you want. Cage toppers, cage walls, under the floor, any amount of security clients want, we will provide."
I ask Provasek what the maximum kilowattage of a TX1 rack would be. "In 2017, the original design I think was 15 kW," he says. "But we've done a few deployments of up to 20 kW. That's not typical, but it's possible."
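For rough scale, here is a back-of-envelope sketch of what those densities imply against TX1's stated 16 MW of critical IT load. This is illustrative arithmetic only; it ignores distribution losses, cooling overhead, and the gap between design load and sellable capacity.

```python
# Illustrative only: how many racks a given critical IT load could support
# at different per-rack densities. Not an NTT capacity figure.

CRITICAL_IT_LOAD_KW = 16_000  # TX1's stated 16 MW of critical IT load

for density_kw in (15, 20):   # the design point and upper bound Provasek cites
    racks = CRITICAL_IT_LOAD_KW // density_kw
    print(f"At {density_kw} kW/rack: ~{racks:,} racks")

# At 15 kW/rack: ~1,066 racks
# At 20 kW/rack: ~800 racks
```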
The Dallas TX1 data center offers carrier-neutral connectivity with two diverse fiber entry points, two carrier Meet-Me-Rooms, and access to major public, private and hybrid cloud providers. Inevitably, I ask about types of clients and anticipation for AI workloads. "I try not to know," responds Emerson.
He explains, "My job is to provide power and cooling. I will do that, I'll do it 365 days a year, 24/7. No matter what storm, in blizzards, there's no instance in which I will not provide power or cooling. I don't care what you want it for, that's just my stance. When we have client kickoff calls, we usually know the client's name, but that's it. They don't typically tell us what they're deploying for the same reasons."
Security at TX1 is handled via multifactor authentication throughout the facility, with over 100 real-time security feeds and a 24/7 in-house security and operations team. On our tour, indicating the room's wall of video screens, Emerson notes that TX1's network operations center (NOC) is connected to NTT's other U.S. NOCs in Texas, Sacramento and Virginia.
As we walk along, stopping before an exterior view of the expansion projects, Emerson says, "As you see, we have two 36 megawatt buildings there, and I anticipate my team to grow into those operations. That's what I get excited about from the standpoint of power and availability."
Cooling Notes
For cooling, the TX1 data center's raised floor environment employs air-cooled direct expansion rooftop units with indirect air economizers and N+1 redundancy.
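N+1 here means enough units to carry the design load, plus one spare, so any single unit can fail or be pulled for maintenance without losing cooling. A minimal sketch of the sizing arithmetic follows, using hypothetical figures rather than TX1's actual equipment ratings:

```python
import math

def units_required_n_plus_1(design_load_kw: float, unit_capacity_kw: float) -> int:
    """Units needed to carry the design load (N), plus one redundant unit (+1)."""
    n = math.ceil(design_load_kw / unit_capacity_kw)
    return n + 1

# Hypothetical numbers for illustration, not TX1's actual ratings:
# a 2 MW data hall served by 400 kW rooftop units needs 5 units to
# carry the load, so N+1 calls for 6 installed.
print(units_required_n_plus_1(design_load_kw=2_000, unit_capacity_kw=400))  # 6
```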
As we look out the walkway window over the expansion projects, Provasek tells me, "Over there is going to be air cooled chillers with a fan coil wall on the flat. That's like the new hotness going forward, very different than this design. It'll have air cooled chillers on the roof and then a fan coil wall right outside the space in the mechanical gallery. That'll feed cold air onto a slab and you'll remove tiles and the hot air will go out the tile."
Emerson adds, "When we walked through this space, here you saw it's just a flooded underfloor pressurized plenum with cold air and hot air, and it's client-specific. If the client wants cold air containment, they can request that. That's really driven by their kilowatts per rack design."
I ask Emerson whether he anticipates having any liquid cooling in TX1 or in the facilities under construction, or if it will depend on the customer. "It depends on the customer - it's going to depend on power density," he says.
He adds: "I think the bigger thing about cooling is that the type of refrigeration used today is being changed for emissions. Because the amount of hydrochlorofluorocarbons and greenhouse gases and pollutants that they have, even with the techniques and improvements that have been made to mitigate that, is going to drive even more changes than we've seen."
Emerson continues:
"Because here's the reality most people don't understand about refrigerants. A refrigerant is a greenhouse gas, 100% of it, and it's designed to go into a system that will likely one day leak all of its refrigerant to the atmosphere. Every refrigeration system that has ever been put into service will at some point leak. And unless you have 24/7 operators on standby looking for that to mitigate and stop it, most of the times all of the charge is released to the atmosphere, that's just how they operate."
He concludes, "There has to be a better way for the survivability of the planet for us to provide what we're providing, without that cost."
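To put rough numbers on that concern: a refrigerant's 100-year global warming potential (GWP) expresses how much heat a kilogram of it traps relative to a kilogram of CO2. A back-of-envelope sketch using R-410A, a refrigerant common in DX systems, follows; the charge size is a hypothetical figure, not a TX1 specification.

```python
# Back-of-envelope CO2-equivalent of a fully leaked refrigerant charge.
# The GWP figure is the commonly cited 100-year value for R-410A; the
# charge size is an illustrative assumption, not a TX1 specification.

GWP_R410A = 2088      # kg CO2e per kg of refrigerant over 100 years
charge_kg = 100       # hypothetical charge for one large DX unit

co2e_tonnes = charge_kg * GWP_R410A / 1000
print(f"{charge_kg} kg of R-410A leaked = ~{co2e_tonnes:,.0f} tonnes CO2e")
# 100 kg of R-410A leaked = ~209 tonnes CO2e
```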
Power Notes
Still looking across to the expansion projects, the talk turns to power. Fed with dual, redundant power feeds, the TX1 data center supports N+1 distributed redundancy for each vault and diverse power distribution to the data hall, with renewable energy options available.
"There you can see it's a little different," says Emerson, referring to the expansion site. "Generators are different, different design, a little bigger. Then those little gray boxes, we call E-mods, an electrical modular unit that has the UPSs and the switchboards."
"That is the new design there that puts your electric room all in one compact place," adds Provasek. "You've got 24 different electrical rooms, so if you have a failure or fire, it's contained in one room, and then another electric room will be the redundant power."
Along a hallway inside the TX1 data center, observing the facility's electrical infrastructure, Emerson explains, "They patented two designs here, called Zipper and FOBO. With Zipper, if anything were to interrupt the input power to one of the UPSs, the master PLC [programmable logic controller] would see that loss from one of the local switchboards, and then redistribute the electric plant automatically on closed transitions at the main switchboards without causing interruption. That prevents anything from human error to equipment failure. The system automatically responds and maintains input power to the UPS. It's pretty fantastic."
Emerson adds, "FOBO is the other patented design this building uses. It means fix one, break one, and it allows us to take an emergency standby generator out of service for maintenance or repairs for any reason, and then during that repair, if we were to suffer a loss of utility, for whatever reason, the PLC would sense, when it calls for the generator to start, that it was locked out and not available, and it would automatically dump one of our mechanical loads because they're redundant."
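Zipper and FOBO are patented designs, and their actual control logic is far more involved than anything Emerson could walk me through in a hallway. Still, his description suggests roughly the following shape. The sketch below is a loose paraphrase for illustration only; every class, field, and rule in it is an assumption, not NTT's implementation.

```python
# A drastically simplified sketch of the failover behavior Emerson
# describes. All names and rules here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Generator:
    name: str
    locked_out: bool = False  # FOBO: taken out of service for maintenance

@dataclass
class Switchboard:
    name: str
    healthy: bool = True

@dataclass
class MasterPLC:
    main_switchboards: list
    generators: list
    events: list = field(default_factory=list)

    def on_ups_input_loss(self, failed: Switchboard):
        """'Zipper': on loss of input power to a UPS, re-source it from a
        healthy main switchboard via a closed (make-before-break)
        transition, so the UPS never sees an interruption."""
        failed.healthy = False
        backup = next(b for b in self.main_switchboards if b.healthy)
        self.events.append(f"closed transition: {failed.name} -> {backup.name}")

    def on_utility_loss(self):
        """'FOBO' (fix one, break one): if a generator is locked out when
        utility is lost, shed one redundant mechanical load rather than
        overload the remaining generators."""
        if any(g.locked_out for g in self.generators):
            self.events.append("shed one redundant mechanical load")
        available = [g.name for g in self.generators if not g.locked_out]
        self.events.append(f"start generators: {', '.join(available)}")

plc = MasterPLC(
    main_switchboards=[Switchboard("MSB-1"), Switchboard("MSB-2")],
    generators=[Generator("GEN-1", locked_out=True), Generator("GEN-2")],
)
plc.on_ups_input_loss(plc.main_switchboards[0])
plc.on_utility_loss()
print("\n".join(plc.events))
```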
As we stand inside TX1, Emerson emphasizes, "There's redundancy at every level. Everything has PLCs; this switchboard has redundant PLCs; the Master PLC has redundant PLCs. As an operator, this is about as easy as it gets, very intelligent, logic is very robust."
He adds, "It made commissioning challenging because of so many interlocks."
On Mission Criticality
As NTT's Emerson and Provasek see me out through the security door into the Dallas TX1 data center's lobby, comparing his present job with his former one, Emerson observes, "Everything we do in here is critical, but I cannot think of a more critical situation than a reactor 800 ft underwater that is off, there's no more power, it's gone. Most operators that come from nuclear submarines have lived those scenarios. So that criticality makes this job a walk in the park."
"Provide power, provide cooling, that's what we did in the Navy," he concludes. "The attention to detail, the procedures, the execution of those procedures, all those things combined in operating in a critical environment in a nuclear engine room, correlates to this. But in the Navy, the cost was people lived or died. There's nothing I'm going to experience here that is that critical."
Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.