DCF Tours: Digital Realty, New York JFK12 Data Center, 60 Hudson St., Manhattan

Sept. 5, 2024
A nerve center for international communications, Digital Realty's New York JFK12 Data Center is recognized as one of the world's most critical colocation and interconnection facilities, serving the region's ever-evolving colocation needs.

Dating back to 1928-1930, the iconic carrier hotel 60 Hudson St., one of the most pivotal buildings and interconnection hubs in the internet's history, fittingly began as New York City's telegraph depot. At various points from the 20th century until now, the site has also housed municipal agencies, including the City's Departments of Buildings and Corrections. For almost a quarter of a century, 60 Hudson St. has been where a great volume of global, commercial internet traffic has traversed, lived, thrived and changed.

Formerly known as the Western Union Building, 60 Hudson Street is a 24-story, 164,000 ft² (15,236 m²) telecommunications tower in the Tribeca neighborhood of Manhattan. Wikipedia tells us it was one of several Art Deco-style buildings designed by Ralph Thomas Walker of Voorhees, Gmelin and Walker for housing telecommunications in the early 20th century. The building spans the entire block between Hudson Street, Thomas Street, Worth Street, and West Broadway.

Located inside 60 Hudson, Digital Realty's New York JFK12 data center is recognized as one of the world's most critical colocation and interconnection facilities. A nerve center for international communications, the facility offers access to physical connection points feeding into global telecommunications networks and internet backbones, along with versatile and scalable colocation environments for the area's ever-evolving customer needs, including a great volume of financial and gaming services and, sooner or later, the long tail of some dimensions of AI computing.

My primary tour guides for JFK12 are Greg Sisk, Digital Realty Sales Engineer Manager, and Jordan Sadler, Digital Realty Public and Investor Relations, who provided DCF with background information on the building as well as a complete tour of two of the data center's floors.

Interconnection Hub

Sisk tells me his history at 60 Hudson started with Telx in 2009. "At that point, we were finishing up the 11th floor spaces that we built on 1101 and 1107. We still had some of the new space, the 9th floor, where Telx, now Digital, basically expanded bits and pieces at a time."

"That floor was three different companies that we started in the northeast corner: the north side, then we built the main Meet Me Room, expanded to the west side, and then on the south side was the last space that we acquired and built out. It was built in stages over time." 

Compared to the other two giants in Manhattan's triumvirate of iconic telecommunications buildings -- 32 Ave. of the Americas, where Digital Realty operates its New York JFK13 data center, and 111 8th Avenue, where the company runs its New York JFK10 facility -- Sisk says of 60 Hudson, "What's been really important to internet infrastructure as a whole is the sheer amount of connectivity that's in this building, being the largest and probably most dense carrier hotel in Manhattan."

"I've got a few commercial tenants here, an advertising agency over here. But this building, it's mainly all telecom and internet services. 32 has a few non-telecom companies; 111 is now the Google building, although there's still a fair amount of connectivity. But if you look at this building, 111 and 32, in terms of the sheer number of carriers, this building's by far the most."

Sisk talks about how the company's dark fiber network connects the three buildings for carrier access. "It's helped our business dramatically, because when I don't have space and power here, I can be at 32 or 111 and get all the benefits of being right here."

Sadler talks about the subterranean underpinnings of 60 Hudson Street's telecommunications infrastructure. "There's good access into this facility from the street. There's a cable vault built under this floor that runs north to south, various conduits out into the street. So all the carriers can get in here."

"They come into the cable vault, diverse entrances in and then diverse risers up to their floor. So from a carrier perspective, this is the place to be, whether they're local carriers or worldwide carriers that come in and land in Long Island or Wall St. or New Jersey with their subsea cables and then trunk them back into here."

Meet Me Rooms 

As we walk the floor at Digital Realty's JFK12, Sisk says, "To give an idea of how much actual interconnectivity runs through here, they actually took one of the elevators out of service for a riser to make it easier to get the connectivity into the space."

We arrive at what Sisk calls "the original Meet Me Room" for 60 Hudson St. "When people talk about the Meet Me Room, typically in this building, they're talking about the 9th floor. Technically, it's three different rooms. But they're all interconnected so that for our customer base, if you're in one, you're in all of them."

"The concept here is every customer, every carrier, whoever they are, takes space, and builds into this room. Then it's our technicians that make the connectivity between the various carriers and customers. It's a completely secure space. If a carrier has a requirement to do splicing, our remote hands techs will keep an eye on them while they're doing it, because everybody's connectivity around the Street is here."

Copper to Fiber Migration Ongoing

"Within distance for some of our network is the copper-based network, but we're slowly getting away from that," notes Sisk. "The industry is changing. We don't have that many T1 circuits and T3's, but we still have the ability to support them in these older facilities, because there are some carriers that are still using that."

"We've migrated away from that into a fully fiber-optic -based network in a lot of larger data centers. If you look at a campus in Ashburn or Piscataway or Richardson, the only way to get from building to building is just singlemode fiber."

Sadler remarks on how New York JFK12 connects multiple carriers for seamless international traffic handoff.

"This is where they bring their equipment up to service their customers. They're bringing their trunking in from overseas into this cage and then they're handing off their traffic either to local carriers or other international carriers for the customers. That's where the value of a building like this really shines."

"The sheer amount of carriers available and connections available will draw the carriers in, because if you've got a terrestrial network in the United States, but you want to connect it to Europe, this is a building where you can hit three or four or five different European carriers and hand off that traffic seamlessly in one side."

Sadler adds, "It's a mind blower. If you look at the bandwidth on each of those fiber circuits, the amount of actual data that flows through these facilities is just mind-boggling, really staggering. The last number that I heard, this building handles somewhere around 70% of the internet traffic in [greater New York City] which runs through 60 Hudson, especially going overseas."

Power Distinctions

As talk on the tour turns to power, Sisk explains how and why JFK12's carrier customers sometimes use DC power, while enterprise customers largely opt for AC power. 

"Because it's carrier-based, and carrier focused, we do have a DC plant on this floor, so we can offer negative 48V DC to our carrier customers, which they use with a lot of their transport equipment. A lot of the Ciena gear that they order is still DC power."

"We have large DC plants here and up on the 11th floor that support the carriers directly. It's a little more efficient from the equipment standpoint."

"But it's a harder for the data center to support it because it's very distance-sensitive as far as voltage drop from plant to equipment. That's one of the reasons the networking and enterprise customers tend to do all AC. They don't get into DC; they're not usually familiar with it. Whereas the carrier customers, they've got tons of personnel that understand DC, and that's all they use."

Power Monitoring and DCIM

Sisk adds, "We do monitor the incoming power because one, it tells us how much we've sold at the cabinet, how much is consumed, and there's a capacity that we have. So we have to manage all three.”

“The RPP monitoring tells me what's being consumed. The contracts will tell us what we sold, and then we have to manage the power coming into the equipment. We know a certain amount of power we've committed to and how much is being consumed.” 

“Larger customers will do the same internally. I've got some cage customers that put their own DCIM in to understand how much they’ve bought and how much they’re using. Because if we've sold them a cage with 80 kW, they can draw up to the full 80 kW. If they don't monitor it and manage it, they don't know what they're using. So they don't know if they can add more equipment or not.”
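To make the bookkeeping Sisk describes concrete, here is a minimal sketch of the three numbers in play -- what was sold (the contract), what is being consumed (the RPP metering), and the room's capacity -- and the headroom a cage customer would check before adding equipment. The class names and figures are illustrative assumptions, not Digital Realty's DCIM.

```python
# Minimal sketch of the three quantities Sisk describes: contracted (sold),
# consumed (metered at the RPP), and room capacity. Figures are hypothetical.
from dataclasses import dataclass

@dataclass
class CageFeed:
    name: str
    contracted_kw: float   # what the customer bought
    consumed_kw: float     # what RPP/branch-circuit monitoring reports

def headroom(feed: CageFeed) -> float:
    """kW the customer can still add before hitting their contract."""
    return feed.contracted_kw - feed.consumed_kw

room_capacity_kw = 400.0   # hypothetical usable capacity for the room
cages = [
    CageFeed("cage-A", contracted_kw=80.0, consumed_kw=62.5),
    CageFeed("cage-B", contracted_kw=40.0, consumed_kw=12.0),
]

total_sold = sum(c.contracted_kw for c in cages)
total_used = sum(c.consumed_kw for c in cages)

print(f"Room: {total_sold:.0f} kW sold of {room_capacity_kw:.0f} kW capacity, "
      f"{total_used:.1f} kW actually drawn")
for c in cages:
    print(f"{c.name}: {headroom(c):.1f} kW of contracted headroom remaining")
```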

I ask about Digital Realty’s arrangement of diesel backup generators at 60 Hudson St. Sisk says, “We have dedicated ones, depending on the floor.”

“This one has two generators up on the 24th floor and then the UPS is down on three. It depends on the floor where they were built, and the building. At some of the legacy sites here in Manhattan, on some floors we own the generators and they're dedicated. Others the building owns and operates, but they're dedicated to us.” 

Colocation Matters

For colocation at JFK12, Sadler tells me there’s a mix of customers.

"A standalone cabinet is where we provide all the cabinets and they take just space. They buy cabinet by cabinet. We pull the power into it. We pull the connectivity from here back to the meet me room and they operate with whatever they put in there."

"We're not a managed service provider where we come through and provide all the networking hardware and they rent space on it. All the customers own their own hardware. They're basically leasing the cabinet fiber. And then the power. They buy the power from us."

Sisk notes how Digital Realty puts forth different solutions for different customers, which boil down to single cabinets, cages, or 2-post racks. 

"Small customers would be a single cabinet. As you walk down here, you'll see some cages built where the customers come in and they buy the rack. They put their equipment in there at about, I'd say, four to five cabinets as a breakover, where it makes more sense to just take a whole cage.” 

“It also depends on what they're doing. Some of the carriers don't want a full cabinet. They'll just do 2-post racks. That's what they're used to using. The difference for them operationally is connectivity between the cabinets is usually done through the Meet Me Room or it's run up on the ladder rack.” 

Sisk adds, “All of this is considered shared space that we're responsible for. So if there's any work up there, it's our people doing that work."

Urban Data Center Cooling 

My tour guides explain how New York JFK12 employs an overhead cooling system, used due to the space’s low ceiling height. 

"That’s one of the challenges with being in Manhattan,” says Sisk. “We’ve got an air conditioning system to cool it, and we're doing overhead distribution -- just because putting a raised floor in here would really restrict what we can do as far as cabling back.”

“So this space is ducted overhead for overhead cooling distribution. So it's blowing cold air. Cold air drops into the cold aisles, the hot air rises back up, and it returns into the top of the air conditioning system."

"We can do a lot more with a new building. If I go into a brand new facility in Ashburn or Jersey, they all have 20 to 25 ft. from slab to slab. So there's a lot more that I can do with that type of facility.”

Sisk emphasizes, “I can't do high density deployments at the moment, but the coming technology that we're seeing is liquid cooling, where that could be a game changer for what we can do with these sites.”

“Then the limiting factor becomes how much power can we get in and how much heat can we reject outside on the roof, or however it's done. But you can get a lot higher density and in a lot smaller space using liquid. Right now the industry is in a huge adjustment. What's going to work? What's going to take off, what's going to be the dominant cooling strategy?”

“At the moment, everything here is air cooled and that's typically what most providers are going to build. It's a proven technology, it works well. It's very efficient, but limited. Depending on your architecture and your power availabilities, in a year or two, it might be quite a bit different.”

"There's a couple of technologies we're working on that can improve that at sites like this. Then it becomes about how every building's got a limiting factor in architecture, power availability, cooling availability and heat projection, which is tied to that directly."

High Density Strategies

"The story is not just about interconnection,” opines Sisk. “In some sense, it's a major driver in this market. If you're going for a large compute environment, say AI or something like along those lines, you need the connectivity back to here. But the density of those applications is much, much higher than what can typically be done inside a dense metropolitan area.”

I ask if that's where a campus strategy comes in. Sisk nods. “In a campus, if I go to somewhere where I have a lot of room and I've got good carrier connectivity, okay, there I can build 230,000 square foot facilities, drop 30 to 40 MW in there, and then I can solve high-density problems.”

Sadler adds, “The city, limited by the utility, has only got so much power down here. There's only so many places we can put rooftop condensers and things like that for air conditioning, for heat rejection. So it has its challenges."

"I think in this industry, there's sort of this spectrum, where it's connectivity and power. And this facility is all the way on the far end of the spectrum of connectivity. And you've been to northern Virginia, to Ashburn: That's the far end of the spectrum in terms of power.”

"If someone has the power, it changes the density, it changes the design criteria,” adds Sisk. 

“I've got a lot of requests from clients who just want to do high-density. It behooves them to take a look at the requirements of why they're trying to do high density. Just to throw 30, 40, 50 kW in a cabinet because you can? You might be able to do it, but does it make financial sense to do it? Because the strategies to do that at the moment are very expensive.”

“If you're doing it because you've got a connectivity requirement where all the cabinets have to be right next to each other, I can do that. But if I try to do it in 60 Hudson or downtown Chicago or downtown San Francisco -- if you need to, I could, but it's going to be more expensive than if I go to Ashburn for two reasons. The infrastructure that I have to deploy to cool it is going to be a lot more here, and the cost of power is going to be a lot more."

Customer Education

Nearing the end of the tour, Sisk talks about his efforts in educating clients on topics including data center needs, power, and site suitability. 

"Most of my time is spent with potential customers,” he says. “Like you were asking about the high-density applications: a lot of what I do is educating clients. Because a lot of our customers come in, and if they're moving off-prem, going into a data center for the first time, their perception of what we do as a reality in the industry is a learning problem. 

Sisk adds, “Really when I'm talking with customers, it’s like: how much power do you need? And more importantly, why are you making this move? Are you doing it because your stuff's end of life? Are you doing it for cost savings? Are you doing it for latency? Trying to get more cloud applications and make that easier?”

“100 different customers will give me 100 different answers, plus one. But that's what our team is tasked to do: understand what you need and why you're doing this. Because the why will sometimes dictate that this may not be the right site.”

“Or you might be better off here, and I'll be able to walk through: If this is what you're trying to do, this is a better option, or a blending of two options. Put your connectivity node here and trunk it out to the suburbs where you're going to do a large AI application. Then you get the best of both worlds.”

“So it varies from customer to customer. The financial industry for instance has a completely different set of requirements than carriers do or AI deployments. They're coming at it differently. They've also seen the bitcoin phase come and go."

Andrew Power, CEO of Digital Realty, recently joined CNBC's 'The Exchange' to discuss how massive spending on AI gives his company a long-term boost -- and how North American data center markets have been its largest growth contributors of late.

 


About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
