Data Center Design: How to Think Like an Architect

Nov. 21, 2017
The future of the data center architect is direct integration with business, applications, and IT teams during the design process, writes Bill Kleyman.

Data center design professionals have always needed to focus on the physical side of data center architecture. Today, they must consider a bigger picture and understand how a design impacts advanced applications and the new use cases emerging from the ongoing digital transformation. Now, more than ever, it’s important to think like an architect in data center design.

The latest Jones Lang LaSalle (JLL) report indicated that in 2016 the data center market saw big deals from major players, new economic and regulatory policy, and strategic cloud adoption. These factors are changing the rules of the game, from data center pricing models to site selection tactics and everything in between. Over the next 12 months, JLL expects user demand for smart data center solutions to continue to heat up, while operators will feel the heat to deliver more data center facilities, faster and more flexibly than ever.

The key differentiator is the “smart data center” supporting the business. As the JLL report points out, cloud adoption is shifting which data centers host these workloads, and where. Some major cloud providers already anticipate they will need to triple their infrastructure by 2020. These cloud data centers may host services delivering everything from compute-intensive “machine learning as a service” to serverless computing.

So let’s understand what it means to “think like an architect” for the data center of 2020. Here are some key themes:

  • Application and Virtualization Infrastructure Are Directly Linked to Data Center Design. A legacy mindset in data center architecture revolves around the notion of “design now, deploy later.” The approach to creating a versatile, digital-ready data center must bring infrastructure deployment decisions into the design phase. I can’t stress this enough. Learn about the virtualization platforms being used, know the types of apps that will be deployed, and then create an architecture around them. Why is this important? Over-provisioning data centers is a HUGE cost factor. Let me give you an example. Based on an original research report from Upsite Technologies covering 45 data centers worldwide, the average data center runs 3.9 times more cooling capacity than its IT load requires. This excessive cooling results from a poor airflow management strategy, as well as the misunderstanding and misdiagnosis of cooling problems (see the sketch after this list for the simple math behind that ratio). When you don’t plan or design around your workloads, you waste precious resources that could be going to more productive projects.
  • Core Infrastructure Design Considerations Must Involve Apps and Business Teams. I recently attended a business and IT meeting where several data center facility engineers were participating. They didn’t say much, but they were taking a lot of notes. This data center was going through a refresh to support new hyperscale requirements. The organization did a study and found that its ROI would actually be greater if it invested in its own data center architecture rather than moving to the cloud. During the meeting, business initiatives and application delivery requirements were carefully discussed with the data center design team. At the end of the meeting, the data center team had a clear vision for rack design, power density, how to create airflow containment for specific hyperscale systems, and how to directly align the design with the apps and virtualization ecosystem sitting on top. These weren’t instructions passed down as an afterthought. Rather, the data center facility engineers were present to clearly understand requirements. This is the future of the data center architect: direct integration with business, applications, and IT teams.
  • Stop Fearing the Cloud – It’s Not Here to Replace Your Data Center. Leading data center architects can actually help save their organizations from the cloud! Many organizations are taking a step back from pushing more of their infrastructure into the cloud. This is partly why we’re seeing more emphasis on the modern data center, and why there are so many more mergers and acquisitions happening. A good data center architect needs to understand more than just cooling or energy efficiency. They need to understand where their data center fits within the business model, how workloads are being hosted, and where cloud architecture actually makes sense. One of the most dominant models moving forward will be “multi-cloud” infrastructure, which means core data center operations will remain very important. Those architects and engineers who can connect the dots between the business and the cloud will be critical in creating advanced capabilities. These capabilities will allow applications to scale, support new types of virtualization initiatives, and allow the business to be much more agile.
  • There Are No More Data Center Design Silos. I’ll keep this point short and simple. Remove siloed data center operations from your business today. Even if you don’t think you have any remote teams, make sure you know for certain. Islands of operation will slow down your processes and prevent your data center ecosystem from truly enabling IT capabilities.
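
To make the over-provisioning point concrete, here is a minimal sketch of the kind of calculation behind a cooling capacity ratio such as Upsite’s Cooling Capacity Factor, assuming you know the rated capacity of each cooling unit and the measured IT load. The function name and the numbers below are illustrative assumptions, not taken from the Upsite report.

# Minimal sketch: estimating a cooling-capacity-to-IT-load ratio for a room.
# Assumptions: rated cooling capacities and IT load are both in kW, and the
# IT load is the measured critical load. All values are made up for illustration.

def cooling_capacity_factor(rated_cooling_kw, it_load_kw):
    """Ratio of total running cooling capacity to IT load."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return sum(rated_cooling_kw) / it_load_kw

# Hypothetical room: four 100 kW CRAH units running against a 120 kW IT load.
crah_units_kw = [100, 100, 100, 100]
it_load_kw = 120

ccf = cooling_capacity_factor(crah_units_kw, it_load_kw)
print(f"CCF: {ccf:.1f}x")  # ~3.3x -> likely over-provisioned or poor airflow

A ratio far above what your redundancy design alone requires usually points to airflow management problems rather than a genuine need for more cooling capacity, which is exactly the waste described in the first bullet above.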

Over the years, I’ve worked with a number of different types of data center architectures. Some were smaller branch locations, while others were large data centers supporting advanced business functions. Today, I’m seeing more hyperscale data center designs supporting new types of initiatives around big data, high-performance computing, and new forms of cloud deployment.

Which ones were the most successful in their operations? The ones where facilities teams directly aligned with both the business and the IT side of the organization. Regular communication between application teams, virtualization engineers, and data center operators helped create a fluid architecture capable of shifting with every small market change. This level of agility removes legacy siloed operations and enables a true, digital-ready data center architecture.

About the Author

Bill Kleyman

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. Bill is currently a freelance analyst, speaker, and author for some of our industry's leading publications.
