The Data Center Frontier Executive Roundtable for the second quarter of 2019 features insights from executives with lengthy experience in the data center industry. Here’s a look at the perspective of David Richards of Chatsworth Products.
David M. Richards, RCDD, NTS, OSP, TECH, CT, is the Product Manager of Open Systems at Chatsworth Products. Richards has almost 30 years of experience in the information, communications and telecommunications industry, having worked as a systems engineer and QA/QC consultant in several organizations, including 19 years as a design and installation Master Instructor and Certified Trainer for BICSI. Richards volunteers as a subject matter expert on the editorial review board for BICSI manuals and is the Founder, Head Judge and Chair of the popular BICSI Cabling Skills Challenge.
Here’s the full text of David Richards’ insights from our Executive Roundtable:
Data Center Frontier: After a year of blockbuster leasing in 2018, the large hyperscalers appear to be less active in procuring new capacity in the first half of 2019. What does the prognosis for hyperscale deals look like in the short and long term? Are these requirements changing in size or structure?
David Richards: In general, the size of builds is increasing. It is important to remember that each generation of compute provides a significant increase in compute capacity and better overall hardware utilization, so the amount of hardware needed per unit of compute capacity is shrinking. Hyperscale will continue to grow and to lease colo space, even as the compute capacity in each rack increases.
Additionally, the emergence of 5G and increased demands for mobile computing have created a need to observe and shape the next natural evolution of data center design, layout and operations. Once this evolution plays out, I believe hyperscale will be in a position to confidently continue to multiply to support the increased wave of data flow and storage needs that comes with it.
Data Center Frontier: Edge computing is a trend with many layers and flavors. What types of edge computing are most relevant in 2019? What opportunities does the evolution of edge computing create for future data center infrastructure (and when)?
David Richards: The evolution of edge computing, which is still taking shape, requires early pioneers to be self-sufficient as the market leaders progress to new operation and facility models. Many in-house test facilities are already working with the support of co-alliances, and it appears to still be a race to the finish line. As the Internet of Things (IoT) and artificial intelligence (AI) become standard, edge computing will continue to trend up to support these markets.
In 2019 and 2020, the 5G buildout will be one large, universal application. Carriers will need to deploy more compute and storage closer to users to address latency. In parallel, content delivery networks (CDNs) will likely expand their regional presence to support streaming and eventually AR/VR.
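To put rough numbers on the latency argument, here is a minimal, illustrative Python sketch; the distances and the ~200 km/ms fiber propagation figure are assumptions for illustration, not figures from the roundtable.

```python
# Rough round-trip propagation delay over fiber, ignoring routing,
# queuing and processing delays (all values are illustrative assumptions).

SPEED_IN_FIBER_KM_PER_MS = 200  # roughly two-thirds the speed of light

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for label, km in [("metro edge site", 50),
                  ("regional data center", 500),
                  ("distant cloud region", 2000)]:
    print(f"{label:22s} ~{round_trip_ms(km):5.1f} ms round trip")
```

Even before queuing and processing are counted, a distant region adds tens of milliseconds of round-trip delay, which is why low-single-digit-millisecond 5G latency targets pull compute and storage toward metro edge sites.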
The other major driver will be Industry 4.0. There is tremendous value in collecting, trending and analyzing data from manufacturing operations. But doing that in the cloud is cost-prohibitive for most data other than top-level reporting. This will require some compute on-premises.
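As an illustration of that on-premises pattern, the following minimal Python sketch (hypothetical sensor values and field names) aggregates raw readings at the edge and forwards only a compact, top-level summary for cloud reporting.

```python
# Minimal sketch of edge-side aggregation: raw sensor data stays on the
# on-premises node, and only a small summary is shipped upstream.
# All readings and field names are hypothetical.

from statistics import mean

def summarize_shift(raw_readings):
    """Reduce a shift's worth of raw sensor samples to a small report."""
    return {
        "samples": len(raw_readings),
        "min": min(raw_readings),
        "max": max(raw_readings),
        "mean": round(mean(raw_readings), 2),
    }

raw_readings = [72.1, 72.4, 71.9, 73.0, 72.6]  # e.g., spindle temperatures
print(summarize_shift(raw_readings))  # only this summary would leave the site
```

The raw stream never leaves the plant floor; only the summary that feeds top-level reporting travels to the cloud.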
Data Center Frontier: The shift of enterprise IT workloads into third-party cloud platforms and colo facilities appears to be continuing. What strategies and services are proving most useful in working with enterprises in this transition?
David Richards: Recent reports state that hyperscale data centers are supporting these efforts in the background by providing repurposed equipment to open up the research and development (R&D) process as the race to commissioning ramps up globally. This will eventually pay dividends to the hyperscale providers who will be positioned to control the excess demand that is predicted to follow.
Data Center Frontier: The rise of AI has boosted the use of new compute hardware to accelerate specific tasks, including GPUs, FPGAs, ASICs, TPUs and neural processing hardware as well as more powerful CPUs. How might the emergence of application-specific hardware impact data center design? What are the opportunities and challenges?
David Richards: All of these hardware approaches, as well as additional architectures, are being spearheaded by the usual suspects (Microsoft, Google, NVIDIA, etc.). The race is on to be a dominant force in the market, and the rewards are going to be tremendous. There will be winners and losers. One example of application-specific hardware is the equipment used to mine cryptocurrency. These sites are radically different from traditional data centers, occupying two extremes. At one extreme is any storage shed anywhere, with a large axial fan ventilating the space and bread racks holding the equipment. At the other extreme, there’s immersion cooling with 3M Novec in tubs that look nothing like current computer rooms.
But in the mainstream, it may be more practical to continue the practice of adding accelerators to existing compute for the benefit of dramatically faster compute cycles. The actual workloads are still basically spread across available resources, but the time to process is significantly reduced. That approach can be racked using current data center design, but it may push per-rack power density up, requiring better airflow management, then auxiliary indirect liquid cooling, or even direct liquid cooling in extreme applications.
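To illustrate why accelerators push density up, here is a back-of-the-envelope Python sketch; every wattage figure is an assumed round number for illustration, not vendor data or a figure from the roundtable.

```python
# Back-of-the-envelope comparison of per-rack power density before and
# after accelerators are added. All values are illustrative assumptions.

SERVERS_PER_RACK = 20
BASE_SERVER_KW = 0.5          # assumed CPU-only server draw
ACCELERATED_SERVER_KW = 2.0   # assumed draw with GPUs/accelerators added

def rack_density_kw(servers, kw_per_server):
    """Total rack power density in kilowatts."""
    return servers * kw_per_server

print("CPU-only rack:   ", rack_density_kw(SERVERS_PER_RACK, BASE_SERVER_KW), "kW")
print("Accelerated rack:", rack_density_kw(SERVERS_PER_RACK, ACCELERATED_SERVER_KW), "kW")
```

At roughly 10 kW per rack, conventional air cooling with good containment is usually sufficient; at 40 kW and beyond, the progression Richards describes, from better airflow management to indirect and then direct liquid cooling, becomes the practical path.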