Powering AI in the Enterprise

March 26, 2020
Gerry Fassig, Vice President, Sales, at CoreSite, explores how best to power AI and machine learning across a variety of industries and sectors through data and processing power.

Gerry Fassig, Vice President, Sales, CoreSite

When a novel concept becomes a viable business tool, every company is eager to hop on the bandwagon, hoping to capitalize not just on the buzz, but also on the technology.

Right now, Artificial Intelligence (AI) and machine learning (ML) are generating the lion’s share of buzz across industries and sectors. The technology’s promise is such that it is getting more attention than any development in recent memory. Everyone from 100-year-old tech stalwarts to three-person startups is actively investing time and resources in accelerating the technology’s development and mobilizing it for virtually every application imaginable.

Research suggests that AI is not merely the flavor of the day, but the new normal. Analyst firm IDC projects that spending on AI will grow at more than 38% annually, reaching $79.2 billion in 2022. It will eventually become the driving force behind everything from highly efficient sales enablement platforms and virtual digital receptionists to children’s toys, autonomous vehicles, and products or services that don’t yet exist.

As AI evolves to power most of our world, what will power AI?

Data and processing power. And lots of it.

Great potential, great limitations

As unsupervised machine learning, natural language processing (NLP) and deep learning capabilities improve, the applications of each will continue to grow and expand into new use cases.

Already, companies across the business spectrum are investigating how AI/ML technologies can be used for future business functions like object recognition and geodata localization. And while some wait for the technology to catch up to its promise, other luminaries are already applying these innovations to practical use cases in today’s market: semi-autonomous vehicles, financial planning, call centers and customer care, and cybersecurity, among others.

Like the automobile, the smartphone, or the Internet, AI and machine learning technologies are poised to become a fixture of our everyday lives and business operations.

Yet, despite the continued advancements and innovative uses for AI, all organizations depending on AI will eventually face the same harsh reality of AI’s biggest limitation: capacity. That’s because the Artificial Neural Networks (ANNs) that drive AI’s ability to model and process data inputs and outputs — like how a chatbot compares customer inquiries with an entire lexicon and language to arrive at the “correct” answer — require storing massive volumes of input data and relying on large-scale computing to make sense of those relationships.
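That storage-and-compute burden can be made concrete with a back-of-the-envelope parameter count for a fully connected network. The layer sizes below are hypothetical, a minimal sketch rather than any real model, but they show how quickly even a modest network over a large lexicon accumulates weights that must be held in memory:

```python
# Sketch: why ANN capacity demands grow quickly. A fully connected layer
# with n_in inputs and n_out outputs stores n_in * n_out weights plus
# n_out biases. Layer sizes here are illustrative, not a real model.

def dense_params(layer_sizes):
    """Total parameters in a stack of fully connected layers."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A modest network mapping a 50,000-word lexicon through two hidden layers:
layers = [50_000, 1_024, 1_024, 50_000]
params = dense_params(layers)
memory_gb = params * 4 / 1e9  # 4 bytes per 32-bit float weight

print(f"{params:,} parameters, roughly {memory_gb:.1f} GB of weights alone")
```

And that figure covers only the weights; training also requires holding activations, gradients, and the input data itself, which is where the large-scale computing requirement comes from.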

The technology consumes bandwidth well beyond what most on-premises networks are designed to handle, and the number of CPUs or GPUs involved adds power-consumption overhead well beyond what most organizations are prepared to spend. Worse, trying to do all of that within a single data center geographically removed from where the interaction is taking place introduces latency concerns that can wreck whatever the product or app is trying to accomplish.
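The latency point follows directly from physics. Light in optical fiber travels at roughly two-thirds the speed of light in vacuum, about 200 km per millisecond, so distance alone sets a hard floor on round-trip time before any routing or queuing delay is added. A quick sketch of that lower bound:

```python
# Sketch: minimum round-trip latency imposed by distance alone.
# Signals in fiber propagate at roughly 200 km per millisecond;
# real networks add routing and queuing delay on top of this floor.

def min_rtt_ms(distance_km, km_per_ms=200.0):
    """Lower bound on round-trip time to a data center distance_km away."""
    return 2 * distance_km / km_per_ms

for km in (50, 1_500, 4_000):
    print(f"{km:>5} km away -> at least {min_rtt_ms(km):.1f} ms round trip")
```

A data center 4,000 km away can never respond in under 40 ms, no matter how fast its servers are, which is why real-time applications push processing closer to the user.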

Maximizing AI performance with direct cloud connections

To fully realize AI’s potential and catch the wave of a future-shaping technology, enterprises must rethink their collective approaches to network development and connectivity. Increasingly, companies using process-intensive AI applications are turning to edge data centers with on-premises and cloud-based connection options to resolve bandwidth and compute challenges, lower operating costs, and all but eliminate latency concerns.

These strategically located data centers help organizations build a high-performance nationwide or global network that enables processing information closer to the user or device, which can significantly improve performance and reliability. This is especially beneficial for supporting latency-sensitive AI applications like autonomous vehicles or cybersecurity operations that command real-time data exchange, while also maximizing workload flexibility and cost management.

Specifically, this new generation of data center is equipped with conventional on-premises colocation infrastructure as well as connectivity with cloud service providers for instant access to additional compute and storage, greater flexibility and increased business agility. This “hybrid” data center provides easy on-ramps to cloud providers and their vast, extensible resources within its facilities, all while eliminating the need to build private WAN connections to each provider individually.

This means that not only can an enterprise build its private infrastructure in the same data centers trusted by world-class cloud providers, but it also can link to those clouds within the facility through real-time, high bandwidth one-to-many connection options. These connections provide a higher performance alternative to network builds while also substantially lowering latency and reducing data transfer costs by as much as 50% when compared to the public internet.

As AI and machine learning technologies continue to mature and advance, enterprises hoping to gain or maintain a competitive advantage with AI must consider reshaping their current infrastructure to properly balance the various demands on their business while positioning themselves to realize both the current and future capabilities of these world-altering technologies.

Gerry Fassig is Vice President, Sales, at CoreSite

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
