Gaining Control of Your Cloud Spend: A Data Center Decision-Maker’s Guide

Nov. 30, 2020
How can organizations gain control of their cloud spending? As data center managers review budgets for 2021, DataBank CTO Vlad Friedman looks at strategies and tools to optimize cloud and data center spending.

For IT decision-makers, there’s a lot of hype around the theory that the public cloud is always the least-expensive option. It might not be. Corporate spend on public cloud infrastructure has risen 25 percent year over year — to nearly $17 billion in the second quarter of 2020, hitting an all-time high, according to Synergy Research Group.  While corporate spending may continue to increase in the public cloud space, companies are often surprised when receiving their invoices. Those surprises have many IT executives questioning whether they considered all of the alternatives when they executed their “cloud-first” strategy.

The net result is that many data center managers are now reviewing budgets for 2021 and are looking for ways to economize when it comes to cloud decision-making. With this in mind, how can organizations gain control of their cloud spend and incorporate a forward-looking strategy that considers the most relevant alternatives?

The answer depends on the design and usage patterns of your workloads.

For production workloads, the first step is adopting third-party cloud pricing and optimization tools such as CloudHealth or CloudCheckr. While many of the tasks these tools perform can be done manually, automating the remediation of the common mistakes that drive unnecessary costs is essential for long-term success:

  • Reservations – Steady-state workloads benefit from reservations; these tools can recommend and even purchase reserved instances, driving savings of up to 50%.
  • Sizing – Ensure the instance types selected are appropriately sized for your application.
  • Orphaned Snapshots and Volumes – Deleting an instance doesn’t always delete the attached disks and snapshots. Clean up old storage you aren’t using (a scripted sketch of this cleanup follows the list).
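
Commercial tools automate that cleanup, but the idea is simple enough to script yourself. Below is a minimal sketch, assuming AWS and the boto3 Python SDK, that flags unattached EBS volumes and snapshots older than 90 days. The deletions are left commented out for review, and API pagination is omitted for brevity.

    import boto3
    from datetime import datetime, timedelta, timezone

    ec2 = boto3.client("ec2")

    # Unattached ("available") volumes keep billing but serve no instance.
    for vol in ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]:
        print(f"Orphaned volume {vol['VolumeId']}: {vol['Size']} GiB")
        # ec2.delete_volume(VolumeId=vol["VolumeId"])  # uncomment after review

    # Snapshots older than the retention window you actually need (90 days here).
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)
    for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
        if snap["StartTime"] < cutoff:
            print(f"Stale snapshot {snap['SnapshotId']} from {snap['StartTime']:%Y-%m-%d}")
            # ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])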

Secondly, forecasting costs for development workloads can be more difficult: servers are spun up and spun down, but often they are launched and forgotten. At DataBank, a strategy we implemented was simply tagging resources with a developer’s name and sending a consumption report to the entire team. That visibility alone brought enough awareness to drive 50% savings.
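
As a hedged sketch of how that tagging approach can be reported on AWS: the Cost Explorer API can group a month’s spend by a user-defined tag. The "owner" tag key and the date range below are illustrative assumptions; use whatever your team standardizes on.

    import boto3

    ce = boto3.client("ce")  # AWS Cost Explorer

    # Group one month's unblended cost by a user-defined "owner" tag (assumed name).
    report = ce.get_cost_and_usage(
        TimePeriod={"Start": "2020-10-01", "End": "2020-11-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "owner"}],
    )

    for group in report["ResultsByTime"][0]["Groups"]:
        owner = group["Keys"][0]  # returned as "owner$<name>"
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{owner}: ${amount:,.2f}")

Mailed to the whole team on a schedule, output like this turns consumption into a shared, visible metric rather than a month-end surprise.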

Lastly, while public clouds excel at running highly variable spin-up and spin-down workloads, latency-sensitive, compute-intensive, and IOPS-intensive applications often perform better and are more efficient to operate on private and colocation infrastructure. Interestingly, the same tools used to manage costs will often provide insight into which applications are best suited for private infrastructure.

Invest time to understand your consumption under load, your application’s sensitivity to latency, and the potential hidden labor, per-transaction, and bandwidth costs of the public cloud. While unit costs may appear trivial, they add up quickly. If you don’t have access to commercial price and workload evaluation tools, consider using a free tool like LiveOptics.com to benchmark your applications over time and predict realistic public cloud costs.
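
To see how quickly "trivial" unit costs compound, consider bandwidth. The rate and volume below are illustrative assumptions; egress tiers vary by provider, region, and commitment.

    # Illustrative egress math - the rate and volume are assumptions.
    RATE_PER_GB = 0.09       # a typical public cloud internet-egress rate, USD
    DAILY_EGRESS_GB = 500    # hypothetical application transfer volume

    monthly_cost = DAILY_EGRESS_GB * 30 * RATE_PER_GB
    print(f"~${monthly_cost:,.0f} per month")  # ~$1,350/month from one line item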

Service providers like DataBank will often perform a free analysis of your workloads and design hybrid-cloud solutions, blending the benefits of each type of infrastructure into a singular, highly performant, and cost-effective solution.

How do you create a successful cloud spend strategy?

  1.  Start with a comprehensive inventory of your servers, applications, storage, and usage.
  2.  Identify opportunities to create shared services pools. For example, do you need a separate SQL server for every application? Create a single high-availability (HA) SQL cluster to service several applications, and apply this methodology to any application that drives considerable licensing or instance costs, such as SQL Server and Oracle.
  3.  Align instance sizes with actual consumption.
  4.  Determine which applications can be shut down when not in use. Do you run large batch processes? Spin the servers up only when needed (see the sketch after this list).
  5.  Analyze your potential bandwidth (egress) costs. While rarely factored into cost calculators, it’s often one of the highest costs. Consider moving bandwidth-intensive and latency-sensitive applications to traditional or even edge data centers.
  6.  Plan your long-term data retention and archival needs to align with your security and compliance strategy. As data grows over time, so do costs. Leverage “cheap and deep” storage for long-term archival. Charges in a hybrid data center can be significantly lower as you achieve scale.
  7.  Assume your costs will be 30-40% higher than you planned initially. Over time, it will become apparent which applications are best suited for the public cloud and which should be repatriated to private infrastructure.
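
The scheduling in step 4 is straightforward to automate. Here is a minimal sketch, again assuming AWS and boto3, that starts or stops every instance carrying a hypothetical "workload: nightly-batch" tag; wire it into cron, EventBridge, or another scheduler so the fleet only runs, and only bills, during its batch window.

    import boto3

    ec2 = boto3.client("ec2")

    def set_batch_fleet_state(running: bool) -> None:
        """Start or stop every instance tagged for the nightly batch job.

        The "workload: nightly-batch" tag is an assumption; tag the fleet
        however your inventory (step 1) dictates.
        """
        reservations = ec2.describe_instances(
            Filters=[{"Name": "tag:workload", "Values": ["nightly-batch"]}]
        )["Reservations"]
        ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if not ids:
            return
        if running:
            ec2.start_instances(InstanceIds=ids)
        else:
            ec2.stop_instances(InstanceIds=ids)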

The most prominent mistake organizations make when planning their cloud spend is believing the hype. Public cloud can be an efficient option when workloads are transformed to take advantage of PaaS, microservices, and serverless computing. Conversely, lift-and-shift migrations often incur a significant cost penalty.

Routine inventory management applied on a schedule, financial transparency for application owners, and practical tooling are the critical elements of long-term success. DataBank recommends that data center leaders evaluate their options and create a hybrid-cloud strategy, repatriating workloads that run more efficiently on private infrastructure. Plan up front, work with facts, don’t rush, and understand your workloads so your first bill doesn’t lead to sticker shock.

Vlad Friedman is the CTO at DataBank. 

About the Author

Voices of the Industry

Our Voices of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
