Gaining Control of Your Cloud Spend: A Data Center Decision-Maker’s Guide
Vlad Friedman, CTO at DataBank, takes a look at the strategy behind cloud spend and data centers.
For IT decision-makers, there’s a lot of hype around the theory that the public cloud is always the least-expensive option. It might not be. Corporate spend on public cloud infrastructure rose 25 percent year over year to nearly $17 billion in the second quarter of 2020, an all-time high, according to Synergy Research Group. Yet even as spending climbs, companies are often surprised when they receive their invoices. Those surprises have many IT executives questioning whether they considered all of the alternatives when they executed their “cloud-first” strategy.
The net result is that many data center managers are now reviewing budgets for 2021 and are looking for ways to economize when it comes to cloud decision-making. With this in mind, how can organizations gain control of their cloud spend and incorporate a forward-looking strategy that considers the most relevant alternatives?
The answer depends on the design and usage patterns of your workloads.
For production workloads, the first step is to adopt a third-party cloud pricing and optimization tool, such as CloudHealth or CloudCheckr. Many of the tasks these tools perform can be done manually, but automating the remediation of the common mistakes that drive unnecessary costs is essential for long-term success:
- Reservations – Steady-state workloads benefit from reservations; these tools can recommend and purchase reserved instances to drive savings of up to 50%.
- Sizing – Ensure the instance types selected are appropriately sized for your application.
- Orphaned Snapshots and Volumes – Deleting an instance doesn’t always delete the attached disks and snapshots. Clean up old storage you aren’t using (a sketch of this check follows the list).
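Commercial tools automate these checks, but the underlying logic is simple enough to script. Below is a minimal sketch, assuming an AWS account and the boto3 SDK, that reports unattached EBS volumes and snapshots older than 90 days; the 90-day threshold and report-only behavior are illustrative choices, not a recommendation from any particular tool.

```python
# Minimal sketch (assumes AWS + boto3): report unattached EBS volumes and
# snapshots older than an arbitrary 90-day threshold. Report-only; review
# the output before deleting anything.
from datetime import datetime, timezone, timedelta

import boto3

AGE_THRESHOLD = timedelta(days=90)  # example threshold, tune to your policy
now = datetime.now(timezone.utc)
ec2 = boto3.client("ec2")

# Volumes in the "available" state are not attached to any instance.
for page in ec2.get_paginator("describe_volumes").paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
):
    for vol in page["Volumes"]:
        print(f"Orphaned volume {vol['VolumeId']}: {vol['Size']} GiB, "
              f"created {vol['CreateTime']:%Y-%m-%d}")

# Snapshots owned by this account that are older than the threshold.
for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        if now - snap["StartTime"] > AGE_THRESHOLD:
            print(f"Old snapshot {snap['SnapshotId']}: {snap['VolumeSize']} GiB, "
                  f"started {snap['StartTime']:%Y-%m-%d}")
```

Run from a scheduled job, a report like this gives application owners a recurring nudge to delete storage they no longer need.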
Secondly, forecasting costs for development workloads can be more difficult because servers are spun up and spun down, and often they are launched and forgotten. At DataBank, one strategy we implemented was simply tagging resources with a developer’s name and sending a copy of the consumption report to the entire team. That visibility alone brought enough awareness to drive 50% savings.
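One way to produce that kind of per-developer visibility, assuming an AWS environment where an “owner” tag has been applied to resources and activated as a cost-allocation tag, is a short Cost Explorer query like the sketch below; the tag name and date range are illustrative.

```python
# Minimal sketch (assumes AWS Cost Explorer + boto3, and that an "owner" tag
# has been applied to resources and activated as a cost-allocation tag).
# Prints one month's spend grouped by owner so the whole team can see it.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-08-01", "End": "2020-09-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "owner"}],  # "owner" is our own tag convention
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        key = group["Keys"][0]                    # e.g. "owner$jsmith"; "owner$" if untagged
        owner = key.split("$", 1)[1] or "untagged"
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{owner}: ${amount:,.2f}")
```

Mailed to the whole team on a schedule, for example from a small scheduled function, the same output provides the transparency described above.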
Lastly, while public clouds excel at running highly variable spin-up and spin-down workloads, latency-sensitive, compute-intensive, and IOPS-intensive applications often perform better and are more efficient to operate on private and colocation infrastructure. Interestingly, the same tools used to manage costs will often provide insight into which applications are best suited for private infrastructure.
Invest time to understand your consumption under load, your application’s sensitivity to latency, and the potential hidden costs of labor, per-transaction charges, and bandwidth when utilizing the public cloud. While unit costs may appear trivial, they add up quickly. If you don’t have access to commercial price and workload evaluation tools, consider using a free tool like LiveOptics.com to benchmark your applications over time and predict realistic public cloud costs.
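To see how quickly “trivial” unit costs compound, consider a back-of-the-envelope model like the one below; every rate and quantity in it is illustrative and should be replaced with your benchmarked numbers and your provider’s actual price list.

```python
# Back-of-the-envelope sketch with purely illustrative unit prices; substitute
# your provider's actual rates and the consumption you benchmarked.
HOURS_PER_MONTH = 730

instances = 20          # always-on VMs
instance_hourly = 0.20  # $/hour (illustrative)
storage_gb = 10_000     # provisioned block storage
storage_rate = 0.10     # $/GB-month (illustrative)
egress_gb = 50_000      # data transferred out per month
egress_rate = 0.09      # $/GB (illustrative)

compute = instances * instance_hourly * HOURS_PER_MONTH
storage = storage_gb * storage_rate
egress = egress_gb * egress_rate

print(f"Compute: ${compute:,.0f}  Storage: ${storage:,.0f}  Egress: ${egress:,.0f}")
print(f"Total:   ${compute + storage + egress:,.0f} per month")
```

Even with these modest, made-up assumptions, the egress line exceeds the storage line several times over, which is why bandwidth deserves its own row in any forecast.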
Service providers like DataBank will often perform a free analysis of your workloads and design hybrid-cloud solutions, blending the benefits of each type of infrastructure into a singular, highly performant, and cost-effective solution.
How do you create a successful cloud spend strategy?
- Start with a comprehensive inventory of your servers, applications, storage, and usage.
- Identify opportunities to create shared-services pools. For example, do you need a separate SQL Server instance for every application? A single HA (high-availability) SQL cluster can serve several applications. Apply the same approach to any platform that drives considerable licensing or instance costs, such as SQL Server and Oracle.
- Align instance sizes with actual consumption.
- Determine which applications can be shut down when not in use. Does your data center run large batch processes? Spin up those servers only when needed (a scheduling sketch follows this list).
- Analyze your potential bandwidth (egress) costs. They are rarely factored into cost calculators, yet they are often among the highest line items; at list rates on the order of $0.09 per GB, pushing 50 TB out of the cloud each month adds roughly $4,500 to the bill. Consider moving bandwidth-intensive and latency-sensitive applications to traditional or even edge data centers.
- Plan your long-term data retention and archival needs to align with your security and compliance strategy. As data grows over time, so do costs. Leverage “cheap and deep” storage for long-term archival. Charges in a hybrid data center can be significantly lower as you achieve scale.
- Assume your costs will be 30-40% higher than you planned initially. Over time, it will become apparent which applications are best suited for the public cloud and which should be repatriated to private infrastructure.
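For the “shut down when not in use” item above, a scheduler plus a tagging convention is usually enough. The sketch below, assuming AWS and boto3, stops running instances that carry a schedule=office-hours tag; the tag name is an illustrative convention of ours, and a matching start job would run each morning.

```python
# Minimal sketch (assumes AWS + boto3): stop running instances tagged
# schedule=office-hours. The tag name is our own convention, not an AWS
# default. Intended to run from a scheduler (cron, a scheduled function,
# etc.) in the evening, with a matching start job in the morning.
import boto3

ec2 = boto3.client("ec2")

instance_ids = []
for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[
        {"Name": "tag:schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
):
    for reservation in page["Reservations"]:
        instance_ids.extend(i["InstanceId"] for i in reservation["Instances"])

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} instances: {instance_ids}")
else:
    print("Nothing to stop.")
```

Run nightly, this keeps batch and development servers from billing around the clock.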
The most prominent mistake organizations make when planning their cloud spend is believing the hype. Public cloud can be an efficient option when workloads are transformed to take advantage of PaaS, microservices, and serverless computing. Conversely, lift-and-shift migrations often incur a significant cost penalty.
Effective, routine inventory management, financial transparency for application owners, and practical tooling are the critical elements of long-term success. DataBank recommends that data center leaders evaluate their options and create a hybrid-cloud strategy that repatriates workloads that run more efficiently on private infrastructure. Plan upfront, work with facts, don’t rush, and understand your workloads so that your first bill doesn’t lead to sticker shock.
Vlad Friedman is the CTO at DataBank.