Database Optimization: Emerging Technologies to Solve Performance Challenges

July 16, 2021

In this edition of Voices of the Industry, Martin Wielomski, Director of Product Management at phoenixNAP Global IT Services, discusses how emerging technology like Optane DCPMM can solve database management challenges.

Martin Wielomski, Director of Product Management at phoenixNAP Global IT Services.

The data-centric reality

We live and work in a data-centric world. Databases are at the core of almost every application we use. So, it is no surprise that the Database Management System (DBMS) market grew to 58.4 billion U.S. dollars in 2020.

With the increased focus on infrastructure modernization, databases have started migrating from on-prem to the cloud, multi-cloud, and hybrid cloud. As Gartner predicts, 75% of all databases will be deployed on or migrated to a cloud platform by 2022. The same report reveals that only 5% of these databases are ever considered for repatriation to on-prem environments.

This expansion leads to a constant increase in data volume, with zettabytes (a zettabyte is a trillion gigabytes) of data accumulating annually. Consequently, the demand for technologies capable of solving the performance challenges that ever-growing datasets create is higher than ever.

Data management goals

While choosing an ideal infrastructure for database workloads, IT professionals generally focus on achieving the following:

  • Data integrity. Databases contain business-critical information that must stay protected against corruption and threats.
  • Database performance. With the amount of data constantly growing, demand for high-performance processing of datasets increases.
  • Cost-effectiveness. Organizations strive to lower infrastructure costs while maintaining data performance and integrity.

The status quo

To meet these goals, organizations tend to scale their resources up by upgrading memory and storage. While this does improve database performance, it comes with limitations and substantial expenses.

DRAM is fast but costly, volatile, and limited in capacity. Traditional options such as SATA and even NVMe storage are cheaper, offer greater capacity, and provide persistence, but they are slow at moving data to the CPU. Data-hungry workloads need direct data access for efficient in-memory operations, and traditional storage caching cannot provide that form of acceleration. DRAM, by contrast, serves data to the CPU fast enough that the processor does not burn idle cycles waiting for data to arrive, which improves overall efficiency.
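To make that gap concrete, here is a rough Python sketch that scans the same dataset once from DRAM and once from a file on storage. The file name and sizes are arbitrary, absolute numbers depend entirely on the hardware, and the OS page cache will narrow the gap on repeated runs:

```python
import os
import time

# Illustrative microbenchmark: scan identical data from DRAM and from storage.
data = os.urandom(256 * 1024 * 1024)          # 256 MB held in DRAM

with open("table.bin", "wb") as f:            # the same bytes persisted to storage
    f.write(data)

start = time.perf_counter()
in_memory_sum = sum(data[::4096])             # touch one byte per 4 KB page
print(f"DRAM scan:    {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
from_disk_sum = 0
with open("table.bin", "rb") as f:
    while chunk := f.read(4 * 1024 * 1024):   # stream back in 4 MB chunks
        from_disk_sum += chunk[0]
print(f"Storage scan: {time.perf_counter() - start:.3f}s")
```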

Out with the old and in with the new

Several OEMs are addressing the database optimization issue, pushing the frontiers of CPU, memory, and storage. Intel is one of them.

The new 3rd Generation Intel Xeon Scalable processors are built on 10nm technology with up to 40 cores per socket. The enhanced throughput delivers up to 1.64x more database transactions per minute, shortening the time required to perform high-quality data extraction from a database.

What the new CPUs also bring to the table is confidential computing. The newest generation of Intel Xeon Scalable processors fully supports Intel Software Guard Extensions (SGX), answering the market's need for improved security and putting the concept of confidential computing into production environments.

Paired with the Intel Optane Persistent Memory 200 series, the new CPUs deliver measurable performance improvements for workloads that require large in-memory operations. Coming from the same ecosystem, the two components bring large datasets closer to the CPU, significantly boosting database performance.

Where memory meets storage

Intel DCPMM (Intel DC Persistent Memory Module – Intel Optane) combines the performance of DRAM with the capacity and data persistence of storage, all packed into a DIMM form factor. It brings up to 25% higher memory bandwidth over the previous generation, along with in-memory database support and enhanced database management speed and performance.

When it comes to data integrity, this fusion of storage and memory enables data retention over long periods, even without a constant power supply. With no reloading required upon restart, data stays in memory and is immediately available. This matters for mission-critical applications, where reloading from traditional storage can take anywhere from minutes to hours, depending on database volume and complexity. Shorter and less frequent downtime means smaller losses during outages and greater reliability overall.
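As a rough illustration of persistence without reload, the Python sketch below writes through a memory mapping that survives restarts. The /mnt/pmem0 DAX mount point is an assumption (on a machine without Optane, any filesystem path demonstrates the same flow), and production code would typically use Intel's PMDK libraries rather than raw mmap:

```python
import mmap
import os

# Assumed: a persistent memory namespace mounted with DAX at /mnt/pmem0.
PATH = "/mnt/pmem0/db_state.bin"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)                 # size the file before mapping it

with mmap.mmap(fd, SIZE) as buf:
    buf[0:11] = b"hello world"         # a store straight into the mapping
    buf.flush()                        # push the update to the media

os.close(fd)

# After a power cycle, re-mapping the same file exposes b"hello world"
# immediately -- no deserialization or cache warm-up from slower storage.
```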

As for database management speed and performance, Optane DCPMM is faster and offers higher endurance than most NAND storage. This accelerates not only transactions but real-time analytical workloads as well, without performance drops even under heavy load.

Finally, Optane Persistent Memory reduces costs related to database management. Relational databases perform best when located on a single server, and consolidating the server footprint saves on licensing, power consumption, and infrastructure. With 3rd Gen Intel Xeon Scalable processors allowing more VMs per unit, virtualization enables greater performance on a smaller footprint at up to 25% lower cost per VM. Adding Optane DCPMM further lowers TCO, providing increased performance and consistent data integrity at a fraction of the cost of an all-DRAM system.

New technology, new possibilities

With a capacity of up to 512GB per module and up to 6TB of total system memory per socket, entire databases can be kept in memory on a single server, making the technology suitable for even the largest datasets.

Since Optane DCPMM can act as both storage and memory, different caching, tiering, and storage combinations can be leveraged for additional performance optimization. For example, only the hottest tables or sub-tables, those containing crucial data and indexes, can be stored and cached in memory for the fastest access, freeing the remaining memory to improve general system performance.
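As a hypothetical illustration using PostgreSQL, the pg_prewarm extension can pull selected hot relations into the buffer cache ahead of time; the connection details and table names below are assumptions:

```python
import psycopg2

# Assumed: a PostgreSQL instance with the pg_prewarm extension available.
conn = psycopg2.connect("dbname=shop user=app host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_prewarm;")
    # Pull the hottest table and its primary-key index into shared buffers
    # so first-touch queries are served from memory, not disk.
    for relation in ("orders", "orders_pkey"):
        cur.execute("SELECT pg_prewarm(%s);", (relation,))
conn.close()
```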

Another option for a highly effective, workload-optimized database management system is two-tier optimization for high disk I/O traffic. In these situations, Optane DCPMM can hold hot data while SSDs hold warm data.
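A hypothetical PostgreSQL sketch of this tiering might place tablespaces on the two media; the mount points and table names are assumptions for illustration:

```python
import psycopg2

# Assumed: a DAX-mounted Optane namespace at /mnt/pmem0 and an NVMe SSD
# at /mnt/ssd0, both writable by the PostgreSQL server.
conn = psycopg2.connect("dbname=shop user=postgres")
conn.autocommit = True     # CREATE TABLESPACE refuses to run in a transaction
cur = conn.cursor()

cur.execute("CREATE TABLESPACE hot_tier LOCATION '/mnt/pmem0/pgdata';")
cur.execute("CREATE TABLESPACE warm_tier LOCATION '/mnt/ssd0/pgdata';")

# Hot, latency-sensitive rows live on persistent memory ...
cur.execute(
    "CREATE TABLE orders (id bigint PRIMARY KEY, total numeric) "
    "TABLESPACE hot_tier;"
)
# ... while colder history sits on the SSD tier.
cur.execute(
    "CREATE TABLE orders_archive (LIKE orders INCLUDING ALL) "
    "TABLESPACE warm_tier;"
)

cur.close()
conn.close()
```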

The new 3rd Gen Intel Xeon Scalable processors are available with phoenixNAP's Bare Metal Cloud servers, which can be provisioned automatically through an API, CLI, or Infrastructure-as-Code tools. The instances can be billed on an hourly basis and scaled or decommissioned with a few clicks or lines of code. Monthly reservation options are also available for more predictable workloads and allow for improved cost savings.
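As an illustrative sketch of such API-driven provisioning in Python, the snippet below follows the pattern of phoenixNAP's public Bare Metal Cloud API; every URL, field, and value here should be treated as an assumption and verified against the current API reference:

```python
import requests

# Assumed endpoints per phoenixNAP Bare Metal Cloud docs (verify before use).
AUTH_URL = "https://auth.phoenixnap.com/auth/realms/BMC/protocol/openid-connect/token"
API_URL = "https://api.phoenixnap.com/bmc/v1/servers"

# Exchange OAuth2 client credentials for a bearer token.
token = requests.post(
    AUTH_URL,
    data={"grant_type": "client_credentials"},
    auth=("CLIENT_ID", "CLIENT_SECRET"),      # placeholder credentials
).json()["access_token"]

# Request a new bare metal server; OS image and instance type are illustrative.
resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {token}"},
    json={
        "hostname": "db-node-01",
        "os": "ubuntu/jammy",
        "type": "d2.c1.medium",
        "location": "PHX",
    },
)
server = resp.json()
print(server.get("id"), server.get("status"))
```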

The bottom line

When it comes to future-proofing our data-centric lives, Intel Optane Persistent Memory combined with Intel's 3rd Gen Xeon Scalable CPUs shows remarkable results. From aggregating datasets while ensuring privacy, data integrity, and high performance to easing the migration to virtualized environments, the combination far surpasses its predecessors. As more organizations adopt this technology, we have yet to see all the innovative use cases it has in store.

Martin Wielomski is currently a Director of Product Management at phoenixNAP Global IT Services. He has years of experience in the Information Technology and Cloud Hosting industries and specializes in global business and product strategy development, international business, product management, and evangelism. Martin believes in lifelong learning and leadership through engagement, while maintaining a realistic, down-to-earth approach to people.

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
