Voice-recognition devices are ubiquitous in today’s society. Most smartphones have built-in voice assistants that can answer questions in seconds, and smart speakers like the Amazon Echo are now staples in many homes, with people using them to listen to music, do research, play games or even shop. Users are becoming accustomed to saying a speaker’s “wake word” and hearing a response in seconds.
As people use their gadgets more and more in their daily lives, most probably never think about the data centers working on the back end to power these incredible capabilities. Data centers make these technological achievements possible, and as an increasing number of consumers use voice technology, the demand on these facilities will likely rise substantially.
The Smart Speaker Market Is Growing Rapidly
Smart speakers are no longer niche gadgets exclusively for the most tech-savvy early adopters. In a December 2018 report, analysts from RBC Capital Markets found smart speaker adoption in the consumer market made a significant jump compared to 2017. The data shows 41 percent of Americans owned a voice-activated speaker in 2018, up from only 21.5 percent in 2017.
The RBC research found Amazon’s smart speakers were the most prevalent in U.S. households, achieving a 31 percent market penetration. Google Home devices also helped propel this growth in the market, but Apple’s smart speakers only made up a small percentage of those sold.
Juniper Research anticipates the number of digital voice assistants in use will triple, reaching 8 billion by 2023. Additionally, the report suggests smart TVs will show even more growth than smart speakers. Before long, people may do away with remote controls altogether and give their televisions voice commands.
Some people who dream of having smart speakers don’t even have to purchase them to become owners. Spotify recently offered a promotion to people who signed up for its family plan and gave them a free Google Home Mini. Some apartment complexes also offer smart speakers as perks to those who sign new leases, especially if a company wants to emphasize that its units have smart gadget compatibility.
How Data Centers Enable Voice Assistants and Smart Speakers
One of the things that amazes people the most is that they can ask their voice assistants and smart speakers almost anything and receive virtually instantaneous answers. This is enabled by data centers and artificial intelligence (AI) algorithms.
When a person asks a question, the input goes to one of the servers owned by the smart speaker’s maker; edge computing can shorten that round trip by handling requests at facilities closer to the user. Once the content reaches the server, algorithms evaluate the words and tone to interpret the request and offer a response based on that assessment.
However, the algorithms can only interpret a request with a degree of confidence, never absolute certainty. That’s why some people get responses that don’t make sense based on what they asked, or the smart speaker or voice assistant might say something like, “I’m sorry, I don’t understand.”
Some questions don’t require a complicated answer from the server. If a person asks a smart speaker what time it is, that answer is more straightforward to give because it relies on local data instead of something pulled from a Google search, for example.
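The flow described above — interpret the request, fall back when confidence is low, and answer simple queries from local data rather than the cloud — can be sketched as a small dispatcher. This is a hypothetical illustration only; the intent names, confidence threshold, and `interpret` stand-in are invented and bear no relation to any vendor’s actual implementation.

```python
from datetime import datetime

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff for a usable interpretation

# Intents simple enough to answer on the device itself, with no
# round trip to a data center
LOCAL_INTENTS = {"time", "timer", "volume"}

def interpret(query: str) -> tuple[str, float]:
    """Stand-in for the model that maps words to an intent and a
    confidence score. Real assistants run large speech and language
    models on cloud servers for this step."""
    if "time" in query.lower():
        return "time", 0.95
    return "web_search", 0.60

def answer(query: str) -> str:
    intent, confidence = interpret(query)
    if confidence < CONFIDENCE_THRESHOLD:
        # Mirrors the "I'm sorry, I don't understand" fallback
        return "I'm sorry, I don't understand."
    if intent in LOCAL_INTENTS:
        # Answered from local data, like the what-time-is-it example
        return datetime.now().strftime("It's %I:%M %p.")
    # Everything else would be routed to cloud infrastructure
    return f"Searching the web for: {query}"
```

In this sketch, asking for the time resolves locally, while an unrecognized request trips the low-confidence fallback rather than returning a nonsensical answer.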
Even these basic examples show that smart speakers and voice assistants couldn’t function without the ultra-fast processing and gigantic capacities data centers provide.
Voice Assistants Are Getting Smarter
Amazon recently disclosed that it is working on an in-house AI chip that could handle more queries locally on the device itself instead of sending requests to servers in the cloud. Then, people could potentially get even speedier responses to some of their questions.
Google is also working to come up with new smart speaker innovations. It recently made Google Assistant — the voice-powered helper on devices like Android smartphones and Google Home speakers — able to offer bilingual support, which requires even more AI processing power from data centers, since the data involved increases with each additional language.
In a blog post about the advancement, Google gave details about the algorithms involved. The system has to determine the language spoken, as well as what was said, then generate an answer within seconds. It often analyzes the first part of a query and uses that information to “guess” the language being spoken.
The company currently allows people to pick two of the six supported languages as the ones Google Assistant recognizes. But there are plans to make the system recognize even more languages simultaneously.
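A toy version of that early language “guess” might score the opening words of a query against each of the user’s two chosen languages and route processing to the likelier one. The keyword lists and function below are invented for illustration and have nothing to do with Google’s actual models, which work on audio with far more sophisticated techniques.

```python
# Hypothetical language-identification sketch: score the first few
# words of a query against small keyword sets for each of the user's
# two enabled languages.

LANGUAGE_HINTS = {
    "english": {"what", "the", "is", "play", "turn"},
    "spanish": {"que", "el", "es", "pon", "enciende"},
}

def guess_language(query: str, enabled: list[str]) -> str:
    # Only examine the first part of the query, as the blog post
    # describes, so the assistant can start forming an answer sooner.
    opening = query.lower().split()[:3]
    scores = {
        lang: sum(word in LANGUAGE_HINTS[lang] for word in opening)
        for lang in enabled
    }
    # On a tie, fall back to the first enabled language.
    return max(enabled, key=lambda lang: scores[lang])
```

For example, `guess_language("what is the weather", ["english", "spanish"])` would pick English from the opening words alone, before the rest of the query is processed.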
The History of Data Centers and Voice Traffic
Of course, smart speakers are not the first source of voice-based traffic to affect data centers. The telemarketing industry generates a significant volume of voice-over-internet protocol (VoIP) traffic, and many telemarketing companies use cloud-based tools such as customer relationship management (CRM) platforms.
These cloud-based tools offer several advantages, including reduced IT infrastructure costs and greater efficiency in everyday use and initial training. Telemarketing tools that work in the cloud depend on data centers that can handle fluctuations in traffic. Time zones and seasonal demand can both increase the traffic coming into call centers, which, in turn, affects the workloads data centers handle.
The main difference now is that voice traffic today isn’t primarily from VoIP services anymore, but from people doing voice searches with their smart speakers or assistants. As consumers continue to use smart assistants, the demand on data centers will increase, partially because Google and Amazon are among the brands that store past voice command information from users.
It’s possible to go into the device or app settings and delete that content, but many people likely will not take those steps, especially if they are not overly concerned with privacy and security.
Voice Searches Are Becoming More Common
Statistics about voice searches show people frequently use their voices to find out what they need to know. In 2018, there were already over a billion voice searches per month, and 58 percent of consumers surveyed said they had used voice search to find local business information within the previous year.
The demographics of voice search are changing as well. eMarketer predicts that an increasing number of voice searches will come from minors. By 2020, it forecasts that 2.2 million kids under age 11 will use smart speakers.
Even though kids may not be the ones buying smart speakers, they’re embracing them in substantial ways, which increases the overall amount of information data centers must handle. It’s easy to imagine a young person performing dozens of voice searches while researching a school history assignment, for example.
All signs suggest voice searches and smart speaker usage will continue to climb, especially as people use voice searches in instances where computers are not available, such as while driving.
These trends mean data center managers must continue to scale up to meet demand, both in the storage capacity and the AI processing power that voice technology requires.