Serverless 101: Why It Matters for Data Center Professionals

Jan. 8, 2018
Serverless computing services offered by cloud platforms are gaining attention. It’s important for data center professionals to understand what serverless computing is and why it matters.

Serverless computing has been gaining a lot of attention as major cloud vendors like Amazon Web Services and Microsoft Azure invest in this type of architecture.

First of all, the term “serverless” is a bit of a misnomer. Servers are absolutely still required; what’s different is the cloud execution model.

Think of serverless computing as a cloud utility model that dynamically manages the allocation of machine resources based on what’s actually being used. The big difference here is that all management, capacity planning, resource allocation, and the delivery of those resources happens transparently, hidden from the user, developer, or operator. From there, developers can design applications to be “purely serverless,” allowing the infrastructure to scale as it needs to. This can then be combined with microservices and containers to enhance DevOps and application delivery.

The concept of serverless really revolves around the ability for users, developers, and operators to run their code without ever having to manage the underlying infrastructure. Unlike traditional IaaS and even PaaS architectures, serverless capabilities allow developers to consume exactly the amount of compute the job requires. Whenever a pre-defined ‘event’ occurs that triggers the code, the serverless platform executes the task. The cool part is that the developer can simply accept the results without having to tell the infrastructure how many times these ‘events’ have to run.
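To make the event-driven model concrete, here is a minimal sketch of such a function in Python, written in the style of an AWS Lambda handler (the `handler` name and `(event, context)` signature follow Lambda’s Python convention; the event payload itself is hypothetical):

```python
# Minimal sketch of an event-driven serverless function. The platform,
# not the developer, decides when and how many times this runs.

def handler(event, context=None):
    """Invoked once per event; there is no server for the developer to manage."""
    # Hypothetical payload: an order ID delivered by some upstream event source.
    order_id = event.get("order_id", "unknown")
    return {"status": "processed", "order_id": order_id}

# Locally we can only simulate the platform delivering an event:
print(handler({"order_id": "A-1001"}))
```

In a real deployment, the trigger (an HTTP request, a queue message, a file upload) is configured on the platform side; the code itself stays this small.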

In short, it’s the utility consumption of cloud resources when creating applications.

How Serverless Supports Advanced Microservices

Serverless architectures help bolster the development of microservices. Instead of building very rigid applications, you’re able to assemble an app from single-purpose services. These services have common APIs that can be accessed by developers to work on their app. Services can then be powered by functions, which act as building blocks for the microservices.

Ultimately, you’d have developers who use and create microservices to design and build applications. Throughout the application build process, developers would call services from functions, or reference a library and grab a different function to develop a specific service. In a serverless architecture, developers would never have to consider the server infrastructure throughout the entire application creation process. This is the utility model of cloud and the key difference from traditional environments.
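As a toy illustration of that building-block idea, the sketch below assembles an “app” from two single-purpose functions. Everything here (the order shape, the catalog) is hypothetical; in a real serverless deployment, each function could be deployed and scaled independently:

```python
# Hypothetical sketch: an "app" composed from single-purpose functions,
# each of which could stand alone as a serverless function behind an API.

def validate(order):
    """Validation service: is this a well-formed order?"""
    return bool(order.get("item")) and order.get("qty", 0) > 0

def price(order):
    """Pricing service: look up a unit price and total the order."""
    unit_prices = {"widget": 3.50}          # hypothetical catalog
    return unit_prices[order["item"]] * order["qty"]

def checkout(order):
    """The 'app' simply composes the services; no infrastructure in sight."""
    if not validate(order):
        raise ValueError("invalid order")
    return {"item": order["item"], "total": price(order)}

print(checkout({"item": "widget", "qty": 2}))
```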

So, here’s the important question: What’s the big deal and do you really need it?

Serverless Architecture – Major Players

Major cloud providers have already invested serious dollars in serverless architecture. Microsoft Azure offers an architecture where you can focus on your apps, not your infrastructure. The platform promises vastly improved scaling to handle tens of thousands of concurrent functions almost instantly (within seconds), matching any workload without requiring scale configuration. Basically, this environment can react in near real-time. Then there’s the very important cost advantage.

Serverless computing is event-driven, and resources are allocated as soon as they’re triggered by an event or function. This means that you’re only paying when your code is running and you’re only charged for the time and resources it takes to execute your code, with sub-second billing. Within the Azure cloud, you can leverage a few serverless application functions. This includes compute, storage, database, security, cloud messaging, API management, analytics, and even workflow orchestration.
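A quick back-of-the-envelope calculation shows how this pay-per-execution model works. The rates below are illustrative placeholders, not actual AWS or Azure prices (real pricing varies by memory size and region), but the structure, a per-request charge plus a per-GB-second compute charge, mirrors how these platforms bill:

```python
# Hypothetical rates for a pay-per-execution billing sketch.
PRICE_PER_GB_SECOND = 0.0000166667   # placeholder, not a real published rate
PRICE_PER_REQUEST = 0.0000002        # placeholder, not a real published rate

def monthly_cost(invocations, avg_duration_s, memory_gb):
    """Cost = compute consumed while running + a small per-request fee."""
    compute = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations * PRICE_PER_REQUEST
    return compute + requests

# 1M invocations a month, 200 ms each, 512 MB of memory:
print(round(monthly_cost(1_000_000, 0.2, 0.5), 2))

# Zero invocations means zero cost -- idle time is never billed:
print(monthly_cost(0, 0.2, 0.5))
```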

Other cloud players have been working in the space of serverless compute for quite some time.

Amazon AWS was the first to release a cloud-based serverless architecture in 2014 with AWS Lambda. The AWS architecture promises no server management, very flexible scaling, high availability, and no idle capacity. Much like Azure, there is no need to pre- or over-provision capacity for things like compute and storage. For example, there is no charge when your code is not running. Within the AWS architecture, you can leverage compute, API proxy, storage, database, inter-process messaging, analytics, orchestration, and developer tooling.

In deploying serverless applications, developers and operators don’t need to be concerned with ensuring application fault tolerance and availability. Instead, serverless solutions like those from Azure or AWS handle all of these capabilities for you.

There are others in the serverless compute space as well. Google Cloud Platform offers Google Cloud Functions, and IBM has published OpenWhisk as an open source serverless platform. Others, like Iron.io, offer a serverless platform that aims to integrate microservices/microcontainer capabilities and asynchronous, job-based processing, all while supporting public, private, and multitenant cloud environments.

Serverless Architecture – Understanding Use-Cases

Serverless computing architectures are designed to improve cost controls and simplify how developers work with coding and their applications. From there, use cases around serverless compute can actually be pretty cool. Consider the following:

  • Web apps and backends. You can build powerful serverless web apps and backends designed to support things like web and even mobile requests. You can even design around chatbot requests for a variety of use-cases.
  • IoT backend. You can direct your IoT architecture to send messages to your serverless architecture which then integrates with analytics systems to call upon a function process.
  • Designing mobile backends. You can integrate various mobile backend functions via a serverless compute architecture. As Microsoft points out, a mobile application can capture an image, and then call an Azure Function to get an access token for uploading to blob storage. A second Azure Function is triggered by the blob upload and resizes the image to be mobile-friendly.
  • Data processing. Leveraging the flexibility of serverless storage and compute, you can design data-processing systems capable of operating in real-time.
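As a small sketch of the data-processing pattern above, the function below represents the per-record logic a platform would invoke once per incoming event (the IoT reading shape is hypothetical):

```python
# Per-record processing logic: in a serverless pipeline, the platform
# would invoke this once per incoming message. The reading shape is made up.

def process_reading(record):
    """Convert a raw sensor reading into a derived, analytics-ready value."""
    celsius = record["temp_c"]
    return {"device": record["device"], "temp_f": celsius * 9 / 5 + 32}

readings = [
    {"device": "sensor-1", "temp_c": 20.0},
    {"device": "sensor-2", "temp_c": 25.0},
]

# The platform would fan these out as individual events; locally we just map:
print([process_reading(r) for r in readings])
```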

There will ultimately be more use-cases as organizations figure out which apps can run in a serverless, cloud-ready architecture. Either way, there are some serious benefits to working with serverless computing, including cost control, resource provisioning, and vast improvements around developer productivity and application delivery.

A Few Words of Caution

Now, before you go out there into the vast clouds and sign up for a serverless compute architecture, there are some cautions to heed. First of all, even though AWS Lambda, introduced by Amazon in 2014, was arguably the first serverless offering, this is still a relatively new and emerging market.

To consider serverless, you really need to understand your application and use-case prior to deployment. Serverless architectures are well-suited for dynamically changing applications that constantly require varying levels of resources. If you have an app that’s going to sit dormant, serverless might not make sense: when a function hasn’t run for a while, the platform deallocates its resources, and the next invocation has to spin them back up. This “cold start” can add latency and cause performance issues.
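The cold-start effect can be illustrated with a toy model: the first invocation pays an initialization cost that later “warm” invocations skip. The 50 ms delay here is artificial, standing in for container and runtime spin-up:

```python
import time

_initialized = False  # module state survives across warm invocations

def handler(event):
    """Toy model: the first call pays an artificial init cost, later calls don't."""
    global _initialized
    if not _initialized:
        time.sleep(0.05)          # stand-in for runtime/container spin-up
        _initialized = True
    return {"echo": event}

t0 = time.monotonic()
handler("first")                  # cold: pays the init delay
cold_ms = (time.monotonic() - t0) * 1000

t0 = time.monotonic()
handler("second")                 # warm: init already done
warm_ms = (time.monotonic() - t0) * 1000

print(f"cold: {cold_ms:.1f} ms, warm: {warm_ms:.1f} ms")
```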

You also have to be very cautious about your resource utilization. Serverless compute, even though it’s super-efficient, isn’t a sea of endless resources. You will have limits. This is why workloads like high-performance computing (HPC) may not work very well with serverless storage or compute architectures. You might find more efficient and less expensive ways to deliver those HPC workloads and apps outside of serverless compute.

When working with serverless compute architectures, remember that functions run under strict execution time limits. Debugging, logging, and even analytics might be limited, and analyzing serverless code is going to be different from analyzing traditional server code. This could limit your ability to debug issues that span long periods of time.
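The execution-time cap also shapes how you write code: long-running work should watch its remaining budget and checkpoint before the platform cuts it off. This sketch uses an artificial 50 ms budget in place of a platform limit (AWS Lambda, for example, enforces a configurable maximum duration):

```python
import time

def process_batch(items, budget_s=0.05):
    """Process as many items as the time budget allows, then stop cleanly.

    The 50 ms default is artificial, standing in for a platform-imposed
    execution limit; real code would persist progress and resume later.
    """
    deadline = time.monotonic() + budget_s
    done = []
    for item in items:
        if time.monotonic() >= deadline:
            break                 # stop early; resume from a checkpoint later
        done.append(item * 2)
    return done

# May process fewer than 1000 items if the budget expires first:
print(len(process_batch(range(1000))))
```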

Finally, it’s important to understand security. Amazon lets you securely control access to your AWS resources with AWS Identity and Access Management (IAM), manage and authenticate the end users of your serverless applications with Amazon Cognito, and create private virtual networks that only you can access with Amazon Virtual Private Cloud (VPC). Similarly, Azure Active Directory provides cloud-based identity and access management, which developers can use to securely control access to resources and to manage and authenticate the users of their serverless apps.

However, because so much is controlled by the cloud provider, you will be very limited in the kinds of security tools you can deploy within the network or the end-point. For example, data loss prevention (DLP), intrusion prevention and detection (IPS/IDS), and even end-point detection and response (EDR) may not work with your serverless compute platform.

Final Thoughts

As we push forward into the digital age, the applications we develop will absolutely follow suit. This means they’ll be more agile, a lot more dynamic in nature, and support even more use-cases. Serverless compute architectures are aimed at the new generation of applications requiring dynamic resource controls and simplified management.

If you truly understand the type of application you have and your use-case, then a serverless architecture may absolutely be right for you. There are some great benefits to working with this type of elastic platform, and as the application and user become even more mobile, the market for serverless will likely grow in parallel.

About the Author

Bill Kleyman

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. Bill is currently a freelance analyst, speaker, and author for some of our industry's leading publications.
