By Alexandr Ivenin, System Technical Lead, ClearScale
Whether you’re developing cloud-native applications or modernizing existing apps, it’s good to have choices, particularly in terms of technologies like serverless and containers. After all, cost, time to release, app scalability, complexity, use case, and other factors can have a significant impact on the development process and project objectives. Some technologies are better at addressing these issues than others.
But how do you determine which approach to use? Both serverless and containers are cloud-based and deploy code consistently inside isolated, discrete environments. Both abstract apps from the underlying host environment, with the apps broken down and deployed as smaller components.
Both enable developers to build apps with less overhead and more flexibility than those hosted on traditional servers or virtual machines (VMs). And they allow for automating and dynamically scaling workloads.
Nonetheless, serverless and containers are different technologies. Understanding the basics of each, and the differences between them, can help you determine which is the better option for a specific project.
What is Serverless?
With serverless, computing resources are provided on-demand and managed behind the scenes by a cloud service provider (CSP). The CSP is responsible for all infrastructure management, including scaling, scheduling, patching, and provisioning.
This allows developers to focus on writing code rather than dealing with the underlying infrastructure. Iteration can be quicker because code ships without setup or provisioning steps. It’s also considered cost-effective: end users pay only for the resources used, never for idle capacity.
Amazon Web Services (AWS) introduced serverless computing in 2014 with AWS Lambda. Today, every leading CSP offers a serverless platform, but AWS Lambda remains one of the most popular. You simply upload your code along with its dependencies to Lambda as a function, and it is automatically deployed in a managed execution environment. You can then invoke the Lambda function from any app running on an AWS service. Lambda handles provisioning, deployment, and ongoing management for you.
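As a minimal sketch of what such a function looks like (assuming the Python runtime; the event shape and greeting logic here are purely illustrative, not from the article), a Lambda handler is just a function that receives an event and a context object:

```python
# Minimal AWS Lambda handler sketch (Python runtime).
# The "name" field in the event is a hypothetical example;
# real event shapes depend on the trigger (API Gateway, S3, etc.).
import json

def lambda_handler(event, context):
    # Lambda passes the trigger payload as `event`; `context` carries
    # runtime metadata such as the request ID and remaining time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You would upload this file (zipped, or packaged as a container image) and Lambda would invoke `lambda_handler` once per incoming event.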
Serverless architectures are well-suited for use cases around microservices, mobile backends, and data and event stream processing. Serverless is particularly useful if you need traffic pattern changes to be automatically detected and handled instantly.
What are Containers?
Containers are executable units of software in which code is packaged with its relevant environment variables, configuration files, libraries, and software dependencies. They serve as a way of partitioning a machine, or server, into separate user space environments.
Each container runs only one app and doesn’t interact with other partitioned sections on the machine. The individual containers share the host machine's operating system (OS) kernel with each other, but they each run as if they were the only system on the machine.
Any kind of app can be run in a container, and any containerized app will run the same way no matter where it is hosted. As such, they can easily be moved around and deployed wherever needed.
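To make the packaging idea concrete, here is an illustrative Dockerfile for a hypothetical Python web app. The base image, file names, and start command are assumptions for the sketch, not details from the article:

```dockerfile
# Illustrative Dockerfile for a hypothetical Python app.
# Base image, dependency file, and entry point are placeholders.
FROM python:3.12-slim

WORKDIR /app

# Install pinned dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how to run it.
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

The resulting image bundles the code with its runtime and libraries, which is why it behaves identically on a laptop, a test cluster, or a production cloud host.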
Because managing hundreds or even thousands of containers across a distributed system can be difficult, container orchestration is used to manage large volumes of containers throughout their lifecycle. There are various options, but one of the most popular is Kubernetes, an open-source project.
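A Kubernetes Deployment is a small example of what orchestration handles for you; the manifest below (image name and replica count are placeholders) asks Kubernetes to keep three identical copies of a container running and to replace any that fail:

```yaml
# Illustrative Kubernetes Deployment; the image and replica count
# are placeholder values, not from the article.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3                # keep three copies of the container running
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.0.0
          ports:
            - containerPort: 8000
```

Scaling up then becomes a one-line change to `replicas`, with Kubernetes doing the scheduling across machines.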
Among the key use cases for containers are microservices, hybrid and multi-cloud scenarios, and for modernizing apps so they can be migrated to the cloud. Containers are also a good choice if you need the flexibility to install and use software with specific version requirements. You can choose the underlying OS and have full control of the installed programming language and runtime version.
Containers also make it easy to move applications into test environments and then production. You can then deploy them into a private or public cloud.
Serverless and Container Differences
While the overviews of serverless and containers touch on some of these points, the most significant differences are worth calling out explicitly:
Cost – Containers run continuously, and CSPs charge for the reserved server capacity even when an app isn’t using it. With serverless, developers are charged only for the capacity an app actually consumes.
Maintenance – Containers are hosted in the cloud, but CSPs don’t update or maintain them; developers are responsible for managing and updating each container they deploy. A serverless architecture has no backend for developers to manage: the CSP handles all management and software updates for the servers that run the code.
Control and flexibility – Containers give developers full control of their apps. System settings must be configured manually, but developers gain greater flexibility in return: they can set policies and manage resources and security. With serverless, almost everything is managed by the CSP.
Physical machines – With serverless, CSPs provision server capacity as an app needs it, and no specific machine is assigned to a given function or app. With containers, each container resides on one machine at a time and uses that machine’s OS, although it can easily be moved to a different machine.
Scalability – In a container-based architecture, the number of containers deployed is determined in advance by the developer. In a serverless architecture, the backend scales automatically to meet demand.
Testing – Serverless apps are harder to test because the backend environment is difficult to replicate locally. Containers run the same wherever they are deployed, so a container-based app is easy to test before it goes to production.
Time of deployment – Containers take longer to set up than serverless functions because system settings, libraries, and similar details must be configured. Once configured, though, containers deploy in only a few seconds. Serverless functions are smaller than container microservices and aren’t bundled with system dependencies, so they can be deployed and invoked in milliseconds.
Guidance for Choosing Serverless or Containers
While the choice of serverless or containers will depend on a wide variety of factors and there are always exceptions, there is a simple rule of thumb. Choose containers and container orchestrators when you need flexibility, or after you migrate legacy services. Choose serverless when you need speed of development, automatic scaling, and significantly lowered runtime costs.
Containers are usually the preferred choice if you require full control over the environment. They let you select the root operating system, run software in any programming language, and retain full control over the software and its dependencies. This makes it easier to migrate legacy applications to the cloud, since the application’s original running environment can be replicated more closely.
Serverless is usually the way to go if you need to perform relatively simple processing on event streams. It’s easy to set up, even at large scale. You pay only for the time your serverless functions are actually running. There’s no infrastructure to manage, so your development team can focus on writing code and delivering business value.
Developers using serverless architecture can release and iterate new apps quickly, without worrying about whether the app can scale. If an app doesn’t see consistent traffic or usage, serverless will be more cost-efficient than containers because end users are only charged for resources used.
Combine Serverless and Containers
In some cases, it may make sense to run some workloads as serverless functions and deploy others in containers. Consider this scenario: you have a complex containerized system and also run certain ancillary tasks that are triggered by events. Instead of running those tasks in a container, you can break them out into serverless functions and offload that complexity from your containerized setup.
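A sketch of such an ancillary task, assuming it reacts to S3 “object created” notifications (the event parsing follows the standard S3 notification shape, while the side task itself is stubbed out as a placeholder):

```python
# Sketch of an ancillary task offloaded to a serverless function:
# a Lambda handler that reacts to S3 object-created events.
# The actual processing (e.g., thumbnailing) is stubbed out.

def handle_upload(event, context):
    """Collect the bucket/key pair for each uploaded object."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would fetch the object and perform the side task
        # here, e.g., generate a thumbnail or update a search index.
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}
```

The main containerized system never sees this work; the function scales with the event volume and costs nothing when no uploads arrive.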
You can also use containers to easily expand a serverless application. Serverless functions typically persist data to cloud storage services. You can mount these services as Kubernetes persistent volumes, which lets you integrate and share stateful data between serverless and container architectures.
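One way to wire this up on AWS (assuming the Amazon EFS CSI driver is installed in the cluster; the filesystem ID and sizes below are placeholders) is a PersistentVolume and claim over a shared EFS filesystem that Lambda functions can also write to:

```yaml
# Illustrative PersistentVolume/Claim backed by Amazon EFS via the
# efs.csi.aws.com driver; the volumeHandle is a placeholder ID.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany          # many pods can read/write the shared data
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
```

Pods that mount `shared-data-claim` then see the same files the serverless side produces, giving both architectures a common stateful layer.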
Once the decision is made to use serverless, containers, or both, there are still other decisions to make. These include selecting hosting (e.g., EC2 instances or AWS Fargate) and choosing a container orchestration service (e.g., Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS)). Watch for a future blog on those issues.
Meanwhile, if you’re interested in learning more about serverless and containers – or have a project that may have a use for either one or both – talk to ClearScale. We have extensive experience in helping customers across a wide range of industries develop solutions that employ the most efficient, cost-effective technologies and deliver exceptional business value.
Get in touch today to speak with a cloud expert and discuss how we can help.