Container technology and Kubernetes, an open-source container orchestration system, are gaining popularity for modern application development. According to Red Hat, approximately two-thirds of IT leaders are already running containers and using Kubernetes. Adoption of both technologies is projected to increase as organizations look to improve the portability and efficiency of their applications.

However, Kubernetes can be costly if not used effectively, and it is particularly expensive when managed in-house. That’s why it’s essential for development teams to optimize Kubernetes clusters and consider using a managed Kubernetes solution, like Amazon Elastic Kubernetes Service (EKS).

Regardless of the approach you take, though, it’s useful to know how to achieve Kubernetes cost optimization. To that end, here are four key tactics.

1. Improve Cluster Cost Visibility

Kubernetes costs fall along six dimensions:

  • Workloads
  • Infrastructure: compute, networking, and storage
  • Platform: EKS and ECR
  • DevOps: CI/CD and security
  • Observability: logging, metrics, and tracing
  • Operations: engineering

Within these dimensions, the best opportunities to cut costs come from the workloads, compute, EKS, and engineering buckets. But it doesn’t quite make sense to cut costs based on these categories alone. It’s better to think about costs at the cluster, pod or container, service or deployment, or namespace level. This is a more organized approach that aligns with how organizations deploy Kubernetes clusters in the real world.

With this framing in mind, best practices for increasing visibility include:

  • Tagging AWS resources and analyzing spend with native tooling, like Cost Explorer or AWS Cost and Usage Reports
  • Leveraging ecosystem tooling, such as Kubecost, StormForge, or CloudZero (a labeling sketch follows this list)
  • Remediating clusters according to feedback from tooling
  • Performing continuous cost monitoring
  • Implementing showback or chargeback for internal platform users
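
One concrete way to make workloads visible to cost tooling is to apply a consistent set of labels to every deployment so spend can be sliced by team, application, or environment. Below is a minimal sketch; the label keys (team, app, env) and all names are illustrative conventions, not anything Kubernetes or these tools require:

```yaml
# Hedged example: consistent labels for cost allocation.
# The keys (team, app, env) and all names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-api            # hypothetical service
  namespace: payments
  labels:
    team: payments
    app: checkout-api
    env: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout-api
  template:
    metadata:
      labels:                   # pod-level labels are what cost tools aggregate on
        team: payments
        app: checkout-api
        env: production
    spec:
      containers:
        - name: checkout-api
          image: example.com/checkout-api:1.0   # placeholder image
```

Tools like Kubecost can then break cluster spend down by these labels, which also makes showback and chargeback reports for internal platform users straightforward.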

Optimizing Kubernetes cluster costs is not a one-time exercise; you’ll need to keep evaluating your clusters over the long term. That’s why clear visibility into spending is crucial.

2. Application Profiling

Organizations commonly overprovision Kubernetes resources for their applications, often because IT teams don’t actually know how many resources their applications need. In many cases, teams deploy publicly available container images and rarely look back to see how well those choices align with their needs. The truth is that CPU and memory are rarely allocated correctly the first time around.

Ongoing application profiling allows you to fine-tune resource allocations as you go. Engineers can estimate what they need from a resource standpoint and then right-size afterward. Observability tools like Amazon CloudWatch make it easy to assess resource utilization through various data visualizations. Additional best practices for application profiling include load-testing applications to determine resource requirements at scale and then implementing limits wherever needed. At ClearScale, we recommend focusing on memory limits first.
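
To make this concrete, right-sizing ultimately lands in each container’s resource requests and limits. Here’s a minimal sketch; every value is a placeholder to be replaced with figures from your own profiling and load tests:

```yaml
# Hedged example: requests reflect observed steady-state usage,
# limits cap worst-case consumption. All values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: profiled-app                 # hypothetical workload
spec:
  containers:
    - name: app
      image: example.com/app:1.0     # placeholder image
      resources:
        requests:
          cpu: 250m                  # what the scheduler reserves for the pod
          memory: 256Mi
        limits:
          memory: 512Mi              # memory limit first, per the recommendation above
          # CPU limits are often omitted to avoid unnecessary throttling
```

Requests drive scheduling and bin-packing, so overestimating them is what quietly inflates cluster size.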

3. Autoscaling for Elasticity

Autoscaling is paramount when working with Kubernetes. There are two ways to think about autoscaling:

  • Horizontal or vertical pod autoscaling (HPA or VPA)
  • Cluster autoscaling (Cluster Autoscaler or Karpenter)

Pod autoscaling deals with autoscaling applications. This approach requires teams to right-size pods’ resource allocations on an ongoing basis (see application profiling above). Horizontal pod scaling is more common than vertical scaling, although VPA is popular for legacy applications. Engineers can use different targets to determine when HPA scaling should happen, e.g., when CPU utilization hits 80% or when HTTP requests per minute cross 15,000.
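
For instance, the CPU-utilization target mentioned above maps directly to an HPA manifest. A minimal sketch using the autoscaling/v2 API; the workload name is hypothetical:

```yaml
# Hedged example: scale a Deployment between 3 and 30 replicas
# based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-api-hpa            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout-api              # the workload to scale
  minReplicas: 3
  maxReplicas: 30
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80    # scale out when average CPU crosses 80%
```

Scaling on a metric like HTTP requests per minute works the same way but requires a custom or external metrics adapter (for example, the Prometheus Adapter) to feed that metric to the HPA.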

Cluster Autoscaler is an open-source project that scales the cluster itself. It uses EC2 Auto Scaling groups to scale compute, adding new instances when no available node has the capacity an application needs. The instances that get added to the cluster are determined by the configuration of the existing EC2 Auto Scaling group.
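
As an illustration of that wiring, Cluster Autoscaler is commonly configured to discover the cluster’s Auto Scaling groups by tag rather than by name. A hedged fragment of the autoscaler’s container spec; the cluster name my-cluster and the image tag are placeholders:

```yaml
# Fragment of a cluster-autoscaler Deployment's container spec.
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.30.0  # match your Kubernetes version
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --balance-similar-node-groups
      # Discover Auto Scaling groups tagged for this cluster:
      - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster
```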

Karpenter is another open-source autoscaler that can automatically add or remove compute resources. However, Karpenter dynamically chooses the compute resources best suited to the application’s needs. This is the key difference from Cluster Autoscaler: Karpenter provides more instance-type flexibility and better resource utilization overall.
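
To illustrate that flexibility, a Karpenter NodePool declares broad constraints and lets Karpenter pick the cheapest instance type that satisfies pending pods. A minimal sketch against the karpenter.sh/v1 API; the names and limits are placeholders, and it assumes a matching EC2NodeClass is defined separately:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default                           # placeholder name
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]   # prefer Spot, fall back to on-demand
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]         # let Karpenter choose across families and sizes
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default                     # assumes this EC2NodeClass exists
  limits:
    cpu: "1000"                           # cap total CPU the pool may provision
```

The wider the requirements, the more pricing and capacity options Karpenter can draw from, which is where the utilization gains come from.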

4. Optimizing Cluster Compute

Optimizing cluster compute really means optimizing EC2 or AWS Fargate resources. Fortunately, developers have options that affect pricing, capacity, and performance. It’s important to strike the right balance. One of the biggest choices to make is whether to go with self-managed node groups, managed node groups, or serverless infrastructure.

With self-managed node groups, organizations bring their own Auto Scaling groups running a custom AMI, and in-house engineers are responsible for patching and maintaining the underlying operating system. With Amazon EKS, AWS manages the Kubernetes control plane on the company’s behalf. From there, the two paths for the data plane are running EC2 compute in the user’s VPC or using AWS Fargate in an AWS-managed VPC.
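
To make the options concrete, here’s a hedged eksctl sketch that pairs a managed node group with a Fargate profile in one cluster; the cluster name, region, and namespace are placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster              # placeholder
  region: us-west-2
managedNodeGroups:                # AWS handles node provisioning and AMI lifecycle
  - name: general
    instanceType: m5.large
    minSize: 2
    maxSize: 6
fargateProfiles:                  # serverless data plane for selected namespaces
  - name: batch
    selectors:
      - namespace: batch-jobs     # pods in this namespace run on Fargate, no nodes to manage
```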

The right configuration depends on the target workload. There are more than 550 instance types, covering most business needs. Building an ideal list of instance types requires answering the following questions:

  • What processors can my workload use?
  • What are my workload’s performance requirements?
  • What is my workload’s consumption pattern?

For those that use EC2, Amazon offers multiple purchase options: On-Demand Instances, Savings Plans, and Spot Instances. More savings typically means less flexibility, so it’s up to IT leads to decide what to prioritize. We recommend using Savings Plans for steady-state workload capacity. If the ultimate priority is offloading operational IT burdens, then serverless containers on AWS Fargate are the answer. You can learn more about optimizing costs for Fargate in this blog post.
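
As one example of blending purchase options, steady-state capacity can sit in an on-demand node group (covered by a savings plan), while burst capacity runs on Spot across several interchangeable instance types. A hedged eksctl fragment extending the sketch above; names, types, and sizes are placeholders:

```yaml
managedNodeGroups:
  - name: steady-state            # baseline capacity; cover with a savings plan
    instanceType: m5.large
    minSize: 3
    maxSize: 3
  - name: burst-spot              # interruptible burst capacity at Spot pricing
    spot: true
    instanceTypes: ["m5.large", "m5a.large", "m4.large"]  # diversify to reduce interruptions
    minSize: 0
    maxSize: 10
```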

Get Started with Amazon EKS Today

For those who don’t want to manage Kubernetes clusters in-house, it’s time to consider Amazon EKS, a managed service that makes it easy to deploy, maintain, and scale containerized applications. Amazon EKS works for containerized applications in the cloud and on-premises. It also integrates with other popular AWS services, like EC2 and IAM, that are crucial for building out a robust container management capability.

At ClearScale, we can help you implement Amazon EKS for your unique container needs and help you achieve Kubernetes cost optimization. As an AWS Premier Tier Services Partner, we know how to use AWS container services like Amazon EKS to their full potential. More importantly, we understand that our technical solutions are a means to an end, whether that be to cut costs, boost revenues, accelerate innovation, or achieve some other outcome.

Get in touch today to speak with a cloud expert and discuss how we can help:

Call us at 1-800-591-0442
Send us an email at sales@clearscale.com
Fill out a Contact Form
Read our Customer Case Studies