For all its well-documented benefits, the cloud isn’t the place for all workloads. Some don’t perform the same in the cloud as they do in on-premise environments. Others may be restricted to an on-premise data center or on-premise private cloud due to regulatory or other requirements. That was the case for a recent ClearScale customer.
The company, a sports and entertainment ticketing company, wanted to migrate from its on-premise, bare metal infrastructure to the AWS cloud. However, data privacy and PCI DSS compliance issues required it to keep certain workloads on site. There were also concerns about performance degradation and latency in the cloud, not to mention the company’s investment in its on-premise legacy systems.
ClearScale was tasked with developing and implementing a hybrid IT strategy that would allow the company to move selected workloads to the cloud while keeping others onsite. Complicating the project, the migration had to be phased because each workload had its own requirements.
The customer also requested that ClearScale develop a solution that would enable it to run as many of its applications as possible both on-premise and in the cloud. In addition, the customer was interested in using microservices, a distinctive method of developing software systems, for its application development going forward. The customer also needed the solution to be cost-effective and to meet PCI DSS requirements.
Containers, Microservices and Kubernetes
ClearScale determined the solution should employ containerization. With containerization, applications are abstracted from the environment in which they actually run. This allows containerized applications, which typically serve as the deployment unit for microservices, to be deployed easily and consistently, regardless of the target environment.
Microservices structure applications as collections of loosely coupled services. This makes them easier to build, expand and scale. There’s no need to build and deploy an entirely new software version every time a function is changed or scaled.
ClearScale selected Kubernetes, a portable, extensible, open-source platform, to handle the containerized applications, along with Amazon Elastic Kubernetes Service (Amazon EKS). Amazon EKS runs the Kubernetes management infrastructure across multiple AWS availability zones to eliminate a single point of failure. It manages clusters of Amazon Elastic Compute Cloud (Amazon EC2) instances and runs containers on those instances with processes for deployment, maintenance, and scaling. Using Kubernetes allows for running any type of containerized applications using the same toolset on-premises and in the cloud.
It also increases infrastructure utilization through the efficient sharing of computing resources across multiple processes. By dynamically allocating computing resources to meet demand, Kubernetes could help the sports and entertainment ticketing organization avoid paying for computing resources it wasn't using.
The Solution Architecture
To meet the customer's requirements, the solution architecture includes production and non-production multi-tenant environments. Multi-tenancy generates cost savings but can also raise security concerns. That's why ClearScale designed the multi-tenant environments to consist of two virtual private clouds (VPCs). Because the production environments handle credit cardholder data, they are architected with specific technical and security controls to help ensure PCI DSS compliance. The solution also leverages AWS's certification as a PCI DSS 3.2 Level 1 Service Provider, the highest level of assessment available, as well as ClearScale's own experience in PCI-compliant solutions.
Each tenant is deployed to its own Amazon Elastic Kubernetes Service (Amazon EKS) cluster, avoiding the issues associated with policy-based segregation when multiple tenants share cluster resources. Each tenant also has its own stream processing cluster for building real-time data pipelines and streaming apps; an in-memory data structure store; and an Amazon Relational Database Service (Amazon RDS) database.
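The hard-isolation model described above can be sketched as a simple mapping from tenant to dedicated resources. The tenant and resource names below are illustrative only, not the customer's actual identifiers:

```python
# Sketch of per-tenant hard isolation: every tenant gets its own VPC,
# EKS cluster, stream processing cluster, in-memory store, and RDS
# database, so no cluster or network resources are ever shared.
# All names here are hypothetical.

def tenant_resources(tenant: str) -> dict:
    """Return the dedicated resource set for one tenant."""
    return {
        "vpc": f"vpc-{tenant}",
        "eks_cluster": f"eks-{tenant}",
        "stream_cluster": f"stream-{tenant}",
        "cache": f"cache-{tenant}",
        "rds_db": f"rds-{tenant}",
    }

# Two tenants never share an identifier, so segregation holds by
# construction rather than by policy.
a = tenant_resources("tenant-a")
b = tenant_resources("tenant-b")
assert set(a.values()).isdisjoint(b.values())
```

Isolation by construction, as opposed to policy-based segregation inside a shared cluster, removes an entire class of misconfiguration risk, which matters in a PCI DSS context.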
Amazon EKS makes it easy to run Kubernetes on AWS without the customer having to install and operate its own Kubernetes control plane. Each tenant's infrastructure also resides in a separate VPC for network-level segregation. Cluster Autoscaler adjusts the number of instances in an Auto Scaling group (ASG) when pods cannot be scheduled on the existing nodes. In addition, Amazon CloudWatch monitors ASG CPU usage and triggers scale-up or scale-down actions when alarm thresholds are crossed.
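The node-scaling decision can be reduced to simple capacity math. This is only a sketch: the real Cluster Autoscaler simulates the scheduler against node templates rather than dividing pod counts, and the numbers below are hypothetical:

```python
import math

def nodes_to_add(pending_pods: int, pods_per_node: int) -> int:
    """Simplified Cluster Autoscaler decision: when pods are left
    unschedulable, grow the ASG by enough nodes to fit them.

    The production autoscaler runs a scheduling simulation instead of
    this flat division; this captures only the core capacity math.
    """
    if pending_pods <= 0:
        return 0  # nothing waiting; no scale-up needed
    return math.ceil(pending_pods / pods_per_node)

# e.g. 25 unschedulable pods with room for 10 pods per node -> add 3 nodes
```

Because scale-up is driven by unschedulable pods rather than raw CPU alone, the cluster grows only when workloads actually need placement, which is what keeps the customer from paying for idle capacity.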
The solution also employs AWS Application Load Balancer (ALB) Ingress Controller for Kubernetes. The controller triggers the creation of an application load balancer and the necessary supporting AWS resources whenever an Ingress resource is created on the cluster. The Ingress resource uses the application load balancer to route HTTP or HTTPS traffic to different endpoints within the cluster.
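The routing behavior the Ingress resource declares can be sketched as a first-match rule table, mirroring how ALB listener rules are evaluated in priority order. The paths and service names below are hypothetical, not the customer's actual endpoints:

```python
# Sketch of ALB-style path routing: the Ingress resource declares
# path -> backend rules, and the load balancer forwards each matching
# HTTP(S) request to that backend. Rules and services are hypothetical.

RULES = [
    ("/api/tickets", "ticket-service"),
    ("/api/events", "event-service"),
    ("/", "frontend-service"),  # catch-all default rule, lowest priority
]

def route(path: str) -> str:
    """Return the backend service for a request path; first match wins."""
    for prefix, service in RULES:
        if path.startswith(prefix):
            return service
    raise LookupError(f"no rule matches {path!r}")
```

Ordering matters: the catch-all `/` rule must come last, just as a broad ALB listener rule must carry a lower priority than the specific ones.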
Helm is used to define, install, and upgrade Kubernetes applications through its packaging format, charts. Jenkins, an open source automation server, provides continuous integration and delivery (CI/CD) for the Docker image builds and deployments. The Horizontal Pod Autoscaler, implemented as a Kubernetes API resource and a controller, handles pod-level auto-scaling based on metrics-server. Using the Metrics API, the amount of resources currently used by a given node or pod can be identified.
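The Horizontal Pod Autoscaler's core rule, as documented by Kubernetes, is desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A minimal sketch of that calculation, with hypothetical numbers:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Horizontal Pod Autoscaler scaling rule from the Kubernetes docs:

        desired = ceil(current * currentMetric / targetMetric)

    The metric (e.g. average CPU utilization across the pods) is read
    from metrics-server via the Metrics API.
    """
    return math.ceil(current_replicas * current_metric / target_metric)

# e.g. 4 pods averaging 90% CPU against a 60% target -> scale to 6 pods;
# the same formula also scales down when load drops below the target.
```

The same formula drives both directions: when the observed metric falls below the target, the ratio drops under 1 and the replica count shrinks.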
Other solution components include HashiCorp’s Terraform, which is used for automating AWS resource provisioning. The open source tool codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
In conjunction with the new solution architecture, ClearScale worked with the sports and entertainment ticketing company to move selected workloads to a containerized environment in the cloud. A successful proof of concept using the initial workload validated the migration framework, workflows and processes. The remaining workloads have since been migrated and are performing better than they did on the legacy infrastructure.
Thanks to ClearScale’s experience in working with both Kubernetes and AWS services, the sports and entertainment ticketing company now has a hybrid IT strategy in place that provides it with flexibility, cost savings, optimal performance for its workloads, and the necessary security to meet its PCI DSS requirements. It’s also been able to take advantage of the wide range of third-party services natively integrated with Kubernetes.
It also now benefits from the use of microservices which, even when managed by separate teams, can be updated independently. Another benefit: security patches are easy to apply and test.
The company did keep selected applications running on its bare metal servers. However, ClearScale’s ability to design environments that meet its unique requirements has spurred it to consider a next phase of cloud migration.
For another example of ClearScale’s work using Kubernetes, read Creating PCI-Compliant SaaS Applications for the Cloud.
Get in touch today to speak with a Cloud expert and discuss how we can help: