The growth of cloud services, coupled with rising demand from development organizations, has created numerous opportunities for companies to leverage new technologies and services. These offerings let teams refine their engineering processes and adopt hybrid approaches that shorten development and deployment cycles, scale up as demand increases, and monitor the health of every part of the development pipeline.
Tools like Chef and Docker are popular because they are straightforward to use once properly configured, but they have limitations when it comes to detailed logging and control.
ClearScale, an AWS Premier Consulting Partner, recently had an opportunity to work closely with a client in the software integration space that provides services allowing companies to integrate across enterprise and legacy applications. The client needed an easy-to-use platform along with robust tooling for analyzing integrations, and realized their existing Chef and Docker solution had to evolve to meet the growing demands of their customer base. They challenged ClearScale to preserve the ease of deployment while providing greater control and more visibility into how deployments operated, with less communication overhead.
The ClearScale Solution
ClearScale knew that containerizing this approach was in our client’s best interest and began evaluating Kubernetes and AWS ECS as more suitable alternatives to the existing Chef and Docker implementation. Kubernetes is a container management solution that provides a complete managed execution environment: it not only handles the deployment and running of Docker containers, it also orchestrates them.
The advantage of AWS ECS was its tight integration with other AWS services, allowing for a robust end-to-end solution, and its high scalability. However, ECS relies on load balancers for service discovery. The Application Load Balancer (ALB) offers path- and host-based routing as well as internal or external connections. Kubernetes takes a different approach: only requests coming from outside the cluster pass through a load balancer, while internal services are reached through a virtual IP with no load balancer required.
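To illustrate the Kubernetes side of this contrast, a minimal sketch of an internal service follows, expressed as the dict/JSON body that would be submitted to the Kubernetes API. The service and label names are illustrative, not taken from the client's environment.

```python
# Hypothetical internal Kubernetes Service (names are illustrative).
# A "ClusterIP" Service gets a stable virtual IP that is reachable only
# from inside the cluster, so service-to-service calls need no load balancer.
internal_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "orders-api"},
    "spec": {
        "type": "ClusterIP",             # virtual IP, cluster-internal only
        "selector": {"app": "orders"},   # routes to pods carrying this label
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
```

Other pods in the cluster would reach this service through its virtual IP or cluster DNS name; only externally facing services would be fronted by a load balancer.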
AWS ECS excels at providing a plain and simple approach to a microservice architecture, especially when most of a client’s services need to be accessible from the Internet. In our client’s case, however, the microservice architecture relies heavily on service-to-service communication, and Kubernetes offers lower communication overhead.
An added benefit of this open-source solution was flexibility: ClearScale could run Kubernetes in AWS EC2 environments and Amazon Virtual Private Cloud (VPC), and the client could integrate with non-AWS services should they choose to do so in the future.
Deploying an OpenShift cluster as the application tier allowed ClearScale to pair OpenShift with Kubernetes pods, the units the container management system scales up and down based on demand. This gave Kubernetes control over all aspects of the application tier through the pods, from API gateways and traffic management through the service tier.
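Demand-based scaling of pods is typically expressed declaratively. The sketch below, with illustrative names and thresholds, shows the shape of a HorizontalPodAutoscaler body (Kubernetes `autoscaling/v2` API) that scales a deployment on CPU utilization; it is an example of the mechanism, not the client's actual configuration.

```python
# Hypothetical HorizontalPodAutoscaler body (autoscaling/v2 API).
# Kubernetes adds or removes pod replicas to hold average CPU near the target.
hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "orders-api-hpa"},
    "spec": {
        "scaleTargetRef": {          # the workload being scaled
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "orders-api",
        },
        "minReplicas": 2,            # keep at least two pods for availability
        "maxReplicas": 10,           # cap growth under peak demand
        "metrics": [{
            "type": "Resource",
            "resource": {
                "name": "cpu",
                "target": {"type": "Utilization", "averageUtilization": 70},
            },
        }],
    },
}
```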
In addition, ClearScale implemented Kube-monkey, a service that purposefully and randomly deletes Kubernetes pods; this encourages and validates fault-tolerant, failure-resilient application services.
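Kube-monkey targets only workloads that opt in via labels on their Kubernetes objects. A sketch of those opt-in labels follows; the values are illustrative, and the exact label set should be checked against the kube-monkey documentation for the version deployed.

```python
# Illustrative opt-in labels kube-monkey looks for on a workload.
# Only labeled workloads are eligible for random pod termination.
kube_monkey_labels = {
    "kube-monkey/enabled": "enabled",      # this workload may be targeted
    "kube-monkey/identifier": "orders-api",  # groups pods belonging to one app
    "kube-monkey/mtbf": "2",               # mean days between kill attempts
    "kube-monkey/kill-mode": "fixed",      # kill a fixed number of pods
    "kube-monkey/kill-value": "1",         # kill one pod per attack
}
```

Because the chaos testing is opt-in per workload, teams can enroll services gradually as they gain confidence in each service's resilience.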
One of the client’s goals was more robust logging of information about the integrations their customers use, driven largely by regulatory requirements in certain industries, such as the HIPAA Security Rule’s provisions on encrypting data at rest. To meet it, ClearScale implemented a broad-ranging approach to data gathering.
For server monitoring, Amazon Simple Notification Service (SNS) generated notifications when server events occurred, allowing active monitoring and instant reporting of critical issues. Amazon CloudWatch Logs aggregated application logs, while CloudWatch metrics tracked memory and volume usage, service states, and any other vital data point that could be generated and stored for analysis.
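A typical way to wire CloudWatch metrics to SNS is a metric alarm whose action publishes to an SNS topic. The sketch below builds such an alarm definition as a plain dict (the kind of parameters passed to boto3's CloudWatch `put_metric_alarm` call); the metric name, namespace, and thresholds are assumptions for illustration, not the client's actual values.

```python
def build_memory_alarm(topic_arn: str, instance_id: str) -> dict:
    """Build parameters for a CloudWatch alarm that notifies an SNS topic
    when instance memory use stays high. Names and thresholds are
    illustrative; the dict would be passed to CloudWatch's
    put_metric_alarm API via a client such as boto3."""
    return {
        "AlarmName": f"high-memory-{instance_id}",
        "Namespace": "CWAgent",            # custom memory metrics namespace
        "MetricName": "mem_used_percent",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                     # evaluate over 5-minute windows
        "EvaluationPeriods": 2,            # two consecutive breaches required
        "Threshold": 85.0,                 # percent memory used
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],       # SNS topic notified on ALARM state
    }
```

Requiring two consecutive evaluation periods above the threshold filters out momentary spikes, so SNS notifications correspond to sustained pressure rather than noise.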
With CloudTrail, every service used in the AWS environment that ClearScale configured for the client could be tracked and audited to determine not only which services were being used and when, but also by whom. Leveraging the AWS API call history, admins can perform resource change tracking, security analysis, and compliance auditing.
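As a sketch of the "who did what, when" query this enables, the helper below assembles the parameters for a CloudTrail event-history lookup filtered by user over a recent window (the kind of request passed to CloudTrail's `LookupEvents` API, e.g. via boto3). The attribute choices and window are illustrative assumptions.

```python
from datetime import datetime, timedelta

def build_lookup_request(username: str, days: int = 7) -> dict:
    """Parameters for querying recent API call history for one user.
    Illustrative sketch: this dict would be passed to CloudTrail's
    LookupEvents API through a client such as boto3."""
    now = datetime.utcnow()
    return {
        "LookupAttributes": [
            # Filter the event history to calls made by this IAM user.
            {"AttributeKey": "Username", "AttributeValue": username},
        ],
        "StartTime": now - timedelta(days=days),  # look back over the window
        "EndTime": now,
        "MaxResults": 50,                         # page size for results
    }
```

The same lookup shape supports other attribute keys (for example, filtering by event name or resource), which is what makes the change-tracking and compliance-audit queries described above practical.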
A final layer of environment validation was added in the form of New Relic, a software analytics package designed to provide real-time insight into any metric produced by the newly implemented OpenShift/Kubernetes solution in AWS. Actively gathering log data and analyzing it for security and resource usage is valuable, but actively monitoring the application as well as the OpenShift/Kubernetes clusters allowed administrators to understand in real time how the application was functioning at a holistic level.
Having Kubernetes containers execute and deploy builds within the application tier, with the entire operation managed by finely tuned functions, allowed ClearScale’s client to elevate their development process to the next level, reducing time-to-market and the costs of processes that Chef and Docker were not able to fully execute.
However, this robust implementation would not be as meaningful had the appropriate data logging and monitoring solutions not been put in place. Coupling these data-gathering mechanisms with a more robust container management solution like Kubernetes provided validation and a better understanding of the environment, giving our client room to grow with their customers’ demands without concern about being unable to scale quickly.
Get in touch today to speak with a cloud expert and discuss how we can help: