The WWF’s existing IT infrastructure and CMS didn’t offer the stability, scalability, or security needed to support a new site launch.
ClearScale implemented auto-scaling groups, migrated data to Amazon RDS, and set up a monitoring service to track resource usage.
The WWF now has a highly available IT environment capable of handling traffic spikes and delivering exceptional user experiences.
Amazon RDS, Amazon Elastic File System, Amazon EC2, Amazon CloudWatch, AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeDeploy, Amazon CloudFront
The World Wildlife Fund is the world’s leading conservation organization, committed to conserving nature and reducing the most pressing threats to the diversity of life on Earth. One of the ways it fulfills that mission is through education via channels such as the web and television.
One of the WWF’s recent projects was a website to support Our Planet, an original documentary series it created in conjunction with Netflix and Silverback Films. Development of the site included use of a new content management system (CMS). With the series scheduled to debut in April 2019, the website project and CMS implementation were on an extremely tight schedule.
The potential for heavy website traffic spikes, particularly during the series launch, created concerns about the WWF’s IT infrastructure that would power the CMS and website. At the time, applications ran on a single server, limiting the infrastructure’s stability, scalability, and security. Moving to an AWS Cloud environment would solve those issues, but the WWF needed a partner that could make it happen.
That partner had to be well-versed in AWS best practices for deploying a CMS to ensure optimal performance and security, particularly against DDoS and other external attacks. The partner also had to work quickly to meet the tight timeframes and be available to monitor the stability and security of the AWS environment during the critical website launch period. ClearScale, an AWS Premier Consulting Partner, was up for the challenges.
The ClearScale Solution — Phase One
In the first phase of the project, the ClearScale team assessed the WWF’s IT environment. Drawing on industry best practices and its own expertise, the team determined how best to optimize it for stability and security and made recommendations to the WWF. The team was then able to successfully implement the necessary changes and meet the tight schedule for the launch of the new series.
Throughout the critical launch period, the ClearScale team also remained on call to help with any issues that arose and to monitor the WWF’s environment for security and stability.
The ClearScale Solution — Phase Two
The next phase of the project was to ensure that the WWF had an adaptable environment that could scale and respond to unplanned, increased traffic events. The ClearScale team again drew on its expertise and vast experience to enhance the environment with a combination of AWS services and best practices.
The Database Side
Among the first tasks was to move the data tier to Amazon RDS for MySQL. The managed service takes on time-consuming database administration tasks including backups, software patching, monitoring, scaling, and replication.
Amazon RDS multi-Availability Zone (AZ) deployments provide enhanced availability and durability for the MySQL databases. Amazon RDS automatically creates a primary database instance and synchronously replicates the data to a standby instance in a different AZ. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby database instance.
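The Multi-AZ setup described above boils down to a few parameters on the RDS API. The following is a minimal sketch, not the WWF’s actual configuration: the instance identifier, instance class, storage size, and backup retention are all assumed values, and the boto3 call is shown commented out so the example stays self-contained.

```python
# Illustrative parameters for an Amazon RDS for MySQL Multi-AZ instance.
# All names and sizes are assumptions, not values from this project.
db_params = {
    "DBInstanceIdentifier": "cms-mysql",  # hypothetical identifier
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",     # assumed instance size
    "AllocatedStorage": 100,              # GiB, assumed
    "MultiAZ": True,              # synchronous standby in a second AZ
    "StorageEncrypted": True,     # encryption at rest
    "BackupRetentionPeriod": 7,   # days of automated backups
}

# With credentials configured, the instance would be created with:
# import boto3
# boto3.client("rds").create_db_instance(**db_params)
```

With `MultiAZ` enabled, the synchronous replication and automatic failover described above are handled entirely by the service; no application-side changes are needed beyond connecting via the instance endpoint.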
In addition, Amazon RDS provides high-level security for MySQL databases. This includes network isolation using Amazon Virtual Private Cloud (VPC) and encryption at rest and in transit. Amazon RDS also provides metrics for the database instances using Amazon CloudWatch, a monitoring and management service. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing a unified view of AWS resources, applications, and services.
For the application tier, the ClearScale team implemented Auto Scaling groups, allowing the environment to add or remove servers based on usage. When demand is high, servers are added automatically; when demand falls, the group scales back to a specified minimum.
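The scale-up/scale-down behavior above can be sketched as an Auto Scaling group paired with a target-tracking policy. The group name, subnet placeholders, size limits, and the 60% CPU target are illustrative assumptions, and the boto3 calls are commented out so the sketch stands alone.

```python
# Illustrative Auto Scaling group: a floor of two servers, a ceiling
# of ten for traffic spikes. All names and numbers are assumptions.
asg_params = {
    "AutoScalingGroupName": "cms-app-asg",         # hypothetical name
    "MinSize": 2,           # minimum the group scales back to
    "MaxSize": 10,          # ceiling for traffic spikes
    "DesiredCapacity": 2,
    "VPCZoneIdentifier": "subnet-aaa,subnet-bbb",  # placeholder subnets
}

# A target-tracking policy adds servers when average CPU rises above
# the target and removes them as load subsides.
policy_params = {
    "AutoScalingGroupName": asg_params["AutoScalingGroupName"],
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,  # assumed target, percent average CPU
    },
}

# With credentials configured, these would be applied with:
# import boto3
# autoscaling = boto3.client("autoscaling")
# autoscaling.create_auto_scaling_group(**asg_params)
# autoscaling.put_scaling_policy(**policy_params)
```

A target-tracking policy is one common way to express this behavior; step-scaling policies driven by CloudWatch alarms are an equally valid alternative.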
Amazon Elastic File System (Amazon EFS), another managed service, was integrated into the solution to provide parallel shared access to thousands of Amazon EC2 instances. This will enable applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies. Built to scale on demand to petabytes without disrupting applications, it will grow and shrink automatically as files are added or removed. As such, the applications will have the storage needed when it’s needed.
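Provisioning the shared file system amounts to a single API call plus a mount on each instance. The creation token, mount path, and file system ID below are placeholders, and the calls are commented out so the sketch remains self-contained.

```python
# Illustrative Amazon EFS file system shared across the application
# servers. Token and settings are assumptions, not project values.
efs_params = {
    "CreationToken": "cms-shared-fs",     # hypothetical idempotency token
    "PerformanceMode": "generalPurpose",
    "Encrypted": True,                    # encryption at rest
}

# With credentials configured, the file system would be created with:
# import boto3
# boto3.client("efs").create_file_system(**efs_params)

# Each EC2 instance in the Auto Scaling group would then mount it,
# e.g. via the EFS mount helper (fs-12345678 is a placeholder ID):
#   sudo mount -t efs fs-12345678:/ /var/www/shared
```

Because every instance mounts the same file system, servers added by the Auto Scaling group see the same content immediately, with no copy step at launch.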
Monitoring, Content Delivery, and More
Amazon CloudWatch, a monitoring service for AWS Cloud resources and the applications that run on AWS, was included to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in AWS resources. This will help provide system-wide visibility into resource utilization, application performance, and operational health.
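An alarm of the kind described above can be sketched as parameters to CloudWatch’s `put_metric_alarm` API. The alarm name, threshold, and evaluation windows are illustrative assumptions, and the call itself is commented out.

```python
# Illustrative CloudWatch alarm on EC2 CPU utilization. The alarm
# fires only after two consecutive 5-minute windows breach the
# threshold, filtering out momentary spikes. Values are assumptions.
alarm_params = {
    "AlarmName": "cms-high-cpu",          # hypothetical name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 300,             # seconds per evaluation window
    "EvaluationPeriods": 2,    # consecutive breaches before alarming
    "Threshold": 80.0,         # percent CPU, assumed
    "ComparisonOperator": "GreaterThanThreshold",
}

# With credentials configured, the alarm would be created with:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```

The same pattern applies to other metrics (request counts, database connections, free storage), with alarm actions wired to notifications or scaling policies.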
Another integral component of the solution is a continuous integration/continuous delivery (CI/CD) pipeline to help automate release pipelines for fast, reliable application and infrastructure updates. It is built from AWS managed services, including AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.
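The shape of such a pipeline can be summarized as three stages: source, build, and deploy. The repository, project, and application names below are placeholders; a real CodePipeline definition wraps each stage in the service’s fuller action schema, so this is only a structural sketch.

```python
# Structural sketch of the CI/CD pipeline: CodeCommit holds the
# source, CodeBuild compiles and tests, CodeDeploy rolls releases
# out to the Auto Scaling group. All names are placeholders.
pipeline_stages = [
    {"name": "Source", "provider": "CodeCommit", "repo": "cms-site"},
    {"name": "Build",  "provider": "CodeBuild",  "project": "cms-build"},
    {"name": "Deploy", "provider": "CodeDeploy", "application": "cms-app"},
]

# CodePipeline itself orchestrates the stages; the actual pipeline
# would be defined via its API or infrastructure-as-code, e.g.:
# import boto3
# boto3.client("codepipeline").create_pipeline(...)
```

A commit to the repository triggers the whole chain, so updates to the site reach production without manual release steps.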
Yet another key AWS service integrated into the ClearScale solution is Amazon CloudFront. This fast content delivery network (CDN) service securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. The CloudFront network has 180 points of presence (PoPs) and leverages the highly resilient Amazon backbone network for superior performance and availability for end users.
Both the series launch and the IT environment to support it were successful. The WWF also now has a plan for AWS infrastructure to meet its needs going forward. Scalable, secure, and requiring little hands-on management from the WWF team, it will help ensure the WWF’s applications are highly available, traffic spikes can be accommodated, and users’ experiences will continue to be exceptional.