Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 165 fully featured services from data centers globally, including compute, storage, databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications. Millions of customers – including the fastest-growing startups, largest enterprises, and leading government agencies – trust AWS to power their infrastructure, become more agile, and lower costs.
The Five Pillars
The Five Pillars covered in AWS Fundamentals come from the AWS Well-Architected Framework. The Well-Architected Framework is the distillation of over a decade of experience building scalable applications in the cloud.
The Five Pillars consist of the following areas: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.
The operational excellence pillar focuses on how you can continuously improve your ability to run systems, create better procedures, and gain insights.
When thinking about operational excellence in the cloud, it is useful to think of it in terms of automation.
Human error is a leading cause of defects and operational incidents. The more operations that can be automated, the less chance there is for human error.
In addition to preventing errors, automation helps you continuously improve your internal processes. Automated procedures promote a set of repeatable best practices that can be applied across your entire organization.
When you think of operations as automation, you want to focus your efforts on the areas that currently require the most manual work and where errors would have the biggest consequences. You’ll also want to have a process in place to track, analyze, and improve your operational efforts.
We will focus on the following two concepts for operational excellence:
- Infrastructure as Code
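The core idea of Infrastructure as Code is that infrastructure is declared in version-controlled templates rather than configured by hand, so every deployment is repeatable and reviewable. As a minimal sketch, a CloudFormation-style template can be generated programmatically; the resource name and AMI ID below are placeholders, not real values:

```python
import json

def make_template(instance_type: str) -> dict:
    """Build a minimal CloudFormation-style template as a Python dict.

    "WebServer" and the AMI ID are hypothetical placeholders -- a real
    template would reference a valid image for your region.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": instance_type,
                    "ImageId": "ami-EXAMPLE",  # placeholder, not a real AMI
                },
            }
        },
    }

# Because the template is data, it can be diffed, code-reviewed, and
# checked into version control like any other source file.
template_json = json.dumps(make_template("t3.micro"), indent=2)
print(template_json)
```

Changing the infrastructure then means changing the template and redeploying, rather than logging into a server and making a manual, untracked edit.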
The security pillar focuses on how to secure your infrastructure on the cloud. Security and compliance are a shared responsibility between AWS and the customer. In this shared responsibility model, AWS is responsible for the security of the cloud. This includes the physical infrastructure, software, and networking capabilities of AWS cloud services. The customer is responsible for security in the cloud. This includes the configuration of specific cloud services, the application software, and the management of sensitive data.
When thinking about security in the cloud, it is useful to adopt the model of zero trust.
In this model, all application components and services are considered discrete and potentially malicious entities. This includes the underlying network fabric, any agents that have access to your resources, and the software that runs inside your service.
When we think of security in terms of zero trust, it means we need to apply security measures at all levels of our system. The following are three important concepts involved in securing systems with zero trust in the cloud:
- Identity and Access Management (IAM)
- Network Security
- Data Encryption
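Under zero trust, IAM policies grant only the minimum permissions a component needs. As an illustrative sketch, the standard IAM policy document format can be built as plain data; the bucket name here is a hypothetical example:

```python
import json

def least_privilege_policy(bucket: str) -> dict:
    """Return an IAM policy document granting read-only access to a
    single bucket -- the zero-trust habit of granting only what is
    needed. The bucket name is a hypothetical example."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Only the one action this component actually performs.
                "Action": ["s3:GetObject"],
                # Scoped to objects in one specific bucket, not "*".
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            }
        ],
    }

policy = least_privilege_policy("example-app-assets")
print(json.dumps(policy, indent=2))
```

A policy like this assumes the caller is untrusted by default: anything not explicitly allowed – writes, deletes, other buckets – is denied.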
The reliability pillar focuses on how you can build services that are resilient to both service and infrastructure disruptions. Much like with performance efficiency, while the cloud gives you the means to build resilient services that can withstand disruption, it requires that you architect your services with reliability in mind.
When thinking about reliability in the cloud, it is useful to think in terms of blast radius. You can think of blast radius as the maximum impact that might be sustained in the event of a system failure. To build reliable systems, you want to minimize the blast radius of any individual component.
When you think in terms of blast radius, failure is no longer a question of if but of when. To deal with failure when it happens, the following techniques can be used to limit the blast radius:
- Fault Isolation
The performance efficiency pillar focuses on how you can run services efficiently and scalably in the cloud. While the cloud gives you the means to handle any amount of traffic, it requires that you choose and configure your services with scale in mind.
When thinking about performance efficiency in the cloud, it is useful to think of your services as cattle, not pets.
In the on-premises model of doing things, servers were expensive and often manually deployed and configured. It could take weeks before a server was actually delivered and physically plugged into your data center. Because of this, servers were treated like pets – each one was unique and required a lot of maintenance. Some of them even had names.
The cloud way of thinking about servers is as cattle. Servers are commodity resources that can be automatically provisioned in seconds. No single server should be essential to the operation of the service.
Thinking of servers as cattle gives us many performance-related benefits. In the “pet model” of managing servers, it is quite common to use the same type of server (or even the same server) for multiple workloads – it was too much of a hassle to order and provision different machines. In the “cattle model,” provisioning is cheap and quick, which gives us the freedom to select the server type that most closely matches our workload.
The “cattle model” also makes it easy for us to scale our service. Because every server is interchangeable and quick to deploy, we can quickly scale our capacity by adding more servers.
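Because every server in the “cattle model” is interchangeable, capacity planning reduces to dividing load by per-server throughput. A minimal sketch of that horizontal-scaling calculation, with hypothetical numbers:

```python
import math

def servers_needed(requests_per_second: int, capacity_per_server: int) -> int:
    """In the 'cattle' model, capacity scales by adding interchangeable
    servers. Return how many identical servers a given load requires,
    keeping at least one running."""
    return max(1, math.ceil(requests_per_second / capacity_per_server))

# Hypothetical load: 250 requests/second against servers that each
# handle 100 requests/second.
print(servers_needed(250, 100))  # 3
```

This is essentially what an auto-scaling policy does continuously: re-evaluate the load and add or remove interchangeable servers to match it.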
We will focus on the following two concepts for performance efficiency:
The cost optimization pillar helps you achieve business outcomes while minimizing costs.
When thinking about cost optimization in the cloud, it is useful to think of cloud spend in terms of OpEx instead of CapEx. OpEx is an ongoing pay-as-you-go model whereas CapEx is a one-time purchase model.
Traditional IT costs for on-premises data centers have been mostly CapEx. You pay for all of your capacity up front, regardless of whether you end up using it. Purchasing new servers could be a lengthy process that involved getting sign-off from multiple parties, because CapEx costs were often significant and mistakes costly. Even after you made a purchase, the actual servers could still take weeks to arrive.
In AWS, your costs are OpEx. You pay on an ongoing basis for the capacity that you use. Provisioning new servers can be done in real time by engineers without a lengthy approval process, because OpEx costs are much smaller and can be reversed if requirements change. Because you only pay for what you use, any excess capacity can simply be stopped and terminated. When you do decide to use a service, provisioning happens in seconds or minutes.
Going from a CapEx model to an OpEx model fundamentally changes your approach to costing your infrastructure. Instead of large upfront fixed costs, you think in small ongoing variable expenses.
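The CapEx-to-OpEx shift can be made concrete with a toy comparison. All figures below are hypothetical, purely to illustrate fixed up-front cost versus pay-as-you-go:

```python
def capex_cost(upfront: float) -> float:
    """CapEx: the full amount is paid up front, used or not."""
    return upfront

def opex_cost(hourly_rate: float, hours_used: float) -> float:
    """OpEx: you pay only for the hours you actually use."""
    return hourly_rate * hours_used

# Hypothetical numbers: a server bought outright versus an equivalent
# instance billed hourly, run 8 hours a day for 30 days.
print(capex_cost(3000.0))       # 3000.0, regardless of usage
print(opex_cost(0.25, 8 * 30))  # 60.0 for the month
```

The point is not the specific numbers but the shape of the cost: the OpEx figure tracks actual usage and drops to zero when the instance is stopped, while the CapEx figure is sunk the moment the purchase is approved.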
This pay-as-you-go model introduces the following changes to your cost optimization process:
- Pay For Use
- Cost Optimization Lifecycle
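A recurring first step in a cost optimization lifecycle is scanning usage data for resources you are paying for but barely using. A minimal sketch of that step; the resource names, utilization figures, and threshold are all hypothetical:

```python
def flag_underutilized(utilization: dict, threshold: float = 0.2) -> list:
    """Return the names of resources whose average utilization falls
    below `threshold`, as candidates for rightsizing or termination.
    Both the metric and the 20% cutoff are illustrative choices."""
    return sorted(name for name, used in utilization.items() if used < threshold)

# Hypothetical monthly average CPU utilization per resource.
usage = {"web-1": 0.65, "batch-1": 0.05, "db-1": 0.40, "test-1": 0.10}
print(flag_underutilized(usage))  # ['batch-1', 'test-1']
```

Feeding the flagged resources back into a review-and-rightsize step, then measuring again, is what turns this from a one-off cleanup into an ongoing lifecycle.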
About AWS Solutions Implementations
AWS Solutions Implementations help you solve common problems and build faster using the AWS platform. All AWS Solutions Implementations are vetted by AWS architects and are designed to be operationally effective, reliable, secure, and cost efficient. Every AWS Solutions Implementation comes with detailed architecture, a deployment guide, and instructions for both automated and manual deployment.
About AWS Solutions Consulting Offers
AWS Solutions Consulting Offers are vetted solutions to dozens of common business and technical problems, delivered via consulting engagements provided by AWS Competency Partners. All Consulting Offers provide customers up front with a list of what will be delivered by the consulting engagement, the requirements for the customer to participate in the engagement, and a diagram of the architecture that will be deployed into the customer’s account.