The cloud computing field is booming. Amazon Web Services (AWS) dominates the market with a thirty-three percent market share as of the fourth quarter of 2021, which is more than Microsoft Azure and Google Cloud combined.
If you're preparing for a cloud engineering interview, it’s a matter of when, not if, you get questions related to AWS. This article will go through some of the most commonly asked interview questions. With these questions in mind, you'll be better prepared to land that dream job.
What is Amazon Web Services?
AWS is the cloud vendor from Amazon that offers a multitude of services, including storage, networking, analytics, and more. One of the main benefits of using AWS is that it can help you save money on infrastructure costs.
Additionally, AWS can help you scale your business more easily than if you were to use on-premises infrastructure, as it's based on pay-as-you-go pricing.
What are Load Balancers?
Load balancers act as a reverse proxy to distribute incoming network traffic across a network of servers. The following are the two most prevalent types of load balancers:
Network Load Balancers, also known as Layer 4 (L4) load balancers, distribute traffic based on transport-layer protocols such as TCP and UDP. Network load balancing uses algorithms such as round robin, weighted round robin, least response time, and least connections.
Application Load Balancers, also known as Layer 7 (L7) load balancers, distribute traffic based on application-layer protocols such as HTTP. This type of load balancer supports more ways to distribute traffic, including cookie-based, SSL session-based, and HTTP header-based distribution, among other parameters.
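As an illustration, the round-robin algorithm mentioned above can be sketched in a few lines of Python. This is a minimal conceptual model, not a real load balancer, and the server addresses are placeholders:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin sketch: each request goes to the next server in turn."""

    def __init__(self, servers):
        self._servers = cycle(servers)  # endless iterator over the server pool

    def next_server(self):
        return next(self._servers)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.next_server() for _ in range(4)])
# -> ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Weighted round robin extends this by repeating higher-capacity servers in the rotation, while least-connections strategies require tracking live connection counts per server.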
The advantages of using load balancers include:
- Improved performance: By distributing the traffic evenly across servers, load balancers can improve the performance of a website or application.
- Reduced response time: Load balancers can reduce the response time of a website or application, since each server handles only a fraction of the total requests.
- Increased availability: Load balancers can increase the availability of a website or application. Even if one or more server instances are down, the application can still be accessed by directing traffic to the remaining servers.
What's the Difference Between Horizontal Scaling and Vertical Scaling?
Vertical scaling increases the resources of a single server, such as compute, storage, and I/O, enabling it to handle more traffic. Horizontal scaling adds servers to a load-balanced pool in order to handle more traffic.
What are the Differences Between SQL and NoSQL?
The main difference between SQL (Structured Query Language) and NoSQL is that SQL databases are based on a relational model. In contrast, NoSQL databases are based on a non-relational model.
SQL databases are easy to use and understand. They store data in a tabular format and are ACID (Atomicity, Consistency, Isolation, and Durability) compliant. They scale vertically, and sharding must be implemented manually.
NoSQL databases, on the other hand, can be more difficult to use and understand, as they support a dynamic schema for unstructured data. However, they are much easier to scale, as they support horizontal scaling and have built-in sharding.
What is Database Sharding?
Database sharding is a horizontal partitioning technique that splits a database into multiple smaller pieces called shards. Each shard is stored on a separate server and contains a subset of the data.
Sharding helps you improve the performance, scalability, and availability of database applications. It's built into NoSQL databases, and can be manually implemented in SQL databases.
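As a sketch, hash-based sharding (one common technique, not specific to any particular database) maps each record key to a shard with a stable hash. The shard count here is a hypothetical example:

```python
import hashlib

NUM_SHARDS = 4  # hypothetical fixed shard count

def shard_for(key: str) -> int:
    """Map a record key to a shard index using a stable hash (hash mod shard count)."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The same key always routes to the same shard, so reads know where to look.
print(shard_for("user:42") == shard_for("user:42"))  # True
```

One known drawback of this naive scheme is that changing `NUM_SHARDS` remaps almost every key; production systems often use consistent hashing to limit data movement when shards are added or removed.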
How Do EKS and ECS Compare?
Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service from AWS. EKS is fully compatible with the open source Kubernetes API and supports all of the major Kubernetes features, including storage, networking, security, monitoring, and logging.
Amazon Elastic Container Service (ECS) is a container management service that lets you run and manage Docker containers on Amazon EC2 machines. ECS provides simple APIs that abstract away the details of container management, making it easy to run and manage your applications at scale.
How Do AWS and OpenStack Compare?
Both AWS and OpenStack are cloud platforms that provide users with on-demand access to a pool of resources that can be used to build, test, and deploy applications. AWS, which consists of infrastructure managed and maintained by Amazon, is a more mature platform, and offers a wider range of services and features than OpenStack.
AWS is also better suited for large-scale deployments, and provides users with more control over their environment. OpenStack is a more customizable and flexible open source platform. It's ideal for those looking to build a private cloud, or who want more control over their cloud environment.
Here's a more in-depth comparison:
| Feature | OpenStack | AWS |
| --- | --- | --- |
| Virtual servers | Nova instances | EC2 |
| Scalability | Heat with scaling | AWS Auto Scaling |
| Load balancing | LBaaS | Elastic Load Balancing |
| API | OpenStack API | EC2 API |
What's the Difference Between Authentication and Authorization?
Authentication is verifying a user's identity. Authorization is granting access to resources to a user who is already authenticated.
For example, when you log in to a website, the website verifies your username and password against a database. This is authentication. Once you're logged in, the website checks to see if you have permission to view the page you're trying to access. If you do, you're authorized to view it. If you don't, you're not authorized, and you'll see an error message.
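The login example above can be sketched as two separate checks. The user store and permission table below are hypothetical in-memory placeholders, and a real system would store password hashes rather than plaintext:

```python
# Hypothetical in-memory stores for illustration only.
USERS = {"alice": "s3cret"}            # username -> password (real systems store hashes)
PERMISSIONS = {"alice": {"/reports"}}  # username -> pages the user may view

def authenticate(username: str, password: str) -> bool:
    """Authentication: is this user who they claim to be?"""
    return USERS.get(username) == password

def authorize(username: str, page: str) -> bool:
    """Authorization: may this (already authenticated) user access the resource?"""
    return page in PERMISSIONS.get(username, set())

print(authenticate("alice", "s3cret"))  # True: identity verified
print(authorize("alice", "/reports"))   # True: access granted
print(authorize("alice", "/admin"))     # False: authenticated, but not authorized
```

The key point the interview answer should land on is that the two checks are independent: a user can pass authentication and still fail authorization.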
What are the Principles of System Design?
There are many principles of system design, but some of the most important include maintainability, scalability, availability, efficiency, and reliability. Read our blog on nailing the system design interview to learn more about these principles.
What AWS Service is Commonly Used for Data Archiving?
Amazon Glacier is a low-cost storage service that's often used for data archiving and long-term backups.
What's an AWS CUR?
An AWS CUR (Cost & Usage Report) is a report that provides cost and usage information for your AWS account. It includes data on your AWS usage, charges, refunds, and credits. You can create a CUR from the AWS Billing and Cost Management console.
What is the Maximum Number of S3 Buckets You Can Create?
In an AWS account, you can create up to 100 S3 buckets by default. If you need more, you can request a service limit increase to raise your S3 bucket limit to a maximum of 1,000.
How Many Subnets Per VPC and VPCs Per Account or Region Can You Have?
A virtual private cloud (VPC) is a virtual network that is provisioned within a public cloud environment. A *subnet* is a range of IP addresses within a VPC in which you can launch instances and other resources.
By default, there's a limit of five VPCs per region, which can be raised by request. Each VPC can have up to 200 subnets.
What’s the Difference Between AWS Regions, Availability Zones, and Edge Locations?
AWS Regions are physical locations throughout the world where AWS operates data centers. These data centers are engineered to be isolated from failures in other AWS regions.
Availability zones are physically separated data centers within an AWS Region. *Edge locations* are endpoints for AWS, which are used for caching content through services like Amazon CloudFront.
What is Amazon CloudFront?
Amazon CloudFront is a content delivery network (CDN) offered by Amazon Web Services. It's a web service that speeds up the distribution of your static and dynamic web content to your users and reduces latency.
How Do You Upgrade or Downgrade a System With Near-Zero Downtime?
There are a few ways to upgrade or downgrade a system with near-zero downtime. These are the two most popular options.
Use a Blue/Green Deployment Strategy
With this strategy, you have two systems: one running the current version, and one running the new version. With the help of a load balancer, the incoming traffic is directed to the newer version, and then you can decommission the older version.
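A minimal sketch of the idea, with the two environments reduced to labels (the version names are placeholders):

```python
class BlueGreenRouter:
    """Sketch of a load balancer toggling all traffic between two environments."""

    def __init__(self):
        self.environments = {"blue": "app v1", "green": "app v2"}
        self.live = "blue"  # all traffic currently goes to the blue environment

    def route(self) -> str:
        return self.environments[self.live]

    def cut_over(self) -> None:
        """Switch every request to the other environment in a single step."""
        self.live = "green" if self.live == "blue" else "blue"

router = BlueGreenRouter()
print(router.route())  # app v1
router.cut_over()      # new version validated; flip the switch
print(router.route())  # app v2
```

Because the old environment stays up until you decommission it, calling `cut_over()` again gives you an instant rollback path.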
Use a Canary Release Strategy
With this strategy, you slowly roll out the new version to a small subset of users. Once you've verified and tested that the new version is working as expected, you can roll out the new version to all users. You can roll back during the initial tests if tests don’t go as expected.
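One common way to pick the canary subset is to hash each user ID into a bucket, so the same users consistently see the new version across requests. This is a sketch, and the 5 percent rollout figure is an arbitrary example:

```python
import hashlib

def in_canary(user_id: str, percent: int) -> bool:
    """Deterministically place roughly `percent`% of users in the canary group."""
    bucket = int(hashlib.sha256(user_id.encode("utf-8")).hexdigest(), 16) % 100
    return bucket < percent

users = [f"user-{i}" for i in range(1000)]
canary = [u for u in users if in_canary(u, 5)]
print(f"{len(canary)} of {len(users)} users get the new version")
```

Rolling out further is just a matter of raising the percentage; users already in the canary group stay in it, since their bucket value never changes.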
How is Stopping an EC2 Instance Different From Terminating an EC2 Instance?
Stopping an EC2 (Elastic Compute Cloud) instance is equivalent to powering off a machine. The instance remains in your account and can be restarted later; you're not billed for instance usage while it's stopped, though you still pay for any attached EBS storage. Terminating an EC2 instance permanently deletes the machine, and you will no longer be charged for it.
This article has covered many questions asked across various interviews dealing with AWS. One of the most important things to remember is that hands-on practice in the console, building and deploying services yourself, will give you more depth of knowledge and an edge over other candidates.
Practice questions related to cloud and system design from Exponent's practice questions to familiarize yourself with other important topics.
If you liked this article, you might also enjoy these:
- How to Get an Amazon Referral
- How to Prepare for a Solutions Architect Interview (Questions & Answers)
- How to Find a Great Software Engineer Recruiter
This article was written by Hrittik Roy.