Introduction to Elastic Kubernetes Service on AWS

Amazon Web Services (AWS) is a widely used cloud platform that provides an extensive mixture of infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) offerings. AWS services enable scalable, flexible, and cost-effective access to computing resources and infrastructure.

Amazon Elastic Kubernetes Service (EKS) is a popular AWS service that helps organizations simplify the management and orchestration of Kubernetes-based containerized applications. EKS runs upstream Kubernetes and is conformant with cloud-native tooling, so workloads running on EKS remain compatible with those running on standard Kubernetes deployments.

In this first article of a three-part series, we’ll learn about the EKS service, its different components, benefits, and use cases.

Deep dive into Amazon EKS

Amazon EKS simplifies the deployment, management, and scaling of Kubernetes applications both on-premises and in the AWS cloud. An EKS cluster consists of two Amazon Virtual Private Clouds (VPCs): an AWS-managed VPC that hosts the control plane, and a customer VPC that hosts the worker nodes. The EKS service relies on AWS infrastructure, such as load balancers, to provide resources and functionality to the cluster.

How Amazon EKS Works

As a containers-as-a-service (CaaS) offering, EKS automatically creates and scales the control plane and worker nodes, enabling managed cluster installation, operation, and maintenance.

Deploying a Kubernetes workload in an Amazon EKS cluster typically goes through the following workflow:

  • The process starts with creating an EKS cluster, which can be done in several ways, such as:
    • Using a language-specific software development kit (SDK)
    • Using the AWS Management Console
    • Using the AWS Command Line Interface (CLI) or eksctl, the official EKS CLI
    • Using an infrastructure-as-code (IaC) platform, such as AWS CloudFormation or Terraform
    • For example, the command for creating an EKS cluster with the AWS CLI would be similar to this:

      aws eks create-cluster --region <aws-region-code> \
        --name <my-cluster-name> \
        --kubernetes-version <any-k8s-version-supported-by-eks> \
        --role-arn arn:aws:iam::<aws-account-id>:role/<IAM-role> \
        --resources-vpc-config subnetIds=<subID1,subID2>
    • EKS offers four cluster deployment options. These are:

      • Amazon EKS: A fully managed service that enables teams to run Kubernetes workloads without having to install, operate, or maintain cluster resources
      • Amazon EKS on AWS Outposts: Enables teams to run EKS services in on-premises facilities
      • Amazon EKS Anywhere: Enables teams to create and operate Kubernetes clusters in on-premises facilities
      • Amazon EKS Distro: An open-source project that includes all resources and dependencies deployed by the EKS service
    • Provisioning an EKS cluster automatically deploys the managed control plane (master nodes), which handles resource requirements and networking for the worker nodes.
  • After the control plane is set up, the EKS cluster needs worker nodes to run the workloads. Worker nodes are provided either by a node group (a group of EC2 instances that can be provisioned to run workloads) or by AWS Fargate (for serverless workloads).
  • To deploy Kubernetes workloads onto EKS worker nodes, Kubernetes-native tools like kubectl must be configured to communicate with the cluster's Kubernetes API server. The EKS cluster can also utilize AWS Controllers for Kubernetes (ACK) to connect containerized applications with the underlying infrastructure and AWS-native services.
  • To leverage AWS controllers, the EKS control plane relies on the Cloud Controller Manager (CCM), which connects the cluster with the AWS API for seamless integration. The CCM differs from the Kubernetes controller manager (CM), which manages the state of the cluster by orchestrating its built-in controllers.
  • Kubernetes workloads are finally deployed to the cluster worker nodes. Administrators can interact with the control plane using the kubectl tool, which helps create, manage, and destroy deployment objects (a minimal sketch of these last steps follows this list). The EKS console provides detailed information about cluster objects and resources, making it easier to correlate AWS resource usage with cluster performance.
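
As a minimal sketch of the last two steps, assuming a cluster has already been created, the commands below point kubectl at the cluster and deploy a workload from a placeholder manifest file:

  # Update the local kubeconfig so kubectl can reach the EKS API server
  aws eks update-kubeconfig --region <aws-region-code> --name <my-cluster-name>

  # Confirm that the worker nodes have registered with the control plane
  kubectl get nodes

  # Deploy a containerized workload described in a local manifest (placeholder file name)
  kubectl apply -f deployment.yaml

  # Inspect the resulting objects
  kubectl get deployments,pods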

Components of an EKS cluster

A typical Kubernetes ecosystem relies on a complex framework of various components that enable a seamless orchestration of containers. Some of the primary components of an EKS cluster include:

EKS Virtual Private Cloud

An Amazon Virtual Private Cloud (VPC) allows EKS clusters to run production-grade workloads within an isolated virtual network. The VPC provides secure networking for workloads and gives an organization the flexibility to apply its own network ACLs and VPC security groups.

A VPC also helps isolate applications by restricting AWS resource sharing between different VPC networks, allowing organizations to build stable, highly available, and secure applications.
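
To check which subnets, security groups, and endpoint settings an existing cluster uses, the cluster's VPC configuration can be queried with the AWS CLI (the cluster name below is a placeholder):

  # Show the subnets, security groups, and endpoint access settings of a cluster
  aws eks describe-cluster --name <my-cluster-name> \
    --query "cluster.resourcesVpcConfig"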

EKS worker nodes

EKS worker nodes are the AWS compute instances where all workloads are deployed. All worker nodes are registered with the EKS cluster's control plane. Developers and administrators typically interact with worker nodes to manage Kubernetes workloads and perform activities such as deploying containerized applications or autoscaling them.

The worker nodes can either be an EC2 instance node group or a Fargate profile, both of which provide seamless, automated scalability. Managed node groups automate the provisioning of EC2 instances, while Fargate profiles enable the deployment of containerized workloads in a serverless environment.
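
As a hedged sketch, a managed node group can be added to an existing cluster with the AWS CLI; the cluster name, node group name, subnet IDs, node IAM role, and instance type are placeholders:

  # Create a managed node group of EC2 instances for an existing cluster
  aws eks create-nodegroup --cluster-name <my-cluster-name> \
    --nodegroup-name <my-nodegroup> \
    --subnets <subID1> <subID2> \
    --node-role arn:aws:iam::<aws-account-id>:role/<node-IAM-role> \
    --scaling-config minSize=1,maxSize=4,desiredSize=2 \
    --instance-types t3.medium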

Fargate profile

Fargate eliminates the overhead of managing worker nodes by providing a serverless environment for container orchestration. When EKS deploys workloads on Fargate, a Fargate profile enables administrators to determine which pods are executed on Fargate.

Administrators declare those pods using the profile's selectors. Each profile can have up to five selectors, and each selector must specify a namespace and can optionally include labels.
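
A minimal sketch of creating such a profile with the AWS CLI is shown below; the cluster, profile, pod execution role, subnets, namespace, and label are placeholders:

  # Create a Fargate profile that schedules matching pods onto Fargate
  aws eks create-fargate-profile --cluster-name <my-cluster-name> \
    --fargate-profile-name <my-fargate-profile> \
    --pod-execution-role-arn arn:aws:iam::<aws-account-id>:role/<pod-execution-role> \
    --subnets <private-subID1> <private-subID2> \
    --selectors namespace=<my-namespace>,labels={app=<my-app-label>}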

EKS control plane

The collection of master nodes in EKS is collectively referred to as the control plane, which is responsible for managing and monitoring new and existing worker nodes within the EKS cluster. The control plane handles critical container orchestration activities, such as scheduling containers or pods, monitoring pod availability, and holding cluster data.

The control plane runs within a VPC managed by AWS; it is unique to a cluster and cannot be shared between clusters. A typical control plane runs on at least three master nodes distributed across different AWS Availability Zones (AZs). These nodes host the Kubernetes cluster orchestration and management components, including etcd, the API server, the scheduler, and the controller manager.
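
Because the control plane is fully managed, administrators interact with it only through its API endpoint. As a sketch, assuming kubectl is already configured for the cluster and with a placeholder cluster name, connectivity and version can be checked as follows:

  # Show the API server endpoint the local kubeconfig points at
  kubectl cluster-info

  # Retrieve the endpoint, Kubernetes version, and status reported by EKS
  aws eks describe-cluster --name <my-cluster-name> \
    --query "cluster.{endpoint:endpoint,version:version,status:status}"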

EKS features

Key features of Amazon EKS include:

  • Managed node groups: EKS-managed node groups automate worker node creation and lifecycle management for an EKS cluster with a single command. These groups run EC2 instance nodes using EKS-optimized Amazon Machine Images (AMIs) in the AWS account, and during updates they gracefully drain and terminate nodes to preserve application availability.
  • Control plane logging: EKS integrates seamlessly with Amazon CloudWatch for managing control plane logs. Administrators can enable the export of audit and diagnostic logs from the control plane to the account's CloudWatch Logs for comprehensive monitoring and analysis (a hedged example follows this list). The available log types include:

    • API server component logs
    • Audit logs
    • Authenticator logs
    • Controller manager logs
    • Scheduler logs
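
As a sketch, all five log types can be enabled for an existing cluster with the AWS CLI; the cluster name is a placeholder:

  # Enable export of all control plane log types to CloudWatch Logs
  aws eks update-cluster-config --name <my-cluster-name> \
    --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'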

Important considerations

  • Public and private access points: By default, the Kubernetes API server endpoint used to connect to the cluster is publicly accessible. Although the public endpoint is secured using role-based access control (RBAC) and AWS Identity and Access Management (IAM) policies, administrators can enable private access for the endpoint to further restrict communication to within the VPC. They can either limit the IP addresses allowed to reach the API server or disable access to the endpoint from the public internet entirely (see the sketch after this list).
  • Open-source command-line interface tool: eksctl is a simple command-line tool for creating and managing EKS clusters and automating cluster management tasks. The eksctl create cluster command creates the required IAM roles and the base Amazon VPC used to manage the EKS control plane's network access.
  • Per-region cluster limits: By default, each AWS account can create up to 100 clusters per region.
  • Deployment: EKS clusters can be provisioned in several ways. These include the AWS console, using the eksctl CLI tool, and IaC solutions such as AWS CloudFormation or Terraform.
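
A minimal sketch of restricting API endpoint access with the AWS CLI is shown below; the cluster name and CIDR range are placeholders:

  # Restrict the public API server endpoint to a specific CIDR range
  aws eks update-cluster-config --name <my-cluster-name> \
    --resources-vpc-config publicAccessCidrs="203.0.113.0/24"

  # Alternatively, disable public access entirely and allow only private (in-VPC) access
  aws eks update-cluster-config --name <my-cluster-name> \
    --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true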

Why use Amazon EKS

According to a 2021 study, the majority of organizations surveyed place Kubernetes at the core of their IT strategy. EKS simplifies the end-to-end process of managing, operating, and maintaining Kubernetes workloads by abstracting away the overhead of setting up Kubernetes clusters. The platform benefits from the extensive AWS ecosystem of services and infrastructure to offer resources for highly available, scalable workloads.

Benefits of EKS

The advantages of using EKS for containerized workloads include:

Granular access control

EKS seamlessly integrates Kubernetes RBAC with IAM, allowing administrators to directly assign cluster RBAC roles to IAM entities. Each EKS cluster leverages an OpenID Connect (OIDC) issuer URL that uniquely identifies the cluster. The OIDC provider allows cluster administrators to seamlessly use IAM roles for service accounts.

It is also possible to assign IAM policies to service accounts, allowing for fine-grained, controlled access to entities like other containerized services, applications hosted outside of AWS, and AWS resources external to the cluster.
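
As a hedged sketch of IAM roles for service accounts, eksctl can associate the cluster's OIDC provider with IAM and bind a managed policy to a Kubernetes service account; the cluster name, namespace, and service account name are placeholders, and the S3 read-only policy is only an illustrative choice:

  # Register the cluster's OIDC issuer with IAM (required once per cluster)
  eksctl utils associate-iam-oidc-provider --cluster <my-cluster-name> --approve

  # Create a Kubernetes service account backed by an IAM role with the chosen policy
  eksctl create iamserviceaccount --cluster <my-cluster-name> \
    --namespace <my-namespace> \
    --name <my-service-account> \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
    --approve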

Enhanced security

EKS offers a secure Kubernetes configuration out of the box to enable comprehensive security for Kubernetes workloads. For enhanced cluster security, EKS integrates with various AWS services and AWS partner solutions such as AWS IAM, AWS Key Management Service (KMS), and Amazon VPC.
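
For example (a hedged sketch with placeholder names and key ARN), Kubernetes secrets can be envelope-encrypted with a customer-managed KMS key when the cluster is created:

  # Enable envelope encryption of Kubernetes secrets with a customer-managed KMS key
  aws eks create-cluster --name <my-cluster-name> \
    --role-arn arn:aws:iam::<aws-account-id>:role/<IAM-role> \
    --resources-vpc-config subnetIds=<subID1,subID2> \
    --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:<aws-region-code>:<aws-account-id>:key/<key-id>"}}]'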

The EKS service also meets the requirements of many compliance and regulatory frameworks defined for sensitive applications. Comprehensive security and regulatory compliance ensure data privacy and system integrity, making EKS suitable for sensitive and regulated Kubernetes applications.

Some of the frameworks EKS is compliant with include PCI-DSS, ISO, HIPAA, SOC, and HITRUST CSF.

High availability and resilience

EKS runs the Kubernetes master nodes across multiple AWS Availability Zones (AZs). This architecture enables EKS to automatically scale the control plane based on changes in workload demand. The zones are connected with redundant, high-throughput, low-latency networks, resulting in fault-tolerant and highly available clusters.

Serverless

EKS coupled with AWS Fargate enables a CaaS model that offers a serverless platform for running Kubernetes-based applications directly on AWS. Fargate lets organizations choose from predefined vCPU and memory combinations and pay only for the resources used, and it runs each pod in a logically isolated runtime environment, reducing resource conflicts between workloads. Fargate also eliminates the overhead of operational tasks such as provisioning and managing infrastructure, patching, upgrades, and security management.

Easy to deploy and use

EKS is fully managed, automatically creating the control plane and worker nodes based on workload demand. The service also offers multiple launch options (the AWS Management Console, eksctl, and IaC) and can be connected with on-premises clusters using AWS Outposts for hybrid deployments.

Compatible with Kubernetes-native tools

EKS supports all major Kubernetes add-ons and is fully compatible with Kubernetes-native tooling. This allows organizations to use community tools such as CoreDNS, the Kubernetes Dashboard, and the kubectl command-line tool alongside the AWS services offered out of the box.

Amazon EKS use cases

Common usage scenarios for EKS clusters include:

Deploying applications with tight AWS integrations

Kubernetes clusters typically rely on various supporting resources, like message queues, object stores, and databases, that are provisioned through a set of AWS-managed services. AWS Controllers for Kubernetes (ACK) offers a unified way of managing Kubernetes workloads and their related dependencies by allowing workloads running within EKS to connect directly with the broader ecosystem of AWS-managed services and infrastructure. With ACK, EKS application workloads do not need to define external cluster resources separately or run managed services inside the cluster.
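
As a hedged sketch, assuming the ACK S3 controller is installed in the cluster, an S3 bucket could be declared as a Kubernetes object and applied with kubectl; the bucket names are placeholders and the API version follows the controller's v1alpha1 CRD:

  # Declare an S3 bucket as a Kubernetes custom resource and apply it
  # (requires the ACK S3 controller; bucket names are placeholders)
  cat <<EOF | kubectl apply -f -
  apiVersion: s3.services.k8s.aws/v1alpha1
  kind: Bucket
  metadata:
    name: my-ack-bucket
  spec:
    name: my-ack-bucket-<unique-suffix>
  EOF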

Orchestrating service-based architectures

AWS App Mesh is a service mesh that enables seamless application-level networking by standardizing interservice communication in a microservices-based architecture. App Mesh also helps monitor and configure traffic flows, networking, and security for containerized microservices. Running service-oriented applications on EKS with App Mesh removes the need to implement special libraries, write custom code, or configure communication protocols for the services.

Compatibility with different container repositories

EKS can pull container images from both public and private registries used to store, deploy, and manage container images. The ability to work seamlessly with both types of registry makes it suitable for software teams running hybrid clusters that rely on multiple registries for their containerized applications.
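
For a private registry, a sketch of the usual approach is to store the registry credentials as a Kubernetes image pull secret; the secret name, registry URL, and credentials below are placeholders:

  # Store private registry credentials as an image pull secret in the cluster
  kubectl create secret docker-registry <my-registry-secret> \
    --docker-server=<private-registry-url> \
    --docker-username=<user> \
    --docker-password=<password>

Pods (or their service account) then reference the secret through imagePullSecrets so the kubelet can authenticate when pulling images from the private registry.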

Summary

The EKS service allows organizations to run conformant Kubernetes clusters in the cloud without extensive expertise. EKS enables the orchestration of workloads on self-managed, serverless, and managed worker node instances, allowing for flexibility in controlling and scheduling containerized workloads. With AWS Outposts and EKS Anywhere, administration teams can connect nodes on AWS, on-premises, and other cloud platforms for hybrid orchestration.

In this article, we delved into the components, features, benefits, and use cases of the managed EKS service from AWS. In the next articles of the series, we will explore other managed Kubernetes services and see how they compare on common points.
