- Introduction
- Preparing the AWS Cloud9 Environment
- Creating a Scalable Amazon EKS Cluster
- Building and Pushing Docker Images
- Implementing Traffic Management with AWS Load Balancer Controller
- Sample Application on Kubernetes
- Advanced Kubernetes Deployment Strategies
- Securing Amazon EKS Deployments
- Real-World Use Cases
- Conclusion
Introduction
With the growing reliance on containerized applications, Kubernetes has become a cornerstone of cloud-native deployment strategies. Among managed Kubernetes services, Amazon Elastic Kubernetes Service (EKS) offers seamless integration with AWS resources, providing a scalable and resilient environment for running Docker applications. This article walks through deploying Docker containers on an Amazon EKS cluster, emphasizing best practices for configuration, scaling, and security.
Preparing the AWS Cloud9 Environment
AWS Cloud9 is a cloud-based integrated development environment (IDE) that integrates with AWS services, making it a practical platform for managing Amazon EKS clusters. Before setting up the EKS cluster, we configure Cloud9 to include essential tools and permissions.

Cloud9 provides a browser-based IDE with a built-in terminal, code editor, and the AWS CLI, simplifying the setup and management of Kubernetes resources.
Step 1: Configure Cloud9
Start by setting up the environment with the necessary tools:
sudo yum -y install jq gettext bash-completion moreutils
Next, we enable IAM role support in the Cloud9 environment:
# Capture the AWS account ID and define a role name for the Cloud9 instance
export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export ROLE_NAME="Cloud9AdminRole"
# Disable Cloud9's temporary managed credentials so the instance role is used instead
aws cloud9 update-environment --environment-id $CLOUD9_ENVIRONMENT_ID --managed-credentials-action DISABLE
# Create the role (trust-policy.json must allow EC2 to assume it) and attach permissions
aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
This setup ensures that the Cloud9 environment can securely interact with EKS, providing the necessary permissions.
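The trust-policy.json referenced above must allow the EC2 service (which hosts the Cloud9 instance) to assume the role. A minimal sketch of such a trust policy, written out with a heredoc:

```shell
# Minimal trust policy letting EC2 assume the role (a sketch; tighten as needed)
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
```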
Creating a Scalable Amazon EKS Cluster
To manage containerized applications at scale, an Amazon EKS cluster needs to be configured with the right settings. A balanced approach ensures scalability, cost-efficiency, and high availability.
Step 2: Create the EKS Cluster
Use the eksctl command to create an EKS cluster with managed node groups:
eksctl create cluster \
--name prod-cluster \
--region us-west-2 \
--nodegroup-name primary-nodes \
--node-type m5.large \
--nodes 3 \
--nodes-min 2 \
--nodes-max 5 \
--managed \
--version 1.29
This command sets up an EKS cluster named "prod-cluster" in the us-west-2 region. It creates a managed node group of m5.large instances, which provide a good balance of CPU and memory for a variety of workloads.

This configuration ensures that the EKS environment can scale based on workload demands, with a minimum of 2 nodes and a maximum of 5.
Cluster Configuration Explained
- Node Type: The m5.large instance type offers versatility for different workloads. For development or testing, t3.medium instances might be sufficient, while compute-intensive applications may require c5.xlarge.
- Managed Nodes: Managed node groups automate updates and node management, keeping the cluster current with minimal manual intervention.
- Scaling Options: Setting a node-scaling range (minimum of 2, maximum of 5) allows for auto-scaling in response to workload changes.
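The same cluster can also be defined declaratively in an eksctl config file, which is easier to version-control. A sketch equivalent to the command above:

```yaml
# cluster.yaml — declarative equivalent of the eksctl command shown earlier
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: prod-cluster
  region: us-west-2
  version: "1.29"
managedNodeGroups:
- name: primary-nodes
  instanceType: m5.large
  desiredCapacity: 3
  minSize: 2
  maxSize: 5
```

It can then be applied with `eksctl create cluster -f cluster.yaml`.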
Production Considerations
For production environments, the following configurations enhance resilience, performance, and cost-efficiency:
- Multi-AZ Deployments: Distribute nodes across multiple Availability Zones (AZs) for increased fault tolerance.
- Cluster Autoscaler: Automatically adjust node counts based on pod requirements, optimizing resource allocation.
- Pod Disruption Budgets: Define budgets to prevent simultaneous disruptions to critical services during updates.
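A Pod Disruption Budget for the backend might be sketched as follows (the selector label is an assumption matching the sample application):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-backend-pdb
  namespace: production
spec:
  # Keep at least one backend pod running during voluntary disruptions
  minAvailable: 1
  selector:
    matchLabels:
      app: app-backend
```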
Building and Pushing Docker Images
Containerization with Docker facilitates the packaging and deployment of applications, and Amazon Elastic Container Registry (ECR) provides a secure, scalable repository for storing these images.

In this section, we build Docker images and push them to ECR; the cluster then pulls these images, such as the backend service's image, at deployment time.
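For reference, a minimal backend Dockerfile might look like the following (a sketch assuming a Node.js service; adjust the base image and commands to your stack):

```dockerfile
# Hypothetical backend image; base image and app layout are assumptions
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 80
CMD ["node", "server.js"]
```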
Setting Up ECR
Create ECR repositories for storing Docker images:
aws ecr create-repository --repository-name app-backend --region us-west-2
aws ecr create-repository --repository-name app-frontend --region us-west-2
Set environment variables for the ECR repository URIs:
export ECR_URI_BACKEND=$(aws ecr describe-repositories --repository-names app-backend --query 'repositories[0].repositoryUri' --output text)
export ECR_URI_FRONTEND=$(aws ecr describe-repositories --repository-names app-frontend --query 'repositories[0].repositoryUri' --output text)
Automating the Docker Workflow
Automating the Docker build and push process ensures consistency and reduces the chance of human error. Build the images and push them to ECR:
# Build the images (run each build from the corresponding service's directory)
docker build -t app-backend .
docker build -t app-frontend .
# Tag the images with their ECR repository URIs
docker tag app-backend:latest $ECR_URI_BACKEND:latest
docker tag app-frontend:latest $ECR_URI_FRONTEND:latest
# Authenticate Docker with the ECR registry; one login covers all repositories in it
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin $ECR_URI_BACKEND
# Push the tagged images
docker push $ECR_URI_BACKEND:latest
docker push $ECR_URI_FRONTEND:latest
This automated approach integrates easily into CI/CD pipelines, ensuring continuous delivery of updated containers.
Implementing Traffic Management with AWS Load Balancer Controller
Managing network traffic to an EKS cluster requires configuring load balancers. The AWS Load Balancer Controller facilitates this by integrating Application Load Balancers (ALB) with Kubernetes Ingress.
Controller Configuration Steps
Install the AWS Load Balancer Controller using Helm:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
--set clusterName=prod-cluster \
--set serviceAccount.create=false \
--set region=us-west-2 \
--namespace kube-system
This command deploys the controller in the kube-system namespace, linking it to the EKS cluster named "prod-cluster" in the us-west-2 region. Because serviceAccount.create=false is set, the chart expects a service account for the controller (typically aws-load-balancer-controller, with the controller's IAM policy attached via IRSA) to already exist in kube-system.
Using Ingress Rules
Define Ingress rules in a YAML file to route traffic to different services within the cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: production
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
spec:
  rules:
  - host: "myapp.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-backend
            port:
              number: 80
This configuration directs external traffic for myapp.example.com to the "app-backend" service; the ALB provisioned by the controller routes incoming requests into the cluster according to the defined rules.
Sample Application on Kubernetes
Deploying a sample application on Kubernetes demonstrates the end-to-end process of containerizing, deploying, and managing services within an EKS cluster.

The sample application consists of two services: a frontend and a backend, each running in separate pods within the EKS Kubernetes cluster. The frontend service communicates with the backend service to provide a seamless user experience.
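A minimal Deployment and Service manifest for the backend might look like the following (a sketch; the image URI placeholder and labels are assumptions based on the earlier steps):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-backend
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app-backend
  template:
    metadata:
      labels:
        app: app-backend
    spec:
      containers:
      - name: app-backend
        # Replace with the ECR URI captured earlier ($ECR_URI_BACKEND)
        image: <account-id>.dkr.ecr.us-west-2.amazonaws.com/app-backend:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-backend
  namespace: production
spec:
  selector:
    app: app-backend
  ports:
  - port: 80
    targetPort: 80
```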
Advanced Kubernetes Deployment Strategies
When deploying applications to Kubernetes, several strategies can improve availability, scalability, and resource efficiency.
Horizontal Pod Autoscaling
Configure autoscaling based on resource usage (this requires the Kubernetes Metrics Server to be installed in the cluster):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-autoscaler
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
Zero-Downtime Deployments
To achieve zero downtime during updates, use rolling updates with strategies that gradually replace old versions:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
    maxSurge: 1
Multi-Environment Resource Management
Use namespaces to manage multiple environments in the same cluster:
apiVersion: v1
kind: Namespace
metadata:
  name: staging
This approach helps separate resources and manage quotas for different teams or stages of development.
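Quotas can then be attached per namespace; a sketch with illustrative values (adjust to your team's capacity needs):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    # Aggregate limits across all pods in the staging namespace
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```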
Securing Amazon EKS Deployments
Ensuring the security of an EKS cluster involves implementing robust IAM roles, network policies, and secrets management.
IAM Roles and Policies
Use IAM Roles for Service Accounts (IRSA) to control pod-level permissions:
eksctl create iamserviceaccount \
--name eks-app-sa \
--namespace production \
--cluster prod-cluster \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--approve \
--override-existing-serviceaccounts
Network Policies for Enhanced Security
Implement network policies to control traffic between pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: app-backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: app-frontend
Managing Secrets in Kubernetes
Store sensitive information securely using Kubernetes Secrets:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: production
type: Opaque
data:
  username: xxxxxxxx # base64 encoded
  password: xxxxxxxx
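The data values must be base64-encoded before being placed in the manifest, for example:

```shell
# Encode a credential value for the Secret's data fields
echo -n 'admin' | base64
# → YWRtaW4=
```

Note the -n flag: without it, the trailing newline would be encoded too.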
Use environment variables to inject secrets into containers.
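Injecting the secret as environment variables in a container spec might look like this (a fragment; the container name follows the sample application):

```yaml
containers:
- name: app-backend
  env:
  # Each variable pulls one key from the db-credentials Secret
  - name: DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password
```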
Real-World Use Cases
Healthcare Data Processing
Use EKS to deploy AI models for real-time patient diagnostics, with data privacy ensured by processing sensitive information within the cluster.
Financial Real-Time Analysis
Deploy fraud detection models as microservices, scaling automatically during high-traffic periods to maintain low latency.
Industrial IoT Applications
Leverage EKS for processing large volumes of sensor data at the edge, detecting anomalies and optimizing maintenance schedules.
Conclusion
Deploying Docker applications on Amazon EKS offers a powerful approach for managing modern workloads. Through strategies such as autoscaling, secure configurations, and advanced deployment methods, businesses can leverage Kubernetes to build scalable, resilient, and secure environments that meet the demands of various industries.
Further Reading
- Amazon EKS Documentation
- Kubernetes Documentation
- AWS Load Balancer Controller
- Kubernetes Best Practices
- Kubernetes Security Guide
- Kubernetes Autoscaling
- Kubernetes Network Policies
- Kubernetes Secrets Management
- Kubernetes Ingress Controllers
- Kubernetes Namespace Management
- Kubernetes Zero-Downtime Deployments
- Kubernetes Multi-Environment Management
- Kubernetes Real-World Use Cases
- Kubernetes Production-Ready Clusters
- Kubernetes Monitoring and Logging
- Kubernetes Troubleshooting Guide