Overview

The EKS module creates a production-ready Amazon EKS (Elastic Kubernetes Service) cluster with a managed node group, IAM roles with required policies, and an OIDC provider for IAM Roles for Service Accounts (IRSA).

Features

Managed Node Groups

Auto-scaling node groups with configurable instance types and capacity

OIDC Provider

Automatic OIDC setup for IRSA (IAM Roles for Service Accounts)

IAM Roles

Pre-configured cluster and node group IAM roles with required policies

Control Plane Logging

Optional CloudWatch logging for API server, audit, and other components

Private Endpoint

Cluster endpoint accessible only within VPC by default

Multi-AZ Support

Node groups deployed across multiple availability zones

Usage Examples

Basic Configuration

module "eks" {
  source = "[email protected]:opsnorth/terraform-modules.git//eks?ref=v1.0.0"

  cluster_name       = "dev-eks-cluster"
  kubernetes_version = "1.34"
  private_subnet_ids = module.vpc.private_subnet_ids

  primary_node_group_instance_types = ["m5.large"]
  primary_node_group_desired_size   = 2
  primary_node_group_min_size       = 1
  primary_node_group_max_size       = 3

  tags = {
    Environment = "dev"
  }
}

Production Configuration

module "eks" {
  source = "[email protected]:opsnorth/terraform-modules.git//eks?ref=v1.0.0"

  cluster_name       = "prod-eks-cluster"
  kubernetes_version = "1.34"
  private_subnet_ids = module.vpc.private_subnet_ids

  # Enable all control plane logs
  cluster_log_types = [
    "api",
    "audit",
    "authenticator",
    "controllerManager",
    "scheduler"
  ]

  # Production-sized nodes
  primary_node_group_instance_types = ["m5.xlarge", "m5.2xlarge"]
  primary_node_group_capacity_type  = "ON_DEMAND"
  primary_node_group_desired_size   = 3
  primary_node_group_min_size       = 3
  primary_node_group_max_size       = 10
  primary_node_group_disk_size      = 100

  # Node labels for workload placement
  primary_node_group_labels = {
    "workload" = "general"
    "tier"     = "production"
  }

  tags = {
    Environment = "production"
    CostCenter  = "engineering"
  }
}

With VPC Module

Compose with the VPC module:

module "vpc" {
  source = "[email protected]:opsnorth/terraform-modules.git//vpc?ref=v1.0.0"

  vpc_name           = "eks-vpc"
  vpc_cidr           = "10.0.0.0/16"
  aws_region         = "us-east-1"
  availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]

  public_subnet_cidrs  = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  private_subnet_cidrs = ["10.0.11.0/24", "10.0.12.0/24", "10.0.13.0/24"]

  eks_cluster_name = "my-eks-cluster"
}

module "eks" {
  source = "[email protected]:opsnorth/terraform-modules.git//eks?ref=v1.0.0"

  cluster_name       = "my-eks-cluster"
  kubernetes_version = "1.34"
  private_subnet_ids = module.vpc.private_subnet_ids
}

Spot Instances for Cost Savings

module "eks" {
  source = "[email protected]:opsnorth/terraform-modules.git//eks?ref=v1.0.0"

  cluster_name       = "dev-eks-cluster"
  kubernetes_version = "1.34"
  private_subnet_ids = module.vpc.private_subnet_ids

  # Use Spot instances for non-production
  primary_node_group_capacity_type = "SPOT"
  
  # Multiple instance types for better Spot availability
  primary_node_group_instance_types = [
    "m5.large",
    "m5a.large",
    "m5n.large"
  ]

  primary_node_group_desired_size = 2
  primary_node_group_min_size     = 1
  primary_node_group_max_size     = 5

  tags = {
    Environment = "dev"
  }
}
Spot Instances can be reclaimed with only a two-minute interruption notice. Use them only for fault-tolerant workloads or non-production environments.
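EKS managed node groups label their nodes with eks.amazonaws.com/capacityType, so fault-tolerant workloads can be steered onto Spot capacity with a node selector. A minimal sketch (the Deployment name, namespace, and image are illustrative placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker
  namespace: my-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      # Schedule only onto Spot nodes created by the managed node group
      nodeSelector:
        eks.amazonaws.com/capacityType: SPOT
      containers:
      - name: worker
        image: my-worker:latest
```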

Connecting to the Cluster

Update kubeconfig

aws eks update-kubeconfig \
  --region us-east-1 \
  --name my-eks-cluster

Verify Connection

kubectl get nodes
kubectl get pods -A

Using Terraform Output

output "configure_kubectl" {
  value = "aws eks update-kubeconfig --region ${var.aws_region} --name ${module.eks.cluster_id}"
}

IRSA (IAM Roles for Service Accounts)

The module automatically creates an OIDC provider for IRSA:

Creating an IRSA Role

data "aws_iam_policy_document" "irsa_assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Federated"
      identifiers = [module.eks.oidc_provider_arn]
    }

    actions = ["sts:AssumeRoleWithWebIdentity"]

    condition {
      test     = "StringEquals"
      variable = "${replace(module.eks.oidc_provider_url, "https://", "")}:sub"
      values   = ["system:serviceaccount:my-namespace:my-service-account"]
    }
  }
}

resource "aws_iam_role" "irsa_role" {
  name               = "my-app-irsa-role"
  assume_role_policy = data.aws_iam_policy_document.irsa_assume_role.json
}

resource "aws_iam_role_policy_attachment" "irsa_policy" {
  role       = aws_iam_role.irsa_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

Kubernetes Service Account

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: my-namespace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-irsa-role

Using in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: my-namespace
spec:
  serviceAccountName: my-service-account
  containers:
  - name: app
    image: my-app:latest
    # Pod automatically receives AWS credentials via IRSA
IRSA provides pods with temporary AWS credentials, so there is no need to store long-lived access keys. It is the recommended approach for granting AWS API access from EKS.

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| `cluster_name` | Name of the EKS cluster | `string` | n/a | yes |
| `kubernetes_version` | Kubernetes version | `string` | `"1.28"` | no |
| `private_subnet_ids` | Private subnet IDs (minimum 3 for HA) | `list(string)` | n/a | yes |
| `cluster_log_types` | Control plane log types to enable | `list(string)` | `["api", "audit", "authenticator", "controllerManager", "scheduler"]` | no |
| `primary_node_group_instance_types` | Instance types for the node group | `list(string)` | `["m5.large"]` | no |
| `primary_node_group_capacity_type` | `ON_DEMAND` or `SPOT` | `string` | `"ON_DEMAND"` | no |
| `primary_node_group_disk_size` | Disk size in GB | `number` | `20` | no |
| `primary_node_group_desired_size` | Desired node count | `number` | `2` | no |
| `primary_node_group_min_size` | Minimum node count | `number` | `2` | no |
| `primary_node_group_max_size` | Maximum node count | `number` | `2` | no |
| `primary_node_group_labels` | Labels for the node group | `map(string)` | `{}` | no |
| `cluster_security_group_ids` | Additional security group IDs | `list(string)` | `[]` | no |
| `tags` | Tags to apply to all resources | `map(string)` | `{}` | no |

Outputs

| Name | Description |
|------|-------------|
| `cluster_id` | The name/ID of the EKS cluster |
| `cluster_arn` | The ARN of the EKS cluster |
| `cluster_endpoint` | Endpoint for the EKS control plane |
| `cluster_version` | The Kubernetes version |
| `cluster_security_group_id` | Security group ID of the cluster |
| `cluster_certificate_authority_data` | Base64-encoded certificate data (sensitive) |
| `cluster_iam_role_arn` | IAM role ARN of the cluster |
| `node_group_iam_role_arn` | IAM role ARN of the node group |
| `primary_node_group_id` | Node group ID |
| `primary_node_group_arn` | Node group ARN |
| `oidc_provider_url` | OIDC provider URL (for IRSA) |
| `oidc_provider_arn` | OIDC provider ARN (for IRSA) |

Design Decisions

Private-Only Endpoint

The cluster endpoint is accessible only from within the VPC (endpoint_public_access = false). Reaching the Kubernetes API therefore requires one of:
  • A VPN connection to the VPC
  • A bastion host in a public subnet
  • An AWS Cloud9 environment
  • An EC2 instance with kubectl installed
This keeps the Kubernetes API off the public internet.
Managed Node Groups

The module uses managed node groups rather than self-managed nodes or Fargate:
  • Pros: automatic updates, simplified management, integrated autoscaling
  • Cons: less customization than self-managed nodes
For advanced use cases, consider creating additional node groups outside the module.
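An additional node group created outside the module can reference the module's outputs directly. A minimal sketch, reusing the module's node role and the VPC module's subnets (the resource name, instance type, and sizes are illustrative):

```hcl
resource "aws_eks_node_group" "gpu" {
  cluster_name    = module.eks.cluster_id
  node_group_name = "gpu-node-group"
  node_role_arn   = module.eks.node_group_iam_role_arn
  subnet_ids      = module.vpc.private_subnet_ids
  instance_types  = ["g5.xlarge"]

  scaling_config {
    desired_size = 1
    min_size     = 0
    max_size     = 2
  }

  # Label so workloads can target these nodes with a nodeSelector
  labels = {
    workload = "gpu"
  }
}
```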
Minimum of Three Subnets

The module enforces a minimum of 3 private subnets for:
  • High availability across multiple AZs
  • EKS control plane requirements
  • Better pod distribution
A validation error is raised if fewer than 3 subnets are provided.
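The subnet-count check can be expressed as a Terraform variable validation. A sketch of what such a rule might look like inside the module (the module's actual implementation may differ):

```hcl
variable "private_subnet_ids" {
  description = "Private subnet IDs (minimum 3 for HA)"
  type        = list(string)

  validation {
    condition     = length(var.private_subnet_ids) >= 3
    error_message = "At least 3 private subnet IDs are required for a highly available cluster."
  }
}
```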
Automatic OIDC Provider

The OIDC provider is created automatically to enable IRSA. It is required for:
  • AWS Load Balancer Controller
  • External Secrets Operator
  • EBS CSI Driver
  • Any workload needing AWS API access

Post-Deployment Setup

Install Essential Add-ons

# Install using Helm
helm repo add eks https://aws.github.io/eks-charts

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-eks-cluster \
  --set serviceAccount.create=true \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=<IRSA_ROLE_ARN>

Troubleshooting

Cannot Connect to Cluster

Error:

Error: You must be logged in to the server (Unauthorized)

Solution: update your kubeconfig:

aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster

Then verify that your IAM user or role has access to the cluster:

aws eks describe-cluster --name my-eks-cluster --region us-east-1

Node Group Not Scaling

Check the Cluster Autoscaler logs:

kubectl logs -n kube-system -l app=cluster-autoscaler

Verify the node group's scaling settings:

aws eks describe-nodegroup \
  --cluster-name my-eks-cluster \
  --nodegroup-name primary-node-group

IRSA Not Working

Verify that the OIDC provider exists:

aws eks describe-cluster \
  --name my-eks-cluster \
  --query "cluster.identity.oidc.issuer" \
  --output text

aws iam list-open-id-connect-providers

Check the service account annotation:

kubectl describe serviceaccount -n my-namespace my-service-account
Test IAM role assumption from a pod. Note that kubectl 1.24 removed the --serviceaccount flag, so set the service account via --overrides, and that the amazon/aws-cli image's entrypoint is aws, so omit it from the command:

kubectl run test --rm -it --image=amazon/aws-cli \
  --namespace=my-namespace \
  --overrides='{"spec": {"serviceAccountName": "my-service-account"}}' \
  -- sts get-caller-identity

Best Practices

Use Latest Kubernetes Version

Keep clusters updated to the latest supported version for security patches and features.

Enable Control Plane Logging

Enable all log types in production for audit trails and troubleshooting.

Right-Size Nodes

Start with smaller instances and scale up based on actual usage metrics.

Use IRSA

Always use IRSA instead of instance profiles or hardcoded credentials.

Multi-AZ Deployment

Deploy node groups across at least 3 availability zones for resilience.

Resource Quotas

Implement Kubernetes resource quotas to prevent resource exhaustion.
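A namespace-level quota keeps a single team or workload from consuming the whole cluster. A starting-point sketch (the quota name, namespace, and limits are illustrative and should be tuned per namespace):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: my-namespace
spec:
  hard:
    # Aggregate requests/limits across all pods in the namespace
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
    pods: "100"
```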

Related Modules and Guides

VPC Module

Create VPC with subnets for EKS

Vault Module

Deploy Vault with IRSA

K8s Scheduler Deployment

Deploy K8s Scheduler on EKS

Infrastructure Guide

Complete infrastructure deployment
