This repository provisions a basic EKS cluster with a VPC:
- Creates a new sample VPC with 2 private subnets and 2 public subnets
- Creates an internet gateway for the public subnets and a NAT gateway for the private subnets
- Creates an EKS cluster control plane with two managed node groups
- Creates a basic Kubernetes deployment behind an NGINX ingress controller
Ensure that you have installed the following tools on your Mac or Windows laptop before working with this module and running `terraform plan` and `terraform apply`.
Note: The policy resource is set to `*` to allow all resources; this is not recommended practice. You can find the policy here.
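For reference, a least-privilege alternative scopes the `resources` element to specific ARNs rather than `*`. The sketch below is illustrative only; the actions, ARN, and names are placeholders, not the policy in this repository:

```hcl
# Illustrative only: restrict a policy to specific resources
# instead of ["*"]. Actions and ARN below are placeholders.
data "aws_iam_policy_document" "scoped_example" {
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::example-bucket/*"] # instead of ["*"]
  }
}
```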
```shell
git clone https://github.com/sttuartt/eks-demo.git
cd <repository_root>/remote_state
```

Edit the details in the `main.tf` file to set the following values as appropriate:
- aws_region
- random_string (ensures bucket name is unique)
- prefix (bucket name)
- ssm_prefix (used to store the bucket name and locks table arn)
- common_tags
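As an illustration, the edited values might look like the following. This is a sketch only; the exact variable names and structure in `main.tf` may differ, and every value shown is a placeholder:

```hcl
# Illustrative values only; match these to the variables in main.tf.
aws_region    = "ap-southeast-2"
random_string = "abc123"            # keeps the bucket name globally unique
prefix        = "my-eks-demo-state" # bucket name prefix
ssm_prefix    = "/eks-demo/remote-state"
common_tags = {
  Project = "eks-demo"
  Owner   = "your-name"
}
```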
Run the following:

```shell
terraform init
terraform plan
terraform apply -auto-approve
```
```shell
cd <repository_root>/eks
```

Edit the details in the `variables.tf` file to set the `cluster_name` default value.

Edit the details in the `versions.tf` file to set the values for the S3 backend:
- `bucket` (ensure the random string in the name matches that used in the remote_state deployment)
- `key` (state file name)
- `region`
- `encrypt` (boolean)
- `dynamodb_table` (ensure the random string in the name matches that used in the remote_state deployment)
Note: These values should be taken from the output of the remote state backend configured in steps 1 and 2 above.
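Put together, the backend block in `versions.tf` looks something like this. The values shown are illustrative placeholders; take the real bucket and table names from the remote state outputs:

```hcl
terraform {
  backend "s3" {
    # Placeholder names: the random string must match the remote_state deployment.
    bucket         = "my-eks-demo-state-abc123"
    key            = "eks/terraform.tfstate"
    region         = "ap-southeast-2"
    encrypt        = true
    dynamodb_table = "my-eks-demo-locks-abc123"
  }
}
```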
Run the following (ensure you are still in the `<repository_root>/eks` directory):

```shell
terraform init
terraform plan
terraform apply -auto-approve
```
Note: This will take approx. 15 minutes to provision.
The cluster name can be taken from the Terraform output or from the AWS Console. The following command updates the `~/.kube/config` file on the local machine where you run `kubectl` commands, adding the cluster details and certificate:
```shell
aws eks --region <enter-your-region> update-kubeconfig --name <cluster-name>
```

e.g.

```shell
aws eks --region ap-southeast-2 update-kubeconfig --name demo-eks-cluster
```
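To avoid copying values by hand, the region and cluster name can be captured in shell variables first. A small sketch, using the example values from this walkthrough (substitute your own; if the eks module exposes the cluster name as a Terraform output, a `terraform output` lookup could replace the literal):

```shell
# Example values from this walkthrough; substitute your own.
REGION="ap-southeast-2"
CLUSTER_NAME="demo-eks-cluster"

# Build the kubeconfig update command, then run it against your AWS account.
CMD="aws eks --region ${REGION} update-kubeconfig --name ${CLUSTER_NAME}"
echo "${CMD}"
```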
```shell
kubectl get nodes
kubectl get pods -n kube-system
```
```shell
cd <repository_root>/apps
kubectl apply -f http-echo.yaml
kubectl get deployment http-echo
kubectl get service http-echo-service
kubectl get ingress http-echo
```
Obtain the Address value from the last command and run `curl` against it:

Note: It might take a minute or so for the address to become available when running the previous command.

```shell
curl <Address>
```
e.g.

```shell
curl adc377573478043edbc909bac2cae94e-314602352.ap-southeast-2.elb.amazonaws.com
```
This should produce the following output:

```
hello world
```

Navigating to this address in your browser should produce the same result.
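The wait-then-curl step can be wrapped in a small helper. This is a sketch under two assumptions: your kubeconfig already points at the new cluster, and the ingress publishes a hostname in the standard `status.loadBalancer` fields:

```shell
# Sketch: poll until the ingress reports a hostname, then print it.
# Assumes kubectl is configured for the new cluster.
wait_for_ingress() {
  name="$1"
  for _ in $(seq 1 30); do
    addr="$(kubectl get ingress "$name" \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
    if [ -n "$addr" ]; then
      echo "$addr"
      return 0
    fi
    sleep 10
  done
  return 1
}

# Usage (uncomment once the cluster is up):
# ADDR="$(wait_for_ingress http-echo)" && curl "http://${ADDR}"
```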
To clean up your environment, destroy deployments and resources in the reverse order:

```shell
cd <repository_root>/apps
kubectl delete -f http-echo.yaml
```

```shell
cd <repository_root>/eks
terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
terraform destroy -target="module.eks_blueprints" -auto-approve
terraform destroy -auto-approve
```

Note: The destroy task is split up to avoid `Error: context deadline exceeded` errors.

```shell
cd <repository_root>/remote_state
terraform destroy -auto-approve
```