Prerequisites
You should have a basic understanding of containers (e.g. Docker):
What is a container?
How does it work?
Kubernetes Overview
Introduction
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8S as an abbreviation results from counting the eight letters between the "K" and the "s".
Kubernetes is a production-ready, open-source platform designed with Google's accumulated experience in container orchestration, combined with best-of-breed ideas from the community. Google open-sourced the Kubernetes project in 2014.
What can Kubernetes do for you?
With modern web services, users expect applications to be available 24/7, and developers expect to deploy new versions of those applications several times a day.
Containerization (for example, with Docker) helps package software to serve these goals, enabling applications to be released and updated without downtime.
Kubernetes helps you ensure those containerized applications run where and when you want and helps them find the resources and tools they need.
Kubernetes provides you with:
Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers. (This is comparable to AWS ECS, which reconciles a desired count against a running count.)
Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs, and Kubernetes fits containers onto your nodes to make the best use of your resources (see the sketch after this list).
Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
For example, if a K8S cluster has Node A and Node B and Node A shuts down, the Pods that were running on Node A are automatically rescheduled onto Node B.
Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
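As a concrete illustration of the bin-packing and self-healing points above, here is a minimal Deployment sketch (all names, images, and values are arbitrary examples, not part of this workshop): the resource requests drive scheduling decisions, and the liveness probe triggers container restarts when the check fails.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo                  # example name
spec:
  replicas: 2                     # desired state: two identical Pods
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: web
        image: nginx:1.21         # placeholder image
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m             # used by the scheduler for bin packing
            memory: 128Mi
        livenessProbe:            # self-healing: restart the container if this check fails
          httpGet:
            path: /
            port: 80
EOF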
Architecture Overview
Nodes
There are two types of nodes:
A Master-node type, which makes up the Control Plane, acts as the “brains” of the cluster.
One or More API Servers: Entry point for REST / kubectl
etcd: Distributed key/value store
Controller-manager: Always evaluating current vs the desired state
Scheduler: Assigns newly created Pods to worker nodes based on resource requirements and constraints
A Worker-node type, which makes up the Data Plane, runs the actual container images (via pods).
Each worker node runs the following components:
kubelet: Acts as a pipe between the API server and the node
kube-proxy: Maintains network rules on each node, handling IP translation and routing so that traffic to a Service reaches the right Pods
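In a running cluster you can inspect most of these components directly. On a managed control plane such as EKS the master components themselves are hidden, but the nodes and node-level pieces are visible:
# List the worker nodes registered with the API server
kubectl get nodes -o wide
# List system components running as Pods (kube-proxy, DNS, etc.)
kubectl get pods -n kube-system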
Pod
A thin wrapper around one or more containers
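A minimal Pod manifest looks like the sketch below (the name and image are arbitrary examples); you can apply it directly with kubectl:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # example name
spec:
  containers:
  - name: web
    image: nginx:1.21    # placeholder image
    ports:
    - containerPort: 80
EOF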
DaemonSet
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
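For illustration, a minimal DaemonSet sketch (name and image are placeholders) that runs one copy of a simple agent container on every node:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent            # example name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox:1.35   # placeholder image
        command: ["sh", "-c", "while true; do sleep 3600; done"]
EOF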
ReplicaSets
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
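You rarely create ReplicaSets directly, because Deployments manage them for you, but a minimal standalone sketch (name, label, and image are illustrative only) looks like this:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs              # example name
spec:
  replicas: 3               # keep three identical Pods running
  selector:
    matchLabels:
      app: web-rs
  template:
    metadata:
      labels:
        app: web-rs
    spec:
      containers:
      - name: web
        image: nginx:1.21   # placeholder image
EOF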
Deployment
Makes it easier to update your Pods to a newer version; a Deployment manages ReplicaSets and rolls Pods over to the new version at a controlled rate.
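For example, assuming a Deployment named web-demo with a container named web (hypothetical names, as in the sketch earlier in this overview), an image update and rollback look like this:
# Update the container image; Kubernetes gradually replaces Pods with the new version
kubectl set image deployment/web-demo web=nginx:1.22
# Watch the rollout progress
kubectl rollout status deployment/web-demo
# Roll back to the previous version if something goes wrong
kubectl rollout undo deployment/web-demo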
Service
An abstract way to expose an application running on a set of Pods as a network service
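A minimal Service sketch (names and label are illustrative, matching the hypothetical web-demo Deployment above) that exposes the Pods inside the cluster:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-svc             # example name
spec:
  type: ClusterIP           # internal-only; use type LoadBalancer to expose externally
  selector:
    app: web-demo           # route traffic to Pods carrying this label
  ports:
  - port: 80                # port the Service listens on
    targetPort: 80          # port the container listens on
EOF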
AWS EKS Overview
Introduction
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane (master node)
Runs and scales the Kubernetes control plane across multiple AWS Availability Zones to ensure high availability.
Automatically scales control plane instances based on load, detects and replaces unhealthy control plane instances, and provides automated version updates and patches for them.
Is integrated with many AWS services to provide scalability and security for your applications, including the following capabilities:
Amazon ECR for container images
Elastic Load Balancing for load distribution
IAM for authentication
Amazon VPC for isolation
Architecture
Hands-on
Prerequisites
Setup K8S Tools
Install eksctl and kubectl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv -v /tmp/eksctl /usr/local/bin
eksctl completion bash >> ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion
sudo curl --silent --location -o /usr/local/bin/kubectl \
  https://s3.us-west-2.amazonaws.com/amazon-eks/1.21.5/2022-01-21/bin/linux/amd64/kubectl
sudo chmod +x /usr/local/bin/kubectl
Install the AWS CLI
curl "[<https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip>](<https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip>)" -o "awscliv2.zip" unzip awscliv2.zip sudo ./aws/install
Install jq, envsubst (from GNU gettext utilities) and bash-completion
sudo yum -y install jq gettext bash-completion moreutils
Install yq for yaml processing
echo 'yq() {
  docker run --rm -i -v "${PWD}":/workdir mikefarah/yq "$@"
}' | tee -a ~/.bashrc && source ~/.bashrc
Verify the binaries are in the path and executable
for command in kubectl jq envsubst aws
do
which $command &>/dev/null && echo "$command in path" || echo "$command NOT FOUND"
done
Set the AWS Load Balancer Controller version
echo 'export LBC_VERSION="v2.4.1"' >> ~/.bash_profile
echo 'export LBC_CHART_VERSION="1.4.1"' >> ~/.bash_profile
. ~/.bash_profile
Enable kubectl bash_completion
kubectl completion bash >> ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion
Make sure your environment has AWS credentials with admin permissions
Launching EKS Cluster via EKSCTL
Create EKS Cluster
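The create command below expects a cluster config file named eksworkshop.yaml in the current directory. A minimal sketch of what it might contain is shown here; the region, Kubernetes version, instance type, and node count are assumptions you should adjust for your environment:
cat <<EOF > eksworkshop.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eksworkshop-eksctl      # must match the cluster name used later in this guide
  region: ap-northeast-1        # assumed region; change as needed
  version: "1.21"               # assumed to match the kubectl version installed above
managedNodeGroups:
- name: nodegroup
  instanceType: t3.medium       # example instance type
  desiredCapacity: 3            # example node count
EOF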
eksctl create cluster -f eksworkshop.yaml
After running the create command, you can view the creation status in the CloudFormation console
https://ap-northeast-1.console.aws.amazon.com/cloudformation/home?region=ap-northeast-1#/stacks?filteringStatus=active&filteringText=&viewNested=true
Add your IAM user's permission to view the EKS Console
rolearn=arn:aws:iam::AWS_ACCOUNT_ID:user/AWS_USER_NAME
eksctl create iamidentitymapping --cluster eksworkshop-eksctl --arn ${rolearn} --group system:masters --username admin
Verify the mapping in the aws-auth ConfigMap
kubectl describe configmap -n kube-system aws-auth
Deploy the K8S Dashboard
Deploy the official K8S Dashboard
export DASHBOARD_VERSION="v2.6.0"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/${DASHBOARD_VERSION}/aio/deploy/recommended.yaml
Expose the Dashboard to be accessible via Proxy
Since this is deployed to our private cluster, we need to access it via a proxy. kubectl proxy can forward our requests to the dashboard service. In your workspace, run the following command:
kubectl proxy --port=8080 --address=0.0.0.0 --disable-filter=true &
Access the dashboard
In your Cloud9 environment, click Tools / Preview / Preview Running Application
Scroll to the end of the URL and append
/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
The Cloud9 preview browser doesn't appear to support token authentication, so once the login screen shows up in the Cloud9 preview tab, press the Pop Out button to open the login screen in a regular browser tab.
Get a Dashboard login token by running this command
aws eks get-token --cluster-name eksworkshop-eksctl | jq -r '.status.token'
Copy the output of this command, select the Token radio button on the login screen, and paste the copied token into the text field below it.
Deploy Applications to EKS
Clone the Application Git Repo
cd ~/environment
git clone https://github.com/aws-containers/ecsdemo-frontend.git
git clone https://github.com/aws-containers/ecsdemo-nodejs.git
git clone https://github.com/aws-containers/ecsdemo-crystal.git
Deploy Backend NodeJS
Apply Deployment
cd ~/environment/ecsdemo-nodejs
kubectl apply -f kubernetes/deployment.yaml
Apply Service
cd ~/environment/ecsdemo-nodejs
kubectl apply -f kubernetes/service.yaml
Ensure your deployments and services are created successfully
kubectl get deployments -A
kubectl get services -A
Deploy FrontEnd
Apply Deployment
cd ~/environment/ecsdemo-frontend
kubectl apply -f kubernetes/deployment.yaml
Apply Service
cd ~/environment/ecsdemo-frontend
kubectl apply -f kubernetes/service.yaml
Ensure your deployments and services are created successfully
kubectl get deployments -A
kubectl get services -A
Scale the Applications
Scale the Backend
kubectl scale deployment ecsdemo-nodejs --replicas=2
Scale the Frontend
kubectl scale deployment ecsdemo-frontend --replicas=2
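You can confirm the new replica counts (the deployment names come from the manifests in the cloned repositories):
# Both deployments should report 2/2 ready replicas once scaling completes
kubectl get deployments ecsdemo-nodejs ecsdemo-frontend
kubectl get pods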
Cleanup
cd ~/environment/ecsdemo-frontend
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml
cd ~/environment/ecsdemo-nodejs
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml
export DASHBOARD_VERSION="v2.6.0"
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/${DASHBOARD_VERSION}/aio/deploy/recommended.yaml
unset DASHBOARD_VERSION