This doesn't include foundational concepts; please do read about those and come back. This is a quickstart recipe for hitting the ground running, and some mitigation against future forgetfulness.
Pre-requisites:
Get thee to a linux terminal, and then go straight to the horse's mouth and follow the instructions there. Or, chance the happiest of paths below (if you have root):
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
# expected output: kubectl: OK
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client --output=yaml
Yay!

You need something to run k8s clusters, and there are almost too many tools for this job. In "local dev" land, the two that caught my eye were mainly minikube and kind. Was going to go with Kind (Kubernetes IN Docker) here, as it hews a tad closer to real automation environments (and is ever so slightly less bloated and a smidgen more performant). But wait: eksctl (which even the boffins at AWS agree is better). I am now of the mind that eksctl-interfaced EKS is the sanity-preserving way into the world of kubernetes (at least, for an AWS-flavored experience: I'm told things are even user-friendlier over at Azure AKS and GCP GKE, which... yeah, one day soon). The eksctl installation instructions are here. Using eksctl for all that follows!
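At the time of writing, the Linux install amounts to roughly the following (a sketch based on the eksctl docs; double-check there for the current incantation and for ARM variants):
curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz"
tar -xzf eksctl_$(uname -s)_amd64.tar.gz -C /tmp && rm eksctl_$(uname -s)_amd64.tar.gz
sudo mv /tmp/eksctl /usr/local/bin
eksctl version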
At this point, if you began from a standing start,
kubectl config view
should yield a yaml-shaped emptiness: zero contexts, zero clusters, etc.
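Something along these lines, give or take how your kubectl version renders empty fields:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null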
Now while you can sudo kubeadm init
to create a default config, that seems to entail also installing kubeadm. We'll stick to just using eksctl for simplicity:
Log in to AWS via the aws cli as needed, and then in your project folder:
create a cluster.yml file (see: https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/). A toy example with just one nodegroup:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test-cl
  region: us-east-1
# handy to explicitly list AZs when AWS complains about other AZs not having capacity to meet your needs.
availabilityZones:
  - us-east-1a
  - us-east-1b
# normally you'd have a couple of these
nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 2
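Before creating anything, it's worth a quick check that the aws cli session is actually live (this assumes your credentials and default region are already configured):
aws sts get-caller-identity
If that returns your account id and ARN rather than an error, you're good to go.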
run eksctl create cluster -f ./cluster.yml
If anything goes wrong, sometimes you can end up with a deployed cloudformation stack but no actual cluster on top of it! Trying to apply your cluster yaml file again will faceplant. In that case:
eksctl delete cluster -f ./cluster.yml --force
to nuke everything and try again.

Once your cluster is successfully deployed:
kubectl config view
gives you a more robust response.
kubectl get nodes
also lists available nodes.

If you went with a cluster in the cloud, this is where you have to start the timer, because you will absolutely be paying for that cluster. In the case of EKS at least, it doesn't matter how much you shrink the resources: the kubernetes control plane they've instantiated for you never sleeps, so expect a steady drip of AT LEAST a couple of bucks a day just for that, never mind other costs.
You can scale your nodes to zero tho, if it helps you sleep at night. Example:
eksctl scale nodegroup --cluster=test-cl --nodes=0 --name=ng-1 --nodes-min=0 --nodes-max=1
If you do this, don't forget to scale 'em back up before you carry on with the rest of this lab!
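For example, to bring the toy nodegroup above back to its original two nodes (same command, just bumping the numbers):
eksctl scale nodegroup --cluster=test-cl --nodes=2 --name=ng-1 --nodes-min=0 --nodes-max=2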
Onward!
Essentially, this comes down to: a container from an image repo, a description of how to deploy it, and an LB in front that makes it all reachable. No fancy ssl certs or anything real-life-y here!
create an lb.yml for a LoadBalancer service:
apiVersion: v1
kind: Service
metadata:
  name: lb
spec:
  type: LoadBalancer
  selector:
    app: nodeinfo
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
REALLY IMPORTANT - k8s maintains its topology through labels and selectors: without app: nodeinfo in this example, the LB quite literally will not be able to find the pods it's supposed to front, and you will not reach anything.
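A handy sanity check once both this service and the deployment (coming up below) have been applied:
kubectl get endpoints lb
If the ENDPOINTS column comes back empty, the selector isn't matching any pods.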
kubectl apply -f lb.yml
to deploy the load balancer.
kubectl get service/lb
to yield the external ip that makes the app reachable - will need it in a bit to test reachability!

create a deployment.yml for the actual app (which you've dockerized and pushed to ECR):
The meat and potatoes of the yaml definitions for kubectl is the spec portion of the file. Without handy examples to refer to, you would have to browse k8s docs for the exact spec outline required for each type of resource.
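Incidentally, kubectl itself can spare you some of that doc-diving: kubectl explain prints the spec outline for any resource type straight from the cluster's API. For example:
kubectl explain deployment.spec
kubectl explain service.spec
Add --recursive to either of those to dump the whole nested structure in one go.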
Without getting into the weeds around apiVersion values, the kind (an abstraction for type of resource), etc., you can at least play with the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeinfo-deployment
  namespace: default
  labels:
    app: nodeinfo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodeinfo
  template:
    metadata:
      labels:
        app: nodeinfo
    spec:
      containers:
        - name: nodeinfo
          image: <account>.dkr.ecr.<region>.amazonaws.com/<image>:latest
          ports:
            - containerPort: 3000
kubectl apply -f deployment.yml
to deploy.
kubectl get deployments.apps
to check the deployment details.

Now you can reach the app at http://<external-ip>:3000
(since 3000 is the target port number we've been playing with in this wee example).
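To fish that external address out without squinting at tables (on EKS it will be an ELB hostname rather than a literal IP), something like the following works, reusing the service name from above:
LB_HOST=$(kubectl get service/lb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl "http://${LB_HOST}:3000"
Give AWS a minute or two to finish provisioning the load balancer before expecting a response.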
TA-DAAA!!!
You definitely do want to clean up your little toy deployment. Unless... I dunno, you're made of money. Follow the instructions here to do so!
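In short, assuming the files from this walkthrough:
kubectl delete -f deployment.yml
kubectl delete -f lb.yml
eksctl delete cluster -f ./cluster.yml
Deleting the LoadBalancer service before the cluster gives kubernetes a chance to tear down the ELB it created on your behalf, so nothing billable gets left behind.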