From Zero to Deploy (kubernetes quickstart)

Posted: 17 August, 2023 Category: backend Tagged: kubernetes, aws

This doesn't include foundational concepts; please do read about those and come back. This is a quickstart recipe for hitting the ground running, and some mitigation against future forgetfulness.


Assumptions:

  • You have the aws cli installed, with credentials ready to go
  • You have opened a fantastical window in your OS and hurled Docker Desktop out of it... not because Docker Desktop did anything wrong (it isn't its fault that there are too many layers between it and such things as docker or k8s on non-linux OSes), but because you're tired of damn near all your machine's resources falling through that gap to be gobbled up by the RAM monster that dwells within its murky... ok I'm getting carried away. (Look, I love Docker Desktop as much as the next person, OK? I'm holding a vigil for its cute fat blue whale, right now. Sniff).
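Speaking of that first assumption: a quick sanity check that the aws cli can actually see your credentials (this just asks AWS "who am I?"):

    aws sts get-caller-identity

If that prints an account id and an ARN instead of an error, you're good to go.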

1. In the beginning was kubectl...

Get thee to a linux terminal, and then go straight to the horse's mouth and follow the instructions there. Or, chance the happiest of paths below (if you have root):

  1. Download the latest stable release:
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    And then you should probably verify the download, right?
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
    emoji-point_up the result of the above should be kubectl: OK
  2. Do the actual install:
    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
  3. Confirm your suspicions that you have actually installed kubectl: kubectl version --client --output=yaml. Yay!
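emoji-bulb No root? A user-local install works too; a minimal sketch, assuming ~/.local/bin is already on your PATH:

    chmod +x kubectl
    mkdir -p ~/.local/bin
    mv ./kubectl ~/.local/bin/kubectl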

2. And the kube was empty.

You need something to run k8s clusters, and there are almost too many tools for this job. In "local dev" land, the two that caught my eye were minikube and kind. I was going to go with kind (Kubernetes IN Docker) here, as it hews a tad closer to real automation environments (and is ever so slightly less bloated and a smidgen more performant). But wait:

  • (option A) Using Kind for creating clusters
    • Reading around more, I found that it was fairly new, and the advice from devOps sages on t'internet was generally hrrmm... maybe don't bother with running your own clusters? Which... yeah, it can be pretty resource-intensive emoji-sweat_smile.
    • If this is the path you want to take, the install instructions here worked, unlike the ones on the homepage.
  • (option B) Using Amazon EKS directly, for clusters
    • With this method, you're cloud-native from the jump. However, it seems the API for EKS is a bit gnarly, and mere mortals have mostly congregated around a tool called eksctl (which even the boffins at AWS agree is better). I am now of the mind that eksctl-interfaced EKS is the sanity-preserving way into the world of kubernetes (at least, for an AWS-flavored experience: I'm told things are even user-friendlier over at Azure AKS and GCP GKE, which... yeah, one day soon).
    • Install instructions for eksctl are here (a sketch of the Linux install also follows this list).
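For the impatient, at the time of writing the Linux install for eksctl looked something like this (do check their docs for the current incantation; ARCH=amd64 is an assumption for x86 machines):

    ARCH=amd64
    PLATFORM=$(uname -s)_$ARCH
    curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz"
    tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp && rm eksctl_$PLATFORM.tar.gz
    sudo mv /tmp/eksctl /usr/local/bin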

emoji-rotating_light Using eksctl for all that follows!

3. Let there be Clusters!

emoji-bulb At this point, if you began from a standing start, kubectl config view should yield a yaml-shaped emptiness: zero contexts, zero clusters, etc. Now, while you could sudo kubeadm init to create a default config, that seems to entail also installing kubeadm. We'll stick to just using eksctl for simplicity:

Log in to AWS via the aws cli as needed.
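Exactly how you log in varies by setup; a hedged sketch (the profile name here is a placeholder, not something from this lab):

    aws configure                       # paste long-lived access keys
    # or, if your org uses IAM Identity Center (SSO):
    aws sso login --profile my-profile

Then, in your project folder: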

  • create a cluster.yml file. A toy example with just one nodegroup:

      apiVersion: eksctl.io/v1alpha5
      kind: ClusterConfig
      metadata:
        name: test-cl
        region: us-east-1
      # handy to explicitly list AZs when AWS complains about other AZs not having capacity to meet your needs.
      availabilityZones:
        - us-east-1a
        - us-east-1b
      # normally you'd have a couple of these
      nodeGroups:
        - name: ng-1
          instanceType: t2.small
          desiredCapacity: 2
  • run eksctl create cluster -f ./cluster.yml

    • emoji-hourglass Be patient! This'll take a while, because it will bootstrap an entire vpc by default (yes, you can point it at a pre-existing vpc later on in your adventures).
  • emoji-bulb If anything goes wrong, sometimes you can end up with a deployed cloudformation stack but no actual cluster on top of it! Trying to apply your cluster yaml file again will faceplant. In that case:

    • you can do the equivalent of eksctl delete cluster -f ./cluster.yml --force to nuke everything and try again
    • you can also log in to cloudformation and nuke the stack there. Either way, be patient; it might take a while to tear the stack down.
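Not sure which of those states you're in? A couple of read-only commands to take stock (the stack name filter assumes the toy cluster name above; eksctl names its stacks eksctl-<cluster>-*):

    eksctl get cluster --region us-east-1
    aws cloudformation list-stacks \
      --query "StackSummaries[?starts_with(StackName, 'eksctl-test-cl')].[StackName,StackStatus]" \
      --output table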

Once your cluster is successfully deployed:

  • You'll see that kubectl config view gives you a more robust response
  • kubectl get nodes also lists available nodes.
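A couple more pokes, if you're the trust-but-verify type:

    kubectl config current-context   # should now point at the new EKS cluster
    kubectl get nodes -o wide        # the ng-1 nodes should report Ready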

If you went with a cluster in the cloud, this is where you have to start the timer, because you will absolutely be paying for that cluster. In the case of EKS at least, it doesn't matter how much you shrink the resources - the kubernetes control plane they've instantiated for you never sleeps, so expect a steady drip of AT LEAST a couple of bucks a day just for that, never mind other costs.

You can scale your nodes to zero tho, if it helps you sleep at night. Example:

eksctl scale nodegroup --cluster=test-cl --nodes=0 --name=ng-1 --nodes-min=0 --nodes-max=1

emoji-point_up If you do this, don't forget to scale 'em back up before you carry on with the rest of this lab!
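Scaling back up is the same command with the numbers flipped; for instance, to restore the toy cluster's two nodes:

    eksctl scale nodegroup --cluster=test-cl --nodes=2 --name=ng-1 --nodes-min=0 --nodes-max=2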


4. Unto this new Eden, deploy.

Essentially, this comes down to: a container from an image repo, a description of how to deploy it, and an LB in front that makes it all reachable. No fancy ssl certs or anything real-life-y here!

  • create an lb.yml for a LoadBalancer service:

    apiVersion: v1
    kind: Service
    metadata:
      name: lb
    spec:
      type: LoadBalancer
      selector:
        app: nodeinfo
      ports:
        - protocol: TCP
          port: 3000
          targetPort: 3000

    emoji-point_up REALLY IMPORTANT - k8s maintains its topology through labels and selectors: without the app: nodeinfo selector in this example, the LB quite literally will not be able to find the pods it's supposed to front, and you will not reach anything.

    • kubectl apply -f lb.yml to deploy the load balancer
    • kubectl get service/lb to yield the external ip that makes the app reachable; you'll need it in a bit to test reachability!
  • create a deployment.yml for the actual app (which you've dockerized and pushed to ECR). The meat and potatoes of the yaml definitions for kubectl is the spec portion of the file; without handy examples to refer to, you'd have to browse the k8s docs for the exact spec outline required by each type of resource. Without getting into the weeds around apiVersion values, the kind (an abstraction for the type of resource), etc., you can at least play with the following:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nodeinfo-deployment
      namespace: default
      labels:
        app: nodeinfo
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nodeinfo
      template:
        metadata:
          labels:
            app: nodeinfo
        spec:
          containers:
            - name: nodeinfo
              image: <account>.dkr.ecr.<region>.amazonaws.com/<image>:latest
              ports:
                - containerPort: 3000
    • kubectl apply -f deployment.yml to deploy
    • kubectl get deployment/nodeinfo-deployment to check the deployment details
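To watch those labels and selectors from the warning above doing their job, list the pods by the same label the LB selects on:

    kubectl get pods -l app=nodeinfo -o wide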

Now you can reach the app at http://<external-ip>:3000 (since 3000 is the target port number we've been playing with in this wee example). TA-DAAA!!!
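If you'd rather not eyeball-copy the external address, a bit of jsonpath fetches it (on EKS the "external ip" usually surfaces as an ELB hostname rather than a bare IP):

    EXTERNAL=$(kubectl get service lb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    curl "http://$EXTERNAL:3000"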

5. (Destroy the world in a) cataclysmic flood of deletes

You definitely do want to clean up your little toy deployment. Unless... I dunno, you're made of money. Follow the instructions here to do so!
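In case that link ever rots, the teardown is roughly the deploy in reverse (names match the toy example; deleting the Service first lets AWS reap the ELB before the cluster goes):

    kubectl delete -f deployment.yml
    kubectl delete -f lb.yml
    eksctl delete cluster -f ./cluster.yml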