Elastic Beanstalk vs. ECS vs. Kubernetes: Part 3
Mar 24, 2017
This is Part 3 of Elastic Beanstalk vs. ECS vs. Kubernetes. See Part 1.
Kubernetes
Since Kubernetes is a Google-sponsored project, running it on anything but Google Container Engine always seemed like a mismatch to me. AWS has so many nuances in how its load balancers, VPCs, and security groups work compared to Google Container Engine that running Kubernetes on top of a different IaaS felt risky and error-prone.
I was very wrong.
So wrong, in fact, that provisioning a cluster with kops and then launching a few containers with kubectl was dead simple. It just worked.
Compared to ECS, the Kubernetes experience is quite different. There's a higher level of abstraction and a sense that far more is happening behind the scenes than you're aware of. But because Kubernetes is an open source project, you're free to dig into as much of its guts as you'd like.
Labels and selectors play a big role in communication between containers, as does DNS. The declarative code required to run pods and create services is terser than its ECS counterpart and also easier to understand. Kubernetes uses the pod as its primary unit of containerization, which is roughly analogous to ECS' task. And while both platforms have services, they mean different things.
A nice convenience Kubernetes offers is deployments, which are the recommended way to launch and update stateless pods. Since this is a very typical workflow, it's fairly straightforward (a minimal example file is sketched after this list):
- Create a deployment file
- Define your containers as a template inside the deployment including the number of replicas
- Run kubectl create -f deployment.yaml
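For reference, here's a minimal sketch of what such a deployment file could look like. The name, labels, image, and replica count are placeholders, and the apiVersion depends on your cluster version:

```yaml
# deployment.yaml - a minimal sketch; name, labels, image, and replica count are placeholders
apiVersion: apps/v1        # older clusters used extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # number of pod replicas to keep running
  selector:
    matchLabels:
      app: web
  template:                # pod template: what each replica runs
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.11
          ports:
            - containerPort: 80
```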
When it's time to re-deploy, you have a couple of options:
- Edit deployment.yaml and run kubectl apply -f deployment.yaml (see the sketch after this list)
- Run kubectl edit deployment [DEPLOYMENT NAME], which downloads the latest deployment config from the state store and opens it in your editor
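As a hypothetical walk-through of the first option, assuming the deployment sketched above, you bump the image tag in deployment.yaml and re-apply it; Kubernetes then rolls the pods over to the new version:

```shell
# Change image: nginx:1.11 to the new tag in deployment.yaml, then:
kubectl apply -f deployment.yaml

# Optionally watch the rolling update complete ("web" is the placeholder name from above)
kubectl rollout status deployment/web
```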
This is very close to the simplicity provided by Elastic Beanstalk in Part 1.
If you recall, ECS services are used to keep a certain number of tasks running and define health-check and scaling thresholds.
Kubernetes services provide a single endpoint for accessing a group of pods. When a service is given the LoadBalancer type, it is backed by an AWS load balancer and becomes publicly accessible.
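Here's a minimal sketch of such a service, again with placeholder names, selecting the pods from the deployment above by label:

```yaml
# service.yaml - a minimal sketch; the selector matches the placeholder "app: web" label used above
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # provisions an AWS load balancer and exposes the service publicly
  selector:
    app: web           # routes traffic to pods carrying this label
  ports:
    - port: 80         # port the load balancer listens on
      targetPort: 80   # port the containers listen on
```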
Finally, we get to Kubernetes' built-in dashboard. Running kubectl proxy gives us a localhost endpoint into the cluster, and loading http://localhost:8001/ui/ presents a nicely designed web interface for managing a significant portion of the cluster. Pods, deployments, and services can be created and launched, and each can be inspected to see which containers are running on which instance.
CPU, memory, and network graphs are available to show the health of the cluster and each individual instance.
Heapster can also be installed to get even better metrics, which are shipped to an InfluxDB container running in your cluster.