The operating system for the cloud that runs 84% of organizations' containerized workloads
Kubernetes (K8s) is a container orchestration platform that automates the deployment, scaling, and management of containerized applications. It abstracts individual machines into a unified computing surface: you declare what you want (say, a Deployment with 3 replicas), and the control plane continuously reconciles the cluster's actual state with that desired state through a set of controllers.
You describe the desired state (3 replicas of my app), not the steps to get there. Kubernetes controllers continuously reconcile actual state with desired state. If a pod dies, the controller creates a new one automatically.
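To make the declarative model concrete, here is a minimal sketch using the official Go client (client-go) to submit a Deployment with 3 replicas. The my-app name, nginx image, default namespace, and kubeconfig path are illustrative assumptions, not anything the platform prescribes.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Load credentials from the standard kubeconfig location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The Deployment declares *what* we want: 3 replicas of one container.
	// How pods get scheduled, restarted, and replaced is Kubernetes' job.
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "my-app"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(3), // desired state: 3 replicas
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "my-app"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"app": "my-app"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-app",
						Image: "nginx:1.25", // placeholder image
					}},
				},
			},
		},
	}

	// Submitting the object only records the desired state;
	// controllers then do the work of making it true.
	_, err = clientset.AppsV1().Deployments("default").
		Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```

Note what the code never says: where the pods should run, or what to do when one fails. It records intent; the controllers enforce it.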
Every component follows the same pattern: watch for changes, compare desired state with actual state, and take action to reconcile the two. This simple pattern scales from managing pods to managing entire cloud infrastructures.
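The loop itself is simple enough to sketch in plain Go with no Kubernetes libraries. The State type and the polling are simplifications; real controllers react to watch events from the API server rather than polling on a timer.

```go
package main

import (
	"fmt"
	"time"
)

// State is a simplified stand-in for the replica count a controller manages.
type State struct {
	Replicas int
}

// reconcile compares desired vs. actual state and acts to close the gap.
func reconcile(desired State, actual *State) {
	switch {
	case actual.Replicas < desired.Replicas:
		actual.Replicas++ // e.g. create a missing pod
		fmt.Println("created replica, now", actual.Replicas)
	case actual.Replicas > desired.Replicas:
		actual.Replicas-- // e.g. delete an excess pod
		fmt.Println("deleted replica, now", actual.Replicas)
	default:
		// Desired == actual: nothing to do.
	}
}

func main() {
	desired := State{Replicas: 3}
	actual := &State{Replicas: 0} // fresh start, or pods just died

	// Each pass moves actual state one step closer to desired state.
	for i := 0; i < 5; i++ {
		reconcile(desired, actual)
		time.Sleep(100 * time.Millisecond)
	}
}
```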
A Pod is one or more containers that share a network namespace and storage. Containers in a pod communicate over localhost and can share files through shared volumes. Pods are ephemeral: they can be killed and recreated anywhere in the cluster.
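As a hedged illustration, the sketch below builds a Pod object in which two hypothetical containers (writer and reader) mount the same emptyDir volume. It prints the manifest rather than submitting it; the client setup from the Deployment example above would create it the same way.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two containers in one Pod: they share the network namespace (localhost)
	// and can exchange files through a common emptyDir volume.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "shared-data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					Name:         "writer",
					Image:        "busybox",
					Command:      []string{"sh", "-c", "echo hello > /data/msg && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/data"}},
				},
				{
					Name:         "reader",
					Image:        "busybox",
					Command:      []string{"sh", "-c", "cat /data/msg; sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/data"}},
				},
			},
		},
	}

	// Print the manifest; in practice you would submit it with
	// clientset.CoreV1().Pods("default").Create(...) as in the Deployment example.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```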
Kubernetes emerged from Google's internal system called Borg, which had been running containers at massive scale since 2003. When Docker made containers accessible to everyone in 2013, the industry needed a way to orchestrate them at scale.
The problem Kubernetes solves:
Without orchestration, running containers in production requires manual work:

- Which server has capacity for this container?
- What happens when a server dies?
- How do containers find each other?
- How do you roll out updates without downtime?
- How do you scale from 3 to 300 instances?
Kubernetes automates all of this. You declare what you want, and it figures out how to make it happen.
Core concepts:
Custom Resource Definitions let you extend Kubernetes with your own object types. Combined with custom controllers (operators), you can teach Kubernetes how to manage databases, message queues, or any stateful application.
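A minimal CRD might look like the following sketch, built with the apiextensions Go types. The example.com group and the Database kind are invented purely for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Registers a new object type, Database, under the invented group example.com.
	// Once applied, `kubectl get databases` works like any built-in type;
	// a custom controller (operator) is what gives the new type its behavior.
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "databases.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "databases",
				Singular: "database",
				Kind:     "Database",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"engine":   {Type: "string"},
									"replicas": {Type: "integer"},
								},
							},
						},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```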
All cluster state lives in etcd, a distributed key-value store using Raft consensus. The API server is the only component that talks to etcd. This design provides consistency and enables the watch mechanism that powers controllers.
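That watch mechanism is exposed directly in the API. Below is a hedged sketch with client-go that streams pod change events, assuming the default kubeconfig location and namespace; this event stream is the primitive controllers build their reconcile loops on.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Open a watch on pods in the default namespace. The API server streams
	// ADDED/MODIFIED/DELETED events, backed by etcd's watch support.
	watcher, err := clientset.CoreV1().Pods("default").
		Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer watcher.Stop()

	for event := range watcher.ResultChan() {
		pod, ok := event.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		fmt.Printf("%s pod %s (phase %s)\n", event.Type, pod.Name, pod.Status.Phase)
	}
}
```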
Tradeoffs:
| Aspect | Advantage | Disadvantage |
|---|---|---|
| Declarative Configuration | Desired state is version-controlled, reproducible, and self-documenting | Learning curve to understand YAML schemas and the reconciliation model |
| Abstraction Layer | Portable across cloud providers; same manifests work on AWS, GCP, Azure, on-prem | Adds complexity; debugging requires understanding both Kubernetes and underlying infrastructure |
| Self-Healing | Automatic restart of failed containers, rescheduling on node failures | Can mask underlying issues; restart loops may hide bugs instead of surfacing them |
| Extensibility (CRDs) | Can extend Kubernetes to manage any resource type with operators | CRD proliferation can make clusters hard to understand and maintain |
| Feature-Rich Control Plane | Built-in scheduling, networking, storage, RBAC, and observability | Significant resource overhead; not suitable for small deployments or edge devices |
| Networking Model | Flat network with built-in service discovery and load balancing | Network policies can be complex; debugging network issues is challenging |
| etcd Dependency | Strong consistency guarantees for cluster state | Losing etcd means losing the cluster; requires careful backups and a multi-member HA setup |
| RBAC | Fine-grained access control for multi-tenant clusters | Complex to configure correctly; easy to be too permissive or too restrictive |