What Is Kubernetes and Why?
Part of Day One: Getting Started
This is the first article in Day One: Getting Started. Start here if you're brand new to Kubernetes.
You just found out your company uses Kubernetes. Or maybe you saw it on a job description. Or your manager said "we're deploying to K8s now" and you nodded along, hoping it would make sense eventually.
Let's make it make sense.
What You'll Learn
By the end of this article, you'll understand:
- What Kubernetes is - Container orchestration at scale
- Why it exists - The problem it solves (managing thousands of containers)
- Why your company uses it - Real-world benefits (reliability, scaling, zero-downtime deploys)
- What problems it doesn't solve - Setting realistic expectations
- Whether you need to learn it - Spoiler: yes, if you're deploying to it!
The Container Orchestration Challenge
```mermaid
flowchart TD
    Dev[Development<br/>docker-compose up<br/>✅ Easy]
    Prod[Production<br/>100 servers<br/>1000s of containers]
    Manual[Manual Management<br/>❌ Impossible]
    K8s[Kubernetes<br/>✅ Automated]

    Dev --> Prod
    Prod --> Manual
    Manual -.->|Solution| K8s

    style Dev fill:#2f855a,stroke:#cbd5e0,stroke-width:2px,color:#fff
    style Prod fill:#2d3748,stroke:#cbd5e0,stroke-width:2px,color:#fff
    style Manual fill:#c53030,stroke:#cbd5e0,stroke-width:2px,color:#fff
    style K8s fill:#2f855a,stroke:#cbd5e0,stroke-width:2px,color:#fff
```
The Problem: Too Many Containers, Not Enough Hands
Imagine you've containerized your application. You have:
- A frontend container (React app)
- A backend container (API server)
- A database container (PostgreSQL)
- Maybe a cache (Redis)
On your laptop: Docker Compose handles this beautifully. One docker-compose up and everything runs.
Don't have Docker Compose experience? That's okay!
Not everyone comes to Kubernetes from Docker Compose—and that's perfectly fine.
The key point: On a single computer, managing a few containers is easy. Your company probably already has a solution for this (Docker Compose, single-server Docker, or even just running processes directly).
The problem Kubernetes solves isn't running containers on one machine—it's managing hundreds or thousands of containers across dozens or hundreds of servers. That's where manual management becomes impossible.
You can learn Kubernetes without Docker Compose experience. The concepts transfer: containers need to run, they need to talk to each other, they need to restart when they crash. Kubernetes does this at massive scale.
In production: You have 50 frontend containers, 30 backend containers, 10 databases across 100 servers. Now you need to:
- Ensure containers are running (and restart them if they crash)
- Spread them across servers (load balancing)
- Connect them together (networking)
- Handle traffic spikes (scaling up/down)
- Deploy updates without downtime
- Monitor everything
Doing this manually is impossible. This is why Kubernetes exists.
What Kubernetes Actually Is
Kubernetes (K8s) is a container orchestration platform. It's software that manages containers for you.
Think of it like an operating system for a data center:
- Your laptop's OS manages processes, memory, and files
- Kubernetes manages containers, servers, and networking
The key idea: You tell Kubernetes what you want ("I want 3 copies of my API running"), and Kubernetes makes it happen. If something breaks, Kubernetes fixes it automatically.
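As a taste of what "telling Kubernetes what you want" looks like (you'll write manifests like this in later articles), here's a minimal, hypothetical Deployment manifest. The name `my-api` and the image tag are placeholders, not anything from your cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api              # placeholder name
spec:
  replicas: 3               # desired state: "I want 3 copies running"
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: my-api:1.0   # placeholder image; in practice, pulled from your registry
```

Kubernetes continuously compares this desired state to reality: if one copy dies, it starts a replacement to get back to 3.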
Wait, What's the Difference Between Container, Image, and Pod?
If you're new to containers, these terms can be confusing. Here's the relationship:
Container Image (the blueprint):
- A packaged file containing your application code and all its dependencies
- Like a `.zip` file or installer—it doesn't run by itself, it's just the package
- Stored in a registry (Docker Hub, AWS ECR, Google Artifact Registry, etc.)
- Example: `nginx:1.21` is an image—the nginx web server, version 1.21, packaged up
Container (the running instance):
- A running instance of an image
- Like opening an application from an installer—now it's actually executing
- Has its own filesystem, process space, and network
- Example: when you run `docker run nginx:1.21`, you start a container from the image
Pod (Kubernetes wrapper):
- Kubernetes doesn't run containers directly—it wraps them in Pods
- A Pod is the smallest unit Kubernetes manages
- Usually 1 container per Pod (but can have multiple containers that work together)
- Example: Your nginx container runs inside a Pod managed by Kubernetes
The flow:
```mermaid
flowchart LR
    Image["<b>Container Image</b><br/>nginx:1.21<br/>(stored in registry)"]
    Container["<b>Container</b><br/>Running nginx process<br/>(executing)"]
    Pod["<b>Pod</b><br/>Kubernetes wrapper<br/>(managed by K8s)"]

    Image -->|"docker run<br/>or kubectl"| Container
    Container -->|"wrapped inside"| Pod

    style Image fill:#2d3748,stroke:#cbd5e0,stroke-width:2px,color:#fff
    style Container fill:#4a5568,stroke:#cbd5e0,stroke-width:2px,color:#fff
    style Pod fill:#2f855a,stroke:#cbd5e0,stroke-width:2px,color:#fff
```
TL;DR: Image = packaged app, Container = running app, Pod = Kubernetes' management unit for containers
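To make the Pod idea concrete, here's roughly the smallest manifest that runs the nginx image inside a Pod. This is a sketch; the Pod name `web` is arbitrary:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # arbitrary Pod name
spec:
  containers:
  - name: nginx
    image: nginx:1.21  # the image; Kubernetes pulls it and runs a container inside this Pod
```

In practice you'll rarely create bare Pods like this—Deployments manage them for you—but every container you run on Kubernetes ends up wrapped this way.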
What Does 'Telling Kubernetes What You Want' Look Like?
You interact with Kubernetes using the `kubectl` command-line tool.
Here's what checking on your app looks like:
```shell
kubectl get pods
# NAME                       READY   STATUS    RESTARTS   AGE
# my-app-7c5ddbdf54-abc123   1/1     Running   0          2m
# my-app-7c5ddbdf54-def456   1/1     Running   0          2m
# my-app-7c5ddbdf54-ghi789   1/1     Running   0          2m
```
That's it. One command to see what's running. Three copies of your app, all healthy.
Don't worry if this looks foreign. In the next article, you'll run your first kubectl commands, and they'll become second nature quickly.
The Shipping Container Analogy
The name "Kubernetes" means "helmsman" (ship pilot) in Greek. The logo is a ship's wheel. This isn't random—containers (Docker) are literally named after shipping containers.
Before shipping containers (1950s):
- Every cargo was different (boxes, barrels, crates)
- Loading/unloading was manual and slow
- Each port handled things differently
After shipping containers:
- Everything goes in standard 20' or 40' boxes
- Cranes can move them automatically
- Any port can handle any container
- Global trade exploded
Kubernetes is the Port Authority:
- Docker containers are the standardized boxes
- Kubernetes is the crane system that moves them around
- It doesn't matter what's inside the container—K8s handles it the same way
Why Companies Adopt Kubernetes
1. Run Anywhere
Why it matters: No vendor lock-in, move workloads freely
Same Kubernetes runs on:
- AWS (EKS)
- Google Cloud (GKE)
- Azure (AKS)
- Your company's data center
- Your laptop (for development)
Benefit: Switch cloud providers without rewriting deployment infrastructure.
2. Self-Healing
Why it matters: Fewer 3 AM pages, automatic recovery
Kubernetes automatically handles failures:
- Container crashes → Kubernetes restarts it
- Server dies → Kubernetes moves containers to healthy servers
- Traffic spike → Kubernetes scales up automatically
Benefit: Operations team sleeps better, applications stay running.
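Self-healing hinges on Kubernetes knowing what "healthy" means for your app. One common way to tell it is a liveness probe in the container spec. A hedged sketch, assuming your app serves a `/healthz` endpoint on port 8080 (both are placeholders):

```yaml
# fragment of a Pod/Deployment container spec
containers:
- name: my-app
  image: my-app:1.0         # placeholder image
  livenessProbe:            # if this check fails, Kubernetes restarts the container
    httpGet:
      path: /healthz        # assumed health endpoint
      port: 8080            # assumed port
    initialDelaySeconds: 5  # give the app time to boot before checking
    periodSeconds: 10       # then check every 10 seconds
```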
3. Declarative Configuration
Why it matters: Infrastructure as Code, everything version-controlled
Traditional approach: Imperative scripts
```shell
# Run these commands in this exact order...
docker run container-a
sleep 5
docker run container-b
# Hope nothing breaks!
```

Kubernetes approach: Declarative YAML
```yaml
# Describe desired state; Kubernetes figures out how
spec:
  replicas: 3  # I want 3 running
  containers:
  - name: my-app
```

Benefit: Git tracks changes, rollbacks are easy, no procedural scripts.
4. Rolling Updates
Why it matters: Deploy anytime, no maintenance windows
Update your app from v1 to v2 without downtime:
- Kubernetes starts new v2 containers
- Waits for them to be healthy
- Gradually stops v1 containers
- If v2 fails its health checks, the rollout halts before all of v1 is gone (and `kubectl rollout undo` reverts it)
Benefit: Deploy during business hours, users never notice.
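The rollout behavior above is configurable per Deployment. A sketch of a conservative rolling-update strategy (the values are illustrative, not required defaults):

```yaml
# fragment of a Deployment spec
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1         # start at most 1 extra new Pod at a time
    maxUnavailable: 0   # never drop below the desired replica count
```

With `maxUnavailable: 0`, Kubernetes only removes an old Pod once a new one is up and passing health checks.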
What Kubernetes Isn't
Kubernetes is NOT:
- ❌ A replacement for containers or Docker (K8s runs your containers via a runtime such as containerd)
- ❌ A cloud provider (it runs ON clouds)
- ❌ Easy (it's powerful but complex)
- ❌ Required for small projects (might be overkill)
Kubernetes IS:
- ✅ An orchestrator for containers
- ✅ A platform for running distributed systems
- ✅ The industry standard for production deployments
- ✅ Worth learning if you're shipping software at scale
The Trade-Off
Complexity vs. Capability
Kubernetes adds complexity:
- New concepts to learn (Pods, Services, Deployments)
- YAML configuration everywhere
- More moving parts
Kubernetes adds capability:
- Automatic scaling and healing
- Zero-downtime deployments
- Runs anywhere
- Battle-tested at Google scale and across the cloud-native ecosystem
When it's worth it: Teams shipping multiple services, need high availability, or running at scale.
When it's not: Single-server apps, hobby projects, teams without ops experience.
Your Company Probably Uses Kubernetes If...
1. Microservices Architecture
You have 10+ independent services (not a monolith)
2. High Availability Requirements
Need 99.9%+ uptime, can't afford extended outages
3. Frequent Deployments
Deploy multiple times per day, need fast iteration
4. Major Cloud Provider
Running on AWS, GCP, or Azure (all offer managed K8s)
5. Platform/DevOps Team
Company has dedicated infrastructure team
If 2 or more apply: Kubernetes makes sense for your company.
What this means for you: You don't need to learn how to install Kubernetes (that's the platform team's job). You need to learn how to use Kubernetes to deploy your applications.
That's what Day One is about.
What You'll Actually Do with Kubernetes
Remember the scenarios from the overview? Here's how Kubernetes addresses them:
- Deploy your app → `kubectl apply` pushes your changes to the cluster
- Check logs → `kubectl logs` shows what's happening inside containers
- Update config → ConfigMaps and Secrets manage environment variables
- Roll back → `kubectl rollout undo` instantly reverts bad deployments
- Scale → `kubectl scale` adjusts how many copies are running
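For instance, the ConfigMap mentioned above is itself just another YAML object you apply to the cluster. A hypothetical one (the name and values are made up for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config                # placeholder name
data:
  LOG_LEVEL: "info"                  # made-up settings your app might read
  BACKEND_URL: "http://backend:8080" # as environment variables or mounted files
```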
We'll cover each of these in Day One and Level 1-2.
Reflection Questions
These aren't hands-on exercises (we'll do that in the next article), but take a moment to think through these questions:
Exercise 1: Identify Your Scenario
Which of these describes your company?
- Monolithic application on a single server
- Microservices architecture (10+ services)
- High availability requirements (99.9%+ uptime)
- Multiple deployments per day
- Running on major cloud provider (AWS, GCP, Azure)
How many apply? If you checked 2 or more, Kubernetes makes sense for your company.
Why This Matters
Understanding why your company adopted Kubernetes helps you appreciate the complexity trade-off. If you're running 50 microservices across 100 servers, the overhead of learning Kubernetes is worth it. If you have 1 app on 1 server, maybe not.
Exercise 2: Match the Problem to the Solution
We listed 6 orchestration challenges earlier. Can you match each problem to the Kubernetes feature that solves it?
Problems:
- Containers crash and need to restart automatically
- Load needs to be distributed across many servers
- Services need to find and talk to each other
- Traffic spikes require spinning up more instances quickly
- Updates need to happen without taking the app offline
- Need to monitor health across hundreds of containers
Kubernetes Features: Self-healing, Load balancing, Service discovery, Scaling, Rolling updates, Health checks
Answers
- Self-healing - Kubernetes restarts crashed containers automatically
- Load balancing - Services distribute traffic evenly across pods
- Service discovery - Kubernetes DNS lets services find each other by name
- Scaling - Deployments can increase/decrease replicas on demand
- Rolling updates - Gradual replacement of old containers with new ones
- Health checks - Probes continuously monitor container health
The pattern: Every kubectl command you learn (coming in the next articles) maps back to solving one of these problems. Kubernetes isn't abstract—it's solving real operational challenges your team faces daily.
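As one example of that mapping, load balancing and service discovery both come from a single Kubernetes object: the Service. A sketch (the name, label, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend        # other Pods can reach this Service at http://backend
spec:
  selector:
    app: backend       # traffic is load-balanced across Pods with this label
  ports:
  - port: 80           # port the Service exposes
    targetPort: 8080   # port the container listens on
```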
Exercise 3: What's Your Current Deploy Process?
Before Kubernetes (or right now, if you haven't deployed yet):
How does your team currently deploy applications?
- SSH into servers and run commands?
- CI/CD pipeline that deploys to VMs?
- Docker Compose on a single server?
- Already using Kubernetes (but you don't understand it yet)?
Write down 2-3 pain points with your current process.
Why This Exercise Matters
When you deploy your first application to Kubernetes (next article!), you'll compare:
Before: Manual SSH, forgotten steps, downtime during deploys, "works on my machine" problems
After: `kubectl apply -f deployment.yaml` and Kubernetes handles the rest
Understanding your current pain points helps you appreciate what Kubernetes solves.
Quick Recap
| Question | Answer |
|---|---|
| What is Kubernetes? | Container orchestration platform |
| Why does it exist? | Managing containers at scale is impossible manually |
| What problem does it solve? | Automated deployment, scaling, healing, and updates |
| Do I need to learn it? | If your company uses it, yes! |
Further Reading
Official Documentation
- What is Kubernetes? - Official overview
- Kubernetes Components - Architecture overview
Deep Dives
- The Illustrated Children's Guide to Kubernetes - Visual story explaining K8s concepts
- The Kubernetes Origin Story - How Google's Borg became Kubernetes
- Borg: The Predecessor to Kubernetes - Official Kubernetes blog on Borg history
Related Articles
- Day One: Getting Started - Complete learning path overview
What's Next?
You understand why Kubernetes exists. Now let's get you connected: Getting kubectl Access will show you how to connect to your company's cluster and verify you're ready to deploy.
Remember: Every Kubernetes expert started by asking "What even is this?" You're on your way.