The Problem With How Most Teams Handle Secrets
Most introductions to Kubernetes show you how to create a Secret object in three lines of YAML. They rarely mention that, by default, those secrets are just base64-encoded strings sitting in etcd: not encrypted, not rotated, not audited.
That's not a small oversight. It's a security hole that has been exploited in real incidents.
Base64 is encoding, not encryption. Anyone with access to etcd can decode your database password in seconds. And in clusters where RBAC isn't tightly managed, that's a much larger group of people than you might think.
Real secrets management starts at the architecture level, not at the kubectl create secret step.
What “Secrets Management in Kubernetes” Actually Means
Before reaching for tools, it's worth being specific about the problem space.
Kubernetes secrets management addresses three distinct concerns:
- Storage security: Are secrets encrypted at rest in etcd?
- Access control: Who or what can read a secret, and when?
- Lifecycle management: How are secrets rotated, versioned, and revoked?
Most teams solve one of these. Mature setups address all three. The difference usually shows up in where incidents occur.
Secrets Management in Kubernetes: External KMS, Vault, and Encryption – The Real Architecture Breakdown
Encryption at Rest with KMS Providers
Kubernetes supports envelope encryption for secrets stored in etcd. A data encryption key (DEK) encrypts each secret, and a key encryption key (KEK) encrypts the DEK. The KEK lives outside the cluster, in a Key Management Service such as AWS KMS, Google Cloud KMS, or Azure Key Vault.
Even if someone dumps your etcd, all they get are encrypted blobs. Without the KEK, those blobs are worthless.
Setting this up involves an EncryptionConfiguration file on the API server:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: myKMSPlugin
          endpoint: unix:///tmp/socketfile.sock
      - identity: {}
```

The trailing identity provider is a "last resort" and a footgun: if it ever becomes the active provider, Kubernetes stores secrets without encryption. I've seen teams set this up once, forget about it, and end up with a silent time bomb.
I saw exactly this in a staging environment: the KMS plugin lost connectivity and secrets were being written in plaintext. The team saw no warnings, no errors. It's worth auditing for explicitly.
HashiCorp Vault — More Than Just a Secret Store
Vault is the most widely used external secrets backend in the Kubernetes world, and for good reason. It supports dynamic secret generation, short-lived credentials, fine-grained policies, and full audit logging out of the box.
The integration pattern teams typically use is the Vault Agent Sidecar Injector. Vault injects an init container and a sidecar into your pods; these authenticate via Kubernetes service accounts and write secrets to a shared in-memory volume.
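As a rough sketch, injection is driven by pod annotations. The role name and secret path below are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    vault.hashicorp.com/agent-inject: "true"   # enable the injector for this pod
    vault.hashicorp.com/role: "myapp"          # Vault Kubernetes auth role (assumed)
    # Renders the secret to /vault/secrets/db-creds inside the pod
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/prod/db"
spec:
  containers:
    - name: myapp
      image: myapp:latest
```

The mutating webhook rewrites the pod at admission time, so the application just reads a file; it never talks to Vault directly.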
No secrets in environment variables. No secrets in Kubernetes Secret objects. Nothing is stored at rest in the cluster; the pod receives what it needs at runtime.
Where it excels:
- Database credentials that rotate hourly
- Dynamic certificates generated per service
- Cloud provider credentials scoped to narrowly drafted IAM policies
When things get tricky:
- Vault's HA setup is a serious operational investment.
- Cold starts are painful when Vault is sealed and agents can't authenticate.
- Debugging sidecar injection problems at 2 AM is not fun.
The sidecar is the commonly recommended approach, but having tried this across a few cluster configurations, I consistently get better results using Vault's Kubernetes auth together with the External Secrets Operator (ESO). It gives more granular control over secret syncing and makes drift detection easier.
External Secrets Operator – The Bridge That Makes Sense
ESO is an open source operator that reads from external secret stores (Vault, AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, and others) and syncs their contents into native Kubernetes Secret objects.
The process is as follows:
- You create a SecretStore or ClusterSecretStore that points at your external backend (see the sketch after this list).
- You define an ExternalSecret resource that maps specific keys in the external store to a Kubernetes Secret.
- ESO fetches, syncs and optionally rotates the secret every N minutes.
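Step one might look like the following minimal SecretStore for Vault; the server address, mount path, and role name are assumptions for illustration:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.example.com:8200"  # assumed Vault address
      path: "secret"                            # KV mount point
      version: "v2"                             # KV engine version
      auth:
        kubernetes:
          mountPath: "kubernetes"               # Vault auth mount
          role: "external-secrets"              # assumed Vault role
```

The ExternalSecret below then maps a key from that store into a Kubernetes Secret: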
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: db-secret
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: secret/data/prod/db
        property: password
```

What you get is a standard Kubernetes Secret, owned by ESO and refreshed automatically. When the underlying value changes in Vault, ESO detects the change and updates the Secret without any human involvement.
That matters more than it sounds. Manual rotation is why teams only rotate secrets in panic mode after an incident. Automating it removes the human error.
For teams serious about securing Kubernetes, ESO is fast becoming table stakes rather than a nice-to-have.
The Environment Variable Trap
Here's something many articles skim over: even teams that get Vault and ESO working often end up putting secrets in environment variables, and that's a problem.
Environment variables in a container are:
- Visible to every process in the container
- Dumped by some frameworks in crash reports and debug output
- Exposed through /proc/<pid>/environ if isolation between container and host is imperfect
The preferred alternative is to mount secrets as files on an in-memory tmpfs volume. Nothing touches disk, and the application reads from a filesystem path:
```yaml
# In the pod spec:
volumes:
  - name: secrets-vol
    emptyDir:
      medium: Memory        # tmpfs: RAM-backed, never written to disk

# In the container spec:
volumeMounts:
  - name: secrets-vol
    mountPath: /var/secrets
    readOnly: true
```

It's a small pod spec change with an outsized impact on security posture, particularly in environments where Kubernetes network policies don't consistently restrict traffic between pods.
My Take on Cloud-Native KMS Options
AWS Secrets Manager + ASCP
The AWS Secrets and Configuration Provider (ASCP) plugs AWS Secrets Manager into the Secrets Store CSI Driver. It mounts secrets directly as volumes through the CSI interface, authenticating with IRSA (IAM Roles for Service Accounts).
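A sketch of the corresponding SecretProviderClass, assuming a Secrets Manager entry named prod/db-password:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-db-credentials
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "prod/db-password"   # assumed Secrets Manager secret name
        objectType: "secretsmanager"
```

Pods mount it through a csi volume that references the class by name, and the pod's service account carries the IRSA role.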
If you're already on AWS, the integration is tight. The drawback is vendor lock-in: move to GCP or multi-cloud later and you'll have to rethink your secret-fetching layer.
GCP Secret Manager
Google's offering is very similar, but with a cleaner IAM model. The External Secrets Operator has good support for Workload Identity.
In my experience, GCP's audit logging for secret access is also more granular out of the box than AWS's: every access attempt appears in Cloud Audit Logs without any extra configuration.
Azure Key Vault
Azure's CSI implementation works well now, but it has had bumps along the road with pod identity. The transition from AAD Pod Identity to Azure AD Workload Identity improved things considerably, though existing clusters still face a migration effort.
What Most People Misunderstand About Secret Rotation
Rotation sounds simple: change the value, update the references, done. In practice, it's a distributed coordination problem.
When a secret rotates, any running pod still holding the old value starts to fail. The naive fix is to restart all pods after each rotation. The better approach is to build applications that can reload secrets without restarting, either by reading the value on every use or by refreshing it on a fixed interval.
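If you do end up relying on restarts, at least automate them. One common option is the open-source Stakater Reloader, which watches Secrets and rolls annotated workloads when they change. A sketch, assuming Reloader is installed and reusing the db-secret from earlier:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    reloader.stakater.com/auto: "true"  # roll this Deployment when its Secrets change
spec:
  replicas: 2
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          volumeMounts:
            - name: db-secret
              mountPath: /var/secrets
              readOnly: true
      volumes:
        - name: db-secret
          secret:
            secretName: db-secret   # the ESO-managed Secret
```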
Some frameworks support live reload out of the box; others need extra plumbing. Figure this out before building the rotation pipeline, not after.
Two patterns worth knowing (a sketch of the second follows this list):
- Blue/green secret rotation – Create the new secret alongside the old one, gradually move consumers over to the new version, then retire the old one. This works well for database passwords when both versions can be valid at the same time.
- Versioned secret references – Pin your config to a specific version of a secret. Rotation then becomes an explicit, auditable deployment change.
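With ESO, for example, the remoteRef supports pinning a version for backends that version their secrets; the version number here is an assumption:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials-pinned
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: db-secret
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: secret/data/prod/db
        property: password
        version: "7"   # pinned; bumping this is an explicit, reviewable change
```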
Audit Logging – The Part Everyone Skips Until They Need It
Log access to secrets, and not just "a pod read a secret": which service account, which pod, at what time, for which specific key.
With the right verbosity configured for Kubernetes API audit logs, you can see secret read events. Vault has a native audit backend that does exactly this. AWS and GCP expose it through their respective logging systems.
It's the teams that skip this step who end up scrambling after an incident, asking "how long did the attacker have access?"
A simple audit policy for Kubernetes:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    resources:
      - group: ""             # core API group
        resources: ["secrets"]
```

This records metadata only (who, when, which resource) and never the secret values themselves. It's a good starting point that keeps sensitive data out of your audit log.
Putting It Together – A Practical Stack That Works
Here is a realistic production configuration that balances security, operational complexity, and maintainability:
| Layer | Tool | Why |
|---|---|---|
| Secret Store | HashiCorp Vault (or AWS/GCP equivalent) | Dynamic secrets, fine-grained policies, audit logs |
| Sync Layer | External Secrets Operator | Decoupled from pod lifecycle, supports multiple backends |
| Encryption at Rest | KMS envelope encryption on etcd | Protects against etcd compromise |
| Access Control | Kubernetes RBAC + Vault policies | Defense in depth |
| Secret Delivery | Filesystem mounts (tmpfs) | Avoids environment variable exposure |
| Auditing | Kubernetes audit policy + Vault audit backend | Full access trail |
This isn't the only valid architecture. But it addresses all three concerns (storage, access, and lifecycle) without requiring you to build a custom system.
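As a concrete instance of the access-control layer, a namespace-scoped Role can restrict reads to a single named Secret (all names here are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-db-secret
  namespace: prod
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["db-secret"]   # only this one Secret
    verbs: ["get"]                 # read only
```

Bind it to the workload's service account with a RoleBinding, and avoid granting list or watch, which cannot be restricted by resourceNames.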
Honest Recommendation – Who Needs What
If you're a small team on a managed cluster (EKS, GKE, AKS): start with your cloud provider's native secrets service plus ESO. The operational overhead is minimal, and for most workloads it's fine.
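For example, on EKS with IRSA, a ClusterSecretStore for Secrets Manager can be this small; the region and service account names are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1                 # assumed region
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets      # SA annotated with an IRSA role (assumed)
            namespace: external-secrets
```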
If you handle sensitive data, compliance requirements, or multi-cloud setups: Vault is worth the operational investment. The audit logging and dynamic credential generation alone justify it.
If you're still committing base64-encoded secrets in plain YAML to a repository: stop. Fixing that is the single highest priority, regardless of anything else you do or don't do.
Secrets management in Kubernetes is not glamorous. It doesn't show up in demos. But it's an area where getting it wrong has consequences that are very visible, usually at the most inconvenient times.
I'm a technology writer with a passion for AI and digital marketing. I create engaging, useful content that bridges the gap between complex technology concepts and everyday readers, and I'm always researching new developments in innovation and technology. Let's connect and talk technology!



