Objective
The overall objective of this POC is to secure our application secrets and integrate them into our GitOps workflow. To accomplish this, I am going to use HashiCorp Vault to store our secure data and the External Secrets Operator to securely fetch those secrets from Vault and create Kubernetes Secrets inside the cluster.
For anyone who doesn't know, GitOps is a process where we use Git in our day-to-day system administration operations the same way we use it in the development process. The concept is quite popular and is gaining traction these days.
Prerequisites
- helm version 3; follow the official instructions for installation.
- Access to a working OpenShift cluster. If you don't have a cluster for this particular POC, you can use CodeReady Containers for local development and testing. I have also created a YouTube video that explains what CodeReady Containers is and how you can start using it.
- oc or kubectl CLI to interact with the OpenShift cluster. (A quick sanity check for these tools follows this list.)
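If you want to double-check the client tooling first, a minimal sanity check (assuming helm and oc are already on your PATH) looks like this:
# Print client versions; exact output depends on your installed versions.
helm version --short
oc version --client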
Vault
Vault Introduction
If you don't know what Vault is, I suggest reading this excellent official article. Directly from the article, "Vault is an identity-based secret and encryption management system." A secret can be anything we want to secure; for our use case, we want to secure a handful of secrets such as BMC secrets, Quay pull secrets, and GitHub secrets.
Vault Installation
We are using helm version 3 to install Vault.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
$ helm search repo hashicorp/vault
NAME               CHART VERSION    APP VERSION    DESCRIPTION
hashicorp/vault    0.19.0           1.9.2          Official HashiCorp Vault Chart
For an OpenShift install, we have to override some of the values. The override creates a Vault cluster using the Raft integrated storage backend and deploys Vault in HA mode with three replicas.
Copy the following content and save it in a file named override-values.yaml
global:
  openshift: true

ui:
  enabled: true

injector:
  image:
    repository: "registry.connect.redhat.com/hashicorp/vault-k8s"
    tag: "0.14.2-ubi"
  agentImage:
    repository: "registry.connect.redhat.com/hashicorp/vault"
    tag: "1.9.2-ubi"

server:
  dataStorage:
    enabled: true
    storageClass: ocs-storagecluster-cephfs
  route:
    enabled: true
    host: vault-server.apps.ztp-hub.ztp.home.lab
  image:
    repository: "registry.connect.redhat.com/hashicorp/vault"
    tag: "1.9.2-ubi"
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
Now install Vault as follows,
oc new-project vault
helm install vault-server hashicorp/vault -f override-values.yaml
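To confirm the chart deployed, list the pods in the vault namespace. The vault-server-* pods typically report 0/1 Ready until Vault is initialized and unsealed, which we do in the Vault Configuration section below:
# Expect vault-server-0/1/2 and (with the default values) a vault-agent-injector pod.
oc get pods -n vault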
Accessing Vault UI
This helm chart doesn't expose the Vault UI via a route by default. Let's create the following OCP route to get to the UI. Save the following text to the file route-vault-console.yaml and run oc apply -f route-vault-console.yaml to apply it.
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: vault-console
  namespace: vault
  labels:
    app.kubernetes.io/instance: vault-server
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-ui
    app.kubernetes.io/part-of: vault-server
    helm.sh/chart: vault-0.19.0
  annotations:
    openshift.io/host.generated: 'true'
spec:
  host: vault-console-vault.apps.ztp-hub.ztp.home.lab
  to:
    kind: Service
    name: vault-server-ui
    weight: 100
  port:
    targetPort: http
  wildcardPolicy: None
Now you should be able to access the UI from your browser; navigate to http://vault-console-vault.apps.ztp-hub.ztp.home.lab/
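As a quick check that the route works, you can hit Vault's health API through it (assuming curl and jq are available on your workstation). A freshly installed Vault will report that it is not yet initialized and still sealed:
# Query the health endpoint through the new route; expect "initialized": false
# and "sealed": true before the next section.
curl -s http://vault-console-vault.apps.ztp-hub.ztp.home.lab/v1/sys/health | jq .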
Vault Configuration
Next, initialize and unseal the vault-server-0 pod. By default, vault operator init produces five unseal keys with a threshold of three, so run the unseal command three times, supplying a different unseal key each time:
oc exec -ti vault-server-0 -- vault operator init
oc exec -ti vault-server-0 -- vault operator unseal
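Before joining the remaining pods, it is worth confirming that vault-server-0 is actually unsealed. A quick check with the standard vault status command:
# "Sealed" should now be false and "HA Mode" should report active.
oc exec -ti vault-server-0 -- vault status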
Finally, join the remaining pods to the Raft cluster and unseal them. The pods will need to communicate directly, so we'll configure the pods to use the internal service provided by the Helm chart:
oc exec -ti vault-server-1 -- vault operator raft join http://vault-server-0.vault-server-internal:8200
oc exec -ti vault-server-1 -- vault operator unseal
oc exec -ti vault-server-2 -- vault operator raft join http://vault-server-0.vault-server-internal:8200
oc exec -ti vault-server-2 -- vault operator unseal
To verify that the Raft cluster has successfully been initialized, run the following. First, log in using the root token on the vault-server-0 pod:
$ oc exec -ti vault-server-0 -- vault login
Token (will be hidden):
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key                  Value
---                  -----
token                s.tqN1UqGPJjtZKzILimuV7PfX
token_accessor       vusPbKI42SnRM4VHW2iLO5BK
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
Next, list all the raft peers:
oc exec -ti vault-server-0 -- vault operator raft list-peers
Node                                    Address                                      State       Voter
----                                    -------                                      -----       -----
0c1af425-73c3-c369-886c-ee9dd95fe443    vault-server-0.vault-server-internal:8201    leader      true
7e3e94e1-3e14-00ff-3b77-b503a4df348b    vault-server-1.vault-server-internal:8201    follower    true
dc496945-b067-89e2-919d-fef47319d9fc    vault-server-2.vault-server-internal:8201    follower    true
External Secret Operator (ESO)
External Secret Operator Introduction
If you don't know what the External Secrets Operator is, I suggest reading this other excellent official article. The operator integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, etc. It reads data from external APIs and automatically injects the values into a Kubernetes Secret.
External Secret Operator Installation
We use helm to deploy External Secret Operator, which runs as a deployment resource within your Kubernetes cluster.
Let's install ESO as follows,
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets \
external-secrets/external-secrets \
-n external-secrets \
--create-namespace
Validate,
oc get po -n external-secrets
NAME READY STATUS RESTARTS AGE
external-secrets-b48757b96-kkwcd 1/1 Running 0 3d20h
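The chart also installs the ESO CustomResourceDefinitions. A quick way to confirm they are registered before we create any custom resources:
# The SecretStore, ClusterSecretStore and ExternalSecret CRDs should all be listed.
oc get crd | grep external-secrets.io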
External Secret Operator Configuration
With the help of CustomResourceDefinitions, we can supply configuration to this operator so that it can access Vault and create Kubernetes secrets. For our purpose, we are going to create the following two custom resources:
- ClusterSecretStore: a cluster-scoped SecretStore that is global to the cluster and can be referenced by ExternalSecrets from all namespaces.
Create a file cluster-secret-store.yaml with the following content.
apiVersion: external-secrets.io/v1alpha1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "http://vault-server-internal.vault:8200"
      path: "secret"
      version: "v1"
      auth:
        tokenSecretRef:
          name: "vault-token"
          key: "token"
          namespace: vault
In the above configuration file:
- spec.provider.vault.server points to your Vault server address. In this POC, both Vault and the External Secrets Operator run in the same cluster, so we use the Kubernetes service object to access Vault. If the Vault server is external, use the appropriate DNS name.
- spec.provider.vault.path should be equal to secret. Vault implements many different secret engines; as of now, this operator only supports the KV Secrets Engine.
- spec.provider.vault.version, as per the documentation, can be v1 or v2, but I saw some issues with the v2 version, including the following error message, so use v1 for now.
{"level":"error","ts":1646765745.1578374,"logger":"controllers.ExternalSecret","msg":"could not update Secret","ExternalSecret":"sno01/pullsecret-cluster-sno01","SecretStore":"/vault-backend","error":"could not get secret data from provider: key \"secret/openshiftpullsecret\" from ExternalSecret \"pullsecret-cluster-sno01\": cannot read secret data from Vault: Error making API request.\n\nURL: GET http://vault-server-internal.vault:8200/v1/secret/data/openshiftpullsecret\nCode: 404. Errors:\n\n","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}
- spec.provider.vault.auth.tokenSecretRef references a Kubernetes secret that holds the Vault token. For simplicity, let's use the Vault root token here, which we generated as part of the Vault initialization above.
First, base64-encode the Vault root token.
echo -n "s.tqN1UqGPJjtZKzILimuV7PfX" |base64
cy50cU4xVXFHUEpqdFpLeklMaW11VjdQZlg=
Now create a file vault-token-secret.yaml with the following content; ensure token: cy50cU4xVXFHUEpqdFpLeklMaW11VjdQZlg= matches your Vault cluster's root token (base64-encoded).
---
apiVersion: v1
kind: Secret
metadata:
  name: vault-token
  namespace: vault
data:
  token: cy50cU4xVXFHUEpqdFpLeklMaW11VjdQZlg=
Now apply both files as follows,
oc apply -f cluster-secret-store.yaml
oc apply -f vault-token-secret.yaml
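After both objects are applied, the ClusterSecretStore should eventually report a ready/valid status once ESO can reach Vault with the supplied token (the exact status columns vary between ESO versions):
# Check that the operator has validated the store.
oc get clustersecretstore vault-backend
oc describe clustersecretstore vault-backend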
- ExternalSecret: This object describes what data should be fetched from the secret store (in this case, Vault) and how the data should be transformed and saved as a Kubernetes Secret.
Create a file external-pullsecret-cluster.yaml with the following content.
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: pullsecret-cluster-sno01
  namespace: sno01
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: pullsecret-cluster-sno01
  data:
    - secretKey: .dockerconfigjson
      remoteRef:
        key: secret/openshiftpullsecret
        property: dockerconfigjson
In the above configuration file:
- spec.refreshInterval is the amount of time before the values are read again from the SecretStore provider. In this case, we are syncing the secret every 15s. The valid time units are "ns", "us" (or "µs"), "ms", "s", "m", and "h".
- spec.secretStoreRef defines which SecretStore to use when fetching the secret data. You can have multiple SecretStores pointing to different secret providers; in this case, we point to Vault using the ClusterSecretStore created in step 1.
- spec.target is the name of the Kubernetes secret you want to create.
- spec.data defines the mapping between the Kubernetes Secret keys and the provider data. It is a list, so you can fetch data from multiple endpoints. In this case, we fetch the property dockerconfigjson from the Vault path secret/openshiftpullsecret and create a Kubernetes secret with the key name .dockerconfigjson holding that Vault data.
Let's visualize this with an actual use case. We haven't created any secrets in Vault so far, so let's add some data to Vault as follows.
First, enable the kv secrets engine at the secret/ path; this secrets engine is not enabled by default.
oc exec -it vault-server-0 -- vault login
oc exec -it vault-server-0 -- vault secrets enable -path=secret/ kv
Now add data,
oc exec -it vault-server-0 -- vault kv put secret/openshiftpullsecret dockerconfigjson='{"auths":{"cloud.openshift.com":{"auth":"3BlbnNoaWZ0LXJl==","email":"[email protected]"},"quay.io":{"auth":"ZZMVhJRUJUR1I3WUwxN05VMQ==","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"3BlbnNoaWZ0LXJl==","email":"[email protected]"},"registry.redhat.io":{"auth":"==","email":"[email protected]"}}}'
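You can read the value back to confirm it was stored under the expected path:
# Read the secret back from the kv engine.
oc exec -it vault-server-0 -- vault kv get secret/openshiftpullsecret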
Now let's apply the ExternalSecret object created above. (If the sno01 namespace doesn't already exist, create it first, e.g. with oc new-project sno01.)
oc apply -f external-pullsecret-cluster.yaml
As soon as you apply this object, the External Secrets Operator will fetch the data from Vault and create a Kubernetes Secret in your sno01 namespace.
Let's Validate
As soon as the ExternalSecret manifest is created, the External Secrets controller creates a Kubernetes Secret on the Cluster that contains the secret saved in Vault.
So we should have the following objects in the cluster:
- ExternalSecret
oc get externalsecret -n sno01 pullsecret-cluster-sno01
NAME STORE REFRESH INTERVAL STATUS
pullsecret-cluster-sno01 vault-backend 15s SecretSynced
- Kubernetes secret object created by the External Secrets Operator.
oc get secrets -n sno01 pullsecret-cluster-sno01
NAME TYPE DATA AGE
pullsecret-cluster-sno01 Opaque 1 38s
As you can see, a new secret is present with the following content, where the base64 data matches the secret content stored in Vault.
oc get secrets -n sno01 pullsecret-cluster-sno01 -o yaml
apiVersion: v1
data:
.dockerconfigjson: eyJhdXRocyI6eyJjbG91ZC5vcGVuc2hpZnQuY29tIjp7ImF1dGgiOiIzQmxibk5vYVdaMExYSmw9PSIsImVtYWlsIjoiZXhhbXBsZUByZWRoYXQuY29tIn0sInF1YXkuaW8iOnsiYXV0aCI6IlpaTVZoSlJVSlVSMUkzV1V3eE4wNVZNUT09IiwiZW1haWwiOiJleGFtcGxlQHJlZGhhdC5jb20ifSwicmVnaXN0cnkuY29ubmVjdC5yZWRoYXQuY29tIjp7ImF1dGgiOiIzQmxibk5vYVdaMExYSmw9PSIsImVtYWlsIjoiZXhhbXBsZUByZWRoYXQuY29tIn0sInJlZ2lzdHJ5LnJlZGhhdC5pbyI6eyJhdXRoIjoiPT0iLCJlbWFpbCI6ImV4YW1wbGVAcmVkaGF0LmNvbSJ9fX0=
...
You may compare the decoded secret data to the secret you created earlier; both should match.
echo -n "eyJhdXRocyI6eyJjbG91ZC5vcGVuc2hpZnQuY29tIjp7ImF1dGgiOiIzQmxibk5vYVdaMExYSmw9PSIsImVtYWlsIjoiZXhhbXBsZUByZWRoYXQuY29tIn0sInF1YXkuaW8iOnsiYXV0aCI6IlpaTVZoSlJVSlVSMUkzV1V3eE4wNVZNUT09IiwiZW1haWwiOiJleGFtcGxlQHJlZGhhdC5jb20ifSwicmVnaXN0cnkuY29ubmVjdC5yZWRoYXQuY29tIjp7ImF1dGgiOiIzQmxibk5vYVdaMExYSmw9PSIsImVtYWlsIjoiZXhhbXBsZUByZWRoYXQuY29tIn0sInJlZ2lzdHJ5LnJlZGhhdC5pbyI6eyJhdXRoIjoiPT0iLCJlbWFpbCI6ImV4YW1wbGVAcmVkaGF0LmNvbSJ9fX0=" |base64 --decode
{"auths":{"cloud.openshift.com":{"auth":"3BlbnNoaWZ0LXJl==","email":"[email protected]"},"quay.io":{"auth":"ZZMVhJRUJUR1I3WUwxN05VMQ==","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"3BlbnNoaWZ0LXJl==","email":"[email protected]"},"registry.redhat.io":{"auth":"==","email":"[email protected]"}}}
That's all for now. In this blog, we discussed how to use Vault and the External Secrets Operator to manage your secure data and how to securely roll out those secrets in a Kubernetes cluster.