
Secrets in Kubernetes

Secrets in Kubernetes hold sensitive information such as SSH keys, tokens, and credentials. In general, such objects should be stored in encrypted form rather than plaintext, to reduce the risk of exposing this information to unauthorised parties.

A Secret is not encrypted by default, only base64-encoded. To encrypt Secrets at rest, you must create an EncryptionConfiguration with a key and a proper provider identity.

All Secret data and configuration are stored in etcd, which is accessible via the API server. On nodes, Secret data is stored on tmpfs volumes. An individual Secret is limited to 1MB in size; larger sizes are discouraged because they may exhaust apiserver and kubelet memory.

To use a Secret, a Pod needs to reference it. A Secret can be used with a Pod in three ways: as files in a volume mounted on one or more of its containers, as container environment variables, or by the kubelet when pulling images for the Pod.

There are two steps involved in setting up a Secret in a Pod definition YAML file:

1) Creating the Secret
2) Injecting the Secret into the Pod

Secrets can be created in two ways:
the imperative method and the declarative method.

Imperative method: In this method we create a Secret without a Secret definition file, using only the kubectl command.

Step 1: Creating secret

> kubectl create secret generic
i.e.:
> kubectl create secret generic (secret-name) --from-literal=(key)=(value)
> kubectl create secret generic app-secret --from-literal=DB_Host=mysql
  • generic: Create a Secret from a local file, directory, or literal value.
  • docker-registry: Create a dockercfg Secret for use with a Docker registry. Used to authenticate against Docker registries.
  • tls: Create a TLS Secret from the given public/private key pair.
  • --from-file or --from-env-file: A path to a file, or to a directory containing one or more configuration files.
  • --from-literal: A key-value pair specified directly on the command line.

We can specify --from-literal multiple times in a single command to add several key-value pairs, like:
> kubectl create secret generic app-secret --from-literal=DB_USER=root --from-literal=Password=password

Specifying many key-value pairs on the command line can get complicated and makes the command hard to read. So we have another way, where we can specify a file name instead of writing multiple key-value pairs, using --from-file=(path-to-file)
i.e.:
> kubectl create secret generic app-secret --from-file=new_secret.properties
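As a hedged sketch (the file name and keys are the ones used above), the properties file could look like the one below. One subtlety worth knowing: --from-file stores the whole file under a single key named after the file, while --from-env-file turns each KEY=VALUE line into a separate key of the Secret.

```shell
# Hypothetical example using the file name from the section above.
# Each KEY=VALUE line becomes one key of the Secret when loaded with
# --from-env-file (with --from-file the whole file becomes one value).
cat > new_secret.properties <<'EOF'
DB_USER=root
DB_Password=password
EOF
cat new_secret.properties

# On a real cluster you would then run:
# kubectl create secret generic app-secret --from-env-file=new_secret.properties
```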

Declarative method: Secrets can be created using a definition file.
   > kubectl create -f xyz.yaml
Layout of the xyz.yaml Secret definition file:
apiVersion: v1
kind: Secret
metadata:
  name: new-secret
data:
  DB_Host: postgres
  DB_USER: root
  DB_Password: password

Step 2: Inject secret into pod

In Step 1 we created a Secret using the imperative and declarative methods. Now it is time to inject that Secret into the Pod definition file.

In the pod-definition.yaml file, add an "envFrom" entry, which is a list, under the container's spec:

envFrom: A list of sources to populate environment variables from.
secretRef: A reference, by name, to the Secret we are pointing at. All keys of that Secret become environment variables.

  envFrom:
    - secretRef:
        name: new-secret

To inject only a single key of the Secret, use env with valueFrom.secretKeyRef instead, where key selects one entry of the Secret.

Run the below command to create the Pod using pod-definition.yaml:
> kubectl create -f pod-definition.yaml
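Putting the pieces together, a minimal pod-definition.yaml using envFrom could look like the sketch below (the Pod name and nginx image are assumptions for illustration); writing it out with a heredoc lets us inspect the manifest before applying it.

```shell
# Hypothetical pod-definition.yaml referencing the Secret created above;
# the Pod name "app-pod" and the nginx image are assumptions.
cat > pod-definition.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - secretRef:
        name: new-secret
EOF
cat pod-definition.yaml

# On a real cluster:
# kubectl create -f pod-definition.yaml
```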


  • Encode a Secret

Now, there is an issue here. The password we have specified is readable in plain text, which is not safe and is prone to exposure. To take care of it, we need to base64-encode the values using the echo command. (Note that base64 is an encoding, not a hash or encryption; anyone can decode it.)

From any Linux server, use the below echo command to base64-encode a value:
> echo -n 'root' | base64
  cm9vdA==
> echo -n 'password' | base64
  cGFzc3dvcmQ=
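One pitfall worth demonstrating: the -n flag matters, because without it echo appends a newline, and the encoded value silently changes.

```shell
# Why -n matters: without it, echo appends a trailing newline and the
# encoded value changes, which would corrupt the stored secret.
echo -n 'root' | base64   # cm9vdA==
echo 'root' | base64      # cm9vdAo=  (newline included)
```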


So the data section would be as mentioned below, with the encoded values in the definition file (note that every value under data, including the host, must be base64-encoded):

data:
  DB_Host: cG9zdGdyZXM=
  DB_USER: cm9vdA==
  DB_Password: cGFzc3dvcmQ=

  • Decode a Secret

We can also decode a value that we encoded earlier, from any Linux server, using "--decode":
> echo -n 'cm9vdA==' | base64 --decode
  root
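Decoding is the exact inverse of encoding, so a quick round-trip check confirms the stored values are correct:

```shell
# Round-trip check: decode the encoded values back to the originals.
echo -n 'cm9vdA==' | base64 --decode; echo         # root
echo -n 'cGFzc3dvcmQ=' | base64 --decode; echo     # password
```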
  • Mounting a Secret as a Volume
Secrets can be mounted as files into a Pod using a volume definition. The mount path will contain one file per key of the Secret created with the kubectl create secret step earlier.

Example of mounting a Secret as a volume:
spec:
  containers:
  - image: nginx
    command:
    - sleep
    - "4000"
    volumeMounts:
    - mountPath: /nginxpassword
      name: vpostgres
  volumes:
  - name: vpostgres
    secret:
      secretName: dbase
A Secret is only sent to a node if a Pod on that node requires it. The kubelet stores the Secret on a tmpfs so that it is not written to disk storage. Once the Pod that depends on the Secret is deleted, the kubelet deletes its local copy of the Secret data as well.
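To make the volume behaviour concrete, the layout the kubelet produces can be simulated locally (the directory below is a stand-in for the /nginxpassword mount path inside the Pod): each key of the Secret becomes a file whose content is the decoded value.

```shell
# Local stand-in (assumed paths) for what a mounted Secret looks like:
# one file per key, each containing the decoded value.
mkdir -p mnt/nginxpassword
echo -n 'password' > mnt/nginxpassword/DB_Password
ls mnt/nginxpassword
cat mnt/nginxpassword/DB_Password; echo

# Inside a real Pod you would check with:
# kubectl exec <pod-name> -- ls /nginxpassword
```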

  • Commands to keep in mind:

To list Secrets:
> kubectl get secrets

To view the details of a Secret with its attributes:
> kubectl describe secret new-secret

To view the full YAML of a Secret, including its (encoded) data:
> kubectl get secret new-secret -o yaml

To edit a Secret:
> kubectl edit secret new-secret


Having said that, there are other, better ways of handling sensitive data like passwords in Kubernetes, such as tools like Helm Secrets or HashiCorp Vault.


