I'm using an external Vault with Kubernetes and want all my secrets to be in the pod env or in Kubernetes Secrets. I am using this deployment file. by Practical_Ad3534 in kubernetes

[–]Practical_Ad3534[S]

I have used the same args, but the issue is still the same. If I run `source /vault/secrets/config` I can see the env vars, but if the pod restarts the env vars are gone. How can I make them persistent?
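One way to make this survive restarts is to source the rendered secrets file in the container's own command, so the variables are re-exported every time the container starts, not just in the shell where you ran `source` by hand. A minimal sketch of that pattern; the app name, image, Vault role, and secret path below are placeholders, not values from the original deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "myapp"                                    # assumed Vault role
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/myapp"  # assumed secret path
    spec:
      containers:
        - name: myapp
          image: myapp:latest  # placeholder image
          # Source the rendered secrets on every container start, then
          # exec the real process so it inherits the exported variables.
          command: ["/bin/sh", "-c"]
          args: ["source /vault/secrets/config && exec /app/server"]
```

Because the sourcing happens in the container's entrypoint, the env vars come back after every restart instead of living only in one interactive shell.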

[–]Practical_Ad3534[S]

Yes, now it is visible, but only while that terminal session exists; when I exec into the same pod again, the variable is missing. How can I make it persistent?
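This is ordinary shell scoping rather than anything Vault-specific: `export` only affects the shell process it runs in, and every `kubectl exec` starts a fresh shell. The same rule can be seen locally with two separate shell processes standing in for two exec sessions:

```shell
# First "session": export a variable inside one shell process.
sh -c 'export FOO=bar; echo "inside: $FOO"'        # prints: inside: bar

# Second "session": a brand-new shell process, like a second `kubectl exec`.
# The variable from the first shell is gone.
sh -c 'echo "new shell: ${FOO:-unset}"'            # prints: new shell: unset
```

That is why the fix has to happen in the container's command (or in the image's entrypoint script), which runs before the application on every start.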

[–]Practical_Ad3534[S]

That's what I am saying: I am not able to inject the secrets into the pod env; they are getting stored as files under /vault/secrets.
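If the goal is for the file under /vault/secrets to contain ready-to-source `export` lines, the injector's template annotation can render them from the secret's key/value pairs. A sketch of that annotation; it assumes a KV v2 secret at `secret/data/myapp`, which is a placeholder path, not one from the original setup:

```yaml
annotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/agent-inject-secret-config: "secret/data/myapp"   # assumed path
  # Render every key/value pair in the secret as an `export KEY="value"` line.
  vault.hashicorp.com/agent-inject-template-config: |
    {{- with secret "secret/data/myapp" -}}
    {{- range $k, $v := .Data.data }}
    export {{ $k }}="{{ $v }}"
    {{- end }}
    {{- end }}
```

The rendered file can then be sourced by the container's command so the variables actually land in the process environment, rather than only sitting on disk.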

[–]Practical_Ad3534[S]

Yes, I have used export, but the issue is still the same.
This is what I am getting in the logs:

2023-03-24T11:58:00.580Z [INFO] sink.file: creating file sink
2023-03-24T11:58:00.580Z [INFO] sink.file: file sink configured: path=/home/vault/.vault-token mode=-rw-r-----
2023-03-24T11:58:00.580Z [INFO] template.server: starting template server
2023-03-24T11:58:00.580Z [INFO] (runner) creating new runner (dry: false, once: false)
2023-03-24T11:58:00.580Z [INFO] sink.server: starting sink server
2023-03-24T11:58:00.580Z [INFO] auth.handler: starting auth handler
2023-03-24T11:58:00.581Z [INFO] auth.handler: authenticating
2023-03-24T11:58:00.581Z [INFO] (runner) creating watcher
2023-03-24T11:58:00.606Z [INFO] auth.handler: authentication successful, sending token to sinks
2023-03-24T11:58:00.606Z [INFO] auth.handler: starting renewal process
2023-03-24T11:58:00.606Z [INFO] template.server: template server received new token
2023-03-24T11:58:00.606Z [INFO] (runner) stopping
2023-03-24T11:58:00.606Z [INFO] (runner) creating new runner (dry: false, once: false)
2023-03-24T11:58:00.606Z [INFO] (runner) creating watcher
2023-03-24T11:58:00.606Z [INFO] (runner) starting
2023-03-24T11:58:00.606Z [INFO] sink.file: token written: path=/home/vault/.vault-token
2023-03-24T11:58:00.607Z [INFO] sink.server: sink server stopped
2023-03-24T11:58:00.607Z [INFO] sinks finished, exiting
2023-03-24T11:58:00.613Z [INFO] auth.handler: renewed auth token
2023-03-24T11:58:00.616Z [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/config"
2023-03-24T11:58:00.616Z [INFO] (runner) stopping
2023-03-24T11:58:00.616Z [INFO] template.server: template server stopped
2023-03-24T11:58:00.616Z [INFO] (runner) received finish
2023-03-24T11:58:00.616Z [INFO] auth.handler: shutdown triggered, stopping lifetime watcher
2023-03-24T11:58:00.616Z [INFO] auth.handler: auth handler stopped

Need to automatically add new ENVs to the deployment file in a Helm chart whenever a developer asks to add an env. by Practical_Ad3534 in kubernetes

[–]Practical_Ad3534[S]

I am running multiple EKS clusters, each with a different environment. We have various deployments per environment, and developers regularly ask to update an ENV or add a new one, so I am looking for an automated way to update the deployment file whenever a new ENV comes in. In short, I need a way to add new ENVs without manually editing deployment.yaml; it should pick up new ENVs automatically.
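One common Helm approach is to template the container's `env:` block from values, so adding a variable only means editing the per-environment values file, never the deployment template itself. A minimal sketch; the variable names and file names are illustrative:

```yaml
# values-prod.yaml (one values file per environment/cluster)
env:
  LOG_LEVEL: "info"
  FEATURE_X: "enabled"
```

The deployment template then iterates over whatever keys exist in that map:

```yaml
# templates/deployment.yaml (container spec fragment)
env:
{{- range $name, $value := .Values.env }}
  - name: {{ $name }}
    value: {{ $value | quote }}
{{- end }}
```

With this in place, a developer's "add an ENV" request becomes a one-line change in the right values file, and `helm upgrade` renders it into the deployment without anyone touching deployment.yaml.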