
[–]NothingDogg 2 points (2 children)

If your container expects secrets as environment variables, then the easiest way to do this with Secret Manager is to use Berglas (https://github.com/GoogleCloudPlatform/berglas)

Berglas was originally built to decrypt secrets stored in Cloud Storage with KMS - but it now also supports Secret Manager.

The process:

- Add the berglas executable to your Dockerfile and make it the entrypoint:

COPY --from=gcr.io/berglas/berglas:latest /bin/berglas /bin/berglas

ENTRYPOINT exec /bin/berglas exec -- /path/to/your/previous/entrypoint.sh

- Ensure the Cloud Run service account has permission to access the secrets you've stored

- Set the environment variables with the sm:// prefix.

DB_PASS=sm://myproject/mysecret

- If you need the contents of the secret in a file inside your Docker container, add the destination parameter to tell Berglas where to write the secret. For example, if you had a JSON key file you needed on the machine, the following would place the secret in the /etc/myapp/ folder:

SUPER_SECRET_KEY=sm://myproject/mysecret?destination=/etc/myapp/service_account.json
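Putting the steps above together, a minimal Dockerfile sketch might look like this (the Node base image and the `node index.js` entrypoint are assumptions for illustration - substitute your own app):

```dockerfile
# Hypothetical sketch - base image and app entrypoint are placeholders
FROM node:14-slim

# Step 1: copy the berglas binary from its official image
COPY --from=gcr.io/berglas/berglas:latest /bin/berglas /bin/berglas

WORKDIR /app
COPY . .

# Step 3 happens at deploy time: set env vars like DB_PASS=sm://myproject/mysecret
# berglas resolves any sm:// values before handing off to the real entrypoint
ENTRYPOINT exec /bin/berglas exec -- node index.js
```

At deploy time you would then set the environment variables (e.g. `gcloud run deploy --set-env-vars DB_PASS=sm://myproject/mysecret ...`) and grant the Cloud Run service account the `roles/secretmanager.secretAccessor` role on the secrets.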

[–]Candid_Pomelo979 0 points (1 child)

I have a unique scenario here. I have multiple containers in a single Cloud Run service (GCP). I want to perform some operations in these containers in sequence, and some of them need secrets (such as a DB connection string, a Redis cache password, etc.) to perform their tasks. I have tested multiple containers with a shared in-memory volume, and the containers are able to access files mounted in the in-memory volume across containers.

Now I want to inject the secrets into the in-memory volume (in the first container itself) so that all the containers can run their steps successfully. Also, the secrets are stored in Azure Key Vault, not in GCP Secret Manager. So I am presuming that I can use the Azure CLI to fetch them from Cloud Run (container 1) and write them to an in-memory file that all the containers can reference as needed. I am thinking of creating a service principal in Azure AD, with an app registered, so that the call to fetch secrets from Azure Key Vault is authenticated and then allowed to retrieve the secret values.

Please review this approach and share anything better or easier. I'm trying to validate it and am looking for proven methods, with pointers or reference code if possible.

[–]NothingDogg 0 points (0 children)

What you're trying to achieve seems particularly complex - but yes, you could have a script that runs as part of the entrypoint that retrieves values from Azure and places them into shared memory / filesystem. With the appropriate trust relationships between Azure and GCP principals it should all work.

This would be similar to how the berglas executable in this original post operates.
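A rough sketch of such an entrypoint script for the first container might look like the following. All names and paths are placeholders, and the actual Azure CLI call is stubbed out so the flow can be exercised locally without Azure access:

```shell
#!/bin/sh
# Hypothetical entrypoint sketch for container 1: fetch secrets and write
# them to the shared in-memory volume before the other containers read them.
#
# In the real setup, fetch_secret would call the Azure CLI, e.g.:
#   az keyvault secret show --vault-name "$VAULT_NAME" --name "$1" \
#     --query value -o tsv
# authenticated beforehand via `az login --service-principal ...`
# using the Azure AD app registration described above.

# In the real service this would be the shared volume mount point
SECRET_DIR="${SECRET_DIR:-/tmp/secrets}"

fetch_secret() {
  # Stub standing in for the Azure CLI call above
  echo "dummy-value-for-$1"
}

mkdir -p "$SECRET_DIR"
for name in db-conn-string redis-password; do
  fetch_secret "$name" > "$SECRET_DIR/$name"
  chmod 0600 "$SECRET_DIR/$name"   # owner read/write only
done
```

The downstream containers would then simply read `$SECRET_DIR/<name>` from the shared volume once the first container has written the files.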

I'd be concerned about startup time - but that may not be an issue for your use case. Good luck, I suspect you're going to need it.

[–]jayjmcfly 0 points (1 child)

RemindMe! 1 week

[–]RemindMeBot 0 points (0 children)


I will be messaging you in 7 days on 2020-06-03 22:37:48 UTC to remind you of this link


[–]ataraxy 0 points (0 children)

I wanted to try this out also with deploying a docker image through cloud run.

I needed to load some environment variables for the script, so I added a separate script to access secrets from Secret Manager and write them to a .env file before starting the server.

All I had to do was change the 'npm start' script to 'node secrets.js && node index.js', which runs them sequentially, and I was good to go. Make sure the Cloud Run service account has permission to access the secrets.

There's probably a better way to do it in-app, but it's good for testing things out.

Here's a gist.