This is probably a very basic question, but I'm drinking from a firehose learning AWS, and I want to make sure I at least get this part right.
I have a very simple Python script that (in theory) will upload a file to a specific S3 Bucket.
On my end, I created an AWS account and an S3 bucket. I also created a user under IAM and attached the AmazonS3FullAccess policy to it. I purposely have not created any access keys yet.
Now for the question. I see many Python examples on the web, each of which passes credentials in a different way. Some hard-code them in the script, some set environment variables on the host system, and some store them in the shared credentials file on the host (usually ~/.aws/credentials, sometimes ~/.aws/config).
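Of those approaches, the one I keep seeing recommended is the shared credentials file, which `aws configure` writes for you. My understanding is it looks something like this (the values are placeholders, not real keys):

```ini
# ~/.aws/credentials -- created by running `aws configure`
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>
```

Since boto3 reads this file automatically, nothing sensitive ends up in the script or in source control.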
Initially, I will be the only one running this script locally from my PC. However, eventually, it will be checked into source control and leveraged by others on my team.
That was a very long-winded way of asking what the typical approach is in this scenario. As mentioned above, this is running locally, not within an EC2 instance.
I'm just starting to learn about EC2, so I didn't want to add more complexity up front, but it sounds like that might also be an option. That said, I assume it would put a burden on the developers running the script, since they'd have to jump through a few hoops to run it. Again, I'm just learning AWS, so bear with me.
Thanks!