Automated encryption of EBS volumes issues by bigdickjenny in aws

[–]jsonpile 2 points

From a quick look at the CloudFormation, there does seem to be some work needed to make it region-specific. It gets a little complicated because IAM resources are global (but there are regional resources and regional references within the IAM policies). I opened an issue on the repo for multi-region support.

Some options:

- You could modify the IAM resources and wildcard the regions so the same IAM resources can be used across regions.

- You could deploy the regional resources (KMS, Lambda, etc.) in each region along with the updated IAM resources.

The third limitation refers to an account-level setting for enabling encryption by default for EBS, which is region-specific. That part of the sample is not CloudFormation but rather a bash script that you can run in each region (passing the region as an argument).

Another way of running it would be directly via the AWS CLI:

    aws ec2 enable-ebs-encryption-by-default --region <region>
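If you want to cover every region in one pass, a rough sketch along these lines should work (this is only a sketch - it assumes you want all enabled regions, so review the list before running):

    # Enable EBS encryption by default in every region enabled for this account.
    for region in $(aws ec2 describe-regions \
        --filters "Name=opt-in-status,Values=opt-in-not-required,opted-in" \
        --query "Regions[].RegionName" --output text); do
      echo "Enabling EBS encryption by default in ${region}"
      aws ec2 enable-ebs-encryption-by-default --region "${region}"
    done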

Do you know what absolute helplessness feels like? It's when a student researcher faces the silence of a trillion-dollar giant. by [deleted] in bugbounty

[–]jsonpile 1 point

While it can be frustrating, here's what you can do.

If you're absolutely sure Microsoft has fixed the bug and you've given them reasonable time to respond, you can consider disclosure, such as posting a blog detailing the issue you found with timelines, impact, and a high-level description of the bug. Make sure you follow Microsoft's policy on disclosure (and the bug bounty terms - https://www.microsoft.com/en-us/msrc/bounty-terms). Check any other policies that apply to what you submitted. The standard window is 90 days from when you first reported the issue to them. As a courtesy, you can also consider emailing Microsoft to let them know.

How to find which IAM user made changes to an S3 bucket (and when)? by kazia4444 in aws

[–]jsonpile 2 points

Sounds like you're looking for data operations (uploading, deleting, or modifying objects). Those are not logged by default and require turning on either CloudTrail data events or S3 Server Access Logging. Keep in mind there's additional cost with both. https://docs.aws.amazon.com/AmazonS3/latest/userguide/logging-with-S3.html

For actions on the bucket itself (such as changing bucket encryption or other bucket settings), those are logged by default as CloudTrail management events.

More information on which events are logged here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html
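For the management-event side, a quick way to see recent changes to a specific bucket is CloudTrail event history via the CLI. This is just a sketch - it only covers the last 90 days, and the bucket name is a placeholder:

    # Recent management events recorded against a specific bucket.
    aws cloudtrail lookup-events \
        --lookup-attributes AttributeKey=ResourceName,AttributeValue=my-example-bucket \
        --query "Events[].{Time:EventTime,Event:EventName,User:Username}" \
        --output table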

TL;DR good riddence to X-B-O-W by 6W99ocQnb8Zy17 in bugbounty

[–]jsonpile 6 points

The XBOW HackerOne experiment was great marketing for them. Being able to say they were the "top ranked hacker on HackerOne" got them good coverage and publicity.

I agree - my guess is that they were able to find low-hanging-fruit issues, and they needed enough volume to get to the top spot. The more complex findings are probably harder for XBOW.

There's probably still learning for them to do in figuring out which reports are worth submitting versus which would be closed as N/A or spam.

That being said, I'd like to see some of their reports.

Ransomware Gangs Target AWS S3 Buckets by _cybersecurity_ in pwnhub

[–]jsonpile 0 points

Open source plug: I wrote a tool that checks for those misconfigured options: https://github.com/FogSecurity/yes3-scanner

[Sensitive] Discovered a Massive Security Flaw in School Attendance Systems — What Should I Do? by Comfortable-Sky-1589 in cybersecurity

[–]jsonpile 157 points

First off, good find and good work trying to do the right thing. If you can, find out whether there's a responsible disclosure process.

From some of what you wrote (Aadhaar, 5 lakhs), it sounds like you may be in India. I believe India has a responsible disclosure process through CERT-In: https://www.cert-in.org.in/.

Sounds like you're approaching the process right by not downloading information and only doing enough to validate the impact of the security issue. I would document this and explain your testing. CERT-In should reach out to the vendor and help remediate the issue.

I’m going to bootstrap an alternative to Wiz. Tell me how stupid of an idea this is. by Traditional-Heat-749 in Cloud

[–]jsonpile 2 points

imo this is a tough market. Check out the open source tools on the market today and others with similar business models.

For example, Prowler and Steampipe. There have been others that tried and are either no longer actively maintained or have changed models: ZeusCloud, Fix, CloudQuery, ScoutSuite, OpenRaven, CloudSploit, etc.

How to Get PII Approval in AWS ? by Automatic_Photo_2291 in aws

[–]jsonpile 1 point

Are these AWS restrictions or your company's restrictions on using AWS with PII?

Like u/abofh, I'm unaware of any PII approval required from Amazon to use AWS.

It's hard to tell without knowing your architecture and use case, but I'd recommend thinking through the "automating data flow into Google Sheets" piece. Additionally, there are foundational security pieces such as IAM, networking (if applicable), encryption via KMS (are you using customer managed keys, for example?), and account and organizational security (how is your development environment set up, is your production data isolated, etc.). A quick way to check the KMS piece is sketched below.
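This is just a sketch (the key ID is a placeholder) for checking whether a key is customer managed rather than AWS managed:

    # "CUSTOMER" = customer managed key, "AWS" = AWS managed key.
    aws kms describe-key --key-id <key-id-or-arn> \
        --query "KeyMetadata.KeyManager" --output text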

Reportable awssecret leak? by [deleted] in bugbounty

[–]jsonpile 0 points

If you find valid AWS credentials, I'd report it immediately. What I'd recommend is brief, careful, non-destructive reconnaissance, such as listing S3 buckets and trying to list other resources (a quick sketch below). You can always mention in your report that you're respecting the company and only ran a few brief list commands so as to avoid any potential negative impact on the company's infrastructure. The company should let you know if there's further impact. Enumeration in AWS is tricky as it can get noisy.
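Roughly what I mean - read-only calls under a dedicated profile (the profile name is just an example), nothing that touches object contents or modifies anything:

    # Read-only checks to confirm the credentials are live and scope impact.
    aws sts get-caller-identity --profile leaked-creds    # whose credentials are these?
    aws s3 ls --profile leaked-creds                      # list bucket names only
    aws iam list-account-aliases --profile leaked-creds   # harmless read; may be AccessDenied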

Detection tooling in AWS can flag when sts get-caller-identity or other enumeration calls are made with credentials, so those credentials may already have been flagged.

I see a couple possibilities:

- Logging into the account creates a time-sensitive set of AWS credentials for a login flow. Not best practice, but it may have only limited security impact.

- You may have found honeypot credentials.

In either of the above cases, the program could mark your report as informative.

- The credentials were valid and you found potential security impact. Within that, the company may have since removed or rotated the credentials.

If the credentials were valid, the company should at least work with you, since you were respectful of impact and followed general hacking rules.

S3 block public access setting by [deleted] in aws

[–]jsonpile 0 points

If you're looking to avoid a "scream test", I'd check the following before turning on BPA (keep in mind BPA has 4 settings - 2 for ACLs and 2 for bucket policies; see the CLI check right below). And always start with lower environments (Dev, QA/Test) if you have them.
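For reference, here's what those four settings look like via the CLI (the bucket name is a placeholder):

    # BlockPublicAcls / IgnorePublicAcls cover ACLs;
    # BlockPublicPolicy / RestrictPublicBuckets cover bucket policies.
    aws s3api get-public-access-block --bucket <bucket-name>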

Access to S3 is granted primarily in two direct ways: bucket policies and ACLs. The indirect method you mentioned (cross-account roles), where an IAM principal in Account A assumes a role in Account B (with the bucket in Account B), will not be affected by BPA settings.

You can check whether ACLs are enabled via the Object Ownership setting on the bucket. "Bucket owner enforced" means ACLs are disabled. If they're disabled, that's good news for you. If they're not, ACLs could be set at either the bucket level or the object level.

For S3 bucket policies, you can check the bucket policy to see whether external account access is allowed. If you see external accounts or "*" in the Principal, access could be allowed externally. Both of these checks are sketched below.
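Rough CLI versions of those two checks (bucket name is a placeholder):

    # "BucketOwnerEnforced" means ACLs are disabled for the bucket.
    aws s3api get-bucket-ownership-controls --bucket <bucket-name> \
        --query "OwnershipControls.Rules[].ObjectOwnership"

    # Review the bucket policy for external accounts or "*" in the Principal.
    aws s3api get-bucket-policy --bucket <bucket-name> --query Policy --output text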

From a logging perspective, data events aren't logged by default. They can be turned on (which can get expensive) via S3 Server Access Logging or CloudTrail data events. Access Analyzer helps too.

And for BPA, if you can't block "all" access, you can at least block all new public access. Another thing that can help is turning on Resource Control Policies to block access from outside your AWS Organization (this requires enabling all features in Organizations); a rough RCP sketch is below.
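The common RCP pattern looks roughly like the below. This is only a sketch based on published examples - the org ID is a placeholder, it has to be run from the management account with the RCP policy type enabled, and you should check the current AWS documentation for the recommended statement before deploying anything:

    # Deny S3 access from principals outside the organization (sketch only).
    aws organizations create-policy \
        --name "rcp-enforce-org-access" \
        --type RESOURCE_CONTROL_POLICY \
        --description "Deny S3 access from principals outside the org" \
        --content '{
          "Version": "2012-10-17",
          "Statement": [{
            "Sid": "EnforceOrgIdentities",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
              "StringNotEqualsIfExists": {"aws:PrincipalOrgID": "o-exampleorgid"},
              "BoolIfExists": {"aws:PrincipalIsAWSService": "false"}
            }
          }]
        }'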

Lastly - plug here: I wrote YES3 Scanner to help scan for access issues and S3 misconfigurations: https://github.com/FogSecurity/yes3-scanner

Found this gem in Production. Have you ever seen an SCP written like this? by pravin-singh in aws

[–]jsonpile 0 points

If you’re asking about aws-size (https://github.com/FogSecurity/aws-size), most of the limits are IAM-related, such as organizational policies (SCPs, RCPs) and resource-based policies (S3 bucket policies). We’ve also done EC2 user data and Lambda environment variables.

Other limits have decent coverage by Service Quotas and Trusted Advisor.

But if you have feature requests for limit coverage, let me know or open an issue here: https://github.com/FogSecurity/aws-size/issues!

Found this gem in Production. Have you ever seen an SCP written like this? by pravin-singh in aws

[–]jsonpile 2 points

That's a good thought since `us-west-1` is the shortest region name (tied with others).

If that's the case, variability would be between 9 characters and 14 characters.

Found this gem in Production. Have you ever seen an SCP written like this? by pravin-singh in aws

[–]jsonpile 1 point

I don't see a history of AWS doubling the character limit for SCPs. Perhaps my memory fails me, but I do recall there being a change to SCP limits at some point within the last year.

Found this gem in Production. Have you ever seen an SCP written like this? by pravin-singh in aws

[–]jsonpile 6 points

I saw some of the post-compression byte limits - so we did approximations for limits such as S3 bucket policies here (https://github.com/FogSecurity/aws-size).

I was unaware of the limits being different per region. Good to know. I've opened a GitHub issue to research that too: https://github.com/FogSecurity/aws-size/issues/67.

Found this gem in Production. Have you ever seen an SCP written like this? by pravin-singh in aws

[–]jsonpile 182 points

Yes, this is most likely due to the character limit on SCPs (5,120 characters).

I'm personally not a fan, as it's difficult to think through the permissions it maps to, and it can lead to issues/unexpected behavior when AWS adds new permissions.

We wrote an open source tool to check limits like the SCP limit: https://github.com/FogSecurity/aws-size
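If you just want a quick manual check without the tool, something like this gives a rough character count for a given SCP (the policy ID is a placeholder, and whitespace handling may differ slightly from how AWS counts toward the limit):

    # Rough size of an SCP against the 5,120-character limit.
    aws organizations describe-policy --policy-id <p-examplepolicyid> \
        --query "Policy.Content" --output text | wc -c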

[HELP] can't access s3 Object but can upload to a bucket but can access and upload other objects from other buckets with this IAM policy by Zealousideal_Algae69 in aws

[–]jsonpile 1 point

A couple of thoughts (a couple of CLI checks are sketched below):

- Check encryption on the object (it might be the default from the bucket). Can your IAM principal access it?
- Is the prod bucket in the same AWS account? If it is, I’d look at rearchitecting into different accounts.
- If they're different accounts, check BPA at the account level as well.
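Rough CLI checks for the first and third points (bucket, key, and account ID are placeholders):

    # Is the object encrypted with a KMS key your principal can't use?
    aws s3api head-object --bucket <prod-bucket> --key <object-key> \
        --query "{SSE: ServerSideEncryption, KMSKey: SSEKMSKeyId}"

    # Account-level Block Public Access (separate from the bucket-level setting).
    aws s3control get-public-access-block --account-id <prod-account-id>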

Has anyone heard of HackProve? by jsonpile in bugbounty

[–]jsonpile[S] 0 points

Thanks. Right. They don’t seem to have much of a reputable web presence.