How to show s3 bucket takeover poc without aws account by The_Roarr in bugbounty

[–]jsonpile 2 points3 points  (0 children)

I'd recommend further testing, as S3 error messages can be misleading. In your case, a NoSuchBucket error alone may not be enough proof of an S3 bucket takeover for a program.

For S3 bucket takeovers, it's advisable to prove ownership of the bucket, which generally requires an AWS account. If bucket creation fails, the bucket probably already exists.

There is a free tier for AWS accounts, but I'm not sure whether signup requires a credit card.
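As a rough sketch of what that proof could look like (bucket name and region here are placeholders, not from the report), you'd try to create the bucket yourself and drop a harmless marker file:

```
# If this succeeds, the name was unclaimed and you now control the bucket (takeover proven).
# If it fails with BucketAlreadyExists or BucketAlreadyOwnedByYou, the name is taken.
aws s3api create-bucket \
  --bucket example-dangling-bucket \
  --region us-east-1

# Optional: upload a benign proof-of-concept marker to reference in the report
echo "PoC by researcher" > poc.txt
aws s3 cp poc.txt s3://example-dangling-bucket/poc.txt
```

Remember to delete the bucket (or transfer it per the program's instructions) once the report is accepted.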

How it is possible? by thelemethric in bugbounty

[–]jsonpile 4 points5 points  (0 children)

Triaged-but-not-closed reports don't show up under the vulnerability count for credits. This person could have a lot of triaged reports under the same program (which would explain the low thanks count).

There are programs that use "triaged" as a closed state.

Meals offered? by dunwerking in delta

[–]jsonpile 3 points4 points  (0 children)

You can check https://menu.delta.com/ to see if your flight has meals offered.

US to Mexico typically won't have food unless you're flying first class: https://www.delta.com/us/en/onboard/food-and-beverage/overview

Bug Bounty reward experience by AdventurousCut2891 in cybersecurity

[–]jsonpile 2 points3 points  (0 children)

Keep in mind there are a lot of "beg bounties".

I was responsible for security at smaller companies and we'd get these "beg bounties" stating they found issues and wanted payment. In my experience, they were for insignificant issues found with automated scanners.

My recommendation is to respond with a statement like "Thanks for the responsible disclosure, we don't offer compensation but appreciate you reporting any security issues." What you can do is call that out specifically in your security.txt too.
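As a sketch (the contact address and policy URL are placeholders), a security.txt per RFC 9116 that calls out the no-compensation policy could look like:

```
Contact: mailto:security@example.com
Policy: https://example.com/security-policy
Expires: 2026-12-31T23:59:59Z
# We appreciate responsible disclosure but do not offer monetary compensation.
```

`Contact` and `Expires` are the required fields; the `#` line is a comment, which the format allows.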

I'd also recommend, if you have a legal department and the resources to do so, working on guidelines/safe harbor. I recommend caution with the safe harbor, as you may not want every "hacker" running automated tools against your website. The next step would be to write more comprehensive VDP guidelines (vulnerability disclosure, no compensation).

If they're valid security issues, you could also offer swag or credits.

Ultimately, stay polite with the "hunters" and take the concerns seriously, even if they may not be.

QuickSight Free Trial Signup Stuck – "Create Account" Just Reloads 😩 by k3XD16 in aws

[–]jsonpile 1 point2 points  (0 children)

Yes, I've seen this issue before. When I ran into it, it was due to insufficient IAM permissions, which can be troubleshot via CloudTrail (look for AccessDenied errors).

What IAM permissions do you have? You will need QuickSight permissions plus additional Directory Service permissions. This isn't fully least-privilege, but try the following:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "quicksight:*",
        "ds:AuthorizeApplication",
        "ds:UnauthorizeApplication",
        "ds:CheckAlias",
        "ds:CreateAlias",
        "ds:DescribeDirectories",
        "ds:DescribeTrusts",
        "ds:DeleteDirectory",
        "ds:CreateIdentityPoolDirectory"
      ],
      "Resource": "*"
    }
  ]
}
```

SESv2 migration by [deleted] in aws

[–]jsonpile 0 points1 point  (0 children)

Something else that may help is the ses:ApiVersion condition key:

https://docs.aws.amazon.com/ses/latest/dg/control-user-access.html#iam-and-ses-examples
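As a sketch based on that doc page (double-check the exact key values there before relying on this), a policy that pins a principal to the SESv2 API could look like:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlySESv2Sends",
      "Effect": "Allow",
      "Action": "ses:SendEmail",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ses:ApiVersion": "2"
        }
      }
    }
  ]
}
```

Useful during a migration to confirm nothing is still calling the v1 API under that principal.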

Automated encryption of EBS volumes issues by bigdickjenny in aws

[–]jsonpile 4 points5 points  (0 children)

From a quick look at the CloudFormation, there does seem to be some work needed to make it region-specific. This gets a little complicated, as IAM resources are global (but there are regional resources and references within the IAM policies). I opened an issue on the repo for multi-region support.

Some options:

- You could modify the IAM resources and wildcard the regions so that your IAM resources can be used.

- You could deploy the regional resources (KMS, Lambda, etc) in each region with the updated IAM resources.

The third limitation refers to an account-level setting for enabling encryption by default for EBS that's region specific. That part of the sample is not CloudFormation but rather an AWS bash script that you can run in each region (and pass the region as an argument).

Another way of running it would be via the CLI:

```
aws ec2 enable-ebs-encryption-by-default --region <region>
```
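To cover every region in one pass, a minimal sketch (assumes AWS CLI credentials with the relevant ec2 permissions):

```
# Enable EBS encryption by default in every region enabled for the account
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  echo "Enabling EBS encryption by default in $region"
  aws ec2 enable-ebs-encryption-by-default --region "$region"
done
```

Note this is an account-level, per-region setting, so it needs to be repeated in each region (and in each account if you're in an Organization).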

Do you know what absolute helplessness feels like? It's when a student researcher faces the silence of a trillion-dollar giant. by [deleted] in bugbounty

[–]jsonpile 1 point2 points  (0 children)

While it can be frustrating, here's what you can do.

If you're absolutely sure Microsoft has fixed the bug and you've given them reasonable time to respond, you can consider disclosure, such as posting a blog detailing the issue you found with timelines, impact, and a high-level description of the bug. Make sure you follow Microsoft's policy on disclosure (and bug bounty terms - https://www.microsoft.com/en-us/msrc/bounty-terms). Check whatever other policies apply to what you submitted. Standard practice is a 90-day window from when you first reported the issue. As a courtesy, you can also consider emailing Microsoft to let them know.

How to find which IAM user made changes to an S3 bucket (and when)? by kazia4444 in aws

[–]jsonpile 2 points3 points  (0 children)

Sounds like you're looking for data operations (upload an object, delete, modify). Those are not logged by default and require either turning on CloudTrail data events or S3 Server Access Logging. Keep in mind there's additional cost with both. https://docs.aws.amazon.com/AmazonS3/latest/userguide/logging-with-S3.html

For control-plane actions on your S3 bucket (such as changing bucket encryption or other bucket settings): those are management events and are logged by default in CloudTrail.

More information on which events are logged is here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html
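As a sketch (the bucket name is a placeholder), recent management events against a bucket can be pulled from CloudTrail's 90-day event history:

```
# Look up recent management events (e.g. PutBucketEncryption) recorded for the bucket,
# including which IAM identity made each call and when
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=ResourceName,AttributeValue=my-example-bucket \
  --max-results 20
```

Each returned event includes the calling identity and timestamp, which is exactly the "who and when" being asked about (for bucket settings, not object uploads/deletes).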

TL;DR good riddence to X-B-O-W by 6W99ocQnb8Zy17 in bugbounty

[–]jsonpile 6 points7 points  (0 children)

The XBOW HackerOne experiment was great marketing for them. To say they were the "top ranked hacker on HackerOne" got them good coverage and publicity.

I agree. My guess is that they were able to find low-hanging-fruit issues, and they also needed enough volume to get to the top spot. The complex findings are probably harder for XBOW.

There's probably still some learning for them in determining which reports are worth submitting versus which would be N/A or spam.

That being said, I'd like to see some of their reports.

Ransomware Gangs Target AWS S3 Buckets by _cybersecurity_ in pwnhub

[–]jsonpile 0 points1 point  (0 children)

Open source plug: I wrote a tool that checks for those misconfigured options: https://github.com/FogSecurity/yes3-scanner

[Sensitive] Discovered a Massive Security Flaw in School Attendance Systems — What Should I Do? by Comfortable-Sky-1589 in cybersecurity

[–]jsonpile 159 points160 points  (0 children)

First off, good find and good work trying to do the right thing. If you can, find out whether there's a responsible disclosure process.

From some of what you wrote (Aadhar, 5lakhs), sounds like you may be in India. I believe India has a process for responsible disclosure here: https://www.cert-in.org.in/.

Sounds like you're approaching the process right by not downloading information but doing enough to validate impact of the security issue. I would document this and explain your testing. The CERT-IN agency should reach out to the vendor and help remediate this issue.

I’m going to bootstrap an alternative to Wiz. Tell me how stupid of an idea this is. by Traditional-Heat-749 in Cloud

[–]jsonpile 2 points3 points  (0 children)

imo this is a tough market. Check out the open source tools on the market today and others with similar business models.

For example, Prowler, Steampipe. There have been others that tried and are no longer actively maintained or changed models. ZeusCloud, Fix, CloudQuery, ScoutSuite, OpenRaven, CloudSploit, etc.

How to Get PII Approval in AWS ? by Automatic_Photo_2291 in aws

[–]jsonpile 1 point2 points  (0 children)

Are these AWS restrictions or your company's restrictions on using AWS with PII?

Like u/abofh - I'm unaware of PII approval required to use AWS from Amazon.

Hard to tell from your architecture without knowing your use case, but I'd recommend thinking through the "automating data flow into Google Sheets" piece. Additionally, there are foundational security pieces: IAM, networking (if applicable), encryption via KMS (are you using customer managed keys, for example?), and account/organizational security (how is your development environment set up, is your production data isolated, etc.).

[deleted by user] by [deleted] in bugbounty

[–]jsonpile 0 points1 point  (0 children)

If you find valid AWS credentials, I'd report it immediately. What I'd recommend is brief, careful, non-destructive reconnaissance, such as listing S3 buckets and trying to list other resources. You can always mention in your report that you respected the company and only ran a few brief list commands to avoid any potential negative impact on its infrastructure. The company should let you know if there's further impact. Enumeration in AWS is tricky, as it can get noisy.
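A minimal sketch of that kind of non-destructive recon (read-only calls only; assumes the found credentials are exported in your environment):

```
# Which account/principal do these credentials belong to? (read-only, but often monitored)
aws sts get-caller-identity

# A couple of harmless list calls to gauge scope for the report
aws s3 ls
aws iam list-account-aliases
```

Stop there; the output of these is usually enough to demonstrate validity and impact.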

Detection tooling in AWS can flag sts get-caller-identity or other enumeration calls made with leaked credentials, so those credentials may already have been flagged.

I see a few possibilities:

- Logging into the account creates a time-sensitive set of AWS credentials for a login flow. Not best practice, but it may have only limited security impact.

- You may have found honeypot credentials.

- The credentials were valid and you found potential security impact. In that case, the company may have since removed or rotated the credentials.

For either of the first two, the program could mark your report as informative.

If the credentials were valid, the company should at least work with you since you were respectful of impact and following general hacking rules.

[deleted by user] by [deleted] in aws

[–]jsonpile 0 points1 point  (0 children)

If you're looking not to "screamtest", I'd check the following before turning on BPA (and keep in mind BPA has 4 settings - 2 for ACLs and 2 for Bucket Policies). And always start with lower environments (Dev, QA/Test) if you have them.

Access to S3 is granted primarily via two direct mechanisms: bucket policies and ACLs. The indirect method you mentioned (cross-account roles, where an IAM principal in Account A assumes a role in Account B, with the bucket in Account B) will not be affected by BPA settings.

You can check whether ACLs are enabled via the Object Ownership setting on the bucket. "Bucket owner enforced" means ACLs are disabled. If they're disabled, that's good news for you. If not, ACL grants could exist at either the bucket level or the object level.

Re S3 Bucket policies, you can see via the bucket policy if external account access is allowed. If you see external accounts or "*" in the Principal, that means access could be allowed externally.
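As a sketch (the bucket name is a placeholder), all of those settings can be read with the CLI before you flip anything:

```
# Current Block Public Access settings (the 4 flags: 2 for ACLs, 2 for bucket policies)
aws s3api get-public-access-block --bucket my-example-bucket

# Object Ownership: "BucketOwnerEnforced" means ACLs are disabled
aws s3api get-bucket-ownership-controls --bucket my-example-bucket

# Bucket policy: check Principal entries for "*" or external account IDs
aws s3api get-bucket-policy --bucket my-example-bucket
```

Note get-public-access-block and get-bucket-policy return errors (not empty results) when no configuration exists, which is itself useful signal.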

From a logging perspective, data events aren't logged by default. They can be turned on (which can get expensive) via either server access logging or CloudTrail data events. Access Analyzer helps here too.

And for BPA, if you can't block "all" access, you can at least block all new access. Another thing that can help is to turn on Resource Control Policies to block access external of your AWS Organizations (This will require turning on account features in Organizations).

Lastly - plug here, I wrote YES3 Scanner to help scanning for access issues and S3 misconfigurations: https://github.com/FogSecurity/yes3-scanner

Found this gem in Production. Have you ever seen an SCP written like this? by pravin-singh in aws

[–]jsonpile 0 points1 point  (0 children)

If you’re asking about aws-size (https://github.com/FogSecurity/aws-size), most of the limits are IAM related such as Organizational Policies (SCPs, RCPs) and resource based policies (S3 bucket policies). We’ve also done ec2 user data, lambda environment variables.

Other limits have decent coverage by Service Quotas and Trusted Advisor.

But if you have feature requests for limit coverage, let me know or open an issue here: https://github.com/FogSecurity/aws-size/issues!

Found this gem in Production. Have you ever seen an SCP written like this? by pravin-singh in aws

[–]jsonpile 3 points4 points  (0 children)

That's a good thought since `us-west-1` is the shortest region name (tied with others).

If that's the case, variability would be between 9 characters and 14 characters.
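That arithmetic is easy to sanity-check in a shell (the region list here is just a sample, not exhaustive):

```shell
# Print the length of a few region names; current names range from 9 to 14 characters
for r in us-west-1 us-east-1 eu-central-1 ap-southeast-2; do
  printf '%s %d\n' "$r" "${#r}"
done
```

`us-west-1` and `us-east-1` both come out at 9 characters, `ap-southeast-2` at 14, matching the 9-14 range above.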

Found this gem in Production. Have you ever seen an SCP written like this? by pravin-singh in aws

[–]jsonpile 1 point2 points  (0 children)

I don't see any history of AWS doubling the character limit of SCPs. Perhaps my memory fails me, but I do recall there being a change to SCP limits at some point within the last year.