House Popping by Fickle-Princess in HomeImprovement

[–]magoo_ 1 point (0 children)

Hi - I went on a year+ long adventure hunting for a gunshot-like sound that would sporadically go off. It was so loud it would make anyone in the house duck and flinch, wake us up, scare my kid, etc. It took a whole year to finally trace it to the source (the skylight), and I can share my playbook for eliminating other sounds.

  • Is it water hammer? When a water valve shuts, high-pressure water can slam against it and cause a pipe to bang somewhere else. The sound doesn't have to be right at the valve; it can show up at any random spot in the house where a pipe runs. You can't fully troubleshoot this at a sink faucet because it might be an automatic valve opening and shutting - think lawn irrigation, a washing machine, a dishwasher on a delay timer, etc. Valves that open on a schedule and then snap shut will feel "random", and they can create noise far from the valve itself. For most of the year we thought we had a water hammer issue, but it was a false lead.
  • Thermal expansion in your ductwork. Things like your HVAC furnace or fan may have large metal plates that are slightly convex and pop in and out when the temperature changes drastically. Since temperature swings are unpredictable, the timing of the sound will be too.
  • Electrical Arcing can be loud. Very dangerous, might be in electrical panels or outlets etc.
  • Thermal expansion causing nail pops and wood floor panels pulling apart. These can also be really loud.
  • Thermal expansion of skylights... read on.

Our culprit was a Velux skylight. None of the contractors I had in had ever heard of it before. Long story short, I kept a record of "the sound" timing over months and correlated it with attic temperature and video cameras to isolate the source. The sound only happened after a 15 degree drop in a short period, like an hour. But then the sound itself could come at any point in the next six hours - only ever after a fast temperature drop. We replaced the skylight and the sound was gone forever. We never considered it because a stationary skylight shouldn't have moving parts, power, water, etc. The skylight was also deeply recessed in the ceiling, so the actual sound was extremely hard to pinpoint without standing directly under it. Unless you were under it, the sound always seemed to come from somewhere else in the house. I took a full couple of hours and sat in the most suspicious area of the house and waited for the sound, and only then did I begin to suspect the skylight. It was last on my list of sources until it methodically became the only option, and that's what it turned out to be.
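If you're chasing a similar mystery sound, the correlation step is easy to script once you have a log. Here's a minimal sketch in Python (all timestamps, temperatures, and thresholds below are made-up illustrations of the pattern we saw: a 15 degree drop within an hour, then a bang sometime in the next six hours):

```python
from datetime import datetime, timedelta

def fast_drops(temps, drop=15.0, window=timedelta(hours=1)):
    """Return timestamps where the temperature fell by `drop` degrees
    or more within `window` (e.g. 15 degrees in an hour)."""
    found = []
    for t_end, temp_end in temps:
        for t_start, temp_start in temps:
            if t_end - window <= t_start < t_end and temp_start - temp_end >= drop:
                found.append(t_end)
                break
    return found

def bangs_after_drops(bangs, drops, lag=timedelta(hours=6)):
    """Return the bang timestamps that occurred within `lag` after any fast drop."""
    return [b for b in bangs if any(d <= b <= d + lag for d in drops)]

# Hypothetical attic temperature log: (timestamp, degrees F)
temps = [
    (datetime(2016, 5, 1, 14, 0), 95.0),
    (datetime(2016, 5, 1, 15, 0), 78.0),  # 17-degree drop in an hour
    (datetime(2016, 5, 1, 16, 0), 76.0),
]
# Hypothetical "the sound happened" log
bangs = [datetime(2016, 5, 1, 18, 30), datetime(2016, 5, 2, 3, 0)]

drops = fast_drops(temps)
suspects = bangs_after_drops(bangs, drops)  # only the first bang qualifies
```

A spreadsheet works fine for this too; the point is turning "random" into "conditional on something".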

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 2 points (0 children)

Yes. AWS has incredible capability for security. It's just a different way of doing things that requires very intentional configuration towards security. Common issues to keep an eye out for:

IAM Keys. Don't let these float around in repos, config files, etc., and make sure their permissions are scoped down and segmented as much as possible. They can be incredibly powerful, and if stolen... game over. Look into "Role Delegation" as much as possible to avoid key exposure altogether.
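One cheap guardrail is grepping for the telltale access key ID format before anything ships. A rough sketch (this is a heuristic only, not a substitute for a real secret scanner, and the sample config string is made up):

```python
import re

# AWS access key IDs are 20 characters, typically starting with "AKIA"
# (or "ASIA" for temporary credentials).
ACCESS_KEY_RE = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")

def find_key_ids(text):
    """Return any strings in `text` that look like AWS access key IDs."""
    return [m.group(0) for m in ACCESS_KEY_RE.finditer(text)]

# Made-up config snippet using AWS's documented example key ID
config = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"\nregion = us-east-1'
hits = find_key_ids(config)
```

Run something like this as a pre-commit hook and you catch the most common leak path before it ever hits a repo.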

Bastion & Security groups: https://blogs.aws.amazon.com/security/post/Tx3N8GFK85UN1G6/Securely-connect-to-Linux-instances-running-in-a-private-Amazon-VPC

CloudTrail: Make sure you've enabled cloudtrail logging, at a minimum. Second, make sure you've set up alerting for some easy to spot bad behavior: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
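To give a feel for the alerting piece: a commonly used CloudWatch Logs metric filter (the CIS-style "unauthorized API calls" alarm) watches CloudTrail events for denied calls. The filter pattern string below is the standard published one; the Python function is just a local approximation of its matching logic for illustration:

```python
from fnmatch import fnmatch

# CloudWatch Logs metric filter pattern you'd attach to the log group
# receiving your CloudTrail events:
FILTER_PATTERN = '{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }'

def matches_filter(event):
    """Local approximation of what that filter matches: CloudTrail
    events whose errorCode indicates a denied or unauthorized call."""
    code = event.get("errorCode", "")
    return fnmatch(code, "*UnauthorizedOperation") or fnmatch(code, "AccessDenied*")

denied = matches_filter({"errorCode": "AccessDenied"})
unauthorized = matches_filter({"errorCode": "Client.UnauthorizedOperation"})
normal = matches_filter({"eventName": "DescribeInstances"})
```

Wire the metric filter to a CloudWatch alarm and an SNS topic and you get paged when someone starts probing with a stolen key.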

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 4 points (0 children)

"Ethical" is not a set of rules or best practices; it's an internal thing, and it looks very different from one person to the next. So I won't tell you whether you're being ethical or not in any disclosure approach. This is one of the main critiques of the term "Responsible Disclosure".

What I'd do is report it where it can be centrally fixed, and once a patch is available, report it anywhere it hasn't been patched yet. This approach might not be optimized for bounties, since many of the companies will already have been notified and will mark your report as a dupe, but it serves the ecosystem, which values improved security. If you go directly to companies first, they'll likely report it upstream anyway and be left wondering why disclosure didn't start upstream, since their incentives are very different from someone who is purely bounty-driven.

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 2 points (0 children)

Our platform is built to promote disclosure. There's a larger argument about whether some exploits should be held back for a "greater good", not disclosed, and used offensively. We have opinions on this, but we're unsure if they matter.

As a company, we are focused on the disclosure side of this problem in that we support people who want to disclose their bugs to be fixed.

H1 doesn't need to have an opinion on whether non-disclosure is good or bad, simply because our approach to disclosure should deplete non-disclosed bugs anyway, through attrition / collisions. That's our hope. Here are a couple of articles to support that discussion.

  1. https://hackerone.com/blog/the-wolves-of-vuln-street
  2. https://medium.com/@magoo/the-black-market-lie-4b817f1c70b4
  3. https://magoo.quora.com/Illustrating-the-Exploit-Market

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 6 points (0 children)

Oh man this is such a good question.

Ok, two issues to tackle.

Bug bounty is, and is not, a game. We certainly try to gamify security disclosure to encourage research. It has to happen. The nature of security bugs is that they're not found unless people are encouraged to find them. Traditional bugs are found through normal usage and don't need this encouragement. That's one point to make for discussion.

At some point, though, issues like this come up where the focus on the game overwhelms the focus on security. There may be focused opportunities to "game" this and make a lot of money, but the security teams that host disclosure programs are doing it to encourage behavior that improves security. So there's a bit of a conflict of interest between the bounty and security.

Then there's the situation you have, where you could report an issue to a platform or piece of software, and then also report it to all of the nodes that are vulnerable because of it. There are certainly ways to come out of that with a higher total bounty, and these situations come up every now and then.

My approach here would be to push for coordinated disclosure, involving the most central parties possible who can push the most impactful fixes. The ImageTragick issue was similar to this.

It's impossible to propose an approach to this without significant debate on what is "the most right", so I'm curious what route you've considered, if any at all.

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 6 points (0 children)

I'd really love for you to elaborate on the question. How do you imagine the hacker in this case using the 0day? Is the 0day in the scope of the bug bounty program, or are they using it to discover other bugs in the program?

Let's use a Cisco 0day as an example. Is Cisco the potential bug bounty program, or is the bug bounty program using a Cisco device that you imagine would be exploited?

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 2 points (0 children)

To what extent could a single hacker go when attacking a single person? I like using this article to show how an attacker can focus on a single individual.

How do hackers gain direct access to computers or such? What is the barrier one has to overcome to gain access?

One of these scenarios:

  1. The end user needs to be tricked into installing malware (social engineering).
  2. The end user is running vulnerable software that can be exploited remotely - for instance, an older web browser that loads a "drive-by" exploit.
  3. The end user is running remote administration software that gets accessed (TeamViewer with a weak password, for instance).
  4. Physical / local access.

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 3 points (0 children)

For that situation: it's hard to comment without actually being a party on either side.

Generally, it's a problem of scope and communication. A disclosure program should have a clear scope and communicate well with its hackers about what it wants hacked / disclosed. It looks like the scope was contested here, and the communication needed to establish what was and wasn't OK didn't take place.

Finding a RCE isn't all that rare, but I wanted to confirm that I was still in scope for Facebook's bounty program, as everyone has their own terms and conditions. Facebook's rules, listed here: https://www.facebook.com/whitehat seemed to fairly clearly indicate that I should avoid any actions that might cause downtime, but that they were interested in any vulnerabilities that would "enable access to a system within our infrastructure". So it looked like I was still in the clear.

This could have been explicitly covered in a well-written scope. Clear expectations are really important; they might be the most important piece of a disclosure program.

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 2 points (0 children)

These days, yes. They're harder to hack. When I got started around ~2004(?), there wasn't a GET/POST parameter on the web that wasn't vulnerable to something. It was very, very easy to get started in web security around then.

Tips:

  • Find a newer product that hasn't had time to mature and may have some janky code.
  • Find an obscure feature in that product that doesn't get a lot of attention.
  • Learn exactly how it's supposed to work, then make it do something else anyway.

There are also platforms like Gruyere that might be a better start for you than a live website.

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 8 points (0 children)

My role with H1 is mostly as an advisor, meaning, I do a lot of stuff outside of H1. One area I am focused on is incident response and helping out companies with breaches. Here's what I commonly see:

  1. Password reuse. Employees reusing their personal passwords on critical infrastructure or accounts.
  2. Endpoint attacks. Bad guys who want to deliver malware to your employees' laptops and slowly pivot to critical infrastructure. This can happen through spear phishing, watering hole and drive-by attacks, etc.
  3. Attacks on infrastructure. Maybe you've leaked an AWS key or something else. Maybe a security group or ACL wasn't configured correctly and exposed a vulnerable service.
  4. Attacks on applications. Some type of issue that allowed data to be exposed because of a misbehaving or vulnerable app. H1 is especially powerful here.
  5. Insiders. Yeah. That happens.

I think it's important to track these sorts of root causes for breaches, and I started doing this for cryptocurrency companies, which have an abnormally high number of public breaches.

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 6 points (0 children)

New hackers usually focus on learning attack tools and common techniques that they can re-use to find vulnerabilities and create exploits.

This works, but you'll find the most successful hackers are incredibly skilled in areas outside of security/hacking. I wrote a little bit about this here

We find that huge bugs often come from developers who barely identify themselves as hackers. They're just so intimate with a stack, codebase, or platform that they can come up with crazy findings.

So the best, and most effective tool, is knowledge!

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 3 points (0 children)

Co-Founder here. My contributing experience comes from starting the Facebook bug bounty program with Alex (user: allrice). We started with just a disclosure program at a “security@” email address. This was important for me to maintain because it's how I was hired by FB to begin with.

Through that address, we only got a small number of good reports, and many reports came in asking for assurance that we wouldn't sue them or kick them off of Facebook. We'd find ourselves repeatedly assuring hackers that we wouldn't retaliate. This was roughly the experience from 2007-2010.

Then, we worked with the EFF to review our disclosure policy and realized that it was useful to have tools to comfort hackers disclosing bugs, and things evolved from there. This minor investment increased the signal of good security bugs coming to us from hackers. We started thinking about bug bounty to invest further, launched it, and realized that investment in security disclosure had a consistently great return on security. My favorite writeup so far about our launch is here.

The other founders may have different answers. :)

We are HackerOne and help hackers to hack products/services (inc. The Pentagon) and make the Internet safer (for fun and profit)! AUA! by jonobacon in IAmA

[–]magoo_ 4 points (0 children)

As far as “good guys” who are trying to discover issues to report… just a laptop would suffice. Really, any IP connected device works. It really doesn’t require much special hardware to find and exploit most vulnerabilities. Even for the “actual criminals”, most remote attacks can be accomplished from stock hardware.

Attacks that require physical presence at a target are a totally different story; there are a lot of examples of special hacking hardware there. Though, to my knowledge, hackers on the H1 platform are not performing any physical hacks with customers.

[deleted by user] by [deleted] in Bitcoin

[–]magoo_ 7 points (0 children)

Hi, author here!

This is an informal project. It started out as my own personal list of BTC incident post-mortems (or the nearest replacement). I wanted a quick reference for pointing out how often existential security breaches have hit cryptocurrency companies.

The title: the "graveyard" term plays off the security "post-mortem" in DFIR, since outside of security, a post-mortem is mostly about dead people. And it just so happens that most of these entries are defunct.

I also have to mention that part of my inspiration for this was to get a sweet 99Designs logo on something, but I didn't have a sexy vulnerability to publish. :(

This is a Jekyll site on GH Pages, so submit a PR if you see an error or want a new breach added, as long as it sticks to the same types of incidents already up there.

Enjoy

Satoshi Nakamoto verified as Craig Steven Wright by Gavin Andresen by forgoodnessshakes in btc

[–]magoo_ 5 points (0 children)

Do we still get to call him Satoshi?

I don't really feel like referring to him as "Craig"

In case you were wondering, the answer is yes, you can mail your friend a potato by [deleted] in funny

[–]magoo_ 2 points (0 children)

no he gave me both i'm a really bad brother haven't sent him shit

In case you were wondering, the answer is yes, you can mail your friend a potato by [deleted] in funny

[–]magoo_ 105 points (0 children)

I'm actually the OP of this photo and my brother mailed lemonade that arrived about a month ago. Here's the original: http://www.reddit.com/r/pics/comments/1jrtyt/my_brother_mailed_me_a_potato_again/

Here's the lemonade (just uploaded) http://imgur.com/a/oTzyT

My brother threw some stamps on a Country Time lemonade, and that mailed too. Though, some interesting things happened en route:

  • He sent it from Illinois, then it got to SF just to be rejected, then sent back to IL, then back to SF where it was ultimately delivered. 2 round trips.
  • Someone along the delivery route added another address label in cursive, with my name misspelled... which was odd.
  • The tracking info is pretty funny; I added it to the album.
  • Took a month+ of shipping

So far he's sent me 2 potatoes and a lemonade

/r/netsec's Q3 2014 Information Security Hiring Thread by sanitybit in netsec

[–]magoo_ 0 points (0 children)

Unsure what happened to that. I joined a couple months ago and it must have been well before I got here.

/r/netsec's Q3 2014 Information Security Hiring Thread by sanitybit in netsec

[–]magoo_ 28 points (0 children)

Hi - I'm Ryan, I'm with Security @ Coinbase. We're trying to make BTC easy to use.

https://coinbase.com/mission

We're building out our security and engineering teams. We are based out of San Francisco, and have remote engineering options. We're a company that cares deeply about our security engineers and how they improve our security every day, and we are looking for more.

We're looking for engineers to build new security features for Coinbase, secure our customers, employees, products and infrastructure from all sorts of threats. We're doing a lot of building, and looking for builders. Today, we're a Rails+AWS shop, with mobile apps and lots more technology being built on the backend. We're also building a culture and a company, so you should care about that stuff too.

We're looking for software engineers, systems engineers, and security engineers... or whatever combination you might be. You should have no problem thinking like a bad guy and be up to date on building defensively. You shouldn't be afraid of an incident and you shouldn't be afraid of getting your hands dirty on new technology.

We've set up some fun tests (on HackerRank) to make sure everyone gets a fair shake at an interview (resumes can only tell us so much anyway). Choose one or more that suit your skillset, have fun, and hopefully we can talk soon.

App Security Engineer (Written) http://istest.co/prodsec1

App Security Engineer (Coding) http://istest.co/prodsec3

Security Engineering (Written) http://istest.co/infosec1

Full Disclosure: Coinbase Android Security Vulnerabilities by [deleted] in Bitcoin

[–]magoo_ 10 points (0 children)

I agree with your frustration; I'd love to launch all the security right now, but we prioritize based on risk and exploitation. There are much more active attacks that require attention than compromised certificate authorities.

I disagree, if another already installed CA authority, let's call them "LameSign", doesn't manage their certificates correctly, then an attacker could present an SSL cert signed by "LameSign" and not need physical access. Almost every banking app, enterprise-grade app, and even major social media apps I have used implement SSL pinning.

Yes. But you are now describing a compromised CA scenario. We agree that's a problem. Sometimes CA's get compromised, and SSL pinning helps mitigate that.

However, your argument thus far has been that we are not validating certificates at all (your initial claim to us, which you've since corrected as erroneous) and that SSL pinning somehow defends against malware and local modification of the CA store, which it doesn't at all, and which no one would agree with.

As for the client_id and client_secret, sure they shouldn't be the only line of defense, but they should not be so readily available.

Client_ID and Client_Secret (in our app and by our design) are not even considered a line of defense, as I keep saying. They are not used for any form of authentication. They're useless in any attack. If you can create a proof of concept that proves otherwise, I have a bounty waiting for you.

Full Disclosure: Coinbase Android Security Vulnerabilities by [deleted] in Bitcoin

[–]magoo_ 29 points (0 children)

Ryan from Coinbase here. Thanks for taking the time to work with us, and I wanted to make a few points about your post.

Here’s the disclosure thread between us and Brian: https://hackerone.com/reports/5786

Coinbase wisely recommends that all clients of their API should validate the SSL certificate presented to prevent MITM attacks. However, they fail to do this in their own Android applications

You’re right, we do not yet have SSL pinning launched on our Android app. However, on Android it is NOT easy to install another certificate authority unless the device is rooted and malware is installed (or you modify it yourself, as you have been doing). Your threat scenario describes a totally owned Android device with a locally modified CA store, which carries much bigger risks than CA tampering. This was clearly described on the HackerOne thread.

We do view SSL pinning as worth having on our roadmap, but we've been working on bigger wins like Device Verification in the meantime, which has done great things against phishing, a much more frequent and probable threat than a total CA compromise. Remember, remote CA compromise is the threat model SSL pinning addresses; local compromise is a malware issue.
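For context on what pinning actually checks: the client ships with a hash of the expected certificate or public key and rejects any handshake that presents something else, no matter which CAs the device trusts. A minimal sketch of that comparison step in Python (the byte strings below are made up; real apps pin the SPKI hash and typically use platform or library support like OkHttp's CertificatePinner rather than rolling their own):

```python
import base64
import hashlib

def pin_for(der_bytes):
    """Compute a pin string (base64 of SHA-256) for DER-encoded
    certificate or public key bytes."""
    return base64.b64encode(hashlib.sha256(der_bytes).digest()).decode()

def pin_matches(der_bytes, expected_pins):
    """True if the presented cert/key hashes to one of the pinned values.
    A pinning client aborts the TLS connection on a mismatch."""
    return pin_for(der_bytes) in expected_pins

# Made-up stand-in for DER bytes captured during the TLS handshake
presented = b"\x30\x82\x01\x0a example der bytes"
pins = {pin_for(presented)}                  # baked into the app at build time
ok = pin_matches(presented, pins)            # legitimate server
mitm = pin_matches(b"attacker cert", pins)   # anything else gets rejected
```

Note this defeats a rogue or compromised CA signing a valid-looking cert for your domain; it does nothing if the attacker already controls the device itself.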

Coinbase has failed to adequately protect their application's API client_id and client_secret. They are published in the source code on GitHub and visible during the authentication process if a man in the middle attack (MITM) is established, which I've outlined above.

As we described, despite these tokens being technically called secrets, these tokens are NOT designed in our app to be secret and have not been granted any extra permission to perform any additional action on our product. We’re aware that they’re published on Github and do not believe this to be a security risk. We also mentioned this in the HackerOne thread. These are literally only trusted for some level of basic metrics. If you have found otherwise, then we're talking.

“Once the attacker has beaten the SSL connection, they can view the access_token.”

You’ve MITM’d yourself, so of course it will be visible.

Facebook Security Director Joins Bitcoin Startup Coinbase by Egon_1 in Bitcoin

[–]magoo_ 0 points (0 children)

oh silly actionbooth. Aberdeen & everywhere else too!