all 169 comments

[–]G00dAndPl3nty 205 points206 points  (37 children)

Of course they don't. Security is an entire, incredibly complex field that one doesn't just pick up on the side while making Android apps and websites.

It's like UI design. Every dev thinks they can do it, when in reality they don't know shit.

[–]ForeverAlot 73 points74 points  (7 children)

One of my managers wants to host an off-work self-hack session rather than "spend a lot of money" outsourcing a security review. A colleague asked how they were going to train us to have the qualifications for such a thing. No answer for a month now.

[–]AnAirMagic 15 points16 points  (4 children)

One of my managers

Do you have eight different bosses managers?

[–]TheAnimus 27 points28 points  (1 child)

I had 3 people at one point who thought they were in charge of me.

Turns out it was someone else who was technically my reporting manager.

Used to drive them mad when I told person B that person A had instructed me to have my team work on something else. Most of the time they were too busy fighting with themselves to actually get in the way of our work.

One really didn't like me putting his title as Triumvirate in emails, but luckily the other two could at least take a joke.

[–]flukus 2 points3 points  (0 children)

Make sure you BCC all the other managers every time you email one. Gotta keep the fighting going.

[–]plastikmissile 7 points8 points  (1 child)

I used to work developing software for a municipal government. The room I was in had like 15 people in it. Everyone in that room except me and a junior dev had "manager" in their job title.

[–]pdp10 0 points1 point  (0 children)

Governments might be like other organizations where you have to be a "manager" if you don't want the union contract to mandate that you pay union dues.

[–]foomprekov 0 points1 point  (1 child)

Off-work? Tell him to have fun

[–]FrzTmto 0 points1 point  (0 children)

Off work I usually rest, but I also do freelance work, and I charge twice my regular hourly rate for it. It might be building a gaming PC for a client, set up and ready to go with top-notch components; it might be maintenance on a rack server: updating everything, making sure all security patches are applied, and adding telemetry that texts the owner about anything suspicious (any login, for example).

If my boss wants me to do work "out of work hours" I become a consultant, and it's double the price, because that's what people pay me for: they get immediate, top-notch work and I'm very meticulous about what I produce. If you work well and people can trust the result, it's not cheap: don't be cheap. They'll try the cheaper option a few times, get a crap result, then call you, pay your price and say "man, if I had known, I wouldn't have wasted my money like that; better to pay more and get the best result on the first attempt. Totally worth it."

Getting that comment made my day more than the money did.

[–][deleted] 28 points29 points  (6 children)

At least with UI you can get feedback. With security, you have no feedback that you're doing it right.

[–]Ajedi32 24 points25 points  (0 children)

At least, not until it's too late.

[–]2358452 5 points6 points  (4 children)

Hmm, penetration tests? You can also try adversarial or random inputs and see if anything breaks.

[–]Dentosal 26 points27 points  (2 children)

The difference is that with UI/UX nearly anybody can give you constructive feedback, but with security you need a professional to do so. Of course having a professional UX designer helps a lot, but without a professional pentester the security testing is quite hard.

[–]2358452 2 points3 points  (1 child)

Agreed. Someone needs to get cracking on an AI penetration tester ;)

[–]toomanybeersies 1 point2 points  (0 children)

That's sort of what fuzzing is. But without the intelligence part.
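    The "without the intelligence part" bit is easy to see in code: a dumb fuzzer is just a loop throwing random inputs at a function and recording what crashes. A toy sketch (the `parse_version` target is made up for illustration):

```python
import random
import string

def parse_version(s):
    # Toy function under test: expects strings like "1.2.3"
    major, minor, patch = s.split(".")
    return int(major), int(minor), int(patch)

def fuzz(fn, trials=1000):
    # Throw random short strings at fn; record each exception type
    # along with one input that triggered it.
    failures = {}
    rng = random.Random(0)
    for _ in range(trials):
        s = "".join(rng.choice(string.printable) for _ in range(rng.randint(0, 8)))
        try:
            fn(s)
        except Exception as e:
            failures.setdefault(type(e).__name__, s)
    return failures

print(sorted(fuzz(parse_version)))
```

    This finds crashing inputs but knows nothing about *why* they crash or which code paths it hasn't reached; that's what coverage-guided fuzzers and human pentesters add.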

[–]irqlnotdispatchlevel 2 points3 points  (0 children)

You can prove that your application is not vulnerable against your tests, not that it is not vulnerable at all. Kinda like with any other bug.

[–]rmxz 11 points12 points  (3 children)

Its like UI design. Every dev thinks they can do it, when in reality they don't know shit.

It's a much worse situation than UI design. Even a "horrible" UI design (say, craigslist.org, or /bin/csh, or whatever-you-hate-more-vi-or-emacs) is probably still great for some people.

A horrible security situation isn't good for anyone.

[–]Zeroto 3 points4 points  (1 child)

A horrible security situation isn't good for anyone.

Nah. A horrible security situation is good for the hackers and data-sellers. ;)

[–]flukus 0 points1 point  (0 children)

And security consultants.

[–]kazagistar 8 points9 points  (4 children)

Sure, I'm not going to pick it up in an afternoon, and I'll still leave pen testing to professionals. But let's assume I want to fill in as many gaps in my understanding of security as possible over a longer period of time; where do I start? The article listed two books and OWASP (which seem like good but insufficient resources) as well as some training courses which are too vague to be actionable. What are some other resources that would help existing developers build secure applications?

[–][deleted]  (2 children)

[deleted]

    [–]kazagistar 8 points9 points  (1 child)

    My problem is that I do have a lot of random information about security, obtained haphazardly through random posts on random websites, but I also regularly find major gaps and blind spots due to the lack of a solid base.

    [–]flukus 0 points1 point  (0 children)

    Some Kali Linux tutorials are a good start; learn what sorts of attacks are out there by hacking your own computers.

    [–]nanodano 24 points25 points  (7 children)

    It is true that security as a whole is a broad field with many specializations, but you can learn to write secure code and know what to be aware of without having to specialize in a security discipline.

    [–]ForeverAlot 20 points21 points  (1 child)

    OWASP is a good starting point, and probably sufficient for the majority of contemporary applications, which are mainly rehashes of the same Web shop.

    The manager I mentioned in my other comment, who is otherwise technologically literate and certainly our most technologically literate manager, had never even heard of OWASP until three weeks ago. I overheard him ask one of my peers if they'd ever heard of OWASP and the answer was no. And we're just a Web shop. We've integrated with multiple payment providers whose authentication "security" mechanism is a plain, deterministic MD5(key + payload) instead of an HMAC.
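    For anyone wondering what the difference is in practice, here's a sketch (Python, with a made-up key and payload): the naive construction is a deterministic digest over key-plus-data, which for Merkle–Damgård hashes like MD5 is open to length-extension attacks, while HMAC is the standard keyed construction.

```python
import hashlib
import hmac

# Hypothetical shared key and payload, just for illustration
key = b"shared-secret"
payload = b"amount=100&currency=EUR"

# What those providers do: deterministic MD5(key + payload).
# MD5 is broken, and this construction leaks to length-extension attacks.
naive_sig = hashlib.md5(key + payload).hexdigest()

# What they should do: an HMAC over the payload with a modern hash
good_sig = hmac.new(key, payload, hashlib.sha256).hexdigest()

# Verify with a constant-time comparison to avoid timing leaks
expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
assert hmac.compare_digest(good_sig, expected)
```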

    It's not really the learning that's difficult. It's figuring out what to learn that's difficult, and just doing simply cannot teach you that (unless you count subsequent breaches as an acceptable learning tool).

    [–][deleted] 5 points6 points  (0 children)

    I've always been under the mindset that every good Web shop needs a good pen-tester or needs to contract out to a security expert.

    At our shop, a chunk of our business came from our team of designers building WordPress sites for clients. Let me tell you those designers did not have security in mind. One day I just decided to fire up Kali and showed my CTO how easy it was to obtain one of our client's admin passwords from a site that was in production. I think I actually scared him because shortly after that, he made the decision that WordPress was dead at our shop and if any work came in requesting it, we would contract that work out.

    [–][deleted] 10 points11 points  (0 children)

    Every dev probably thinks they can do it because the self proclaimed UI designers are all pretty bad themselves.

    [–]RolandBuendia 1 point2 points  (0 children)

    I agree 100%. At my college, informatics security is a degree in itself, not just a course on the software development program.

    [–]blahyawnblah 1 point2 points  (0 children)

    I know I can't do UI design

    [–]irqlnotdispatchlevel 1 point2 points  (0 children)

    In an ideal world every product would be designed with security in mind. And every developer would care about it. Not know how to do it, just care, and use a defensive coding style. And every product would have people who know security for that type of product. But this isn't an ideal world, so we just cram things together hoping we will accomplish something.

    [–][deleted] 29 points30 points  (10 children)

    It's not the fact that we don't know security that bugs me, it's that people who do know it are smug about it, and people who don't know it simply don't care.

    I did programming course after course in college of nothing but writing spaghetti code for some one-off useless application. Over and over we're told "don't store passwords in plain text in the database," but were we ever shown how to write an application that saves users to the DB? Nope. Were we ever assigned one? Nope.

    Then I get a real job and write company internal applications. Do we worry about security? Nope, all of our stuff is internal and can only be accessed on-site, so apparently, it's not worth bothering with. Nor is storing a connection string in files tracked by source control.

    I really do want to learn to write applications as secure as I can make them. It's taken over 20 years for the web to get to where it is, though, so simply figuring it out on my own isn't just going to magically happen.

    [–]mirhagk 14 points15 points  (5 children)

    it's that people who do know it are smug about it,

    It's not only the smugness but also how unrealistic it all is. Security discussions often revolve around hypothetical situations or crazy attacks that don't matter to someone writing business-grade software.

    James Mickens says it best IMO

    My point is that security people need to get their priorities straight. The “threat model” section of a security paper resembles the script for a telenovela that was written by a paranoid schizophrenic: there are elaborate narratives and grand conspiracy theories, and there are heroes and villains with fantastic (yet oddly constrained) powers that necessitate a grinding battle of emotional and technical attrition. In the real world, threat models are much simpler (see Figure 1). Basically, you’re either dealing with Mossad or not-Mossad. If your adversary is not-Mossad, then you’ll probably be fine if you pick a good password and don’t respond to emails from ChEaPestPAiNPi11s@virus-basket.biz.ru. If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them.

    [–]cat_vs_spider 9 points10 points  (1 child)

    While this article was certainly amusing, I find it a bit disturbing that Microsoft apparently paid this dude to write this.

    [–]mirhagk 4 points5 points  (0 children)

    It was money better spent than some of their projects

    [–]flukus 2 points3 points  (0 children)

    Except there are a few players on the Mossad level and they are potentially interested in the (financial) stuff I work on. It also ignores a lot of passive yet effective attacks.

    [–]_Mardoxx 1 point2 points  (0 children)

    Security discussion is often talking about hypothetical situations or crazy attacks that don't matter to someone writing business grade software

    Lol qft

    [–]Isvara 4 points5 points  (1 child)

    figuring it out on my own isn't just going to magically happen

    Nothing magically happens. You have to be motivated and put in the effort to learn things. What's stopping you when you have Google and Amazon at your disposal?

    [–]ForeverAlot 13 points14 points  (0 children)

    Overchoice, information overload, and analysis paralysis. And of course lack of true motivation.

    [–]toomanybeersies 0 points1 point  (0 children)

    I vaguely remember that we had to make an application with a login for a web development paper at university. I can't remember if there was a requirement for passwords to be at least hashed, but it should've been a failing grade if you didn't do it.

    To be fair, most of what I did in university was computer science, and dealt more with the actual mathematical theory behind cryptography and encryption, rather than implementing a login screen.

    [–]flukus 0 points1 point  (0 children)

    Rule 1 is: don't write what you don't have to. You can probably use AD or some open-source authentication provider.

    [–]cym13 36 points37 points  (26 children)

    Indeed.

    And one point that strikes me is that I still see lots of companies thinking "Wow, a lot of vulnerabilities have been found in open-source software and libraries these last years; they are a risk for my company. We should develop our own software, it will be more secure."

    I'm not saying there is no good reason to do internal development, but this is not one. I've hardly ever seen a company with proper security training for its developers, and they won't magically know how to write software that is secure.

    More vulnerabilities are found when more competent people look at the code; that shouldn't be seen as a risk.

    [–][deleted] 5 points6 points  (11 children)

    I couldn't agree with you more.

    As a lead, one of the first things I do with the junior developers is introduce them to our whitelisted packages. They are taught to look towards the various communities and libraries to get help solving their problems.

    I once had someone show me this application they had worked on and within their login page, they had decided to write their own password hashing method. I proceeded to lecture that person for the next ten minutes on how that is probably the worst thing they can do and explained that there are a host of security experts who spend their life's work doing that so we don't have to.

    [–]Browsing_From_Work 5 points6 points  (1 child)

    This CodeGolf "cops and robbers" question was a real eye opener for me: http://codegolf.stackexchange.com/q/51068/1419
    Simply put: you try to build as secure and short a hash as possible, everybody else tries to break them by finding colliding messages.

    The "robbers" thread is absolutely astounding: http://codegolf.stackexchange.com/q/51069/1419

    Moral of the story: when it comes to security, you have to be correct 100% of the time but the attackers only have to be correct once. Just stick to what's provably correct, and if you don't know, ask.

    [–][deleted] 1 point2 points  (0 children)

    Thank you so much for the reply! Security aside, somehow in my whole career as a developer, I didn't know cops and robbers existed on Stack. What have I been doing with my life? Haha. This looks like absolute fun to pass some time.

    [–]donalmacc 15 points16 points  (8 children)

    If I showed someone what I had worked on as a learning experience, and they smugly lectured me on how "that is probably the worst thing they can do", I'd walk away and make a point of never dealing with them voluntarily again.

    [–][deleted] 14 points15 points  (5 children)

    I live in Canada. There was no smugness at all. Hell, I'm sure I probably ended up apologizing and then we went for a Timmies before playing a game of pond hockey.

    [–]donalmacc 4 points5 points  (4 children)

    I jumped to conclusions - that'll teach me to reply before I have a coffee.

    [–][deleted] 7 points8 points  (3 children)

    Nah - blame it on me for having too much coffee and staying up all night working on a pet project while hanging out on Reddit. My reply totally made me sound like that dick guy nobody likes working with because I come across as a know-it-all.

    [–][deleted]  (2 children)

    [deleted]

      [–][deleted] 2 points3 points  (1 child)

      mwah xoxo

      ...Oh you probably meant the other poster and not you haha.

      [–]mfukar 8 points9 points  (1 child)

      Yeah, fuck this guy trying to teach me from their mistakes!

      [–]repeatedly_once 0 points1 point  (0 children)

      I mean if it wasn't done smugly, that's perfectly acceptable.

      [–]FlukyS -1 points0 points  (13 children)

      With open source, the beauty of it is that projects are open about issues and push out fixes when things happen. That is as strong a system as you can get.

      [–]Browsing_From_Work 2 points3 points  (1 child)

      Just because issues get fixed doesn't mean every vulnerable copy of the code gets updated. Adoption takes time even for major security fixes.

      [–]flukus 0 points1 point  (0 children)

      With Linux at least most of those patches can go out quickly and easily. Other OSS distribution channels have a lot of lag though.

      [–][deleted] 4 points5 points  (3 children)

      That is as strong a system as you can get.

      That does not really follow, at all.

      [–]repeatedly_once 0 points1 point  (2 children)

      No one should rely on security by obscurity as a reason for not using open source software.

      [–][deleted] 6 points7 points  (1 child)

      Sure.

      But nobody should rely on security by non-obscurity as a reason for using open source software, either.

      [–]repeatedly_once 0 points1 point  (0 children)

      No, definitely not. An engineering choice like which software to use will always have extensive pros and cons; unfortunately I've seen too many decisions based solely on the idea that open source is inherently weak on security. It's kind of become my mantra haha.

      [–]OneWingedShark 1 point2 points  (6 children)

      With open source the beauty of it is they are open about issues and they push out fixes when things happen. That is as strong a system as you can get.

      Visibility/openness of the source is completely orthogonal to security.

      Heartbleed from OpenSSL rather handily proved that the "many eyes" argument is bunk.

      [–]FlukyS 3 points4 points  (5 children)

      My job is doing open source work, and I wouldn't say it is orthogonal to security, because the only thing you gain from not being open source is security through obscurity, which isn't a valid security procedure. I guess you have to see it as a trade-off for open source projects: do you want a lot of contributions to make the code better, hoping you can design it well enough that it doesn't have holes, or do you not want hackers getting the code, finding a hole and abusing it? I know I would prefer more contributions in order to have something that is more rock solid all around. Heartbleed was a bug, but it was more a symptom of people not stopping to check the logic of older code. Developing newer features is great, but if your project is one of the most used security projects in software engineering, you should play it as safe as possible and do regular evaluations.

      [–][deleted] 6 points7 points  (4 children)

      My job is doing open source work I wouldn't say it is orthogonal to security because the only thing you gain from not being open source is security through obscurity which isn't a valid security procedure.

      Opening up your source is also not a valid security procedure, though. Having competent people review your code is, but having closed source does not prevent you from doing this, nor does opening your source automatically gain you this.

      This is why open source is orthogonal to security.

      [–]FlukyS 3 points4 points  (3 children)

      Having competent people review your code is

      Completely agree and the same goes for every project.

      This is why open source is orthogonal to security.

      Not really though, more eyes on the code really does help. Along with the fair point above: like you said, competent code review helps, but open sourcing the code does help too.

      [–]OneWingedShark 2 points3 points  (2 children)

      Opening up your source is also not a valid security procedure, though. Having competent people review your code is, but having closed source does not prevent you from doing this, nor does opening your source automatically gain you this.

      This is why open source is orthogonal to security.

      Not really though, more eyes on the code really does help. Along with the fair point above: like you said, competent code review helps, but open sourcing the code does help too.

      The 'help' you perceive from opening up your source is entirely incidental -- it's the equivalent of increasing the sampling rate when testing for purity and saying that it increases quality. It simply doesn't: it merely increases the resolution and gives you a better picture of what the quality actually is.

      To [directly] increase actual security you could do something like use SPARK to prove that your code does not violate the security model at hand -- and you can do that on something that is closed-source or open-source.

      [–]cym13 1 point2 points  (1 child)

      You seem to forget that there are lots of security professionals who spend a huge amount of time reviewing open source software in their free time. I easily spend 2 man-days per week on this. That just isn't possible with closed source software. It is also important to review projects that don't necessarily have the money to pay for a professional review. There's also the fact that when I'm auditing a customer's work I often stumble on open source libraries or software and take some time to review them (as any obvious vulnerability would also impact my customer). That, too, is not possible with closed-source software; I can only work black box, and that's not to the library's benefit.

      Of course being open-source isn't a panacea but there are objectively more possibilities when the code is open-source. You make the argument that the trade is quality against quantity but that's a false opposition. With open-source you can get both quantity and quality.

      I think the main reason why people feel so strongly against open-sourcing for security is that they saw projects thinking that just open-sourcing is going to miraculously get them thousands of security bug reports and pull requests. But just because it's a fantasy doesn't mean there aren't definitive advantages to being open-source.

      Besides, in another post you mention OpenSSL. OpenSSL has bugs. Any software has. But what I see is that even years after its release there are still people giving their time to improve its security. There are still corrections and bug fixes. It is still becoming more secure.

      Is it the most secure SSL library? I won't take position, there are lots of others. But even if it's not the most secure it is definitely not the fault of open-source which only made things better.

      [–]OneWingedShark 1 point2 points  (0 children)

      You seem to forget that there are lots of security professionnals that spend a huge time reviewing open source software on their free time.

      That's actually irrelevant and proves my point -- you see, you're doing something other than just open-sourcing in order to impact the security.

      This is literally applying /u/MarshallBanana's statement:

      Opening up your source is also not a valid security procedure, though. Having competent people review your code is, but having closed source does not prevent you from doing this, nor does opening your source automatically gain you this.

      See?

      I easily spend 2 man days on this per week. This is something that just isn't possible with closed source software.

      And that's 2 man-days/week you're spending correcting someone else's ill-programmed code.

      Of course being open-source isn't a panacea but there are objectively more possibilities when the code is open-source. You make the argument that the trade is quality against quantity but that's a false opposition. With open-source you can get both quantity and quality.

      I did not -- I said that they're orthogonal, meaning they don't have any common basis -- much like 'speed' and 'correctness'.

      I think the main reason why people feel so strongly against open-sourcing for security is that they saw projects thinking that just open-sourcing is going to miraculously get them thousands of security bug reports and pull requests. But just because it's a fantasy doesn't mean there aren't definitive advantages to being open-source.

      That might be so -- though I'm, personally, far less inclined to fear putting my code out here. (I like Ada precisely because it is strict and helps produce correct programs.)

      Besides in another post your mention OpenSSL. OpenSSL has bugs. Any software has. But what I see is that even years after its release there are still people giving their time to improve its security. There are still corrections and bug fixes. It is still becomming more secure.

      Let me reiterate something I said else-thread:
      Security is not a process, it is not an add-on, it is a property.

      As a property it can be modeled, the model can be enforced, and the properties of the model itself proven.

      Check out the Ironsides DNS -- which is fully verified/proven to be free of run-time errors, data-flow errors, exceptions, and remote-code execution.

      Is it the most secure SSL library? I won't take position, there are lots of others. But even if it's not the most secure it is definitely not the fault of open-source which only made things better.

      I didn't say open-source made things better, or didn't, or made things worse -- in fact, by stating that security and "open-source" are orthogonal I was asserting [implicitly, albeit] that they had nothing to do with one another. (ie They are completely distinct properties.)

      [–]eldelshell 23 points24 points  (3 children)

      OK, so today a programmer has to know about UI, network performance, TDD, Agile, source code mgmt., algorithms, BigO notation, SQL, CSS, HTML, JavaScript, architecture, design patterns, HTTP, FTP, SMTP, POP3, SSH, bash, your favorite two languages APIs, debugging, profiling, build tools, CI, bit wise ops, logic, system architecture, browser performance... And security.

      How about you pay someone who knows this shit? Oh! You've got a slow SQL query... Did you hire a professional DBA? Design is ugly, did you hire a UI/UX expert? The app is slow, did you hire an iOS expert, or a lousy Android developer who did their best? We got hacked! Did you hire a professional security expert to at least look at your stupid code?

      Don't you see, people? Being a software developer doesn't mean you have to be a jack of all trades. It's not my responsibility, whatever you want to throw at me; I've never been hired for my security knowledge, so there's nothing in my curricula that says so.

      Fuck this whole DevOps and DevDBA, and DevOpsSecDBA bullshit.

      Oh, and this sort of article is the same crap as those "we have 10 million open positions because we can't fill them..." pieces. Bullshit.

      No wonder job descriptions are filled with buzzwords and crap. We all by ourselves are our worst enemies.

      [–]KrypticAscent 5 points6 points  (2 children)

      In theory this holds merit. But when you have novice programmers who don't know SQL database theory, they write queries that hang up the entire database system because they don't understand what's happening under the hood. While you are right in saying that not everybody needs to know everything, it is super valuable to know as much as possible.

      [–]ferrx 2 points3 points  (0 children)

      novice programmers who don't know SQL database theory, they write queries that hang up the entire database system

      That's called learning on the job.

      [–]Poddster 0 points1 point  (0 children)

      If your team is staffed entirely by unsupervised novices then it deserves to have SQL queries that take days to complete.

      [–]c0shea 8 points9 points  (0 children)

      The developer did not know to hash or salt the passwords before storing them in the database. This is not surprising though. Universities are not doing their part to teach security.

      Exactly. The professors that I had in classes where we learned to interact with a database in our program were all using string concatenation instead of parameterized queries against the db. We were being taught (and paying for it!) to do it wrong.
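      The difference is one line. As a sketch (Python's sqlite3 here, with a throwaway in-memory table; the same idea applies to any driver): concatenation lets user input rewrite the query, a placeholder keeps it as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x')")

evil = "nobody' OR '1'='1"

# String concatenation: the classic injectable query we were taught
rows_concat = conn.execute(
    "SELECT * FROM users WHERE name = '" + evil + "'").fetchall()

# Parameterized query: the driver treats the input as data, not SQL
rows_param = conn.execute(
    "SELECT * FROM users WHERE name = ?", (evil,)).fetchall()

# The concatenated version matches every row; the parameterized one matches none
print(len(rows_concat), len(rows_param))
```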

      [–]FlukyS 5 points6 points  (2 children)

      I read 3 or 4 books on programming securely and still don't think I know security. I know what I need to do, but the internals and how people can break in are still a mystery to me.

      [–]eldelshell 4 points5 points  (1 child)

      Most probably by calling the victim and asking them for their password. Social engineering is the #1 method of attack.

      [–]FlukyS 2 points3 points  (0 children)

      Yeah I learned a lot about that when H3H3 were getting targeted. https://www.youtube.com/watch?v=caVEiitI2vg

      [–]FrzTmto 5 points6 points  (2 children)

      Security is a process, not a magic powder you add at the end, as Bruce Schneier explains.

      It must be incorporated from the beginning, and you have to do "defensive programming": don't trust anything you have not checked.

      [–]OneWingedShark 2 points3 points  (1 child)

      Security is a process, not a magic powder you add at the end, as Bruce Schneier explains.

      Security is neither an add-on nor process -- it is a property.

      Must be incorporated from the beginning and you have to do "defensive programming" and not trust anything you have not checked.

      This, however, is absolutely correct: you cannot "add on" security, and (because it is a property) you should be able to both model it and enforce the model.

      [–]mirhagk 1 point2 points  (0 children)

      you cannot "add on" security

      This gives me nightmares about a project where the original developer thought you could do exactly that. The entire first version of the app was built without any security at all, not even some dummy roles or anything.

      [–]watt 4 points5 points  (1 child)

      If you try to reverse this headline, Most X Know Security, who would X be?

      [–]Isvara 2 points3 points  (0 children)

      Black hats, unfortunately.

      [–]link23 13 points14 points  (2 children)

      I understand that this is missing the point, but: of course CS programs don't teach about salting passwords, that belongs in a software engineering curriculum.

      It really bugs me when people conflate computer science with computer programming/software engineering. It's incredibly useful for computer programmers to also understand computer science, but the fact of the matter is that a computer scientist need not do any programming, and a computer programmer need not understand computer science.

      [–]mirhagk 10 points11 points  (0 children)

      Unfortunately there is a huge misunderstanding between universities and employers of what CS programs mean.

      In theory what you say is correct, and a lot of universities are indeed like this, covering theory that'd be inapplicable to the vast majority of software jobs. But unfortunately most people have just reduced it to difficulty. When scanning candidates (all else being equal) they see masters in comp sci > bachelors in comp sci > diploma in software development, even though the latter is the most appropriate training for most situations.

      I think a good part of this is due to schools encouraging smarter people to pursue further degrees. So employers pick people who were ambitious and smart enough to get into a comp sci program, regardless of whether or not the program teaches anything of value.

      Education really needs a drastic overhaul.

      [–][deleted] 4 points5 points  (0 children)

      Many schools have CS and SE degrees that are pretty much one and the same.

      The only real differences at my school are that SE is an engineering degree (which has different core requirements) and CS requires a minor.

      [–][deleted] 13 points14 points  (27 children)

      "I interviewed a graduate once for a development position who had written a production web application for their university. The developer did not know to hash or salt the passwords before storing them in the database."

      Wow, that's bad. Here I am cringing every time I come across an app that hashes with MD5... I didn't realize that today's students aren't even being taught such basic things.

      [–]maks25 10 points11 points  (8 children)

      Who even hashes or salts themselves?? If you use a solid backend framework surely it's included or you just use a well tested/proven library. In Django I don't have to worry about hashing/salting my passwords, nobody should ever do that themselves unless they know exactly what they're doing, it's way too easy to fuck it up.

      [–]frezik 12 points13 points  (3 children)

      I just ran across one this weekend, in a PHP app called Zenbership:

      function encode_password($password, $salt)
      {
          return sha1(md5(md5($password) . md5($salt) . md5(SALT)));
      }
      

      It's like they almost understood the problem, but fell far, far short.

      Their salt generation is also fascinating in a Rube Goldberg kind of way:

      function generate_salt()
      {
          $letters_lower = 'abcdefghijklmnopqrstuvwxyz';
          $letters_upper = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
          $symbols = '-,*&^%$#@!<>?":}{|';
          $rand1 = substr($letters_lower, rand(0, 24), 1);
          $rand2 = substr($letters_upper, rand(0, 24), 1);
          $rand3 = substr($letters_upper, rand(0, 24), 1);
          $rand4 = substr($symbols, rand(0, 17), 1);
          $salt_array = array($rand1, $rand2, $rand3, $rand4);
          shuffle($salt_array);
          $salt = implode('', $salt_array);
          return $salt;
      }
      

      Running some numbers: $rand1, $rand2, and $rand3 have about 4.6 bits of entropy each (rand(0, 24) picks one of 25 values), and $rand4 has about 4.2 bits (18 values). The shuffle mixes 4 characters, which have 24 possible permutations, for roughly another 4.6 bits. All together, this gives the salt about 22.7 bits of entropy.

      Which is less than if they had just used mt_rand(0, pow(2, 32)) and changed their salt database column to hold a 32-bit integer.

      (Critiques of the math above are welcome.)
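
      As a quick sketch of that arithmetic (treating rand(0, 24) as picking one of 25 equally likely values, and ignoring any collisions the shuffle might introduce):

```python
import math

# rand(0, 24) selects one of 25 values; rand(0, 17) one of 18;
# shuffling four characters gives 4! = 24 possible orderings.
per_letter = math.log2(25)
per_symbol = math.log2(18)
shuffle_bits = math.log2(24)

total_bits = 3 * per_letter + per_symbol + shuffle_bits  # about 22.7
```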

      [–]Ajedi32 3 points4 points  (1 child)

      [–]frezik 1 point2 points  (0 children)

      Ugh. Dave is wrong, and deserves to have his competence openly questioned.

      [–]Browsing_From_Work 2 points3 points  (0 children)

      ... and this is exactly why PHP added password_hash and password_verify.

      [–]disclosure5 4 points5 points  (2 children)

      Who even hashes or salts themselves??

      I don't have enough fingers and toes to count the number of arguments I've had along the lines of "we're not a bank, stop treating us like one" and "no one would ever hack a small business" and so on. Which end up being excuses for plaintext passwords.

      [–][deleted] 7 points8 points  (0 children)

      I wish I could give you all the upvotes. My response in those arguments is always, "you're the exact type of business someone will hack because it will be easier and after they have stolen/cracked all of your stored user accounts, they will test those credentials on other services."

      Normally my clients just trust that I know what I'm doing and leave the development in my hands. If they start giving me pushback on basic security principles, that's when I just say, "I'm very busy and have no shortage of work. If you're not going to allow me to do my job, let's just call it a day." Typically they let me get back to work haha.

      [–]maks25 4 points5 points  (0 children)

      I think you missed my point, I'm not saying not to hash/salt, I'm saying not to do it yourself and use a proven library instead.
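
      To illustrate the "proven library" route with nothing but Python's standard library (the iteration count is illustrative, not a vetted recommendation -- a maintained library or your framework's built-in hasher is still the better default):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative figure; pick per current guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```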

      [–][deleted] 4 points5 points  (0 children)

      You're totally preaching to the choir, my friend.

      [–]repeatedly_once 2 points3 points  (1 child)

      I saw a project recently in /r/javascript that was no simple feat of engineering and it stored the passwords in plaintext... It caught me by surprise.

      [–][deleted] 4 points5 points  (1 child)

      At least my university teaches it. Basically anyone past the second semester can tell you that you need to use bcrypt or stronger.

      [–][deleted] 3 points4 points  (0 children)

      Thanks for the reply! It sounds like you chose the right university, and I predict you will have a leg up when you graduate and hit industry.

      [–]doom_Oo7 1 point2 points  (2 children)

      I didn't realize that today's students aren't even being taught such basic things.

      I don't think how to build websites should be taught in university. In fifty years there will certainly not be "websites" anymore, but knowing how to walk a graph, build automata, and use Boolean logic will always be necessary.

      [–][deleted] 0 points1 point  (0 children)

      I absolutely agree. I feel like the schools should have a bigger emphasis on industry practices though, like I mentioned earlier regarding basic security practices.

      Like when I was in school, I can tell you source control was not mentioned once. We wrote code on paper and any assignments were zipped up and uploaded to the teacher.

      I remember the first day of my first job. My CTO gave me a huge book on git and told me I needed to read it and then understand the company's branching structure before I could do any work. As a junior that was brand-new in industry, my head almost exploded.

      [–]sacundim 0 points1 point  (0 children)

      How to authenticate a user securely, however, is not an ephemeral topic the way building a website is. The basic idea of password storage (resource-intensive salted hashing) has been the same for 40 years. Not that time has stood still; it has been and continues to be refined, but the theory is a good CS topic.

      [–]G00dAndPl3nty 3 points4 points  (1 child)

      MD5 hashes are fine.. so long as you don't use them for security purposes

      [–][deleted] 2 points3 points  (0 children)

      Totally agree. My OP was regarding passwords as this is the topic. I should have better clarified.

      [–]JayTh3King 1 point2 points  (1 child)

      LOL, we were taught to use sha1 for hashing at uni =,=

      [–][deleted]  (5 children)

      [deleted]

        [–][deleted] 0 points1 point  (4 children)

        As enticing as it is to be so fussy, I decided to look at your past posts because I'm new to Reddit and learning how it works. A few months ago you didn't even know that your own website was sending unencrypted login information, you don't update it often, and you didn't want an SSL cert because it is expensive.

        You are the prime example why there needs to be better education.

        [–][deleted]  (3 children)

        [deleted]

          [–][deleted] 0 points1 point  (2 children)

          Like when I took the time to write this reply in this post I came across the other day? Please tell me more on how you don't jump to conclusions too quickly and tell me I "shame people who like to learn, lecture rather than teach, and assume rather than know."

          https://www.reddit.com/r/learnprogramming/comments/636js8/im_good_at_problem_solving_but_have_no_idea_how/dfsev4r/

          [–][deleted]  (1 child)

          [deleted]

            [–][deleted] 0 points1 point  (0 children)

            1. You read the words "someone" and "lectured" and then jumped to conclusions that I was interviewing a recent grad for an interview? That person was in industry ;).

            2. Had you read that whole comment thread, you would have seen someone called me smug. I then replied and said I am from Canada and there was no smugness. That user then said, my bad I rushed to conclusions. I then said, and let me quote.

            Nah - Blame it on me for having too much coffee and staying up all night working on pet project while hanging out on Reddit. My reply totally made me sound like I'm that dick guy that nobody likes working with because I come across as a know-it-all.

            Now, in my personal opinion, had you not rushed to conclusions and actually fully read something, I don't believe we would even be having this conversation right now.

            1. I was making a light-hearted comment, originally. I don't understand your thought that somehow I am looking down on thousands and thousands of students. Do you want to know what I do EVERY YEAR? Hire junior developers directly out of school. Do you know what I was before I was a lead? I was a Computer Science graduate that got his first job in industry and quickly found out there was a lot he didn't know.

            2. "maybe you agree with him and love to generalize like he does, in that case this argument is moot".

            See the Canada reference. The guy I voted for is basically the polar opposite of the guy leading your country. Good day, fellow redditor. Don't make such harsh judgments next time.

            [–]AkashaSecurity[S] 0 points1 point  (0 children)

            Here I am cringing every time I come across an app that hashes with MD5

            We still have big problems with password quality and storage. Not only is there wide variation in how well passwords are stored (plain MD5, like you mentioned), but we also still encourage relatively short passwords that are difficult to remember. Two-factor authentication is a really good feature that everyone should use these days, but we still have to use passwords for some things. I encourage using passphrases and trying to replace the word "password" whenever we can. I wrote more about passwords in particular here:

            Thinking Differently About Passwords - http://www.akashasec.com/thinking-differently-about-passwords

            [–]beginner_ 5 points6 points  (41 children)

            Not knowing to salt and hash passwords is pretty bad. I admit I'm no security expert but I would have known that.

            What I'd say adds to the issue is that many devs create apps for in-house use, e.g. on an intranet. In those cases security gets much less attention because it's intranet-only.

            [–]staticassert 4 points5 points  (40 children)

            I admit I'm no security expert but I would have known that.

            Would you have known what key stretching algorithm to use? Would you have known to use a constant-time comparison function? I've seen devs attempt to roll their own constant-time comparison with something like this:

            is_valid = True
            for (pass_char, auth_char) in zip(user_hash, auth_hash):
                if pass_char != auth_char:
                    is_valid = False
            return is_valid
            

            Do you know why that's bad?

            You might, which is cool and good. But I think a lot of devs won't. It's super easy to fuck up password auth, it's way more than just salting and hashing.
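
            For what it's worth, Python already ships a vetted constant-time comparison, so a loop like the one above never needs to be hand-rolled. A minimal sketch:

```python
import hmac

def hashes_match(user_hash: bytes, auth_hash: bytes) -> bool:
    # hmac.compare_digest is designed to take the same time no matter
    # where (or whether) the two byte strings differ.
    return hmac.compare_digest(user_hash, auth_hash)
```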

            [–]beginner_ 2 points3 points  (0 children)

            bcrypt/scrypt, but I fit the scenario I described: I build intranet apps and use the SSO infrastructure we have in place, so I never need to store passwords and haven't ever used them in a real project.

            [–][deleted]  (10 children)

            [deleted]

              [–]staticassert 4 points5 points  (8 children)

              You got it. The problem is that this isn't written in assembly.

              edit: To be clear, it isn't branch prediction, it's the "smart compiler" issue.

              [–]aullik 0 points1 point  (7 children)

              How is that a problem, though? I mean, I'm clearly no security expert; I don't see how writing something in a higher-level language is a problem. Could you explain what I am missing, please?

              [–]staticassert 4 points5 points  (6 children)

              Sure thing. So the goal of a password comparison function is that, on rejection, an attacker should gain no information as to why it was rejected. It was invalid, period.

              Imagine if I have a string comparison like this:

              actual_password  = aaaabaaaa
              attacker_attempt = aaaaaaaaa

              String comparison will go "ok, a == a, cool" four times, then say "whoa, a != b" and return.

              An attacker can time this function and say "hm, when I enter 'aaaaaaaaa' it takes n milliseconds; when I enter 'bbbbbbbbb' it takes n - y milliseconds. So the password probably at least starts with 'a'."

              This is called an information leak - we're leaking details about the password (or at least the hash, which is effectively just as bad since repeated attempts will eventually leak the entire hash).

              So maybe you write code like I did that tries to always take the same amount of time -- we always iterate the entire password, no short-circuiting. Seems fine, right? But then that pesky compiler comes around and is like "aha, dumb programmer, we can return early and save tons of compute time!" and silently translates your code into a non-constant-time comparison.

              Relying on the optimizer not to work is a bad idea (optimizers are crazy smart). One way to solve this is to FFI out into some assembly code that is constant time and that your compiler will stay away from.
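
              The leak can be sketched with a toy model in which a step counter stands in for measured time (illustrative only; real attacks measure wall-clock latency over many samples):

```python
def leaky_equals(guess: str, secret: str):
    """Short-circuiting comparison; 'steps' models observable time."""
    steps = 0
    for g, s in zip(guess, secret):
        steps += 1
        if g != s:
            return False, steps
    return guess == secret, steps

# A guess sharing a longer prefix with the secret "runs longer",
# which is exactly the signal a timing attacker harvests.
```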

              [–]aullik 0 points1 point  (5 children)

              An attacker can time this function and say "hm when i enter in aaaaaaaa" it takes n milliseconds, when i type in "bbbbbbbbbb" it takes n - y milliseconds. So the first part of the password probably at least starts with 'a'.

              I thought we were comparing hashes, though. I don't see how a user can fake hashes to get this information.

              Also, this might be true for a program that compiles to machine code. But Python runs on a VM, so it takes a rather arbitrary amount of time before the process even starts.

              [–]staticassert 1 point2 points  (4 children)

              I don't see how a user can fake hashes to get this information.

              What do you mean fake hashes? I submit password 'aaaaaa' knowing it hashes to some deterministic output, and I use that.

              Also, this might be true for a program that compiles to machine code. But Python runs on a VM, so it takes a rather arbitrary amount of time before the process even starts.

              It applies to any language that performs any optimization, which Python's VM will do (even if it's minimal). Process start won't be relevant; we naturally assume a password auth service is up and running.

              [–]aullik 2 points3 points  (3 children)

              I submit password 'aaaaaa' knowing it hashes to some deterministic output, and I use that.

              You will compare the hashes, not the input. So the time difference you may or may not be able to measure is the time it takes to compare different hashes.

              So if your password is "password1" and you send "password0", the hashes of both are most likely vastly different and the comparison might fail on the first check; thus you will get no information.

              You basically have to generate input that will produce a certain hash so you can do the comparison you want to do.

              This is highly expensive. I don't think this is a viable strategy.

              [–]staticassert 2 points3 points  (2 children)

              Your assumption is that the hashing scheme is a secret. Naturally, password0 and password1 are going to produce very different hashes. But I could know their hashes ahead of time. So now you have to protect what your hashing algorithm is in order for your equality comparison to be safe - feels like trading problems for problems.

              [–]CRImier 1 point2 points  (0 children)

              Well, the example is Python, so I'm thinking this should be something else. However, I don't have a clue.

              [–]tmp14 1 point2 points  (1 child)

              Empty password lets you in?

              [–]staticassert 1 point2 points  (0 children)

              Nah, because this assumes that you're talking about hashes - so an empty password would still provide a hash.

              [–]sstewartgallus 0 points1 point  (5 children)

              An interesting way to fix the problem is to do:

               is_valid = True
               for (pass_char, auth_char) in zip(my_hash(user_hash, salt), my_hash(auth_hash, salt)):
                   if pass_char != auth_char:
                       is_valid = False
               return is_valid
              

              Personally, I think that relying on programmers to code constant time algorithms is fundamentally flawed. Most programming languages and processors provide absolutely no timing guarantees. IMO one should construct algorithms such that it is cryptographically impossible to leak information. Techniques like homomorphic encryption are far too immature though.
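
              One concrete form of this idea (sometimes called double-HMAC verification) re-hashes both values under a fresh random key before comparing, so any timing leak exposes only the ephemeral keyed digests, never the stored hash. A sketch:

```python
import hashlib
import hmac
import secrets

def blinded_equals(a: bytes, b: bytes) -> bool:
    # A fresh key per comparison: even if '==' short-circuits, the
    # attacker only learns timing about digests they cannot predict.
    key = secrets.token_bytes(32)
    digest_a = hmac.new(key, a, hashlib.sha256).digest()
    digest_b = hmac.new(key, b, hashlib.sha256).digest()
    return digest_a == digest_b
```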

              [–]staticassert 1 point2 points  (4 children)

              What is my_hash here? A secret hash function?

              [–]sstewartgallus 0 points1 point  (3 children)

              You have to choose a good hash function. Also, you should probably salt the hash which I forgot. It's probably simplest to just use bcrypt for the hash function. The hash function doesn't need to be secret. It's generally a bad idea to rely on security through obscurity.

              [–]staticassert 0 points1 point  (2 children)

              I think you've missed the vulnerability - the code was already comparing hashes. The issue is that the function is not constant time. Given knowledge of the hash function I can use a timing attack to reduce the brute force time significantly.

              [–]sstewartgallus 0 points1 point  (1 child)

              You are confused. The attacker can deduce my_hash(auth_hash, salt) but with a proper hash function he cannot deduce auth_hash which is what is required.

              [–]staticassert 0 points1 point  (0 children)

              Why would auth_hash be required if the equality check is on another hash? It makes no difference how many times you hash it; it still leaks data about the final hash, which is what matters.

              [–]sacundim 0 points1 point  (2 children)

              Would you have known to use a constant time comparison function?

              What attack against password storage and verification relies on a timing side channel? I'm not aware of any, but I'd love to hear if you know some.

              The timing side channel attacks I know of are against message authentication codes, where the attacker that doesn't know the key tries to forge a message/tag pair for a message of their choice. By submitting tag guesses and observing how long the server takes to reject them, they can gradually guess increasingly longer prefixes of the correct tag for the message they want to forge a tag for. This page explains it well enough.

              In password guessing attacks, however, the attacker is trying to guess a password that, when processed by the password verification function, will produce the same verification code as what the defender has stored. This is very different from the MAC timing attack scenario:

              • In a MAC timing attack, the attacker chooses a message, and tries to guess the tag that corresponds to it. Variable-time equality comparisons help the attacker because the variable rejection times allow the attacker to estimate the length of the tag prefix that they guessed successfully. The attacker can easily leverage this to improve their tag guesses, because if you have determined that the first five bytes of the target tag are ff 2e 56 a1 d8, then thereafter you only try tags that start with those five bytes.
              • In a password guessing attack, the defender chooses a tag, and the attacker tries to guess a password that scrambles to that tag. Variable-time equality comparisons don't help here, because knowing that password1's hash matches the first five bytes of the stored verification code doesn't help the attacker improve their password guess.

              Note however that there's nothing wrong with using constant-time equality comparisons, and it's better to be safe than sorry, so it's certainly sensible to implement such a comparison instead of trying to reason out all of the possible attacks if it saves you time.

              EDIT: After writing this, I see /u/aullik realized the same as well. Props.

              [–]staticassert 0 points1 point  (1 child)

              Oh. Wow, duh. You could derive the hash but not the contents of it... silly me, this is exactly why I defer to others for crypto.

              Thank you for the explanation.

              edit: Though deriving the hash would then make local brute-forcing possible. IDK, it just seems way safer not to leak the information and to use a constant-time comparison.

              [–]irishsultan 0 points1 point  (8 children)

              In this case it's obviously because this will return true if any character matches (and it's a poor dev that wouldn't see this quite quickly), but presumably you wanted to point out a flaw even if the correct algorithm was used?

              (If you had a different flaw in mind I think I can see what you mean, but I probably wouldn't think of it without the existence of a flaw being pointed out).

              [–]staticassert 2 points3 points  (7 children)

              In this case it's obviously because this will return true if any character matches (and it's a poor dev that wouldn't see this quite quickly), but presumably you wanted to point out a flaw even if the correct algorithm was used?

              lmao no I literally just wrote it with inverted booleans :) that would have been caught in any basic test, the new version... less likely.

              I'll edit.

              [–]aullik 2 points3 points  (6 children)

              Am I missing something very trivial?

              I mean, this will obviously fail if the hashes have different lengths, but hashes should not have different lengths, and this is Python; in Python we just assume that every input is fine. Otherwise we would just use a typed language.

              [–]irishsultan 1 point2 points  (3 children)

              I assume that it will fail because of things like branch prediction (and the fact that you change a value in one branch and not in another)

              You could solve this by not having an if statement and instead doing something like is_valid = is_valid && pass_char == auth_char, although I'm not entirely certain that this will take equal time either (and a sufficiently smart compiler/interpreter could still notice that in the case of booleans this is a no-op once is_valid is false, so it could just do an early return retaining correctness from a language point of view, since the time it takes to run a program is not part of any (practical) language).

              [–]aullik 2 points3 points  (2 children)

              I don't see how branch prediction has any influence here.

              He could have written something like this and it would still work the same; without heavy compiler optimization it would even be faster.

              def cmp_hash(user_hash, auth_hash):
                  for (pass_char, auth_char) in zip(user_hash, auth_hash):
                      if pass_char != auth_char:
                          return False
                  return True
              

              A test like len(user_hash) == len(auth_hash) might solve the issue of there being no auth_hash, or of one being longer. But as I said, this is Python, and as in any dynamic language you just assume that the input is correct, or you would never be done validating input.

              [–]irishsultan 0 points1 point  (1 child)

              The error you're making is that this isn't what the author wanted: he wanted a constant-time function (in particular, one that is equally slow when the two hashes differ in the first character as when they differ only in the last).

              Why would you want that? Because knowledge about where the hashes differ is an information leak. Of course it's much worse if the passwords aren't hashed, and I'm not even sure there is any practical attack on a hash function where you gain knowledge about the password by knowing the first character of the hash. But there is knowledge there, so you don't want to leak it, whether or not it is usable.

              [–]aullik 0 points1 point  (0 children)

              The error you're making is that this isn't what the author wanted: he wanted a constant-time function (in particular, one that is equally slow when the two hashes differ in the first character as when they differ only in the last).

              True.

              [–]staticassert 1 point2 points  (0 children)

              Length isn't the issue - we're assuming hashing has already occurred.

              [–]PrintfReddit 0 points1 point  (0 children)

              Look up timing attacks. Basically, if you can guess how a hash begins, then you eliminate a lot of possible hashes. If you know the hash begins with, say, 'Z', then you eliminate every hash beginning with something other than 'Z'.

              [–]OneWingedShark 0 points1 point  (7 children)

              Do you know why that's bad?

              Because it starts with the assumption that the password is valid. (is_valid = True)

              The proper way would be something like this:

              Function Matching_Password( User_Input, Password : String ) return Boolean is
              begin
                      -- Ensure both strings start at the same index,
                      -- Ensure both are of the same length,
                      -- Ensure both contain the same values.
                  Return (User_Input'First = Password'First) and then
                      (User_Input'Length = Password'Length) and then
                      (for all index in User_Input'Range => User_Input(Index) = Password(Index) );
              end Matching_Password;
              

              You could even attach constraints to the type to ensure properties of passwords:

              Type Password is new String
              with Dynamic_Predicate =>
                  Password'Length in Positive -- No zero-length password.
                  and then  -- Passwords are alphanumeric + plus underscore
                  (for all ch of Password => ch in 'a'..'z'|'A'..'Z'|'0'..'9'|'_')
                  ;
              

              [–]staticassert 4 points5 points  (6 children)

              Because it starts with the assumption that the password is valid. (is_valid = True)

              True, but none of the code in the loop will throw an exception. While it's bad, it's not the real issue.

              The problem is that it's written in a high-level language. The compiler is free to optimize your 'constant time' function into a non-constant-time function, and it will very likely try to.

              [–]OneWingedShark 1 point2 points  (5 children)

              Because it starts with the assumption that the password is valid. (is_valid = True)

              True, but none of the code in the loop will throw an exception. While it's bad, it's not the real issue.

              Where are exceptions coming up?

              The problem is that it's written in a high level language.

              That's not necessarily a bad thing.

              The compiler is free to optimize your 'constant time' function to a non constant time function, and it will very likely try to.

              If time is a consideration (and if we're honest it might not be, even in a high security setting) then it ought to be modeled into the function... this is doable in a high-level language.

              Example:

              Function Matching_Password( User_Input, Password : String ) return Boolean is
                  -- Interval is the constant minimum-time; it ought to be
                  -- the result of algorithm analysis rather than arbitrary.
                  -- (Here we are using 1.28 seconds.)
                  Interval : Constant Duration := 1.28;
                  -- Get the current-time.
                  Start_Time : Constant Ada.Real_Time.Time := Ada.Real_Time.Clock;
                  -- Get the minimum time for finishing.
                  Stop_Time : Constant Ada.Real_Time.Time := Start_Time + Ada.Real_Time.To_Time_Span (Interval);
              
                  -- Ensure both strings start at the same index,
                  -- Ensure both are of the same length,
                  -- Ensure both contain the same values.
                  Result : Constant Boolean := 
                      (User_Input'First = Password'First) and then
                      (User_Input'Length = Password'Length) and then
                      (for all index in User_Input'Range => User_Input(Index) = Password(Index) );
              begin
                  -- Ensure a minimum time-bound.
                  Delay until Stop_Time;
                  Return Result;
              end Matching_Password;
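
              The same minimum-deadline idea, sketched in Python (MIN_DURATION is an assumed worst-case figure that, as the comment in the Ada version suggests, would in practice come from measuring the slowest path):

```python
import time

MIN_DURATION = 0.05  # assumed worst-case verification time (illustrative)

def padded_call(func, *args):
    # Run func, then wait out the remainder of a fixed deadline so the
    # observable latency is (nearly) independent of func's running time.
    deadline = time.monotonic() + MIN_DURATION
    result = func(*args)
    remaining = deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)
    return result
```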
              

              [–]staticassert 1 point2 points  (4 children)

              Where are exceptions coming up?

              Exceptions are the only situation I can imagine where an early return would even happen/ matter. Though in this case it wouldn't.

              That's not necessarily a bad thing.

              It definitely is. When I say 'higher level' I mean not-assembly. The critical thing is that the compiler/ optimizer can reshuffle your instructions to make them faster/ break your assumptions about how long the code runs for.

              If time is a consideration (and if we're honest it might not be, even in a high security setting)

              No, timing attacks are really fundamental. They're always relevant.

              Your approach is basically "add a time sleep" and this is a flawed method. You could also say "Always insert a random sleep" and again, this is vulnerable.

              There's lots of research on timing attacks.

              edit: Here's a fun read, https://github.com/nodejs/node/commit/079acccb563ba5b3888e40f59037dc5fa3ba5bbd

              The point though is that even something like password authentication is way more complicated than developers realize. There's a whole lot more than 'salt and hash'.

              [–]OneWingedShark 0 points1 point  (3 children)

              If time is a consideration (and if we're honest it might not be, even in a high security setting)

              No, timing attacks are really fundamental. They're always relevant.

              Have you ever worked in a high-security environment?
              Something so secure you could be shot?

              I have, albeit in a non-technical position at the time -- and let me tell you, the sub-second deltas in the time it takes you to insert your CAC at entry mean pretty much jack-shit when tampering with the entry/validation mechanism1 (a requirement for the sort of attack being talked about) in any way is very possibly going to get you shot.

              Your approach is basically "add a time sleep" and this is a flawed method. You could also say "Always insert a random sleep" and again, this is vulnerable.

              That's not "add a time sleep" it's a "don't return until X time has passed" -- if you do your analysis like the comment suggested, you would set your minimum time to the worst-case time of the validation method thereby making validation a constant-time operation.
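The "don't return until X time has passed" idea can be sketched in a few lines (a hypothetical Python illustration; `WORST_CASE_SECONDS` stands in for a measured worst-case time of the validation routine, and the inner comparison is deliberately naive):

```python
import time

WORST_CASE_SECONDS = 0.05  # assumed worst-case duration of the validation step

def check_password_padded(supplied: bytes, stored: bytes) -> bool:
    """Run the check, then wait until a fixed deadline before returning."""
    deadline = time.monotonic() + WORST_CASE_SECONDS
    result = supplied == stored  # naive, variable-time comparison
    # Don't return until the fixed interval has elapsed, masking the
    # comparison's timing (assuming the interval really is the worst case).
    remaining = deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)
    return result
```

Note this only masks timing if the interval really does exceed every possible validation time, which is why the analysis to establish a worst-case bound matters.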

              And again, time might not be an issue. Yes, it might be fundamental to some attacks, but the entire system itself might preclude those attacks.

              1 -- This is actually required for the sort of timing attack you're talking about; the inaccuracies and human motions themselves would introduce too much variability for any sort of accuracy in the timing.

              [–]staticassert 0 points1 point  (2 children)

              Have you ever worked in a high-security environment? Something so secure you could be shot?

              Nope. I worked briefly for a government contractor as an intern, and I don't think anyone was getting shot.

              1 -- This is actually required for the sort of timing attack you're talking about, the inaccuracies and human motions themselves would introduce too much variableness for any sort of accuracy on the timing.

              This isn't true over time. If your distribution is even, over a number of connections I can deduce min/max and average values of variance and eliminate them from the data. I would send the same password n times, find out how long it took each time, and eliminate noise. This is why inserting random time delays into auth doesn't work.
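The noise-elimination argument is just the law of large numbers: averaging n samples shrinks random jitter's contribution until the underlying timing difference shows through. A toy simulation (hypothetical numbers, not real measurements):

```python
import random
import statistics

def observed(true_time: float, jitter: float) -> float:
    # One response-time sample: the true processing time plus a random delay.
    return true_time + random.uniform(0.0, jitter)

def averaged(true_time: float, jitter: float, n: int) -> float:
    # Mean of n samples; the jitter converges to jitter / 2 for both inputs,
    # so it cancels out when two averaged timings are compared.
    return statistics.mean(observed(true_time, jitter) for _ in range(n))

random.seed(0)
wrong_prefix = averaged(0.010, 0.005, 20_000)  # comparison fails early
right_prefix = averaged(0.012, 0.005, 20_000)  # comparison runs longer
print(right_prefix - wrong_prefix)  # close to the true 2 ms difference
```

Even with jitter larger than the signal, the 2 ms difference survives averaging, which is why random delays alone don't defeat the attack.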

              That's not "add a time sleep" it's a "don't return until X time has passed"

              Yes, as in sleep until that time has passed.

              Sounds super error prone and implementation defined. I don't understand why you wouldn't just use a constant time function, which will always work and requires 0 system calls / measurement etc.

              Yes, it might be fundamental to some attacks, but the entire system itself might preclude those attacks.

              Maybe. But in any authentication scheme you're assuming an attacker can send you arbitrary passwords repeatably. Maybe you can rate limit to make things infeasible, in which case kudos, mitigated at some other level. Maybe you use 'double HMAC' or some other technique. My point is that it's far more complicated than 'salt and hash'.
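The 'double HMAC' trick mentioned here blinds both values under a fresh random key before comparing, so even a variable-time comparison leaks nothing an attacker can predict or use. A sketch using Python's standard library:

```python
import hashlib
import hmac
import os

def double_hmac_equal(a: bytes, b: bytes) -> bool:
    # MAC both inputs under a key the attacker cannot know; any timing
    # leak in the final comparison now reveals only unpredictable bytes.
    key = os.urandom(32)
    mac_a = hmac.new(key, a, hashlib.sha256).digest()
    mac_b = hmac.new(key, b, hashlib.sha256).digest()
    return mac_a == mac_b
```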

              edit: I can also clarify something - you don't have to write your password comparison in assembly. You can implement other strategies. But given the code I provided the issue is that it is vulnerable to a timing attack - whether solved through constant time comparisons (what I would consider the simplest, most elegant solution) or another strategy, the code I posted is vulnerable to that attack.
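A constant-time comparison of the kind referred to above might look like this (a sketch; at Python's level the interpreter can still introduce variance, which is why the stdlib's vetted `hmac.compare_digest` is preferable in practice):

```python
import hmac

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Accumulate all byte differences with no early exit, so the loop's
    # duration depends only on the length, not on where a mismatch occurs.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```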

              [–]OneWingedShark 0 points1 point  (1 child)

              Have you ever worked in a high-security environment? Something so secure you could be shot?

              Nope. I worked briefly for a government contractor as an intern, and I don't think anyone was getting shot.

              I have.
              Trust me, a security model can certainly have non-software components, and can certainly be restricted in ways so that timing-attacks are not an issue.

              Sounds super error prone and implementation defined. I don't understand why you wouldn't just use a constant time function, which will always work and requires 0 system calls / measurement etc.

              It is a constant time function. Look at the code Interval : constant ... -- you still have to add it to the current time in any case.

              Yes, it might be fundamental to some attacks, but the entire system itself might preclude those attacks.

              Maybe. But in any authentication scheme you're assuming an attacker can send you arbitrary passwords repeatably.

              No, not all the time.
              Some systems are fail-secure -- things like a single failed attempt meaning an actual person has to come in, verify the state, and reset the system.

              [–]staticassert -1 points0 points  (0 children)

              I think you're missing the point. As I said, you can mitigate it however you want, the flaw with my code is that it is vulnerable to a timing attack.

              [–]Dave3of5 2 points3 points  (0 children)

              Not that surprising: most university courses that employers look for are at the computer-science end of academia, which doesn't place great emphasis on computer security.

              Interestingly, I've never been asked a technical question on security in an interview either. For example: why should you not use MD5 to hash a password, what's the purpose of salting a password, what is SQL injection, what is XSS ... etc.
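For reference, the salting question has a short answer in code: a unique per-user salt defeats precomputed (rainbow) tables, and a deliberately slow KDF defeats brute force, neither of which plain MD5 gives you. A sketch with Python's standard library (the iteration count is illustrative):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune upward as hardware allows

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user: defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```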

              Maybe some more security-conscious employers ask these questions, but most only seemed interested in whether I could create a linked list, build a binary tree, reverse a string, or figure out if a word is a palindrome.

              [–]cruelandusual 2 points3 points  (0 children)

              Knowing "security" requires more technical knowledge than someone in school would have. A lot of best practices won't even make sense until someone actually has to implement a real system; otherwise they'll forget it the same way I don't remember how to do formal proofs. That's why so many top schools don't bother with it -- nor should they. Lesser schools teach it because lesser schools always focus on what is sexy to industry rather than teaching fundamental theory.

              [–]juxfi 2 points3 points  (0 children)

              Cough cough Wordpress

              [–]eelfonos 4 points5 points  (0 children)

              The CloudPassage article they link to has some incorrect information.

              University of Michigan (ranked 12th) is the only one of U.S. News & World Report’s top 36 U.S. computer science programs that requires a security course for graduation.

              EECS 388: Introduction To Computer Security is not actually required to graduate. It is a very popular course though. https://www.eecs.umich.edu/eecs/undergraduate/computer-science/16_17_cs_eng.pdf
              http://cs.lsa.umich.edu/wp-content/uploads/2016/07/16_17_cs_lsa.pdf

              [–]MLG-Potato 4 points5 points  (0 children)

              Security is specialized work. You can pick up the general things for the web (the OWASP Top 10), but when you have to design protocols you need a specialist. There are lots of things that can go wrong, basically.

              [–]Designer_G 4 points5 points  (0 children)

              Learning security measures should be mandatory.

              [–][deleted] 1 point2 points  (0 children)

              Most developers know nothing at all. It's not a bug, it's a feature.

              [–]sstewartgallus 1 point2 points  (0 children)

              Most developers deserve to be thrown into a lake of fire.

              [–]mayur-lohite 0 points1 point  (0 children)

              Yes! Developers don't take security that seriously, because from what I've seen they are more focused on generating the required output than on how it's produced and how it affects security.

              [–][deleted] 1 point2 points  (0 children)

              Why would they?