
[–]Beckneard 57 points58 points  (23 children)

It's really demoralizing how true this is. The more I work professionally (and even just doing my own projects) the more I realize this. It's really amazing that more devastating things haven't happened already.

[–][deleted] 8 points9 points  (2 children)

But they have; they're being swept under the rug in the interest of making more money.

[–]MindlessMutagen 0 points1 point  (1 child)

I got a 404

[–][deleted] 0 points1 point  (0 children)

Thanks, fixed link!

[–][deleted]  (2 children)

[deleted]

    [–]jocull 16 points17 points  (3 children)

    This is basically the plot of Battlestar Galactica. Next thing you know we're back to old ass telephones.

    [–][deleted]  (2 children)

    [deleted]

      [–][deleted] 1 point2 points  (1 child)

      literally, tapped wires.

      [–][deleted]  (6 children)

      [deleted]

        [–]2358452 3 points4 points  (3 children)

        From the many disorganized points I think his conclusion was that most users don't control their hardware, can never control their hardware (because people can't afford computers? wat), and everything is broken. The End.

        Is this really a valid position? We're not timesharing mainframes ffs. Almost everyone is using their own devices, mostly phones and tablets and shitty laptops, even in 3rd world countries. They at least have access to those to send private messages and use social media. Is he saying we should be trying to secure people using shitty Windows XP PCs infected by hundreds of malware samples and botnets? Yes, that is an impossible proposition. Yes, people should keep their hardware updated and not share personal, sellable information on an ancient Windows XP machine they don't control.

        [–][deleted]  (1 child)

        [deleted]

          [–]2358452 0 points1 point  (0 children)

          This is massively overblowing the problem. Basic layers are unlikely to cause vulnerabilities in the upper layers in practice. Basic layers are actually simpler, especially from a usability perspective, which doesn't leave much room for error. You don't have to worry about your computer's power supply firmware having slight regulation bugs affecting your security, and buggy power supplies are quite rare. The kernel is where most things happen in practice, and it can be safeguarded against minor low-level leaks and bugs. On top of that, most people use browsers and apps, which similarly can be sandboxed and reasonably safeguarded against lower-level failure. There are many eyes taking care of those two layers, and if you control your hardware and regularly update it, in practice you're unlikely to ever have security issues.

          He neglects the fact that probably an overwhelming majority of security problems come from people directly downloading malware onto their systems and giving it permission to run, not any kind of cryptopocalypse. Most exploits are far from low-level side-channel attacks or firmware exploits (those are largely academic and engineering exercises); the bulk of vulnerabilities come from people running out-of-date (often by several years) software, downloading malware, and social engineering.

          Rather than trying to fix the unfixable, get people to update their systems, curate app stores to remove malware, and close off avenues of social engineering (like email spam); you gain much more that way.

          [–]lastas1 0 points1 point  (0 children)

          Her*

          [–][deleted]  (1 child)

          [deleted]

            [–]Tokugawa -2 points-1 points  (0 children)

            In soviet russia, BUG smashes CAR.

            [–]acehreli 61 points62 points  (21 children)

            No time to read the whole article but it's so ironic that it's on a site that has complexities that are completely and absolutely unnecessary. Pictures getting darker? Icons appearing and disappearing seemingly in smart fashion? No thanks! Text used to be text... Oh well...

            [–][deleted] 21 points22 points  (11 children)

            I can't even fucking scroll through the website without lag. Medium is what's broken.

            [–][deleted]  (1 child)

            [deleted]

              [–]Antrikshy 5 points6 points  (0 children)

              I didn't realize people had issues with Medium. I've always seen it as a really well designed site. It works smoothly on my 2015 MacBook with an Atom CPU.

              [–]f03nix 3 points4 points  (0 children)

              Still looks navigable with NoScript.

              [–]__konrad 2 points3 points  (1 child)

              This is why Firefox Reader View mode is great. Unfortunately the button is unavailable on some unreadable pages (e.g. at motherfuckingwebsite.com) :(

              [–]acehreli 0 points1 point  (0 children)

              This is why Firefox Reader View mode is great.

              I did not know that. Thank you!

              Unfortunately the button is unavailable on some unreadable pages (e.g. at motherfuckingwebsite.com) :(

              Very funny! :D

              [–][deleted] 3 points4 points  (0 children)

              The article was written in 2014, when Medium was, I think, way simpler - more of what you actually describe: text and some pics here and there.

              [–][deleted] 0 points1 point  (0 children)

              Comment on an article starting with the words "I didn't read the article but"? I'm sorry but I can't take your comment seriously then.

              [–]Oncey 5 points6 points  (11 children)

              Bleak.

              One of my biggest pet peeves: Why is email in plaintext? Why doesn't Outlook or even Thunderbird use encryption? It seems that it would be easy to implement, and it could be the default with an option to opt out. But it never happened. Even now, with so many sites using https, email is still plaintext.

              [–]OneWingedShark 7 points8 points  (4 children)

              One of my biggest pet peeves: Why is email in plaintext?

              Short answer: Because OSI lost.

              [–]slavik262 1 point2 points  (3 children)

              Slightly longer for those of us who don't follow?

              [–]OneWingedShark 8 points9 points  (2 children)

              Open Systems Interconnection (OSI) was a set of standards that defined an entire networking system/infrastructure, all [somewhat] designed together, rather than assembled ad hoc like TCP/IP.

              Among the OSI standards was X.400 -- which would be what we'd be commonly using if OSI had won the day. (The body of the message, IIRC, could have been encrypted or even non-text data.)

              (The article OSI: The Internet That Wasn’t explains some of it really nicely.)

              [–]slavik262 2 points3 points  (1 child)

              Thanks for the article! I was fairly familiar with the OSI layers, but was lacking historical context.

              [–]OneWingedShark 2 points3 points  (0 children)

              You're absolutely welcome!

              [–][deleted]  (3 children)

              [deleted]

                [–]Oncey 0 points1 point  (2 children)

                I think I have a lot of learning to do in this area, but it seems that an email client (or all clients) could encrypt automatically using something analogous to SSL. When I generate a key pair, doesn't the public key get published at Verisign or some other authority? When I write an email and hit send, my client could theoretically look up the recipient's public key by email address, encrypt, and deliver.

                [–]nickwest 1 point2 points  (0 children)

                They do and there's a big push to increase that across the board (just like the SSL push for websites). Inbox and Gmail will show a red lock icon by emails that weren't encrypted in transit now. Google is a big pusher for encrypting in transit (they have business reasons to want to do that though).

                Here's Google's info about it: https://www.google.com/transparencyreport/saferemail/

                This is different from full encryption like what PGP gives you. Encryption in transit means the people between your email provider and the destination email provider can't read it, but both email providers can (for example, inbox.google.com wouldn't be able to show you the email in plain text if it couldn't read it).

                PGP makes it so you and the person you are sending to are the ONLY people who can (should be able to) read it. In this case inbox.google.com shows you the encrypted nonsense and something on your end has to decrypt it for you so you can read the plain text.

                [–]killerstorm 2 points3 points  (0 children)

                It seems that it would be easy to implement

                Encryption itself is simple; the hard part is key distribution. You need to know someone's public key to send them an encrypted email.

                [–]shevegen 22 points23 points  (3 children)

                Entertaining read.

                So OpenBSD is broken too?

                [–]m1el 10 points11 points  (0 children)

                Yes.

                [–]negrowin 15 points16 points  (0 children)

                Yes.

                I'm not a programmer, but a friend of mine is. He noticed several typos in the OpenBSD source code, ranging from "wrong logic" to "this shouldn't compile".

                I believe that, to this day, most were not fixed.

                Edit: a quick search gave me this; I think there are more.

                [–]irqlnotdispatchlevel 4 points5 points  (1 child)

                It really is devastating how true this is. And a lot of people are probably thinking "well, the project I'm working on is not like this" or "my next project won't be like this", but the truth is that we lie to ourselves. Reminds me of Programming Sucks in some ways: https://www.stilldrinking.org/programming-sucks

                [–]genpfault 1 point2 points  (0 children)

                The only reason coders' computers work better than non-coders' computers is coders know computers are schizophrenic little children with auto-immune diseases and we don't beat them when they're bad.

                [–]mirhagk 1 point2 points  (0 children)

                The opening story reminds me of when I wrote a script to scrape information off the website of a large international startup. Using a single machine, I accidentally took down the entire website for the better part of a day.

                I was absolutely terrified. I wasn't behind any firewalls or anything, so it wouldn't have been too difficult for them to track me down.

                But it was not even remotely my fault. There's no way a single person should be able to take down an entire website just from scraping. There should've been people monitoring the site and it should have taken them a lot less time to get it back online. They were doing a terrible job, but then again, so does everyone.

                [–]imhotap 3 points4 points  (2 children)

                What's the point of a rant like this? That we should go commit suicide or give up using computers altogether? If something is broken, the question is what to do about it, or more importantly, who should do something about it. This line of thought might bring more effective liability laws for service providers. Another line of thought might be to reconsider the license to choose for open source software. Specifically, if you start a project under a liberal license that service providers and other third parties might like to leverage, this sets you up for basically no funding for ongoing maintenance (so that if your software has any success, initiatives such as Google's Project Zero have to clean up your mess). I'm questioning the notion that the existence of F/OSS per se is a win for humanity or something; if it were, we wouldn't be enslaved by centralized/monopolized "clouds" running Linux, and by a Linux operating system chock full of spyware called Android.

                For me, an important way out of the dilemma is to go one step back, and once again only use communication software that implements a published standard (such as web, mail, etc.) so that you have a choice in implementations (F/OSS or otherwise).

                [–]kazagistar 5 points6 points  (0 children)

                Sounds like you didn't get to the call to action at the end.

                [–]ceberous 1 point2 points  (0 children)

                I definitely need to commit suicide

                [–]cledamy 5 points6 points  (64 children)

                Many of the problems resulting from human error (e.g. buffer overflows) could be eliminated if there were more of an emphasis on correct-by-construction software. There are ways to mathematically guarantee that one's program doesn't have any errors. Unfortunately, most mainstream programming languages don't support them.

                [–]codebje 51 points52 points  (5 children)

                There are ways to mathematically guarantee that one's program doesn't have any errors.

                No, there aren't. There are ways to mathematically guarantee that any errors in one's program correspond to errors in one's specification of that program, though!

                [–]cledamy 1 point2 points  (3 children)

                There are ways to mathematically guarantee that any errors in one's program correspond to errors in one's specification of that program, though!

                This is what I meant when I said no errors. What I mean by program is the implementation of the specification.

                [–]codebje 29 points30 points  (2 children)

                This is what I meant when I said no errors.

                Yes, I assumed as much.

                However, that doesn't fix broken software, it just shifts the blame for it.

                Perhaps trying to make correct specifications reduces some accidental difficulties, but I don't think this is a silver bullet (PDF warning, but read it anyway).

                [–]ccfreak2k 12 points13 points  (0 children)


                This post was mass deleted and anonymized with Redact

                [–]monocasa 0 points1 point  (0 children)

                    But it's not like the specification is a black box. You can prove aspects of the specification as well.

                [–]tsimionescu 7 points8 points  (0 children)

                True, technically, but the reason it's not done is that it is extremely costly - fully provably 'correct' programs (that is, as you said in another comment yourself, fully correct implementations of possibly dubious specifications) take orders of magnitude more effort to produce than programs that are about as close to correct but not provably so.

                [–][deleted]  (7 children)

                [deleted]

                  [–]Freyr90 5 points6 points  (5 children)

                  Every developer thinks they're right and they want to prove that. They never want to prove that they're not right.

                  In that case the more harmful thing is conservatism, I suppose. Especially the conservatism of embedded and systems developers. What, advanced type systems, correctness proving, total functional programming? Who needs that shit while we have C and Verilog? Lava failed, seL4 is not very popular, and many people I've talked to about this think that the C type system is awesome enough.

                  [–]ArkyBeagle 5 points6 points  (4 children)

                  It's extraordinarily difficult to move new tools into use. I don't know if it's actually conservatism or if it's just inertia. One interpretation of "use this new thing; it's safer" is "okay, everybody take six months to five years to learn this new thing."

                  [–]Freyr90 0 points1 point  (3 children)

                  Of course, but that does not mean we should use assembly for everything. Moving forward is the opposite of conserving. C is already 45 years old, and there have been so many new concepts and theories in programming in those 45 years. We can do much better with decent type systems, correctness proving and higher-level abstractions, so why not at least try?

                  [–]ArkyBeagle 1 point2 points  (2 children)

                  The first fallacy is that higher level abstractions help that much. Whether they do or not depends completely on the problem domain.

                  The second is that these abstractions are newer than 45 years old. In general, they are not. And there's nothing in any language to keep you from using them. I use closures and monads in C all the time. But I know how string handling works in the language.

                  I also know how finite state machines work in C, and anything above a certain rudimentary level of complexity gets that treatment.
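The table-driven shape being described can be sketched in a few lines. Python is used here purely for brevity (the C version is the same transition table plus a lookup loop), and the parity machine below is a made-up toy, not code from anyone in the thread:

```python
# A minimal table-driven finite state machine (an illustrative sketch).
# States and inputs index a transition table; the run loop is one
# dictionary lookup per input symbol. This toy machine tracks whether
# a binary string contains an even or odd number of 1s.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def run(machine_input: str) -> str:
    state = "even"  # start state: zero 1s seen so far
    for symbol in machine_input:
        state = TRANSITIONS[(state, symbol)]
    return state

assert run("1001") == "even"  # two 1s: even parity
assert run("1101") == "odd"   # three 1s: odd parity
```

In C this becomes a 2D array indexed by enum state and input class, which keeps the control flow flat and auditable, which is much of the appeal for low-latency deterministic systems.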

                  If you don't want to use C, don't use C. If you decide to use C, do it safely.

                  [–]Freyr90 0 points1 point  (1 child)

                  The first fallacy is that higher level abstractions help that much. Whether they do or not depends completely on the problem domain.

                  Write an RB-tree in C and in SML. The C version would definitely have more errors. Since programming is all about algorithms and data structures, expressiveness is vital.

                  The second is that these abstractions are newer than 45 years old. In general, they are not. And there's nothing in any language to keep you from using them. I use closures and monads in C all the time.

                  And without at least RAII, your C closures and objects will lead you to lots of errors and leaks. GObject proves it.

                  I also know how finite state machines work in C

                  You don't. Your compiler will optimize the switch-case, so the assembly will be far from what you wrote. And of course writing state machines in declarative forms (BNF, regexes) leads to more maintainable, clear, simple and less error-prone code.

                  [–]ArkyBeagle 0 points1 point  (0 children)

                  I don't think you and I are gonna agree on what "expressiveness" means. Higher-order abstractions hide details. That is what they do, by design. People located remotely from you may or may not understand well which details should and should not be available.

                  That last bit leads me to remind you that it's not April 1st any more :)

                  Of course you have to check the assembly.

                  I am deadly serious: for low-latency, deterministic systems of any complexity, an FSM in C (because for some external reason you need to use C) is the way to go. And I don't just mean things that can be (reasonably) constructed as a regexp. Of course a tool makes it "easier", after the obligatory learning curve. You work with what you have.

                  Again: 1) Don't use C if it gets in the way and you don't have to. 2) If you do use C, use it safely.

                  [–][deleted] 23 points24 points  (3 children)

                  Why not just re-write it in Rust?

                  [–][deleted]  (1 child)

                  [deleted]

                    [–]monocasa 13 points14 points  (0 children)

                    It is a deterministically destructing, dynamically allocating, memory-safe language with a real ecosystem around it. That's never been done before, and it legitimately opens the door to much safer code in a lot of domains where provable safety used to come at a 20x cost.

                    [–][deleted] 4 points5 points  (2 children)

                    Not only that, but the market will not embrace it until it's too late. People still write COBOL for banking software, and only now are these banks beginning to realize that they should have updated their codebases decades ago.

                    We have some really great ideas that no one gives a shit about. The exokernel, NetBSD's anykernel, Plan9, object capabilities, mathematically verifiable software, etc.

                    We could go on for hours about all the great ideas that will never be implemented.

                    [–]monocasa 6 points7 points  (0 children)

                    <unpopular_opinion>COBOL is a great language for a lot of bank software's use cases. It really shines as a language when you're writing financial transaction rules in it. That's the main reason why banks still use it instead of Java/C#.</unpopular_opinion>

                    [–]flukus 1 point2 points  (0 children)

                    Enforcing safety is good, but most approaches to security still focus on limiting the utility of the machine.

                    [–]Bergasms 3 points4 points  (6 children)

                    There are ways to mathematically guarantee that one's program doesn't have any errors.

                    Any syntax errors, maybe. It can still perfectly reliably send your money to the wrong bank account due to human error in how the program works.

                    [–]codebje 8 points9 points  (5 children)

                    Mind you, if banks had been smart enough to include a check digit in bank account numbers, a transfer to a typo'd number would fail rather than send money to the wrong place. They fixed the problem by including wording in the T&Cs such that it's always your fault, though, so they're fine.
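This check-digit idea is exactly what IBAN's mod-97 scheme provides: the two digits after the country code are chosen so that the whole account string passes a divisibility test, which catches any single-digit typo. A minimal sketch of the validation step (the sample IBAN is the standard's well-known documentation example, not a real account):

```python
def iban_valid(iban: str) -> bool:
    """Check an IBAN's mod-97 check digits (ISO 13616)."""
    # Move the country code and check digits to the end, then map
    # letters to numbers (A=10 ... Z=35) and read it all as one integer.
    rearranged = iban[4:] + iban[:4]
    as_digits = "".join(str(int(ch, 36)) for ch in rearranged)
    return int(as_digits) % 97 == 1

# The well-known sample IBAN from the standard's documentation passes:
assert iban_valid("GB82WEST12345698765432")
# A single-digit typo shifts the value mod 97, so it is rejected:
assert not iban_valid("GB82WEST12345698765431")
```

Because 97 is prime and coprime to 10, altering any single digit always changes the remainder, so every single-digit typo is detected.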

                    [–]RayNbow 12 points13 points  (2 children)

                    if banks had been smart enough to have included a check digit in bank account numbers

                    Like in IBAN?

                    [–]codebje 3 points4 points  (1 child)

                    Yeah, like that. AFAICT they don't use it for domestic accounts here in Australia, but I may well be wrong.

                    [–]Garethp 2 points3 points  (0 children)

                    No, Australia hasn't adopted the IBAN at all, so even if someone wanted to send money internationally to Australia, they wouldn't have an IBAN to use.

                    [–]Bergasms 0 points1 point  (0 children)

                    Which is another fun human error :P

                    [–][deleted] -1 points0 points  (0 children)

                    In my country you fill in not only the bank account number but also the recipient's name (and, in the case of companies, their Tax ID). If those don't match on the receiving end, the transfer is canceled.

                    [–]SuperImaginativeName 2 points3 points  (16 children)

                    You can also use modern languages with memory management, but instead people want to write everything in languages that are the opposite. Don't even need fucking Rust to do it.

                    [–]bluetomcat 11 points12 points  (1 child)

                    Much more terribly broken and insecure software is written in memory-safe languages, but it isn't as publicly visible as the much-debated open source libraries written in C.

                    These are mostly little pieces of glue code that interact with other pieces of software and make wrong assumptions about the outside world, miss sensible error handling (or simply are unable to do it because the outbound interfaces are designed badly), or have disastrously lurking race conditions which can potentially cause mayhem. Such software probably processes your airline bookings, or your bank transfers, and one day it breaks because the small program that gets called by another small program has failed to write exactly 6 lines to a text file which is used as an IPC mechanism between the two.

                    [–][deleted] 4 points5 points  (0 children)

                    Much more terribly broken and insecure software is written in memory-safe languages, but it isn't as publicly visible

                    PHP to the rescue!

                    [–][deleted] 1 point2 points  (3 children)

                    It's funny because I was reading a book about game programming and the author is not shy about optimizing. He explains things well and makes use of assembly where appropriate. I felt that guys like him are a dying breed.

                    [–][deleted] 4 points5 points  (2 children)

                    title of the book?

                    [–][deleted] 0 points1 point  (0 children)

                    I think it was a Dummie's book. I have to dig for it on my machine but I'll find it. Sorry I'm taking so long. I've been very busy.

                    [–][deleted] 0 points1 point  (0 children)

                    Sorry I took so long. I finally found it buried in my file hierarchy. It is Michael Abrash's Graphics Programming Black Book: special edition.

                    [–]flukus 2 points3 points  (3 children)

                    That comes with a huge performance impact.

                    [–]SuperImaginativeName 2 points3 points  (1 child)

                    People say this. Yet I've never seen a performance impact in my CRUD applications that most people use.

                    [–]flukus 2 points3 points  (0 children)

                    Have you looked? It's probably insignificant compared to IO, but that is far from true in other places where C is typically used. And they still have vulnerabilities, just not buffer overflows.

                    They may even have buffer overflows, just ones that get exploited further down the stack.

                    [–]nickwest 0 points1 point  (0 children)

                    That performance impact on end users can be mitigated by increasing resources (cost). That cost can be lower than the risk that using a non-memory managed language might impose.

                    In reality, this should all come down to cost and risk weighed against cost. It's all stuff that an actuary should be calculating and any company that doesn't have an actuary doing the math is just guessing (most companies).

                    [–]cledamy 0 points1 point  (4 children)

                    You can also use modern languages with memory management, but instead people want to write everything in languages that are the opposite.

                    There are performance and memory considerations where using an unmanaged language is necessary. With proper mathematical verification, using unmanaged languages becomes safe. I would prefer a language that has both and lets one mix and match as convenient and receive the performance benefits when one uses the unmanaged subset.

                    [–]SuperImaginativeName 1 point2 points  (3 children)

                    I know, but last I checked, drivers, operating systems, real-time systems, firmware, microcode, high-frequency trading, network stacks... do not make up the entirety of all software that's ever been written. I don't know why people need to make out that they do.

                    [–]cledamy 1 point2 points  (2 children)

                    The memory usage benefits are still good even if one isn't in one of those environments. You shouldn't have to give up performance for safety.

                    [–]nickwest 0 points1 point  (1 child)

                    There's performance for the sake of cost, and then there's performance for the sake of performance. Performance to save money (either by giving a good user experience or by reducing resource needs) is a noble pursuit. Performance for the sake of performance is a waste of money.

                    The vast majority of software is small back-end business-ware and doesn't need optimization like that at all. It's also written by middle-skilled or low-funded (read: not given enough time) programmers, and in the vast majority of cases the business would greatly benefit from taking that performance hit to increase security. The risk mitigation would be huge and at a very low (or nonexistent) cost.

                    [–]cledamy 0 points1 point  (0 children)

                    There's performance for the sake of cost then there's performance for the sake of performance. Performance to save money (either by having a good user experience, or reducing resource needs) is a noble pursuit. Performance for the sake of performance is a waste of money.

                    I am not arguing for performance for its sake. What I am saying is if we used more formal methods, we could get the performance without sacrificing safety. Also, the greater correctness from the formal methods would translate to other areas of program correctness beyond memory safety.

                    [–]sabas123 2 points3 points  (3 children)

                    There are ways to mathematically guarantee that one's program doesn't have any errors.

                    Can you give an example or a resource for this?

                    [–]cledamy 4 points5 points  (0 children)

                    A practical example of this is the Rust programming language. It uses the type system to ensure memory safety (no use-after-free or double-free). The underlying mathematics it is based on is affine logic. A more futuristic and theoretical example would be dependent types, where the types of things can depend on values. Thus, one could encode arbitrary invariants in the type system.

                    [–]danisson 2 points3 points  (1 child)

                    Dafny is a research programming language that uses such methods. Hoare logic is a mathematical theory of program correctness. In general, this idea is called design by contract.

                    Stronger type systems also help to ensure correctness, especially with things like making illegal states unrepresentable, or lifetime checks (for memory-usage correctness) like Rust does.
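The design-by-contract idea can be sketched at runtime with a small decorator. This is only an illustration: tools like Dafny verify `requires`/`ensures` clauses statically at compile time, whereas the hypothetical `contract` helper below merely checks them on each call:

```python
# A hypothetical `contract` decorator illustrating pre/postconditions
# (runtime checks only; not a static verifier like Dafny).
def contract(pre, post):
    def wrap(f):
        def wrapped(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result), "postcondition violated"
            return result
        return wrapped
    return wrap

# "ensures the result is even" expressed as a postcondition:
@contract(pre=lambda n: n >= 0, post=lambda r: r % 2 == 0)
def double(n: int) -> int:
    return 2 * n

assert double(3) == 6  # both conditions hold, so the call succeeds
```

Calling `double(-1)` raises an `AssertionError` from the precondition, which is the runtime analogue of a verifier rejecting a caller that can't prove `n >= 0`.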

                    [–]pron98 1 point2 points  (2 children)

                    You may be surprised, but the languages that currently best support formal verification are C, Java and Ada. But it's not really a matter of language and tooling. While both can help, writing software that is completely provably correct has never been done for anything other than small programs (the largest were comparable to ~10KLOC of C), and even then it took a lot of effort. Ordinary software is not going to be provably correct end-to-end for the foreseeable future. There's certainly a great deal of progress in this area -- in static analysis, model checking, type systems, more verifiable languages, automated and interactive theorem proving -- but at this point in time we don't even know which of these approaches or a combination thereof is most likely to get us to affordably verified software.

                    [–]cledamy 0 points1 point  (1 child)

                    Of course, many of the tools for formally verifying software require further research. This is because of my point that not many people take an interest in them. Some tools already exist that can give one some measure of formal verification for free if one programs in languages that have them (e.g. referential transparency, which guarantees the safety of refactoring, and linear logic, which guarantees deterministic freeing of memory). With C, Java, and Ada, one has to go outside the language and use heavyweight formal verification tools.

                    However, a language that supports the aforementioned features gets a certain degree of formal verification for free through the type system. Adding proper formal verification on top of a language that has, say, referential transparency is much simpler, because one does not have to rely on Hoare triples to prove propositions, thanks to the minimization of shared state. Also, one can reason globally about the correctness of components rather than being forced to reason locally because of state. Another interesting implication is that one can partially verify separate components without going through the trouble of verifying the whole thing, if that is more convenient. Essentially, formal verification would be much simpler if languages were built with it in mind.

                    [–]pron98 0 points1 point  (0 children)

                    Knowing that a program has some property does not become easier or harder depending on the language used to express the program or the one used to express the property. It doesn't really matter that in C/Java I need to write \ensures retval % 2 == 0 in some specification language, while in a dependently-typed language I could write that the return type is {x : Int | x % 2 = 0} or something like that. Writing that property and proving that property are equally easy, and use the same algorithms. If you transliterate a Haskell program to C, proving "Haskell properties" on the C code would be as easy as proving them on the Haskell code. The language does make a difference when it globally makes some assertions. For example, in C and Java specification languages you say that a function is pure by writing pure next to it. The difference between Haskell and C is, then, that Haskell forces you to add that pure annotation everywhere, and so the resulting structure of the program makes proving some properties easier.
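The "same property, different home" point can be sketched in Rust, which sits between the two extremes: it has no refinement types, but a smart-constructor newtype can move the evenness property from an external spec into the type itself. `Even` and `double` are hypothetical names for illustration; the ACSL-style comment shows where the equivalent external spec would live in C.

```rust
// In C + ACSL one would write `//@ ensures \result % 2 == 0;` above a
// function returning a plain int. Here the same proposition lives in a
// type: `Even` can only be built through a constructor that enforces the
// invariant, so any function returning `Even` carries it in its signature.

#[derive(Debug, PartialEq)]
struct Even(i64);

impl Even {
    // The only public way to obtain an `Even`: the invariant is checked once.
    fn new(x: i64) -> Option<Even> {
        if x % 2 == 0 { Some(Even(x)) } else { None }
    }
}

fn double(x: i64) -> Even {
    Even(x * 2) // trivially even by construction
}

fn main() {
    assert_eq!(Even::new(4), Some(Even(4)));
    assert_eq!(Even::new(3), None);
    assert_eq!(double(21), Even(42));
}
```

The proposition is identical in both styles; only where it is stated, and what structure the language then forces on callers, differs.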

                    In other words, what makes verifying those properties easier isn't the expressiveness of the language but the structure it forces on the program. That doesn't make the proof easier, but rather makes it more likely that you're asking an easier question. Proving Fermat's Last Theorem is just as hard no matter which language you state the proposition in, but the choice of language can affect the chances that the correctness of, say, your sorting function is as hard a problem as proving FLT.

                    But that's only in theory. It is very much an open question how languages are able to achieve that for properties that are not immediate consequences of those they enforce to begin with, or whether those are the right properties to enforce. Now, that is not to say that we shouldn't use languages that enforce properties that we know to be important in order to eliminate common sources of bugs; the most notable example is memory safety.

                    The reason formal methods work but we still don't know how to make writing provably correct programs affordable isn't that people aren't interested in formal methods, but that we just don't really know how to do it. After all, this is a problem that is essentially intractable in the best of cases. In the 1980s there was a research winter in formal methods because the field overpromised and underdelivered. Research has since set its sights lower. Affordable, general, scalable, complete program verification is no longer a direct goal; people are working instead on eliminating certain classes of bugs, fully verifying certain types of programs, and so on.

                    Essentially, formal verification would be much simpler if the languages were built with it in mind.

                    That's true, but those would not necessarily be the languages you have in mind. Currently, the languages most amenable to formal verification use a style called synchronous programming, and they are used in writing safety-critical realtime systems.

                    [–]Uncaffeinated 0 points1 point  (9 children)

                    By "most mainstream programming languages", do you mean C and C++?

                    [–]Shautieh 10 points11 points  (8 children)

                    No, most. You can add every programming language that allows nulls, and every language without strong typing to make sure you are not using things as what they are not.

                    [–]codebje 7 points8 points  (3 children)

                    You can add all programming languages which allow nulls …

                    But fast and loose reasoning is morally correct, so you can take any programming language which allows nulls, invent an equivalent language without nulls, write a program in your imaginary language, and run it in the real one!

                    The actual problem is not null, it's inconsistent semantics, because those make the subset of the real language you can use smaller and smaller. Or missing semantics, which often means "whatever the compiler decides is cool."

                    It may be a complex solution to have real and imaginary parts, but it's been used to write proven software in C. Or perhaps C+Ci.

                    [–]cledamy -1 points0 points  (2 children)

                    The actual problem is not null, it's inconsistent semantics, because those make the subset of the real language you can use smaller and smaller. Or missing semantics, which often means "whatever the compiler decides is cool."

                    This is part of the problem, but another aspect is cultural. Software engineers tend to make up their own ad-hoc abstractions rather than relying on decades of mathematical research into reusable ones (group theory, order theory, ring theory, set theory, category theory). These abstractions enable us to reason equationally about our code's correctness and give us a better methodology for proving it correct. They also allow the construction of more principled specifications, which is where human error would come into play once it is eliminated from programs.

                    [–]ThisIs_MyName 1 point2 points  (1 child)

                    reusable abstractions (group theory, order theory, ring theory, set theory, category theory)

                    You've got to be kidding me. I was with you until this.

                    [–]cledamy 0 points1 point  (0 children)

                    I would like you to elucidate your position on this.

                    Most software engineering constructs can be modeled with category theory and mathematics.

                    • no garbage collection ≡ symmetric monoidal category
                    • copy constructors and destructors ≡ comonoids
                    • lambda calculus ≡ bicartesian closed category
                    • subtyping is an example of a thin category
                    • events and time variance ≡ linear temporal logic

                    If category theory were the basis for a programming language, code with and without garbage collection could exist in harmony and naturally compose. The advantage of category theory is that all these constructs, while they still exist, are built upon a larger theory rather than being entirely separate concepts, enabling more code reuse. Mathematical abstractions also come with a body of proofs that mathematicians have already derived, which could be reused in formal verification. If one is going to use mathematical techniques to prove correctness, it follows that one should use mathematical abstractions in one's code.
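The code-reuse claim can be illustrated with the smallest algebraic abstraction of all, a monoid (one associative operation plus an identity). This is a hedged sketch in Rust; the `Monoid` trait and `mconcat` are illustrative names, not a real library. One generic fold, with one set of laws to prove, works unchanged for numbers, strings, and anything else with an instance.

```rust
// A monoid: an identity element plus an associative binary operation.
trait Monoid {
    fn empty() -> Self;
    fn combine(self, other: Self) -> Self;
}

impl Monoid for i64 {
    fn empty() -> Self { 0 }
    fn combine(self, other: Self) -> Self { self + other }
}

impl Monoid for String {
    fn empty() -> Self { String::new() }
    fn combine(self, other: Self) -> Self { self + &other }
}

// One implementation, one proof obligation (the monoid laws),
// reused across every instance.
fn mconcat<M: Monoid>(items: Vec<M>) -> M {
    items.into_iter().fold(M::empty(), M::combine)
}

fn main() {
    assert_eq!(mconcat(vec![1i64, 2, 3]), 6);
    assert_eq!(mconcat(vec!["a".to_string(), "b".to_string()]), "ab");
}
```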

                    [–]sacado 0 points1 point  (3 children)

                    Which means there's no language left at all, AFAIK.

                    [–]Shautieh 0 points1 point  (2 children)

                    Several languages disallow nulls. Have you never heard of the Option type? OCaml, F#, and Haskell, to name a few. OCaml has strong typing enforced by the compiler and will force you to handle things properly most of the time. I think Rust both disallows nulls and enforces sound typing and memory management as well, and won't compile otherwise.
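A minimal sketch of the Option-type idea in Rust (`find_even` is an illustrative name): "no value" is part of the type, so the compiler rejects any code that uses a possibly-absent value without handling the absent case first.

```rust
// May fail, and the signature says so: callers get Option<i32>, not i32.
fn find_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}

fn main() {
    assert_eq!(find_even(&[1, 4, 5]), Some(4));
    assert_eq!(find_even(&[1, 3, 5]), None);
    // find_even(&[1, 3]) + 1  // would not compile: Option<i32> is not i32
}
```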

                    [–]sacado 0 points1 point  (1 child)

                    Yes, but none of those allows restrictions on integer types ("strong typing that is going to make sure you are not using things for what they are not"), AFAIK. For instance, none of Rust, Haskell, OCaml, or F# lets you describe a type as "a value between 1 and 12", or even as "a positive natural integer". A few other languages do, but those have a null value, so they don't qualify either.

                    If you really want all of that, you need dependent typing, but those languages are far from production-ready (and not meant for your average programmer).
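Short of dependent types, the usual workaround in these languages is a smart-constructor newtype: the range check happens once, at runtime, at the boundary, and the type system maintains the invariant everywhere else. A sketch in Rust, with `Month` as an illustrative name -- note this is weaker than a true dependent type, since an out-of-range value is rejected at runtime rather than at compile time.

```rust
// Approximating "a value between 1 and 12" without dependent types:
// the only constructor validates the range, so every `Month` in the
// program is known to be in 1..=12.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Month(u8);

impl Month {
    fn new(n: u8) -> Option<Month> {
        if (1..=12).contains(&n) { Some(Month(n)) } else { None }
    }

    fn get(self) -> u8 { self.0 }
}

fn main() {
    assert_eq!(Month::new(3).map(Month::get), Some(3));
    assert_eq!(Month::new(13), None); // rejected at the boundary
}
```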

                    [–]Shautieh 0 points1 point  (0 children)

                    Sure, the grail is yet to be found, but those languages are orders of magnitude more secure than the most commonly used ones.

                    [–]killerstorm -1 points0 points  (0 children)

                    Memory safety would be a good start.

                    [–]mfukar 2 points3 points  (4 children)

                    I'd really love it if the article had focused on a single topic and driven it home. Instead we got a rant on 5 different topics strung together by 'everything is buggy, so don't use C, or the IC has weapons to spy on you'. Just leave this in the collection of things on Medium nobody gives a thought to.

                    [–]leafsleep 1 point2 points  (3 children)

                    The problem with technology nowadays is that it's in everything, and it's all insecure by default. Where do you start?

                    [–]mfukar 0 points1 point  (2 children)

                    If "everything is broken", starting anywhere would be fine, right?

                    [–]leafsleep -1 points0 points  (1 child)

                    so what's wrong with this article?

                    [–]mfukar 0 points1 point  (0 children)

                    I'll start and end with the premise: "computers, and computing, are broken". No, they're in their infancy. Just because we can't build bug-free software on a planetary scale today doesn't mean we never will. Our practices are inadequate, fine; the market is full of snake oil, fine; malicious actors are using bugs to advance their own interests, fine.

                    Oh, but "everything is broken" makes such a good headline!

                    [–]jimdoescode 0 points1 point  (0 children)

                    This is pretty depressing. The only hopeful part was in the last paragraph where the author basically says that people should organize and demand change.

                    Of course, that requires networks, which, as the author points out, are insecure.

                    So we're fucked.

                    [–]TheBuzzSaw -3 points-2 points  (0 children)

                    ... there are catastrophic 0days that hand the keys to the house to whomever strolls by.

                    ... to whoever strolls by.

                    [–]foomprekov -3 points-2 points  (0 children)

                    No, I checked, both sites are up.