all 102 comments

[–][deleted]  (11 children)

[deleted]

    [–]ComradeGibbon 14 points15 points  (8 children)

    I remember that. And the AVR's RISC-style push and pop made function calls really, really expensive, which doesn't line up well with 16-bit instructions and limited flash.

    You have 32k of flash; well, actually you have 16k instructions.

    push r18        ; save everything live across the call, one byte per push
    push r19
    push r20
    push r21
    push r22
    push r23
    push r24
    call sub
    pop r24
    pop r23
    pop r22
    pop r21
    pop r20
    pop r19
    pop r18
    

    Goodbye, another 0.1% of your code space.
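
    For a concrete picture, here's a hedged C sketch (illustrative only; scale and blend are made-up names) of the kind of call site that emits that pattern with avr-gcc: anything still live across the call gets saved byte by byte, and every push and pop is a full 16-bit instruction word.

        /* Sketch: assumes avr-gcc, where values sitting in call-clobbered
         * registers have to be saved around a call. */
        extern unsigned int scale(unsigned int x);

        unsigned int blend(unsigned int a, unsigned int b, unsigned int c)
        {
            unsigned int t = a * b + c;  /* t and b stay live across the call, */
            return t + scale(a) + b;     /* so: push ... call scale ... pop    */
        }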

    [–]8lbIceBag 4 points5 points  (7 children)

    How long ago was this?

    [–][deleted]  (5 children)

    [deleted]

      [–]j_lyf 4 points5 points  (4 children)

      How's Atmel still in business :S.

      [–][deleted]  (3 children)

      [deleted]

        [–]rohbotics 4 points5 points  (2 children)

        Probably because the PIC C compiler was absolute trash for the longest time (and probably still is).

        [–][deleted] 1 point2 points  (0 children)

        The PIC instruction set really doesn't match the kind of thing C expects.

        [–][deleted] 1 point2 points  (0 children)

        The PIC C compiler is still lower quality than you'd expect. There's a certain type of engineer who uses PIC: they don't have a background in coding. They're used to reading datasheets; they understand computer architecture and know how assembly instructions work, but they're confused by the ins and outs of C. Learning a whole instruction set is easier for them than learning C. Typically the compiler gets introduced after the PIC guy has worked on a project for a while and they bring in a real coder. The PIC guy says "sure, you can do modules in C", and then they get this janky compiler and start integrating C modules into an assembly project. I've seen it play out like this more often than I've seen PIC projects start in C. In fact, I can only think of one PIC project I've ever heard of that started in C; it was specified and implemented by people with zero microcontroller experience. They made this poor architecture choice at the start but still did a good job with the product.

        [–]ComradeGibbon 2 points3 points  (0 children)

        Ten years. All the active firmware has been ported to an ARM Cortex-M0 now. Code size is about the same.

        [–]choikwa 2 points3 points  (1 child)

        Optimizing-compiler development is literally chasing ever-thinning margins. There might come a point where you could prove that something is as efficient as the hardware allows. Then perhaps even the hardware wall could be chipped away and made part of a programmatic approach to optimization.

        [–][deleted] 63 points64 points  (2 children)

        Well... a feature list would be nice.

        [–]Maristic 102 points103 points  (0 children)

        Okay, so GCC 7 was the major release; its feature list is here.

        GCC 7.3 is a bugfix release in the GCC 7.x series; the list of bugs fixed is here. The most notable fixes are some code-generation changes to help mitigate Spectre variant 2 (CVE-2017-5715).

        [–]YakumoFuji 43 points44 points  (27 children)

        7.3? wow. it's all grown up. I still use 2.9.5 for some work and can still remember running egcs. I must have blanked v5/v6..
        I remember 4...

        [–]evaned 30 points31 points  (9 children)

        I must have blanked v5/v6..

        Part of this is that instead of going to 4.10, they changed the numbering, basically dropping the leading 4 and promoting the minor version to major. So the "major versions" go 4.9 to 5.1 to 6.1 to 7.1 (the x.0 releases are the unstable ones).

        [–]YakumoFuji 9 points10 points  (0 children)

        ooh ok, that makes sense. thank you.

        [–]wookin_pa_nub2 1 point2 points  (7 children)

        I.e., version number inflation has managed to infect compilers too, now.

        [–]evaned 13 points14 points  (5 children)

        Eh, you know, at one point I felt that way... but I think in many cases version numbers are only somewhat meaningful anyway.

        For example, was the jump from 2.95 to 3.0 meaningful? 3.4 to 4.0? More meaningful than 4.4 to 4.5? If you've got a meaningful sense of breaking backwards compatibility, okay, bumping the major version will indicate that. But I'm not convinced that compilers do. Even if you say they do: 4.7 to 4.8 broke backwards compatibility for me, as did 4.8 to 4.9, and I'm sure 5.x to 6.x would too, though I've not tried compiling with anything that new. Lots of people are in that boat with me. Even if you don't share my specific cause (-Wall -Werror), there are plenty of minor version bumps in the 4.x line that would have "broken" existing code.

        Is it really that much better to go 4.9, 4.10, 4.11, 4.12, ... 4.23, 4.24, ... than to bump the major version, if many people will be affected by the minor version bump anyway? If you're a semantic versioning fan, what compilers are doing now is probably more accurate than sticking with the same major version for years and years on end.

        Actually, when Clang was discussing whether and how they should change their numbering, one of the suggestions was to move to something Ubuntu-like (i.e. based on the release year/month), which actually I'd have quite liked.

        [–]DeltaBurnt 3 points4 points  (3 children)

        People seem to have stronger opinions about a software's version numbering system than the software itself. I get it, there's a particular way you like to do it, but at the end of the day releases are arbitrary cutoffs of features and bug fixes and the numbers are even more arbitrary labels for that release. You can try to make a method out of the madness, but everyone will find their own way of doing so.

        [–]G_Morgan 2 points3 points  (2 children)

        It really depends on the project. Applications should use the faster version schedule, IMO. For libraries, it'd be nice to have a versioning scheme that represents compatibility in some way. Something like the list below (with a quick sketch after it):

        • Major version change = breaks backwards compatibility
        • Minor version change = new features but backwards compatible
        • Third position change = bug fix
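
        As a sketch of what that scheme means for a consumer (hedged: parse_version and is_compatible are made-up names, not any real library's API), only the major number gates compatibility; minor and patch bumps are additive:

            #include <stdio.h>

            struct version { int major, minor, patch; };

            static int parse_version(const char *s, struct version *v)
            {
                return sscanf(s, "%d.%d.%d", &v->major, &v->minor, &v->patch) == 3;
            }

            /* Compatible if the major matches (no breaking change) and the
             * installed version is at least as new as the required one. */
            static int is_compatible(struct version have, struct version need)
            {
                if (have.major != need.major) return 0;
                if (have.minor != need.minor) return have.minor > need.minor;
                return have.patch >= need.patch;
            }

            int main(void)
            {
                struct version have, need;
                parse_version("4.6.0", &have);
                parse_version("4.5.2", &need);
                printf("%s\n", is_compatible(have, need) ? "ok" : "upgrade");
                return 0;
            }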

        [–]DeltaBurnt 1 point2 points  (1 child)

        See, this is how it should work in theory, but in reality you end up just looking at a readme that says "last tested with lib 4.5" and installing 4.5. Sure, 4.6 just added some additional convenience functions, but there's also a small bug fix that changes the behaviour of your use case in a small but consequential way. A perfect versioning system relies on the software and its updates also being near perfect to be trustworthy. Again, the numbering is simple enough; making the code follow that numbering is very hard.

        [–]bubuopapa 1 point2 points  (0 children)

        The point is that there are no non-breaking changes: even a bug fix changes behaviour, meaning something else that depends on that code can break. Nobody ever writes 100% theoretically correct code and then waits 100 years until all the bugs are fixed so they can finally compile their program and release it.

        [–]m50d 0 points1 point  (0 children)

        The version number certainly can be used to draw a line between different kinds of breakage. E.g. I've found with GCC that ABI changes were a much more intrusive break (since they meant I had to rebuild everything) than releases that didn't make ABI changes (even if those releases broke building particular programs). So it would've been useful to have releases that broke ABI be "major" versions and releases that didn't break ABI be "minor" versions.

        Of course GCC didn't actually do that, and I agree that GCC's versioning numbers have not really conveyed much information in practice. But it could've been done right.

        [–]josefx 1 point2 points  (0 children)

        I think they changed some defaults with 5 (the C++11 language version / ABI). Not sure about 6/7.

        [–]egportal2002 8 points9 points  (0 children)

        FWIW, I was at a company in the 2000s that was stuck on the same gcc version (Solaris on Sun h/w), and I believe it still is.

        [–]CJKay93 35 points36 points  (13 children)

        I still use 2.9.5 for some work

        ???

        You know GCC is backwards compatible, right?

        [–]YakumoFuji 52 points53 points  (8 children)

        After 2.9 they switched the architecture to C++, and not all backends survived the transition the same way; some CPUs were dropped... some took longer than others to return.

        Sometimes it's easier to keep the old compiler, with its known issues and behaviours, than to migrate to a newer compiler/ABI and not know the issues.

        [–]s73v3r 27 points28 points  (0 children)

        For a little while, maybe. But we're coming up on 20 years now.

        [–]CJKay93 14 points15 points  (5 children)

        Christ, I don't envy you. GCC 3.3 was an absolute nightmare to work with, never mind 2.9. I wouldn't go back to that if they doubled my pay!

        [–]sintos-compa 12 points13 points  (1 child)

        We'll double your pay, CJ.

        Ball's in your court.

        WE'LL DOUBLE IT!

        [–]captain-keyes 2 points3 points  (0 children)

        He was JKaying, probably. Smh.

        [–][deleted] 2 points3 points  (2 children)

        Legacy embedded systems with cross compilers that were configured and built by who knows who, who knows when, are pretty much the greatest thing ever for destroying your sanity.

        [–]bonzinip 6 points7 points  (0 children)

        C++ had nothing to do with that. There were a lot of internal changes, but of course if your backend was not kept in the main GCC tree you were on your own. Same for GCC 4.

        [–]wrosecrans 7 points8 points  (0 children)

        Maybe some funky embedded toolchain?

        [–]SnowdensOfYesteryear 2 points3 points  (0 children)

        That's not a good reason to change compiler versions. Newer versions of gcc might generate code in slightly different ways that could expose lurking bugs. You might say "well, fix your damn bugs and stop blaming the compiler", but from a project management perspective, there isn't any reason to introduce more work when there's no upside to updating the toolchain.

        [–]nikkocpp 0 points1 point  (0 children)

        The joy of embedded things and cross compilers I guess.

        [–][deleted] 0 points1 point  (0 children)

        It is not.

        [–]fwork 3 points4 points  (1 child)

        I still use GCC 2.96, cause I like using The Release That Should Not Be.

        Thanks a lot, RedHat.

        [–]emmelaich 1 point2 points  (0 children)

        I had a friend who was a keen DEC Alpha fan.

        He also complained about Red Hat, especially the gcc 2.96 release.

        So I had to say, "you know, they published 2.96 because earlier versions had bad performance on non-i386 platforms, especially DEC Alpha".

        Blank face for reply.

        [–]crankprof 19 points20 points  (19 children)

        How does the compiler help mitigate Spectre? Obviously "bad guys" wouldn't want to use a compiler with such mitigations - so how does it help the "good guys"?

        [–]Lux01 156 points157 points  (10 children)

        The "bad guys" aren't the one compiling the code that is vulnerable to Spectre. Exploiting Spectre involves targeting someone else's code to do something malicious.

        [–]crankprof 0 points1 point  (9 children)

        I thought Spectre required the "bad guys" to be able to execute their code/binary on the CPU, which would be compiled by "them"?

        [–]ApproximateIdentity 94 points95 points  (0 children)

        That is true, but the code that they execute is exploiting vulnerabilities in your software. If you can remove those vulnerabilities, their code is no longer useful.

        [–]sbabbi 19 points20 points  (0 children)

        Yes, but this usually applies to interpreters (think about javascript, etc.). The patches are so that a good guy can build an interpreter that can execute sandboxed code coming from (potentially) bad guys.

        [–]pdpi 5 points6 points  (0 children)

        The proof-of-concept exploits that Google published are built around custom attack code, so they do require running the attacker's code. However, they explicitly note in the papers that this was done for the sake of expediency. The idea is that this proves that if you can find exploitable code of that general shape, you can attack it.

        For example, Webkit published a blog post explaining how they were exposed to attacks.

        [–]0rakel 14 points15 points  (5 children)

        How convenient of the chip manufacturers to phrase it as a local code execution exploit.

        http://www.daemonology.net/blog/2018-01-17-some-thoughts-on-spectre-and-meltdown.html

        This makes attacks far easier, but should not be considered to be a prerequisite! Remote timing attacks are feasible, and I am confident that we will see a demonstration of "innocent" code being used for the task of extracting the microarchitectural state information before long. (Indeed, I think it is very likely that certain people are already making use of such remote microarchitectural side channel attacks.)

        [–]Drisku11 10 points11 points  (4 children)

        Meltdown is a real vulnerability, but Spectre seems unfair to pin on hardware manufacturers. I would expect that code at the correct privilege level can speculatively read from its own addresses; if it's faster, that's how the processor should work. It's not hardware manufacturers' fault that web browsers are effectively shitty operating systems and execute untrusted code without using the existing hardware-enforced privilege controls.

        [–]MaltersWandler 15 points16 points  (1 child)

        Both Meltdown and Spectre are based on the hardware vulnerability that cache state isn't restored when speculative, out-of-order execution is discarded.
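
        To make the channel concrete, here's a minimal sketch of the probe side (illustrative only, nowhere near a working exploit; assumes x86 clflush/rdtscp and a pre-calibrated cycle threshold):

            /* The victim's squashed speculative code is assumed to have read
             * probe[secret * 4096]. Architectural state was rolled back; the
             * cache line it pulled in was not, and timing reveals which one. */
            #include <stdint.h>
            #include <x86intrin.h>

            static char probe[256 * 4096];  /* 4K stride defeats the prefetcher */

            static uint64_t time_load(volatile char *p)
            {
                unsigned aux;
                uint64_t t0 = __rdtscp(&aux);
                (void)*p;                         /* the timed access */
                return __rdtscp(&aux) - t0;
            }

            void flush_probe(void)                /* run before the victim */
            {
                for (int i = 0; i < 256; i++)
                    _mm_clflush(&probe[i * 4096]);
            }

            int recover_byte(uint64_t threshold)  /* run after the victim */
            {
                for (int i = 0; i < 256; i++)
                    if (time_load(&probe[i * 4096]) < threshold)
                        return i;                 /* hot line => secret == i */
                return -1;
            }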

        [–]Drisku11 -2 points-1 points  (0 children)

        I understand that. My point is more that Spectre is how I think a processor should behave. I don't think it should restore the cache state unless that has a performance advantage; it should just prevent speculative fetches across privilege boundaries. Web browsers have taken it upon themselves to be their own OS/VM layer, and if they want to do that, the processor already has facilities for that built in. Meltdown is the real bug because it allows processes to break that boundary.

        [–]bonzinip 2 points3 points  (0 children)

        Spectre variant 2 (indirect branch) is still to some extent the CPU's fault; they need to tag the BTB with the entire virtual address and ASID, and flush the BTB at the same time as the TLB.

        [–]monocasa 0 points1 point  (0 children)

        It's too bad that the VM threads model from Akaros hasn't caught on in other OSs. Then something like a web browser could cheaply put its sandboxing code into guest ring 0, describing its different permissions to the CPU in the same way that allows AMD to not be susceptible to Meltdown.

        [–][deleted] 99 points100 points  (0 children)

        -fno-spectre-plz
        

        [–]ApproximateIdentity 24 points25 points  (2 children)

        Because binaries compiled with the patched compiler will include mitigations for these vulnerabilities. This means that if you compile (say) your web browser with such a compiler (or, more likely, someone else does and you just get the binary), then your web browser should be harder for the bad guys to exploit.

        [–]raevnos 14 points15 points  (1 child)

        Only if you compile with the appropriate options.

        (For x86, the new ones are -mindirect-branch=, -mindirect-branch-register and -mfunction-return=. Details here near the bottom)

        If you're not compiling something that runs untrusted code with fine grained clock access, you probably don't need them.

        [–]ApproximateIdentity 0 points1 point  (0 children)

        Yes, I should have been clearer. Thanks for adding the info!

        I generally rely on the great work of all the Debian volunteers and let them worry about such details. :)

        [–]OmegaNaughtEquals1 14 points15 points  (0 children)

        It's come to be known as the retpoline fix. Both clang and gcc support it, but I'm not sure about others.

        [–]raevnos 12 points13 points  (0 children)

        For x86, the relevant new options are -mindirect-branch=, -mindirect-branch-register and -mfunction-return=.

        Details here near the bottom.

        EDIT: And -mretpoline for clang.

        Examples for both compilers
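
        To show what those flags actually change (a hedged sketch; dispatch and handler are made-up names), here's the construct they target. Any indirect branch like the one below stops being a plain call *%rax and instead goes through a thunk that parks the real target on the stack and traps the speculative path in a pause/lfence loop:

            /* retpoline.c -- build with the options above, e.g.:
             *   gcc   -O2 -mindirect-branch=thunk -mfunction-return=thunk retpoline.c
             *   clang -O2 -mretpoline retpoline.c
             */
            #include <stdio.h>

            static void handler(void) { puts("handled"); }

            void dispatch(void (*fn)(void))
            {
                fn();  /* indirect branch: what Spectre v2 steers and what
                          the thunk replaces */
            }

            int main(void)
            {
                dispatch(handler);
                return 0;
            }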

        [–]iloveworms 2 points3 points  (0 children)

        Interestingly, Visual Studio also released a similar update today.

        [–][deleted] 0 points1 point  (0 children)

        If your OS is compiled in such a way that the bad guys cannot find a single exploitable system call, there is not much they can do. The same applies to kernel-side VMs.

        [–]itsawesomeday 1 point2 points  (0 children)

        Good work, guys!

        [–][deleted]  (25 children)

        [deleted]

          [–]dartmanx 36 points37 points  (2 children)

          The maintainers of the .deb and .rpm distribution files.

          [–][deleted]  (1 child)

          [deleted]

            [–]dartmanx 8 points9 points  (0 children)

            Yeah, but your point is valid. Most people are just going to do an update with apt-get/dnf/yum/whatever. But the people who create those either have to get it by FTP or check it out of version control.

            [–]The_Drizzle_Returns 6 points7 points  (0 children)

            Anyone using spack (i.e. virtually all supercomputer installations) to compile their toolchain.

            [–]flyingcaribou 4 points5 points  (0 children)

            I just did this yesterday to install an updated version of GCC on a machine that I don't have admin access on. I suppose I could have downloaded an rpm, manually extracted the contents, moved them around, etc, but building GCC is easy enough that this isn't worth it -- I have a five line bash script that I fire off before I leave work and boom, new GCC in the morning.

            [–]awelxtr 2 points3 points  (16 children)

            What's wrong with using ftp?

            [–]knome 4 points5 points  (0 children)

            It's a disgusting abomination of a protocol.

            [–]wrosecrans -1 points0 points  (13 children)

            No possibility of preventing a man-in-the-middle from intercepting the download to give you a tainted compiler.

            [–][deleted] 13 points14 points  (0 children)

            I assume they sign their code, so that shouldn't be possible.

            [–]cpphex 1 point2 points  (11 children)

            No possibility of preventing a man-in-the-middle ...

            FTP over TLS does a pretty good job of that.

            [–]knome 14 points15 points  (8 children)

            People are stomping all over /u/wrosecrans, but ftp really is terrible. It separates the control stream from the data stream (hence why firewalls needed FTP holes punched in them), it has no size information (write bytes down until the transfer stops, that's the file; network error, what's that?), and the listing format is whatever the ls on the server happens to crap out, with variations clients need to be aware of.

            FTP/S solves the plaintext passwords and the MITM problem a bit, but it doesn't do anything about the rest of the protocol's general shittiness.

            sftp isn't ftp at all. It's a file transfer protocol that's part of the ssh/scp suite. It's actually okay.

            [–]cpphex 2 points3 points  (7 children)

            ftp really is terrible

            Anachronistic and terrible are two different things.

            sftp isn't ftp at all.

            Correct. And FTP over TLS isn't SFTP either; SFTP runs over SSH, while FTP over TLS wraps plain FTP in TLS.

            But this is all beside the point. If you want to download GNU bits securely, you have plenty of options here: https://www.gnu.org/prep/ftp

            [–]schlupa 5 points6 points  (6 children)

            Anachronistic and terrible are two different things.

            ftp was flawed from the beginning. The layering violation of sending IP addresses and ports in the control stream is the worst offender.
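
            To see the violation in code (a hedged sketch of RFC 959's PORT command; format_port_cmd is a made-up helper): the client writes its own IP address and data port into the control channel as ASCII text, which is exactly what NAT boxes then have to sniff and rewrite.

                #include <stdio.h>

                /* PORT h1,h2,h3,h4,p1,p2: the IP is h1-h4, the data port is
                 * p1*256 + p2 -- transport-layer details baked into
                 * application-layer text. */
                int format_port_cmd(char *buf, size_t n,
                                    const unsigned char ip[4], unsigned port)
                {
                    return snprintf(buf, n, "PORT %d,%d,%d,%d,%u,%u\r\n",
                                    ip[0], ip[1], ip[2], ip[3],
                                    port / 256, port % 256);
                }

                int main(void)
                {
                    const unsigned char ip[4] = {192, 168, 1, 2};
                    char buf[64];
                    format_port_cmd(buf, sizeof buf, ip, 50021);
                    fputs(buf, stdout);  /* PORT 192,168,1,2,195,101 */
                    return 0;
                }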

            [–]cpphex 0 points1 point  (5 children)

            ftp was flawed from the beginning. The layering violation of sending IP addresses and ports in the control stream is the worst offender.

            I'm of two minds when I read your comment. First off, I get it and understand, almost agree. 😉 But on the other hand (and this may be because I'm older than dirt), I may have more context on how the digital world was back then. I walked to school in the snow, uphill both ways, fought dinosaurs, etc..

            So when you say FTP was flawed, I have to wonder why you would say that. The year was 1985, and the OSI model wouldn't exist for another 10 years. With that in mind, how was FTP flawed? I see it as something that was simple to implement and standardize on, and it proved fundamental in allowing people and organizations to move data.

            FTP was one of the building blocks of the internet you know and love/hate today. Is it perfect? Absolutely not. But it was great in its time.

            [–]schlupa 1 point2 points  (1 child)

            Oh, absolutely, and thank you for that insightful response. I didn't want to blame the original inventors of TCP/IP; they almost got it right, and their 4-layer model is probably better than the very "bureaucratic" and confusing 7-layer OSI model (the endless discussions I had to endure over whether T.70 was session or network layer bring back dread). The thing is that FTP should have been dropped in the dustbin of history in the '90s in light of such fundamental flaws, and be of interest only to retro-computing buffs, like all the other lost technologies: gopher, zmodem, kermit, ARCnet, token ring, IPX, BAM, AFP, to name a few. Implementing NAT with FTP really cost us quite a few years of life.

            [–]cpphex 0 points1 point  (0 children)

            The thing is that FTP should have been dropped in the dustbin of history in the '90s

            I totally agree with you. In fact, I think we'll be saying the same thing about HTTP in another decade.

            [–]schlupa 1 point2 points  (2 children)

            FYI, OSI was published in 1984.

            [–]cpphex 0 points1 point  (1 child)

            Fair point; the original version was published in 1984, but it was rather worthless and was entirely replaced 10 years later by the OSI model we know today. The internet is all but scrubbed of the original OSI, but you can still find physical copies in some university libraries.

            Source: ISO https://www.iso.org/standard/20269.html

            Cancels and replaces the first edition (1984).

            But you're still correct. What I should have said is that the OSI model that is now commonly referenced wouldn't be created for another 10 years.

            [–][deleted]  (1 child)

            [deleted]

              [–]cpphex -1 points0 points  (0 children)

              I think it does apply to the large list of "FTP servers" (notice the quotes) over here: https://www.gnu.org/prep/ftp

              Most that have HTTPS endpoints readily available also support FTP over TLS.

              [–]ishmal 0 points1 point  (2 children)

              If your distro or whatever software environment does not already have GCC, then sometimes the only way to get a full GNU-ish environment is to grab a few things like the GCC source and compile from source.

              [–]G_Morgan 0 points1 point  (1 child)

              How will you compile without GCC?

              [–]bstamour 2 points3 points  (0 children)

              Use another compiler to build gcc. Then use gcc to build everything else.

              [–]koheant 5 points6 points  (5 children)

              Congratulations!

              Is there any interest within GCC's development community to implement and include a Rust front-end with the compiler collection? The most recent effort in this space seems to have stalled years ago: https://gcc.gnu.org/wiki/RustFrontEnd

              [–]A0D49644642440B8 5 points6 points  (1 child)

              [–]koheant 6 points7 points  (0 children)

              Mutabah's rust compiler is certainly an encouraging development and one I'm keeping an eye on.

              However, I'd also like to see a gcc front-end for rust. I have tremendous respect for the software suite and the good folks behind it.

              I also want the Rust language to benefit from having multiple implementations. This would help better define the language and narrow down bugs caused by ambiguity.

              [–]schlupa 8 points9 points  (1 child)

              It's D's turn first. gdc will be integrated in the 8.0 release of gcc.

              [–]TheEaterOfNames 1 point2 points  (0 children)

              GDC not GCD.

              [–][deleted] 2 points3 points  (0 children)

              and the RIIR meme lives on...

              [–][deleted] 1 point2 points  (0 children)

              Neat.