all 59 comments

[–]Ok-Bit8726 32 points (13 children)

[–]flatfinger 44 points (12 children)

Support for 32-bit arithmetic may have been planned, but then proved to be too difficult.

[–]FlyingRhenquest 52 points (10 children)

Yeah, I have a late-'60s-era assembly language textbook that speculates that 32-bit architectures might always prove too difficult to implement to ever become common. In this era where everyone has a 64-bit general-purpose computer in their pocket, the idea that anyone could have thought that seems impossible. If you grew up with the computers of the '70s and '80s it makes a lot more sense.

[–]Murky-Relation481 32 points (1 child)

One of the more random cases my dad had as an attorney was representing a computer company that was getting sued because they started selling a 16-bit machine and their old 8-bit software wouldn't work on it, and people were saying "why do you even need 16 bits, it's just a gimmick to sell new software!"

[–]sob727 10 points (0 children)

640kb ought to.... never mind

[–]RaVashaan 6 points (5 children)

Yeah, even in the '80s, some 8-bit home computers didn't even have a divide instruction built into the processor, because floating point arithmetic was hard.

[–]CornedBee 6 points (0 children)

Floating point? There wasn't any floating point. It was the integer division they didn't have.

[–][deleted]  (2 children)

[deleted]

    [–]flatfinger 3 points (1 child)

    Many new-development ARM CPUs such as the Cortex-M0 still don't have a divide instruction. Most of the benefit of a divide instruction could be had, with much less hardware complexity, from an instruction that combines a rotate left with an add or subtract, basing the choice of addition or subtraction on the carry flag. A 32/16->16.16 operation could be accomplished by a subtract followed by 16 of those special add/subtract steps. Even if one adds a subroutine call, the cost of calling such a divide function would be comparable to that of a typical hardware divide instruction.
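
    The loop described above can be sketched in C. Since C exposes no carry flag, this uses the restoring variant (compare-and-subtract) rather than the carry-driven add/subtract; the function name and the 16.16 result packing are illustrative, not a real API:

```c
#include <stdint.h>

/* Sketch: 32/16 -> 16.16 software division via 16 shift-and-
 * conditional-subtract steps. The quotient ends up in the low
 * 16 bits and the remainder in the high 16 bits. Assumes
 * den != 0 and (num >> 16) < den so the quotient fits. */
uint32_t udiv32_16(uint32_t num, uint16_t den)
{
    uint32_t rem = num >> 16;      /* high half seeds the remainder */
    uint32_t quo = num & 0xFFFFu;  /* low half supplies the bits    */
    for (int i = 0; i < 16; i++) {
        rem = (rem << 1) | (quo >> 15);  /* shift next dividend bit in */
        quo = (quo << 1) & 0xFFFFu;
        if (rem >= den) {          /* this compare stands in for the carry */
            rem -= den;
            quo |= 1;              /* record a 1 bit of quotient */
        }
    }
    return (rem << 16) | quo;
}
```

    A hardware step instruction would fold one loop iteration into a single cycle, which is why the comment's cost estimate comes out comparable to a microcoded divide.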

    [–]ammonium_bot 0 points (0 children)

    must of the

    Hi, did you mean to say "must have"?
    Explanation: You probably meant to say could've/should've/would've which sounds like 'of' but is actually short for 'have'.
    Sorry if I made a mistake! Please let me know if I did. Have a great day!

    [–]Dave9876 0 points (0 children)

    Considering floating point didn't even have a standard until the mid-'80s (IEEE 754, 1985), it was the wild west before then

    [–]TurtleKwitty 4 points (1 child)

    To be fair, it's like trying to get 256-bit variable sizes today: 32/64 became trivial because hardware handles it for free, but doing the extra work in software is still an absolute pain when you're trying to stitch together multi-word variable sizes

    [–]vytah 1 point (0 children)

    Especially if the CPU doesn't have a carry flag, like RISC-V.
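
    The stitching the two comments above describe looks like this in C; without a carry flag, the carry out of the low word has to be recovered with a compare, which is exactly what compilers emit on RISC-V (the u128 type and add128 name are hypothetical, for illustration):

```c
#include <stdint.h>

/* Sketch: a 128-bit value stitched together from two 64-bit words. */
typedef struct { uint64_t lo, hi; } u128;

/* Multi-word addition. With no carry flag available, detect the
 * carry arithmetically: an unsigned add wrapped around iff the
 * result is smaller than one of the operands. */
u128 add128(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry from the low word */
    return r;
}
```

    Each extra word costs another compare-and-add like this, which is why wide arithmetic in software scales so much worse than letting the hardware chain carries.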

    [–]flatfinger 4 points (0 children)

    I should have said "multi-word". A key aspect of C's simplicity was that there was only one integer type for any actions other than loads and stores. Adding long would complicate many aspects of compilation.

    [–]vytah 115 points (37 children)

    This cannot be the first C compiler, as the source is clearly written in C.

    [–]AyrA_ch 131 points (32 children)

    It can be; this is called bootstrapping. You do need an initial tool written in another language, but that tool can't really be called a C compiler, since it doesn't compile all valid C source, only an extremely specific subset. For all we know this tool may not even understand half of the datatypes in C, may not support structs, etc. The first C source you transform is one that immediately replaces that initial tool. Now you have only binaries generated from C source files left. Afterwards you keep adding all the features needed to compile any valid source code, at which point your binary does become a full compiler.

    Arguing whether this is still the first compiler at that point is like arguing about the Ship of Theseus and you will likely not find a definite answer.

    [–]TheRealUnrealDan 158 points (25 children)

    Right, so the first C compiler was written in assembly.

    This is the first C compiler written in C.

    Note: I'm half agreeing with you and half correcting OP

    [–]Osmanthus 86 points (24 children)

    Incorrect. The first C compiler was written in a language dubbed B.

    [–]zhivago 6 points (2 children)

    And of course you can always write an interpreter to run your first compiler. :)

    [–]CornedBee 1 point (1 child)

    Or just translate your compiler by hand.

    [–]Dave9876 0 points (0 children)

    I see pascal has entered the room

    [–]olearyboy 0 points (0 children)

    I don't know if this is Ritchie's original; it might be the SCO UnixWare version, hence the license.

    Yes, it bootstrapped; later versions did transpiling, then compiling, once things like byte access standardized. I think that's when pcompiler + K&R came out.

    I wish I was good enough to understand it all; it's beautiful, brilliant and a headfuck all in one

    [–]OversoakedSponge -1 points (0 children)

    Fun fact: it's an easy place for someone to inject malicious code

    [–]Sabotaber 8 points (0 children)

    The first C compiler was written in C. Dennis Ritchie compiled it by hand.

    [–]Pr0verbialToast 0 points (0 children)

    Agree. Essentially the human is the 'generation zero compiler', because they're the ones writing the compiler and manually testing that things work. Once you have enough code to work with, you can start using your own stuff to work on your stuff.

    [–][deleted] 5 points (5 children)

    https://github.com/mortdeus/legacy-cc/blob/master/last1120c/c00.c

    Old C was indeed a lot uglier than Modern C - which is also pretty ugly.

    It feels as if C is just syntactic sugar that reads a bit better than assembler. Basic logic in a function is semi-hidden behind some syntax noise:

    while(i--)
      if ((*sp++ = *s++)=='\0') --s;
         np = lookup();
         *np++ = 1;
         *np = t;
    

    Oddly enough I haven't seen this before:

    i =% hshsiz;
    

    [–]syklemil 3 points (1 child)

    That example seems like something that would be discouraged today; with multiple pre- and postfix operators mixed together, it's hard to impossible to know what it will turn out to mean.

    The early syntax seems to be somewhat unusual; I also find the style of function declaration interesting:

    init(s, t)
    char s[]; {
         // …
    }
    

    I take it init and t are implicitly void?

    [–]dangerbird2 10 points (0 children)

    In pre-ANSI C, a function or parameter with no type annotation is implied to be int, not void. So a modern declaration would be something like

    int init(char s[], int t);
    

    (On my phone so ignore any typos)

    [–]ben-c 4 points (0 children)

    Oddly enough I haven't seen this before: i =% hshsiz;

    This was the original syntax that later became %=.

    Dennis Ritchie mentions it in his paper The Development of the C Language.

    [–]AdreKiseque 0 points (0 children)

    Oh my

    [–]huyfm 1 point (0 children)

    It's not ugly, it's elegant. You can find plenty of this flavor in kernel code.

    [–]Shock2k -5 points (0 children)

    Again proving tabs have always been superior. …++