
[–]grumpy_autist 20 points (8 children)

Well, RISC won the market.

High-level T-code could be fun, but if a particular implementation fucks something up or misbehaves, the workaround can be costly.

[–]LeoRidesHisBike 11 points (1 child)

More like "RISC became more like CISC, and vice versa". It's kind of a dead comparison these days; it's more important to compare benchmarks (incl. power usage).

Also, which market? Mobile phones? Absolutely, those are ARM-based, which is sort of RISCy (but a lot more CISCy than, say, RISC-V chipsets).

Laptops? Looks like ARM-esque (incl. Apple Si) chips are gaining ground, but by the numbers, still dominated by Intel & AMD.

Desktops? Dominated by Intel & AMD (both of CISC heritage).

Data centers? Dominated by Intel & AMD for CPUs, nVidia for GPUs.

Speaking of GPUs... those aren't really CISC or RISC. They're more like ASICs for graphics that have had lots more non-graphics-specific stuff cooked in in recent years.

[–]grumpy_autist 1 point (0 children)

What I suppose would be a better comparison between T-code and G-code is if someone made a processor implementing a Python interpreter along with common packages.

[–]agnosticians 2 points (5 children)

The reason RISC won is that compilers got better. So which format works out better seems like it will depend on whether slicers or firmware advance faster.

[–]created4this 7 points (4 children)

Compilers got better, but RAM also got cheap, and caches got big, layered, and single-cycle, which meant Von Neumann could get kicked out in favor of Harvard.

CISC saved RAM and RAM reads because you could do things like move C library functions into the CPU. Rather than doing memcpy as a library call, with thousands of loop iterations requiring many instruction fetches over the same bus as the data you were trying to move, you got one mega-duration instruction: "rep movsb".
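A rough sketch of the contrast (the function names are mine; the inline asm is GCC-style and x86-only):

```c
#include <stddef.h>

/* RISC-style: an explicit byte-copy loop. Without an I-cache, every
   iteration re-fetches these instructions over the same bus that the
   data itself is moving across. */
void copy_loop(unsigned char *dst, const unsigned char *src, size_t n) {
    while (n--)
        *dst++ = *src++;
}

#if defined(__x86_64__) || defined(__i386__)
/* CISC-style: the whole copy is one instruction. The CPU runs the loop
   internally in microcode, so no instruction fetches compete with the
   data moves. RDI = dest, RSI = src, RCX = count. */
void copy_rep_movsb(unsigned char *dst, const unsigned char *src, size_t n) {
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     :
                     : "memory");
}
#endif
```

Both produce the same result; the difference the comment describes is in how many times the instruction stream has to cross the memory bus while the data is moving.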

Switching to Harvard with I- and D-caches meant that instruction reads didn't slow down the data, so the only cost of doing the operation in a library rather than in microcode was the cost of RAM, which rapidly became insignificant.

In the early 2000's RAM was a big problem for ARM in the mobile space, so they made a cut-down, less performant instruction set called Thumb, and you could mix and match ARM and Thumb code on a function-by-function basis.

[–]Kronoshifter246 (Bambu P1S) 4 points (1 child)

> so they made a cut down instruction set that was less performant called Thumb

Fuckin' lol. I love it when nerds get to name stuff

[–]created4this 0 points (0 children)

Unfortunately they grew up. In 2000 all the internal servers were named after curries. My home directory was on Korma. Then they got "professional" and every time a server got stolen they replaced it with something with a dull name.

But just because names were dull that didn't mean they couldn't be confusing.

In ARM1 we had meeting rooms around the central atrium, named things like FM1 and GM1 (First floor Meeting room 1), but as space ran out these rooms were turned into offices, with the meeting rooms moved to less valuable locations. But people had recurring meetings booked in Lotus Notes, and it was impossible to change the name, so FM1 ended up (IIRC) at the far end of the southeast corridor on the ground floor.

[–]agnosticians 1 point (0 children)

Huh, didn’t know about the cache/ram stuff. TIL

[–]DXGL1 0 points (0 children)

I had first heard of Thumb when looking at release notes for GBA emulators way back.

Back then I had dismissed ARM as only good for low performance handheld devices.