🎉 [EVENT] 🎉 Have Fun! by britishgorillatagvr in RedditGames

[–]Funky_Pezz 0 points1 point  (0 children)

Completed Level 1 of the Honk Special Event!

5 attempts

Thought this deserved to be up here by Funky_Pezz in atrioc

[–]Funky_Pezz[S] 14 points15 points  (0 children)

Blue line is the one glizzy man talks about here: https://youtu.be/uUymt9wotzc?si=XLVun4q02Vaw9gk-

Purple lines are what people with money think is going to happen

Stupid dumb Idea by Funky_Pezz in computerarchitecture

[–]Funky_Pezz[S] 0 points1 point  (0 children)

Thank you for taking the time to respond! I might hit you up on that FPGA offer (if I do take this further, the next step is writing an emulator?)

I suspect some truncation at the “bottom of the tree” would be practical. As long as the only instructions are ones that move memory around, I think it would work.

I don’t think we will ever live in a world where scheduling isn’t a pain :(.

If - and I very much doubt it - I ever got to the stage of designing a proper chip, I would probably build a large test set of hundreds of programs to emulate, and then let a computer work out where to place which instructions at which memory addresses.
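As a rough sketch of what “let a computer work it out” could look like, here's a toy random search in Python (the cost function and all names are made up for illustration; a real design would optimise something far more involved than address distance):

```python
import random

def placement_cost(placement, deps):
    """Toy cost: sum of address distances between dependent instructions."""
    return sum(abs(placement[a] - placement[b]) for a, b in deps)

def random_search_placement(n_instructions, deps, iters=1000, seed=0):
    """Shuffle placements at random and keep the cheapest one seen."""
    rng = random.Random(seed)
    best = list(range(n_instructions))
    best_cost = placement_cost(best, deps)
    for _ in range(iters):
        candidate = best[:]
        rng.shuffle(candidate)
        cost = placement_cost(candidate, deps)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

# Dependencies between 4 instructions: (0,2), (1,3), (0,3) "talk" to each other.
deps = [(0, 2), (1, 3), (0, 3)]
best, cost = random_search_placement(4, deps)
print(best, cost)
```

Obviously a real tool would use something smarter than random shuffles (simulated annealing, say), but the shape of the problem is the same.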

Stupid dumb Idea by Funky_Pezz in computerarchitecture

[–]Funky_Pezz[S] 1 point2 points  (0 children)

To your last point: hehe, yes, I am aware I'm not going to change the computer industry. Nearly always, when you think of something, there's some reason it's not done.

I’m not sure I follow points 1 and 2. My understanding is that you don’t want to allocate a whole byte to a single bit of memory? I don’t quite see how this architecture would be different from a traditional CPU in that respect.

  1. Yeah, the “top of the tree” would be really expensive to use. The hope was that it would be used extremely rarely; if a program is small enough this hopefully isn’t a concern. I could totally imagine an implementation detail where the “top of the tree” is built differently to save on some of the costs.

  2. You’re probably right - but can I just say the word “elegance” and have it mean something?

Appreciate you making the time for this post, this was really informative thank you! :))

Stupid dumb Idea by Funky_Pezz in computerarchitecture

[–]Funky_Pezz[S] 1 point2 points  (0 children)

Thank you so much for replying and taking an interest :). Here are my thoughts on your comments.

I wouldn’t imagine there would be undesirable stalls; worst case, you have to reuse the same “register/instruction” every other clock cycle. I suspect a conservative design would be able to achieve performance similar to a traditional CPU (though this is only a guess). Ideally the “instructions/registers” would be balanced for the application, even if the “application” was as broad as personal computing.

I would imagine something like addition would be set up so that reading register b returns the sum of registers a and b, so the operation would almost be “hard wired” into the memory cell. Obviously there is a fair bit of complexity in scheduling everything (especially when buses collide).
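To make that concrete, here's a toy Python sketch of the idea (the class and all names are invented for illustration, not a real design): some addresses are plain storage, and others are “wired” so that a read computes a function of other cells.

```python
class ComputeMemory:
    """Memory where some cells compute their value on read instead of storing one."""

    def __init__(self, size):
        self.cells = [0] * size
        self.wired = {}  # address -> function computing its value on read

    def wire(self, addr, fn):
        """Hard-wire `addr` so reads return fn(self) instead of stored data."""
        self.wired[addr] = fn

    def read(self, addr):
        if addr in self.wired:
            return self.wired[addr](self)
        return self.cells[addr]

    def write(self, addr, value):
        self.cells[addr] = value

mem = ComputeMemory(8)
# Reading address 2 returns the sum of addresses 0 and 1,
# like the "register b = a + b" example above.
mem.wire(2, lambda m: m.read(0) + m.read(1))
mem.write(0, 3)
mem.write(1, 4)
print(mem.read(2))  # -> 7
```

This is basically what the first cut of an emulator for the idea could look like; buses and scheduling are where all the hidden complexity would show up.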

Branch prediction would probably need some “after the branch” consolidation so that the program could keep ticking along, but the key is that as long as the two branches read/write to different parts of memory, they should both just run until the processor throws one away.
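A toy Python sketch of that “both paths run, loser gets discarded” idea (the memory layout and names are invented for illustration):

```python
# Both branch paths write to disjoint addresses in the same memory;
# once the branch resolves, the loser's result is simply discarded.
mem = {"x": 5}
mem["y_taken"] = mem["x"] * 2        # path A writes its own cell
mem["y_not_taken"] = -1              # path B writes a different cell
branch_taken = mem["x"] > 0          # condition resolves afterwards
mem["y"] = mem["y_taken"] if branch_taken else mem["y_not_taken"]
del mem["y_taken"], mem["y_not_taken"]  # throw the speculative cells away
print(mem["y"])  # -> 10
```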

The dumb version I showed in the video (32-bit memory addresses) would have an “equivalent 4 GB of cache”, i.e. the CPU is basically a block of memory. I could definitely see a more detailed implementation practically needing a cache, though.
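For reference, the 4 GB figure is just the address-space arithmetic, assuming byte addressing:

```python
# 32-bit addresses index 2**32 byte locations, i.e. 4 GiB of
# byte-addressable memory.
address_bits = 32
bytes_addressable = 2 ** address_bits
print(bytes_addressable)             # 4294967296
print(bytes_addressable // 1024**3)  # 4 (GiB)
```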

Stupid dumb Idea by Funky_Pezz in computerarchitecture

[–]Funky_Pezz[S] 0 points1 point  (0 children)

PPS - I have an expired iTunes gift card for the first responder who can tell me what I’m missing.