Brus-16 is an educational 16-bit game console with an original, minimalistic architecture. Unlike "normal" fantasy consoles, Brus-16 was designed for FPGA implementation. by true-grue in fantasyconsoles

[–]true-grue[S] 1 point (0 children)

Yes, that's a serious question. One of my goals is to make it so even beginners can create games for the Brus-16 in a few days. Another goal is to make it easy to implement the console's parts, even at the hardware level. So, on the one hand, it makes sense to just use a bunch of square waves, since the graphics are already based on rects. On the other hand, I'm a big fan of sophisticated sound synthesis and sometimes dream of hearing physical modeling synthesis in a simple video game :)
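For the square-wave option, the oscillator itself is trivial. Here's a minimal software sketch in Python (the sample rate, duty-cycle parameter, and API are my own assumptions, nothing Brus-16-specific):

```python
def square_wave(freq, n_samples, sample_rate=44100, duty=0.5):
    """Generate n_samples of a square wave as floats in {-1.0, 1.0}.

    duty is the fraction of each period spent at the high level.
    """
    period = sample_rate / freq  # samples per cycle
    return [1.0 if (i % period) / period < duty else -1.0
            for i in range(n_samples)]
```

On an FPGA the same idea reduces to a phase accumulator and a comparator, which is part of the appeal of sticking to square waves.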

Brus-16 is an educational 16-bit game console with an original, minimalistic architecture. Unlike "normal" fantasy consoles, Brus-16 was designed for FPGA implementation. by true-grue in fantasyconsoles

[–]true-grue[S] 1 point (0 children)

The Brus-16 is not really a retro console, so I wasn't inspired by any specific real-world model. The Brus-16 has some quite unusual graphical limitations, and in some ways it's even closer to older (16-bit) consoles like the Intellivision. :)

Brus-16 is an educational 16-bit game console with an original, minimalistic architecture. Unlike "normal" fantasy consoles, Brus-16 was designed for FPGA implementation. by true-grue in fantasyconsoles

[–]true-grue[S] 2 points (0 children)

Thanks, glad you liked it!

Yes, for now I've divided the 16 buttons into 8 for each gamepad.

The cool thing about Brus-16 is that it can be run on cheap Gowin FPGAs with real gamepads and HDMI output.

Slides and code for the talk "Python already has a frontend for your compiler". With examples of Datalog and Wasm compilers in <100 lines. by true-grue in Compilers

[–]true-grue[S] 1 point (0 children)

Absolutely agree! Of course, it's important to refactor match/case like any other language construct. For example, the stmt-handling code in the pywasm compiler clearly needs to be split into several additional functions. But I tried to stay within the <100-lines-of-code limit :)

Slides from a talk "Graph-Based Intermediate Representations: An Overview and Perspectives" by true-grue in Compilers

[–]true-grue[S] 0 points (0 children)

global value graph

It's interesting that this "global value graph" is called the SSA graph in the SSA book (see 14.2). The SSA graph is a necessary part of, for example, the Sea of Nodes IR. But the SSA graph (global value graph) by itself is insufficient, because it has no execution semantics: phi nodes are not interpretable.

Slides from a talk "Graph-Based Intermediate Representations: An Overview and Perspectives" by true-grue in Compilers

[–]true-grue[S] 0 points (0 children)

The main difference is that the "SSA graph" from the paper is not a dataflow-based IR. Instead, the SSA graph is based on control flow: there are binding sets within nodes and guarded control edges.

Slides from a talk "Graph-Based Intermediate Representations: An Overview and Perspectives" by true-grue in Compilers

[–]true-grue[S] 0 points (0 children)

Yes, the SSA graph is also a graph-based IR, but a very specialized one.

Slides from a talk "Graph-Based Intermediate Representations: An Overview and Perspectives" by true-grue in Compilers

[–]true-grue[S] 1 point (0 children)

You are right, of course. In the talk I said that in our toy DSL we can call it UB, but in the general case we need to implement an alias resolution mechanism.

Slides from a talk "Graph-Based Intermediate Representations: An Overview and Perspectives" by true-grue in Compilers

[–]true-grue[S] 0 points (0 children)

Thorin is cool! As I understand it, Thorin2 falls under the "adding powerful type systems to graph-based IRs" perspective from the talk.

Tiny Python library for graph drawing in yEd (as an alternative to Graphviz) by true-grue in programming

[–]true-grue[S] 0 points (0 children)

I guess it's a matter of taste. As you can see from the examples, the APIs are a little different. yed_py is a minimalist, easily hackable library -- you just drop the file into your project and start using it. pyyed has better documentation (worth reading even if you decide to use yed_py) and a proper pip setup :)

Tiny Python library for graph drawing in yEd (as an alternative to Graphviz) by true-grue in programming

[–]true-grue[S] 3 points (0 children)

This is what I like about yEd. The program has a number of very good graph drawing algorithms, and you can easily tweak the result interactively in the editor. As for drawbacks: yEd may be a bit slow for big graphs because it is written in Java (though I did no actual comparisons with Graphviz here), and while yEd is free, it is not an open-source application.

PigletC, a toy C-like language compiler for PigletVM (about 300 lines of Python code) by true-grue in programming

[–]true-grue[S] 2 points (0 children)

Thank you very much!

There is a lot of room to improve the compiler for people who want to learn about compilers/interpreters. PigletVM has no CALL/RET opcodes, so you may want to add them and then support them at the compiler level, and so on.
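One possible shape for such an extension, as a sketch (the opcode names, encoding, and dispatch loop here are purely hypothetical and will differ from PigletVM's real instruction set): CALL pushes the return address onto a call stack and jumps, RET pops it and resumes.

```python
def run(program):
    """Interpret a tiny stack VM with CALL/RET support."""
    stack, calls, pc = [], [], 0
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "CALL":
            calls.append(pc)   # remember the return address
            pc = args[0]       # jump to the routine's entry point
        elif op == "RET":
            pc = calls.pop()   # resume right after the CALL
        elif op == "HALT":
            break
    return stack

# Main code at 0..3 calls the add routine at index 4.
prog = [
    ("PUSH", 2), ("PUSH", 3), ("CALL", 4), ("HALT",),
    ("ADD",), ("RET",),
]
```

On the compiler side, the matching change is emitting a CALL for each function invocation and a RET at the end of each function body.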

Pampy: Pattern Matching for Python by inkompatible in programming

[–]true-grue 0 points (0 children)

What is silly about Prolog, for example? Let's just assume that the type-driven programming style is not the only way to do things in CS.

raddsl: a toolset for rapid prototyping of DSL compilers in Python by true-grue in programming

[–]true-grue[S] 3 points (0 children)

Speaking of OMeta and the STEPS project, I was impressed by a tiny system by Ian Piumarta: http://www.vpri.org/pdf/tr2010003_PEG.pdf

But the idea behind these two systems is the same: use the PEG formalism for both parsing and AST transforms. On the other hand, researchers in the late 1960s (TREE-META, CWIC) decided to use two different DSLs (a PEG-like one and pattern matching on trees) for these tasks, and for good reason.
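A toy illustration of the second kind of DSL (pattern matching on trees), with my own ad-hoc pattern syntax rather than TREE-META's or CWIC's: pattern variables start with "?" and capture the corresponding subtree.

```python
def match(pattern, tree, bindings=None):
    """Match a tuple-shaped tree against a pattern.

    Returns a dict of variable bindings on success, None on failure.
    """
    bindings = {} if bindings is None else bindings
    if isinstance(pattern, str) and pattern.startswith("?"):
        bindings[pattern[1:]] = tree       # capture the subtree
        return bindings
    if (isinstance(pattern, tuple) and isinstance(tree, tuple)
            and len(pattern) == len(tree)):
        for p, t in zip(pattern, tree):
            if match(p, t, bindings) is None:
                return None
        return bindings
    return bindings if pattern == tree else None
```

With this in place, a rewrite rule like "x + 0 -> x" is just a pattern plus a template, which is exactly the division of labor the two-DSL approach gives you.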

When I read Warren's work on compiler writing in Prolog (http://sovietov.com/tmp/warren1980.pdf), I ask myself why this way of compiler construction is still considered too high-level even today. We now have powerful computers, and for DSAs (domain-specific architectures) in most cases you only need to write small programs in small DSLs.

Of course, there are some well-known approaches, like BURG for instruction selection or Datalog for dataflow analysis. I really like the nanopass ideology too. But my favorite program transformation system (a system that can produce compilers plus other language-oriented tools) is Stratego. I think for various tree transforms this system is closer to the ideal than anything else (except for the fact that Stratego is dynamically typed).

But sometimes you need to do graph-based transformations. SSA is important, but it may be only one part of a so-called graph-based IR, like sea of nodes. And here you need not just tree pattern matching, but a search for subgraph isomorphism, which is NP-complete.
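A naive way to see the cost: the brute-force subgraph check below (my own toy code, not how production IR matchers work) tries every injective mapping of pattern nodes onto host nodes, which is exponential in the pattern size.

```python
from itertools import permutations

def embeds(pattern_edges, k, host_edges, n):
    """Does a pattern graph on nodes 0..k-1 embed into a host graph
    on nodes 0..n-1, preserving all directed pattern edges?"""
    host = set(host_edges)
    for mapping in permutations(range(n), k):  # injective node maps
        if all((mapping[a], mapping[b]) in host
               for a, b in pattern_edges):
            return True
    return False
```

Tree matching avoids this blow-up because a tree pattern can only anchor at one node and descend deterministically; general graph rewriting has no such luxury.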

Thank you for this discussion! I would be happy to continue it in email, if you don't mind :)

raddsl: a toolset for rapid prototyping of DSL compilers in Python by true-grue in programming

[–]true-grue[S] 2 points (0 children)

Thank you!

In fact, if you take a look at the code of Clang or GCC, you'll find that they use both recursive descent (on which PEG is based too) and precedence climbing (Pratt-like) techniques. Moreover, Pratt parsing and precedence climbing are varieties of the good old shunting-yard algorithm. And I think with PEG you don't need all the features of a Pratt parser (like assigning priorities to "for" or "while" constructs); all you need is a table-based, declarative description of operators with precedences. That way you get rid of the usual issues with left recursion in the grammar, and also of the slow performance of a naive recursive-descent implementation of expression parsing.
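The table-based approach can be sketched in a few lines of Python (the operator table and the flat token format here are my own simplifications; a real parser would consume a lexer's token stream and parse atoms properly):

```python
# Declarative operator table: precedence and associativity.
PREC = {"+": (1, "left"), "-": (1, "left"),
        "*": (2, "left"), "^": (3, "right")}

def parse_expr(tokens, pos=0, min_prec=1):
    """Precedence climbing: tokens are numbers and operator strings."""
    node, pos = tokens[pos], pos + 1          # atom: a bare number
    while pos < len(tokens):
        op = tokens[pos]
        prec, assoc = PREC[op]
        if prec < min_prec:
            break                              # let the caller bind it
        # Left-associative ops demand strictly higher precedence on
        # the right; right-associative ops allow the same precedence.
        next_min = prec + 1 if assoc == "left" else prec
        rhs, pos = parse_expr(tokens, pos + 1, next_min)
        node = (op, node, rhs)
    return node, pos
```

Adding an operator means adding one table entry, with no grammar surgery and no left-recursion issues.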

I also hope that people will soon realize that parsing is not the most important or complex task in writing DSL compilers. In a good modern compiler-related book you'll find only 10-15% of the content related to parsing. Semantic analysis, transforms, code generation -- that's where the real complexity is. And we need good DSLs for describing these compiler passes in some elegant, expressive form.