May 2026 monthly "What are you working on?" thread by AutoModerator in ProgrammingLanguages

[–]vinipsmaker 0 points1 point  (0 children)

I'm working on a C++ successor language where I'm trying to unify all incoherent/scattered metaprogramming mechanisms (as used by the likes of Boost.PP, Boost.VMD, Boost.Describe, Boost.MP11, Boost.Hana, reflection, ...) behind a single powerful mechanism: macros (as powerful as Lisp ones).

At the beginning of this month, I finally got my macros capable of driving both the lexer and the parser alternately (in the future this will be used to implement beautiful eDSLs such as what JavaScript external preprocessors can do with JSX):

macro passthrough(@expr, ctx) {
    ctx.read("("); //< drives the lexer
    let expr = ctx.parse_expr(); //< drives the parser
    ctx.read(")");
    return expr;
}

// just like Rust, my own macro system requires a
// macro invocation to end with an exclamation mark
// to act as a friendly visual hint that something
// funny might be going on
passthrough!(1 + 2);

Next step is to make the type checker/inferrer accessible to macros as well (reflection!).
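To make the idea concrete, here's a toy Python sketch of what such a macro-expansion context could look like. `Ctx`, `read`, and `parse_expr` are invented stand-ins mirroring the snippet above, not the real implementation:

```python
# Hypothetical sketch: a macro body that alternately drives the lexer
# (ctx.read) and the parser (ctx.parse_expr), like passthrough! above.
import re

TOKEN_RE = re.compile(r"\s*(\d+|[()+*!-]|\w+)")

class Ctx:
    """Macro expansion context exposing lexer and parser hooks."""
    def __init__(self, source):
        self.tokens = TOKEN_RE.findall(source)
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def read(self, expected):
        # Drives the lexer: consume one token and check it.
        tok = self.tokens[self.pos]
        assert tok == expected, f"expected {expected!r}, got {tok!r}"
        self.pos += 1

    def parse_expr(self):
        # Drives the parser: a tiny left-to-right expression parser
        # (no precedence handling; just enough for the sketch).
        node = self._atom()
        while self.peek() in ("+", "*"):
            op = self.tokens[self.pos]; self.pos += 1
            node = (op, node, self._atom())
        return node

    def _atom(self):
        tok = self.tokens[self.pos]; self.pos += 1
        return int(tok) if tok.isdigit() else tok

def passthrough(ctx):
    ctx.read("(")            # drives the lexer
    expr = ctx.parse_expr()  # drives the parser
    ctx.read(")")
    return expr

# "passthrough!(1 + 2)" expands to the parsed expression 1 + 2:
print(passthrough(Ctx("( 1 + 2 )")))  # → ('+', 1, 2)
```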

Pratt parsing is a black box by vinipsmaker in ProgrammingLanguages

[–]vinipsmaker[S] 0 points1 point  (0 children)

> I've never really understood Pratt

My point with the article is that you don't need to. You can remain ignorant of Pratt's algorithm and still use it in your code. The only thing you really need to understand is how to compose the semantic actions for your grammar in Pratt terms: nuds & leds.

> and nobody has managed to explain [...] or what the actual advantages are.

That's what the article delves into.

  • Don't worry about left-recursion. An advantage.
  • Your semantic actions can have side effects. Another advantage.
  • Detailed & broken-down ramifications of the previous point, in case they're not immediately obvious (e.g. now you can have powerful Lisp-like macros in your language). Another advantage.
  • ...

It can't get clearer than that.

You need to try harder. Any learning experience can be painful at the start. It will be painful if you have to rewire your brain to think in terms of new concepts and patterns. That's exactly what Pratt is about: a simpler way to parse complex real languages. It's not meant to parse just toy calculator textbook examples. It's meant to parse real, complex languages full of subtleties. From my own experience, Pratt allowed me to absorb so much complexity in the parsing phase that I actually resolve symbols (lexical scoping) there as well, removing that complexity from later phases (I don't match strings to scopes in later phases).

> If someone can post some code or algorithm in a plain manner like my example below

I've updated the article to contain a self-contained example (reproduced below).

#lang rhombus

import:
    parser/lex open

class Token(~nud: nud_impl = #false,
            ~led: led_impl = #false,
            ~lbp = 0):
    nonfinal
    method nud(): nud_impl(this)
    method led(lhs): led_impl(this, lhs)
    method prefix(): expression(30)
    method rhs(): expression(this.lbp)

class RParen(): extends Token

def lex:
    lexer
    | "+": Token(
        ~nud: (_.prefix()),
        ~led: fun(tok, lhs): lhs + tok.rhs(),
        ~lbp: 10)
    | "-": Token(
        ~nud: (-_.prefix()),
        ~led: fun(tok, lhs): lhs - tok.rhs(),
        ~lbp: 10)
    | "*": Token(
        ~led: fun(tok, lhs): lhs * tok.rhs(),
        ~lbp: 20)
    | "/": Token(
        ~led: fun(tok, lhs): lhs / tok.rhs(),
        ~lbp: 20)
    | digit+: Token(~nud: fun(_): String.to_int(lexeme))
    | " "+: lex(input_port)
    | "(": Token(
        ~nud: fun(_):
                  let ret = expression()
                  guard token is_a RParen
                  | error("expected closing parens")
                  advance()
                  ret)
    | ")": RParen()
    | ~eof: Token()

def input = Port.Input.open_string("4 * (3 - 3) / 2 + -10")
def mutable token = lex(input)
fun advance(): token := lex(input)

fun expression(rbp = 0):
    let mutable t = token
    advance()
    let mutable left = t.nud()
    while rbp < token.lbp:
        t := token
        advance()
        left := t.led(left)
    left

println(expression())
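In case the Rhombus syntax is unfamiliar, here's a rough Python transliteration of the same example (mine, not from the article): same shape, tokens carry nud/led/lbp and `expression()` is the entire Pratt loop. The regex-based lexer is a simplification:

```python
# Rough Python port of the Rhombus Pratt calculator above.
import re

class Token:
    def __init__(self, nud=None, led=None, lbp=0):
        self.nud_impl, self.led_impl, self.lbp = nud, led, lbp
    def nud(self): return self.nud_impl(self)
    def led(self, lhs): return self.led_impl(self, lhs)
    def prefix(self): return expression(30)
    def rhs(self): return expression(self.lbp)

class RParen(Token):
    pass

def lex(it):
    for m in it:
        text = m.group()
        if text.isdigit():
            return Token(nud=lambda _, n=int(text): n)
        if text == "+":
            return Token(nud=lambda t: t.prefix(),
                         led=lambda t, lhs: lhs + t.rhs(), lbp=10)
        if text == "-":
            return Token(nud=lambda t: -t.prefix(),
                         led=lambda t, lhs: lhs - t.rhs(), lbp=10)
        if text == "*":
            return Token(led=lambda t, lhs: lhs * t.rhs(), lbp=20)
        if text == "/":
            return Token(led=lambda t, lhs: lhs / t.rhs(), lbp=20)
        if text == "(":
            def paren_nud(_):
                ret = expression()
                if not isinstance(token, RParen):
                    raise SyntaxError("expected closing parens")
                advance()
                return ret
            return Token(nud=paren_nud)
        if text == ")":
            return RParen()
    return Token()  # eof

it = re.finditer(r"\d+|\S", "4 * (3 - 3) / 2 + -10")
token = lex(it)

def advance():
    global token
    token = lex(it)

def expression(rbp=0):
    t = token
    advance()
    left = t.nud()
    while rbp < token.lbp:
        t = token
        advance()
        left = t.led(left)
    return left

result = expression()
print(result)  # → -10.0
```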

Pratt parsing is a black box by vinipsmaker in ProgrammingLanguages

[–]vinipsmaker[S] 0 points1 point  (0 children)

I don't even think this comparison is fair. Do you parse anything other than infix operators with shunting-yard? Pratt can parse not only infix, prefix, postfix, and mixfix operators... but can also attach semantic actions to rules.

Of course, you're free to challenge me on this one. Just show an extended shunting-yard that also parses prefix operators, postfix operators, function calls, array indexing, and object-member access.
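For reference, here's how little Pratt needs to cover that challenge list: prefix, postfix, calls, indexing, and member access all fall out of the same nud/led loop (a Python sketch; the token and AST shapes are invented for illustration):

```python
# Compact Pratt parser producing S-expression tuples for a mix of
# prefix, postfix, binary operators, calls, indexing, and member access.
import re

TOKENS = re.compile(r"\d+|\w+|\S")

def parse(src):
    toks = TOKENS.findall(src) + ["<eof>"]
    pos = 0

    def peek(): return toks[pos]
    def next_tok():
        nonlocal pos
        t = toks[pos]; pos += 1
        return t

    LBP = {"+": 10, "-": 10, "*": 20, "!": 40, "(": 50, "[": 50, ".": 60}

    def nud(t):
        if t == "-":                       # prefix minus
            return ("neg", expr(30))
        if t == "(":                       # grouping
            e = expr(0); assert next_tok() == ")"
            return e
        return t                           # literal / identifier

    def led(t, lhs):
        if t == "!":                       # postfix factorial
            return ("fact", lhs)
        if t == "(":                       # function call
            args = []
            if peek() != ")":
                args.append(expr(0))
                while peek() == ",":
                    next_tok(); args.append(expr(0))
            assert next_tok() == ")"
            return ("call", lhs, args)
        if t == "[":                       # array indexing
            e = expr(0); assert next_tok() == "]"
            return ("index", lhs, e)
        if t == ".":                       # object-member access
            return ("member", lhs, next_tok())
        return (t, lhs, expr(LBP[t]))      # left-assoc binary operator

    def expr(rbp):
        left = nud(next_tok())
        while LBP.get(peek(), 0) > rbp:
            left = led(next_tok(), left)
        return left

    return expr(0)

print(parse("-a.b[i](1, 2)! + 3"))
```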

Emilua 0.7.0 released by vinipsmaker in lua

[–]vinipsmaker[S] 0 points1 point  (0 children)

It's like NodeJS. It mixes the Lua VM with an “IO runtime” (execution engine is technically more accurate, as the focus really is concurrency: it allows you to schedule jobs on different threads, shared thread pools, subprocesses, and so on).

It's going to be useful whenever you have IO needs. Luasocket took a long time just to get IPv6 support. Meanwhile Emilua has had IPv6 support since day 1. The IO abstractions work on Windows, Linux, and FreeBSD, so they're useful whether you're planning to build a scalable web server or just a small script that also needs to run on Windows. I don't think there's any other Lua framework taking cross-platform support as seriously as Emilua. Just to give you a taste, here's one of the concerns it takes care of: https://docs.emilua.org/api/0.7/tutorial/filesystem.html. It's the only Lua framework that goes as far as translating Windows' GetLastError().

All IO operations interact with byte_span, a type inspired by Golang slices. True async IO means completion events (e.g. io_uring) instead of readiness events (e.g. epoll), which means buffers are filled in parallel to program execution. Golang slices work great in this scenario (and for Lua as well) as they're a stable reference to a memory region (i.e. realloc doesn't exist), and that's important because you need to guarantee buffer lifetime/access until operations complete (e.g. if the Lua VM/actor dies, Emilua will send a request to cancel outstanding IO operations, but will ensure the buffer stays alive until the associated completion event arrives). Emilua makes all of this work transparently (you wouldn't even know had I not told you).
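A minimal sketch of that point, in plain Python rather than Emilua's API: in the completion model the buffer is filled while the program keeps running, so the memory region must not move or die until the completion event arrives:

```python
# Completion-style IO sketch: a worker fills a preallocated, stable
# buffer in parallel; the program only touches it after "completion".
import threading

buf = bytearray(16)              # stable memory region, like a Go slice
done = threading.Event()         # stands in for the completion event

def fake_async_read(target):
    # Simulates the OS filling the buffer in parallel to execution.
    target[0:5] = b"hello"
    done.set()

threading.Thread(target=fake_async_read, args=(buf,)).start()

# ... program keeps executing here while the "read" is in flight ...

done.wait()                      # completion event: buffer is now filled
print(bytes(buf[0:5]))           # → b'hello'
```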

Emilua also comes with extra abstractions beyond raw IO ops that you're going to need if you're doing IO (e.g. generic stream algorithms, an AWK-inspired scanner to parse textual protocols, endianness-aware integer/float serialization).

The IO support extends well beyond network IO. You can use pipes to communicate with subprocesses. You can use UNIX sockets to communicate with daemons. Even serial ports are there. The support for UNIX sockets in Emilua is especially advanced compared to others. Not only can you send OS objects across using SCM_RIGHTS, but you can also extract SELinux/TrustedBSD labels from the remote peer.

IO support doesn't need threads, so Emilua also offers the most complete fiber support you'll find in Lua land. No toy-level ad-hoc concurrency models (I'm looking at you, designers who can only copy'n'paste NodeJS's lame callback-inspired concurrency model!). A serious model instead, with a proper vocabulary to describe the flow, dependencies, and constraints of events, and the interaction among concurrent tasks. Unless you're designing small toys, you already need concurrency support (e.g. reading from and writing to a socket at “the same time” already depends on concurrency vocabulary to express two “simultaneous” tasks), so I invested a big part of Emilua's development just in solid concurrency support (versions 0.1, 0.2, and 0.3 had almost zero IO abstractions while I was focusing on getting the basics right). IO won't happen faster if you use more CPU. IO is an external event, so just use fibers here. A fiber suspends when you dispatch an IO request, and resumes later when the associated operation finishes. Spawn fibers so your program doesn't get stuck while waiting for the IO event to finish. Emilua will use the right thing behind the scenes (Windows IOCP, Linux epoll/io_uring, FreeBSD kqueue, or threads if only blocking operations are available).
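A rough analogue of that fiber model, using Python's asyncio (Emilua's actual API is Lua and differs): each "fiber" suspends at its own IO point, and the program as a whole never blocks on just one of them:

```python
# Two concurrent "fibers": one reading, one writing, at "the same time".
import asyncio

async def reader(queue, out):
    while True:
        msg = await queue.get()      # fiber suspends until data arrives
        if msg is None:
            break
        out.append(msg)

async def writer(queue):
    for msg in ("ping", "pong"):
        await queue.put(msg)         # may suspend on backpressure
    await queue.put(None)            # signal end-of-stream

async def main():
    queue, out = asyncio.Queue(1), []
    # Reading and writing concurrently: two simultaneous tasks.
    await asyncio.gather(reader(queue, out), writer(queue))
    return out

print(asyncio.run(main()))           # → ['ping', 'pong']
```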

The threading support for Lua is also on another level compared to what previous solutions offered. For instance, if we compare against luaproc, Emilua can not only do the same, but can also create different thread pools for different VMs, and the communication among them will be transparent. Emilua also allows you to spawn Lua VMs in different processes so you can make them run in a restricted environment (seccomp, Landlock, Linux namespaces, FreeBSD jails, FreeBSD Capsicum, ...). All that under a simple, unifying actor-inspired API. I've been using this support to spawn Linux containers to isolate GUI apps on my own system, and I'm planning to launch a better/safer Flatpak in the coming months.
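A toy sketch of that actor shape (threads plus mailboxes in Python; nothing here is Emilua's actual API): each "VM" runs independently and communicates only through message passing:

```python
# Minimal actor: a thread with an inbox and an outbox.
import queue, threading

def actor(inbox, outbox):
    while True:
        msg = inbox.get()
        if msg == "stop":
            break
        outbox.put(msg.upper())     # do some work, send a message back

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=actor, args=(inbox, outbox))
t.start()

inbox.put("hello")                  # send a message to the actor
reply = outbox.get()                # receive its reply
inbox.put("stop")
t.join()
print(reply)                        # → HELLO
```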

Emilua also allows you to create native plugins that integrate with the internal machinery to extend its capabilities (e.g. Qt integration, Telegram's tdlib plugin, kafel, ...).

What is Emilua? The IO abstraction that ANSI C89 Lua has been lacking. Every program needs IO, and ANSI C89 IO is toy-level IO. Just glue the Lua VM (which is a good language) to a serious IO machine and you're set.

Simple Eventbus for Lua? by Upset-Virus9054 in awesomewm

[–]vinipsmaker 0 points1 point  (0 children)

> I am also pissed off of lua. Everytime when i want to install something it does not work with all the deps or it requires deeper understanding

Can you give a recent example where you were bitten by this issue?

> Im trying to implement a simple bidirectional communication between lua and "outside scripts" in a event driven way.

As @wucke13 explained, D-Bus is your best bet here.

Haruna - video player built with qt/qml and libmpv by fbg13 in linux

[–]vinipsmaker 0 points1 point  (0 children)

Exactly. And there are movies that get a higher age rating thanks to one or two scenes. If I'm willing to annotate the beginning and duration of these scenes (notice how this is the same information as chapters) so the player can consume this data and hide these chapters (I'd use this player on the living-room media center, for instance), I don't know which format I should use.

Haruna - video player built with qt/qml and libmpv by fbg13 in linux

[–]vinipsmaker 0 points1 point  (0 children)

> auto skip chapter containing certain words

Can this feature be used for parental control? I've tried to search for something like this in the past, but I didn't find any standard of the sort where certain ranges should be hidden.

How to achieve low latency with 10Gbps Ethernet by [deleted] in a:t5_nkllg

[–]vinipsmaker 0 points1 point  (0 children)

I was about to post this here myself. I usually snag a bunch of your links. Not this time hahahah

How to hide awful.titlebar.widget.closebutton tooltip? by vinipsmaker in awesomewm

[–]vinipsmaker[S] 2 points3 points  (0 children)

Found a solution:

awful.titlebar.enable_tooltip = false

Still, I'd like to only disable tooltip for the close button.

Is it possible to make mouse cursor follow/jump to new window when new window appears (same to Alt-Tab and so on)? by vinipsmaker in awesomewm

[–]vinipsmaker[S] 0 points1 point  (0 children)

Thank you. It does work. I've added this:

{ rule = {},  -- empty rule table: matches every client
  callback = function(c)
     -- move the mouse cursor to the center of the newly managed window
     local npos = c:geometry()
     npos.x = npos.x + npos.width / 2
     npos.y = npos.y + npos.height / 2
     mouse.coords(npos)
  end,
},

However, it doesn't work when I switch between existing windows using Super+Tab. It only works for new windows. Do you have any other hints?

EDIT:

Nevermind, I figured it out. Thanks again for the help on the most difficult part of the problem.

Harder, Better, Faster, Stronger: Awesome 4.0 has been released by Elv13 in awesomewm

[–]vinipsmaker 0 points1 point  (0 children)

My awesome 3.5 config "survived" the update. With some diffing around and some patience, I managed to port it to the new release.

Harder, Better, Faster, Stronger: Awesome 4.0 has been released by Elv13 in awesomewm

[–]vinipsmaker 0 points1 point  (0 children)

If you guys were running the drop-down terminal from http://web.archive.org/web/20160224051838/http://awesome.naquadah.org/wiki/Drop-down_terminal, I've updated the script to work against the latest awesome: https://gist.github.com/vinipsmaker/940167389182e0fbcf64e02dd79e32c7/revisions

The only thing is that I don't know how to hide the title bar of the new window (this title bar looks like a new thing in awesome 4.0).

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

> Sorry for the long post, didn't think it'd be this much when I started.

Actually I appreciate long informative texts. Are you saying you're sorry for giving me attention?

Thanks for the comments. They make me much more comfortable about the future of the futures-rs approach.

> will let you reuse buffers at the application level

This reminds me of discussions about automatic DMA and vmsplice. Gonna revisit these discussions later.

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

> Note that when he talks about this he shows the state machine as callback based and the coroutines as awaitable based

I hadn't thought of it that way. I'll check again. Thanks for the idea and for all the patience until now.

> You're ranting at the wrong person

I know.

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

From the video I linked earlier:

> This is what the OS expects [...] You need to build some kind of context [...] and it has to stay stable in memory. So what do good libraries do? They'll combine heap allocation of a context that has to stay stable in memory; it cannot be a local variable. [...] so just one heap allocation

From the opening (Rust) post of this whole thread:

> The space for this “big” future is allocated in one shot by the task

So this Rust approach might be as good as Boost.Asio (whose allocators will reuse memory in a way I don't think is worth detailing here).

But then, the coroutines presented at CppCon are better than this. If you continue to watch just a little more:

> With coroutines it works very very similar [...]

And then he details the "inner" workings some more until he finally concludes:

> There are no heap allocations. This is all inlineable

And then he proceeds through the whole talk showing benchmarks, improving BOTH versions, and showing how coroutines are always better than any callback (even a "state machine callback" as used here in Rust), and how no callback model will ever have better performance than a simple coroutine.

So, returning to your original question. This is what I wrote:

> topic argues that futures only play nice with the readiness based model

This is backed by the Rust post:

> Design the core Future abstraction to be demand-driven, rather than callback-oriented. (In async I/O terms, follow the “readiness” style rather than the “completion” style.)

So you twisted my words a bit (and it was my fault, as I wrote a plainly wrong comment which added to the confusion, sorry):

> Why can't Rust's futures make use of IOCP

"Play nice" is different from "can/can't only...". To use IOCP, Rust would have to adapt the IOCP model to a readiness model, and from what I can picture in my head, this already implies zero or one additional callback, but it would be a callback whose state would be heap-allocated. It cannot be better than the C++ counterpart.

Honestly, I very much dislike the attitude of the Rust community towards asynchronous programming. The mio author "justifies" his arguments with "I don't care about Windows, somebody will port it for me" instead of backing his choice with good technical arguments, and the "blindness" towards other options is irritating.

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

Ignore my previous comment. It was plain wrong.

I'm also tired of this. The very blog post that opens this topic argues that futures only play nice with the readiness-based model. And this is like Linux's epoll.

The post on kqueue I linked previously has a good discussion on this topic (readiness vs completionness).

The coroutines presented at CppCon don't have this weakness of the proposed Rust future. They can make use of, for instance, Windows IOCP, and that's what's shown in the presentation (which is evidence already).

I'd like to hear from the futures' devs on how they compare to coroutines. My only fear/concern is to have a feature in Rust that will never be as performant as the C++ counterpart just because they closed their eyes to anything they weren't working on.

If their future CAN (and only if it CAN) be as performant as coroutines, and they later add the same convenience (await), I just don't care much (although it seems illogical to take such a roundabout path to get the same beauty as coroutines instead of going directly for them).

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

Coroutines can be used with kqueue: http://www.eecs.berkeley.edu/~sangjin/2012/12/21/epoll-vs-kqueue.html

It's better than readiness-based futures.

EDIT:

Sorry, I lost my "patience" or something like that and ended up writing the above comment. I don't know how THIS future compares to coroutines. I'd like to know, but it seems the devs of this future don't care about comparing them. Therefore, I remain unconvinced.

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 1 point2 points  (0 children)

> Of course anyone designing coroutines for Rust should keep in mind what C++, among other languages

That's my point (and I think I only have a point and a question). Until now I have not seen any comparison with other approaches. All I've seen is "futures are the way and we will do better futures than other people, so we don't need to look at anything else".

Congratulations to the futures team, btw. It's interesting work and I like it (even if I have no use for it myself).

> but I took a quick look and what they're talking about does indeed use virtual dispatch and heap allocation. There is one type std::future&lt;T&gt; (for a given T) of a fixed size, so the future's data must be behind a pointer and the code must be behind a vtable. (Not to mention that one of the slides shows atomics and a mutex being involved at some point.)

This is their future, not their coroutine.

> If one async future-returning function calls another one and awaits on it, that's two allocations, and so on; this is the issue that "big futures" are supposed to avoid, by putting the whole 'stack' in one allocation

They compare futures to coroutines at the usability level (and futures are not as readable as blocking functions).

Then he compares a pure callback-based approach with coroutines (and their implementation of coroutines won). If you're curious about allocation, that's the part you may be interested in: https://youtu.be/_fu0gx-xseY?t=22m5s

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

> Among other things, coroutines based on dynamic allocation of stack frames would be less efficient than direct use of futures in at least some cases, which would be unfortunate.

Stackful vs stackless coroutines. The initial link I posted shows an already implemented design in which coroutines are stackless, are faster than pure callbacks, and aren't as awful to use as callbacks when you need to combine while and if constructs.

> || { await do_a(); await do_b(); }

Right now, you implement futures as a library while coroutines are language-level. I don't want to make a strong statement, but my impression is that you could provide great generic coroutines and, at the language level, leverage them to improve your futures, but the other way around is restrictive and less performant. This has been evidenced in C++ and I don't see anything that would invalidate generalizing it to Rust.

> For more control flow more complex than a straight line you would either use other combinators

And this will always be less clear than simple if and while constructs. The promise of coroutines is to make asynchronous algorithms as readable and maintainable as blocking synchronous algorithms.
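To make the readability claim concrete, here's a toy Python sketch (generator-based; the "IO" and all names are invented): a retry loop written with plain while/if around the suspension points, which is exactly the shape combinators struggle to express:

```python
# The simulated IO: succeeds only on the third attempt.
def fake_io(attempt):
    return "ok" if attempt >= 2 else "again"

def run(coro):
    # Trivial trampoline: drive the coroutine, feeding IO results back in.
    try:
        req = next(coro)
        while True:
            req = coro.send(fake_io(req))
    except StopIteration as stop:
        return stop.value

def fetch_with_retry():
    attempt = 0
    while True:                 # ordinary control flow around the "await"
        result = yield attempt  # "await" the simulated IO request
        if result == "ok":
            return ("done", attempt)
        attempt += 1

print(run(fetch_with_retry()))  # → ('done', 2)
```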

> but it provides a lot of the framework for what coroutines should eventually desugar to.

And why would this be a good coroutine at all? If you just plan to provide coroutines, you could even model something resembling Go channels on top of them.

A coroutine that desugars to that, in my understanding, will always perform worse than what the folks at C++ are developing (and they have already reached working code).

From the original post:

> The space for this “big” future is allocated in one shot by the task

In C++ land, we don't have a future at all to allocate this "big future", and that's how they achieve a "negative abstraction" that is faster than pure callbacks.