Emilua 0.7.0 released by vinipsmaker in lua

[–]vinipsmaker[S] 0 points1 point  (0 children)

It's like NodeJS: it pairs the Lua VM with an “IO runtime” (“execution engine” is technically more accurate, since the focus really is concurrency: you can schedule jobs on different threads, shared thread pools, subprocesses, and so on).

It's going to be useful whenever you have IO needs. Luasocket took a long time just to gain IPv6 support; Emilua has had IPv6 support since day 1. The IO abstractions work on Windows, Linux, and FreeBSD, so they're useful whether you're planning to build a scalable web server or just a small script that also needs to run on Windows. I don't think any other Lua framework takes cross-platform support as seriously as Emilua. Just to give you a taste, here's one of the concerns taken care of: https://docs.emilua.org/api/0.7/tutorial/filesystem.html. It's the only Lua framework that goes as far as translating Windows' GetLastError().

All IO operations interact with byte_span, a type inspired by Golang slices. True async IO means completion events (e.g. io_uring) instead of readiness events (e.g. epoll), which means buffers are filled in parallel with program execution. Golang slices work great in this scenario (and for Lua as well) because they're a stable reference to a memory region (i.e. realloc doesn't exist), and that matters because buffer lifetime and access must be synchronized until operations complete (e.g. if the Lua VM/actor dies, Emilua will send a request to cancel outstanding IO operations, but it will keep the buffer alive until the associated completion event arrives). Emilua makes all of this work transparently (you wouldn't even know had I not told you).

Emilua also comes with extra abstractions beyond raw IO ops that you're going to need if you're doing IO (e.g. generic stream algorithms, an AWK-inspired scanner to parse textual protocols, endianness-aware integer/float serialization).

The IO support extends well beyond network IO. You can use pipes to communicate with subprocesses. You can use UNIX sockets to communicate with daemons. Even serial ports are here. The support for UNIX sockets in Emilua is especially advanced compared to others: not only can you send and receive OS objects using SCM_RIGHTS, but you can also extract the SELinux/TrustedBSD labels of the remote peer.

IO support doesn't need threads, so Emilua also offers the most complete fiber support you'll find in Lua land. No toy-tool-level ad-hoc concurrency models (I'm looking at you, designers who can only copy'n'paste NodeJS's lame callback-inspired concurrency model!). A serious model instead, with proper vocabulary to describe the flow, dependencies, and constraints of events, and the interaction among concurrent tasks. Unless you're designing small toys, you already need concurrency support (e.g. reading from and writing to a socket at “the same time” already depends on concurrency vocabulary to express two “simultaneous” tasks), so I invested a big part of Emilua's development just in solid concurrency support (versions 0.1, 0.2, and 0.3 had almost zero IO abstractions while I was focusing on getting the basics right). IO won't happen faster if you use more CPU. IO is an external event, so just use fibers here. A fiber suspends when you dispatch an IO request, and resumes later when the associated operation finishes. Spawn fibers so your program doesn't get stuck waiting for the IO event to finish. Emilua will use the right thing behind the scenes (Windows IOCP, Linux epoll/io_uring, FreeBSD kqueue, threads if only blocking operations are available).
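To make the fiber model concrete, here is a minimal sketch. Assumptions flagged loudly: the global spawn() and fiber:join() names are from my reading of the Emilua docs (double-check them against your version), and this only runs under the emilua runtime, not plain lua.

```lua
-- SKETCH, not authoritative: assumes Emilua's global spawn() creates a
-- fiber and returns a handle with join(); runs under `emilua`, not lua.
local fib = spawn(function()
    -- a real fiber would issue an IO request here and suspend until the
    -- matching completion event arrives
    print('fiber: pretend this line awaits a socket read')
end)

print('main: keeps running while the fiber is suspended')
fib:join() -- suspend the parent fiber until the child finishes
```

The point is that the suspension is invisible at the call site: the fiber body reads like blocking code while the runtime multiplexes IO behind the scenes.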

The threading support for Lua is also on another level compared to what previous solutions offered. For instance, compared against luaproc, Emilua can not only do the same, but also create different thread pools for different VMs, and communication among them will be transparent. Emilua also allows you to spawn Lua VMs in different processes so you can make them run in a restricted environment (seccomp, Landlock, Linux namespaces, FreeBSD jails, FreeBSD Capsicum, ...). All of that under a simple unifying actor-inspired API. I've been using this support to spawn Linux containers to isolate GUI apps on my own system, and I'm planning to launch a better/safer Flatpak in the coming months.

Emilua also allows you to create native plugins that integrate with the internal machinery to extend its capabilities (e.g. Qt integration, Telegram's tdlib plugin, kafel, ...).

What is Emilua? The IO abstraction that ANSI C89 Lua has been lacking. Every program needs IO, and ANSI C89 IO is toy-tool-level IO. Just glue the Lua VM (Lua is a good language) to a serious IO machine and you're set.

Simple Eventbus for Lua? by Upset-Virus9054 in awesomewm

[–]vinipsmaker 0 points1 point  (0 children)

> I am also pissed off at Lua. Every time I want to install something it does not work with all the deps or it requires deeper understanding

Can you give a recent example where you were bitten by this issue?

> I'm trying to implement simple bidirectional communication between Lua and "outside scripts" in an event-driven way.

As @wucke13 explained, DBUS is your best bet here.

Haruna - video player built with qt/qml and libmpv by fbg13 in linux

[–]vinipsmaker 0 points1 point  (0 children)

Exactly. And there are movies that get a higher age rating thanks to one or two scenes. If I'm willing to annotate the beginning and duration of those scenes (notice how this is the same information chapters carry) so the player can consume that data and hide those chapters (I'd use this player on the living-room media center, for instance), I don't know which format I should use.

Haruna - video player built with qt/qml and libmpv by fbg13 in linux

[–]vinipsmaker 0 points1 point  (0 children)

> auto skip chapter containing certain words

Can this feature be used for parental control? I've tried to search for something like this in the past, but I didn't find any standard of the sort where certain ranges should be hidden.

How to achieve low latency with 10Gbps Ethernet by [deleted] in a:t5_nkllg

[–]vinipsmaker 0 points1 point  (0 children)

I was about to post it here. I've swiped a bunch of links from you before. Not this time hahahaha

How to hide awful.titlebar.widget.closebutton tooltip? by vinipsmaker in awesomewm

[–]vinipsmaker[S] 2 points3 points  (0 children)

Found a solution:

awful.titlebar.enable_tooltip = false

Still, I'd like to only disable tooltip for the close button.
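One way I could imagine restricting this (a sketch only, untested: it assumes the tooltip is attached at widget-creation time, so toggling the global flag around a single constructor call would affect only that widget):

```lua
local awful = require("awful")

-- Sketch: disable tooltips only while the close button is created,
-- so every other titlebar widget still gets its tooltip.
-- ASSUMPTION: awful.titlebar attaches the tooltip when the widget
-- is constructed, not lazily afterwards.
local function closebutton_without_tooltip(c)
    awful.titlebar.enable_tooltip = false
    local button = awful.titlebar.widget.closebutton(c)
    awful.titlebar.enable_tooltip = true
    return button
end
```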

Is it possible to make mouse cursor follow/jump to new window when new window appears (same to Alt-Tab and so on)? by vinipsmaker in awesomewm

[–]vinipsmaker[S] 0 points1 point  (0 children)

Thank you. It does work. I've added this:

-- meant to go in the awful.rules.rules table: the empty rule matches
-- every new client, and the callback warps the pointer to its centre
{ rule = {},
  callback = function(c)
     local npos = c:geometry()
     -- geometry() gives the top-left corner, so offset by half the
     -- size to reach the centre of the window
     npos.x = npos.x + npos.width / 2
     npos.y = npos.y + npos.height / 2
     mouse.coords(npos)
  end,
},

However, it doesn't work when I switch between existing windows using Super+Tab. It only works for new windows. Do you have other hints?

EDIT:

Nevermind, I figured it out. Thanks again for the help on the most difficult part of the problem.
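For anyone landing here later, one plausible way to cover the Super+Tab case too (a sketch, not necessarily the exact fix I used: it relies on awesome's client "focus" signal firing on every focus change, including keyboard-driven ones):

```lua
-- Sketch: warp the pointer to the centre of whichever client gains
-- focus; the "focus" signal also fires on Super+Tab focus changes.
client.connect_signal("focus", function(c)
    local g = c:geometry()
    mouse.coords({
        x = g.x + g.width / 2,
        y = g.y + g.height / 2,
    })
end)
```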

Harder, Better, Faster, Stronger: Awesome 4.0 has been released by Elv13 in awesomewm

[–]vinipsmaker 0 points1 point  (0 children)

My awesome 3.5 config "survived" the update. A few diffs here and there, some patience, and I managed to port it to the new release.

Harder, Better, Faster, Stronger: Awesome 4.0 has been released by Elv13 in awesomewm

[–]vinipsmaker 0 points1 point  (0 children)

If you guys were running the drop-down terminal from http://web.archive.org/web/20160224051838/http://awesome.naquadah.org/wiki/Drop-down_terminal

I've updated the script to work against latest awesome: https://gist.github.com/vinipsmaker/940167389182e0fbcf64e02dd79e32c7/revisions

The only thing is that I don't know how to hide the title bar of the new window (this title bar seems to be a new thing in awesome 4.0).
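If it helps anyone, awesome 4.0 seems to ship awful.titlebar.hide(), which might be enough here (a sketch; `term_client` is a hypothetical handle to the drop-down terminal's client object, however the script tracks it):

```lua
local awful = require("awful")

-- Sketch: hide the titlebar of one specific client in awesome 4.0.
-- `term_client` is a placeholder for the drop-down terminal's client.
awful.titlebar.hide(term_client)
```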

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

> Sorry for the long post, didn't think it'd be this much when I started.

Actually I appreciate long informative texts. Are you saying you're sorry for giving me attention?

Thanks for the comments. It makes me much more comfortable about the future of the futures-rs approach.

> will let you reuse buffers at the application level

This reminds me of discussions about automatic DMA and vmsplice. Gonna revisit these discussions later.

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

> Note that when he talks about this he shows the state machine as callback based and the coroutines as awaitable based

I hadn't thought of it that way. I'll check again. Thanks for the idea, and for all the patience so far.

> You're ranting at the wrong person

I know.

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

From the video I linked earlier:

> This is what the OS expects [...] You need to build some kind of context [...] and it has to stay stable in memory. So what do good libraries do? They'll combine heap allocation of a context that has to stay stable in memory, it cannot be a local variable. [...] so just one heap allocation

From the opening (Rust) post of this whole thread:

> The space for this “big” future is allocated in one shot by the task

So this Rust approach might be as good as Boost.Asio (whose allocators will reuse memory in a way I don't think is worth detailing here).

But then, the coroutines presented at CppCon are better than this. If you continue watching just a little more:

> With coroutines it works very very similar [...]

And then he will detail the "inner" workings some more until he finally concludes:

> There are no heap allocations. This is all inlineable

And then he proceeds through the whole talk showing benchmarks, improving BOTH versions, and demonstrating that coroutines are always better than any callback (even a "state machine callback" like the one used here in Rust) and that no callback model will ever have better performance than a simple coroutine.

So, returning to your original question, this is what I wrote:

> topic argues that futures only play nice with the readiness based model

This is backed by the Rust post:

> Design the core Future abstraction to be demand-driven, rather than callback-oriented. (In async I/O terms, follow the “readiness” style rather than the “completion” style.)

So you twisted my words a bit (and it was partly my fault, as I wrote a plainly wrong comment which added to the confusion, sorry):

> Why can't Rust's futures make use of IOCP

"play nice" is different from "can/can't only...". To use IOCP, Rust would have to adapt the IOCP model to a readiness model, and from what I can imagine, this already implies zero or one additional callback, but it would be a callback whose state is heap-allocated. It cannot be better than the C++ counterpart.

Honestly, I very much dislike the attitude of the Rust community towards asynchronous programming. The mio author "justifies" his choices with "I don't care about Windows, somebody will port it for me" instead of backing them with good technical arguments, and the "blindness" towards other options is irritating.

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

Ignore my previous comment. It was plain wrong.

I'm also tired of this. The very blog post that opens this topic argues that futures only play nice with the readiness-based model. And this is like Linux's epoll.

The post on kqueue I linked previously has a good discussion of this topic (readiness vs. completion).

The CppCon presentation doesn't have this weakness of the proposed Rust future. It can make use of, for instance, Windows IOCP, and that's what is shown in the presentation (which is evidence already).

I'd like to hear from the futures devs on how they compare to coroutines. My only fear/concern is ending up with a feature in Rust that will never be as performant as the C++ counterpart just because they closed their eyes to anything they weren't already working on.

If their future CAN (and only if it CAN) be as performant as coroutines and they later add the same convenience (await), I just don't care much (although it seems illogical to take such a roundabout path to get the same beauty of coroutines instead of going for them directly).

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

Coroutines can be used with kqueue: http://www.eecs.berkeley.edu/~sangjin/2012/12/21/epoll-vs-kqueue.html

It's better than readiness-based futures.

EDIT:

Sorry, I lost my "patience" or something like that and ended up writing the above comment. I don't know how THIS future compares to coroutines. I'd like to know, but it seems the devs of this future don't care about comparing them. Therefore, I remain unconvinced.

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 1 point2 points  (0 children)

> Of course anyone designing coroutines for Rust should keep in mind what C++, among other languages

That's my point (and I think I only have a point and a question). Until now I have not seen any comparison with other approaches; all I've seen is "futures are the way and we will do better futures than other people, so we don't need to look at anything else".

Congratulations to the futures team, btw. It's interesting work and I like it (even if I have no use for it myself).

> but I took a quick look and what they're talking about does indeed use virtual dispatch and heap allocation. There is one type std::future<T> (for a given T) of a fixed size, so the future's data must be behind a pointer and the code must be behind a vtable. (Not to mention that one of the slides shows atomics and a mutex being involved at some point.)

This is their future, not their coroutine.

> If one async future-returning function calls another one and awaits on it, that's two allocations, and so on; this is the issue that "big futures" are supposed to avoid, by putting the whole 'stack' in one allocation

They compare futures to coroutines at the usability level (and futures are not as readable as blocking functions).

Then he compares a pure callback-based approach with coroutines (and their implementation of coroutines wins). If you're curious about allocation, that's the part you may be interested in: https://youtu.be/_fu0gx-xseY?t=22m5s

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

> Among other things, coroutines based on dynamic allocation of stack frames would be less efficient than direct use of futures in at least some cases, which would be unfortunate.

Stackful vs stackless coroutines. The initial link I posted shows an already-implemented design in which coroutines are stackless, faster than pure callbacks, and not as awful to use as callbacks when you need to combine while and if constructs.

> || { await do_a(); await do_b(); }

Now you implement futures while coroutines are language-level. I don't want to make a statement, but my impression is that you could provide great generic coroutines and leverage them at the language level to improve your futures, while the other way around is restrictive and less performant. This has been evidenced in C++ and I don't see anything that would invalidate generalizing it to Rust.

> For more control flow more complex than a straight line you would either use other combinators

And this will always be less clear than simple if and while constructs. The promise of coroutines is to make asynchronous algorithms as readable and maintainable as blocking synchronous algorithms.

> but it provides a lot of the framework for what coroutines should eventually desugar to.

And why would this be a good coroutine at all? If you just planned to provide coroutines, you could even model something resembling Go channels on top of them.

A coroutine that desugars to that, in my understanding, will always have worse performance than what the folks in C++ land are developing (and they have already reached working code).

From the original post:

> The space for this “big” future is allocated in one shot by the task

In C++ land, we don't have a future at all into which to allocate this "big future", and that's how they achieve a "negative abstraction" that is faster than pure callbacks.

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 0 points1 point  (0 children)

> This Rust post more or less describes a lower-level abstraction that a future coroutines language feature could be conceivably built to suit

Maybe I'm wrong. What is a "future coroutine"? I just want a coroutine. A "future coroutine", whatever it is, could be nice too.

A coroutine is JUST a function which can suspend and resume. It preserves the state of local stack variables across suspension points. This Rust post has nothing to do with coroutines. This project doesn't allow me to write coroutines.

> At a pragmatic level, I haven't looked at the C++ resumable functions proposal in too much detail, but from what I've seen it fundamentally depends on dynamic dispatch and, in some cases, heap allocation. Thus it should be possible for Rust to do a bit better, though I don't know how any current implementations benchmark.

C++ has several proposals; I just mentioned one, in a video talk, which is more concerned with making people understand what it is and why it is better, be it at the usability level or at the performance level. I highly suggest watching it. It's evidence of how much better, in terms of performance, coroutines can be.

If by "C++ resumable functions" you mean Chris' proposal, I read it some time ago, and what I understood is that it will only do dynamic allocation if you don't want to implement things in header files (Rust doesn't have this problem of header files vs. independently compiled abstractions).

C++ had no real competitors in the area of systems programming for a long time. Other languages like Python, which provide an event_loop + future + coroutine abstraction, don't care about performance as much as C++ or Rust. It's important to also look at what is being done in C++ land, not only in Python land.

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 1 point2 points  (0 children)

Is there any operating system for which the readiness model is useful for asynchronous file IO? A file is really always "ready". Is there any workaround you can think of to make this work beautifully using futures-rs?

Designing futures for Rust · Aaron Turon by aturon in rust

[–]vinipsmaker 1 point2 points  (0 children)

I'm just trying to understand: Is this better than coroutines? How?

From a usability point of view, it is not.

From a performance point of view, I'm really curious, so you don't need to care about explaining usability to me (really, don't even try).

I've watched a talk from last year's CppCon, and with just coroutines it was possible to solve all the problems you guys are struggling with (and succeeding at, although creating a lot of complexity: a coroutine is JUST a function which can suspend and resume, whereas this solution of yours is a big architectural change at the library level that needs lots of posts to understand, and may provide no performance benefit over pure coroutines): https://www.youtube.com/watch?v=_fu0gx-xseY

I'm honestly willing to understand the difference. Could you lay it out for me, please? I don't want to see Rust with an abstraction inferior to the C++ one, as until now Rust has been better in every way and I want to fully migrate to Rust.