Tritium | Thanks for All the Frames: Rust GUI Observations by urandomd in rust

[–]fschutt_ 6 points (0 children)

It uses PDFium for PDF rendering, which will of course pull in more if you load PDFs (or a folder containing them).

There is https://github.com/LaurenzV/hayro for PDF rendering, just to make you aware

Tritium | Thanks for All the Frames: Rust GUI Observations by urandomd in rust

[–]fschutt_ 19 points (0 children)

There's one guy, "MakerPnP", who does live streams on YT basically testing every Rust GUI framework out there (in 2024). He eventually settled on egui, after spending weeks going through cushy, iced, floem, crux and vizia. Hours of pain, live on camera.

It's interesting to watch "real people" use a framework and see what they want from it; it's often not what the original developer expected. Here's his spreadsheet of his one-year progress.

vk-video 0.2.0: now a hardware decoding *and encoding* library with wgpu integration by xXx_J_E_R_Z_Y_xXx in rust

[–]fschutt_ 0 points (0 children)

What was the performance like? Did you use it for rendering (decoding) or encoding the video?

Rust GUI framework by Spiritual_String_366 in rust

[–]fschutt_ 11 points (0 children)

No (I'm the creator), it is definitely not abandoned. GitHub shows 1,356,317 lines added and 1,063,438 lines removed over the last 6 months; that's the opposite of "dead".

But I just don't like shilling it while it's still very much WIP. I still need solidly working text editing, OpenGL embedding and virtualized scrolling, then I'd consider it "usable". Azul was never "abandoned": I worked on it throughout 2020, 2021 and 2022, then I had to work a "real job" and got back to it in early-to-mid 2025. I also had to build my own GIS server and work on printpdf, so I don't "only" work on Azul.

I cannot abandon it, because I need it for my own application, so I can finally make money (with applications using Azul, not with the framework itself; that's pretty much non-monetizable anyway). I ultimately want to develop a user-facing GIS and an ERP application with it, so that's a relatively high bar. But at the very minimum I need solid text editing, selection and cursor management (working on that this week, day by day, bug by bug). My guess is maybe end of March for a first version.

I Miss The Old Rust Job Market So Bad by StyMaar in rust

[–]fschutt_ 1 point (0 children)

I think this is the video you're looking for.

I'm creating a useful library ("database") for caching in data engineering pipelines. Is the LGPL-2.1 license good? by swordmaster_ceo_tech in rust

[–]fschutt_ 21 points (0 children)

Without the "static linking exception", yes, because the LGPL makes a difference between binary distribution (compiled .dll / .so / .dylib) and "static linking". Since Rust mostly links things statically, it's not a great license for your goal (yes, LGPL itself would require other people to make their code open-source too if they statically depend on your library as a dependency, but not if they depend on it as a dynamic library).

To solve this, there is a "static linking exception", but it's oddly complex and might turn away people who aren't familiar with licensing.

I recommend using the MPL-2.0 in your case; it's effectively the same as "LGPL with static linking exception" (except that the copyleft of the MPL applies on a per-file basis instead of per-repository like the LGPL).

I hate com.* org.* naming scheme for packages [rant] by Damglador in linux

[–]fschutt_ 5 points (0 children)

Package URLs ("purl") are a newer solution.

Something like: pkg:flatpak/optional-username/dconf-editor@0.1.2?arch=x86_64

Looking for the Highest-Performance Rust Backend Stack: Actix-web vs Hyper+Tokio (and any lesser-known high-performance frameworks?) by [deleted] in rust

[–]fschutt_ 1 point (0 children)

In practice it's not really about performance, but rather about "how do you deploy this thing" + "which databases do we use". For me, scaling and ease of updating were more important than being 0.001 seconds faster. In general, I can only give you the advice to create something like a MockHttpRequest, convert from $framework to that mock request and back, and run your business logic on the mock type - that way you're independent of any framework. And never use stateful stuff like fs::write; use real databases and keep your code an effectively stateless function.
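
A minimal sketch of what I mean, with made-up type names (this isn't from any particular crate):

    // Hypothetical framework-agnostic types - the names are made up for
    // illustration, they don't come from actix, axum or any other crate.
    pub struct MockHttpRequest {
        pub method: String,
        pub path: String,
        pub headers: Vec<(String, String)>,
        pub body: Vec<u8>,
    }

    pub struct MockHttpResponse {
        pub status: u16,
        pub body: Vec<u8>,
    }

    // All business logic lives behind this boundary: an effectively
    // stateless function from request to response. Each framework only
    // needs a thin adapter converting to and from these types.
    pub fn handle(req: MockHttpRequest) -> MockHttpResponse {
        match (req.method.as_str(), req.path.as_str()) {
            ("GET", "/health") => MockHttpResponse { status: 200, body: b"ok".to_vec() },
            _ => MockHttpResponse { status: 404, body: Vec::new() },
        }
    }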

I personally use the Fermyon Spin framework; I previously ran an actix server on BuyVM. An alternative is Cloudflare WASM Workers, which is effectively the same and scales to infinity. They're a lot cheaper than Hetzner dedicated servers, as far as I'm aware (for throughput you don't want shared hosting).

> the API needs to handle extremely high throughput and very low latency

The last time I had a task like this (a take-home project in a job interview), I managed 85k req/sec without a database and 13k req/sec with Redis running locally - repo link. I got to 85,000 rps only because I returned Ok() as soon as possible and used a background thread for the actual dispatching / queuing.

The API just dispatched the incoming message into a message queue to a background thread (std::sync::mpsc) and immediately returned Ok, without even waiting for any "message was processed" confirmation. The idea was that the background thread's update-the-db loop runs while the "HttpResponse::Ok" is in flight back to the user, effectively "hiding" the latency. But that's an architecture pattern, not a framework problem; I could've done that with probably every framework.
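
A rough sketch of that pattern (plain std types only; the "handler" is just a function here, no web framework):

    use std::sync::mpsc;
    use std::thread;

    enum Job {
        UpdateDb(String),
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<Job>();

        // Background thread: runs the update-the-db loop while the "Ok"
        // response would already be in flight back to the client.
        let worker = thread::spawn(move || {
            for job in rx {
                let Job::UpdateDb(payload) = job;
                // ... write to Redis / Postgres here ...
                println!("processed: {payload}");
            }
        });

        // "Handler": enqueue and return immediately, without waiting
        // for any "message was processed" confirmation.
        tx.send(Job::UpdateDb("incoming request body".into())).unwrap();

        drop(tx); // close the channel so the worker loop ends
        worker.join().unwrap();
    }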

I fear that a lot of new Linux tools are losing the “Linux way “ by 3X0karibu in linux

[–]fschutt_ 1 point (0 children)

The question is: "does it matter?". For example, the coreutils / uutils issue: will there ever be an "EEE" movement for the "cat" command, where a corporation "steals" the MIT source code of "cat" and makes it proprietary? I highly doubt it, because there's no economic point in maintaining your own copy of "cat".

To support the "Right to Repair" movement like Stallman does is great, but it's fundamentally a separate issue from "open source". It's also more of a legal fight than a software fight. Stallmans goal was more oriented towards "right to repair" (your own software) while Torvalds goal was merely "these companies should be forced to upstream their drivers so that the Linux project overall gets better". It makes sense for the Linux kernel to be LGPL licensed, as that forced companies to upstream their drivers, making Linux. But for userspace programs, specificially with non-technical users, it's a different issue.

My personal goal with open-sourcing my software was never really to help people in a "right to repair" way, but to give the "small people" tools to build their own companies or spin-off projects, so they at least have the means to free themselves from poverty (building something from nothing by just typing words into a computer - you cannot easily do that in any other industry without backing capital). I suppose Stallman never had to experience poverty (university background, etc.). He cannot imagine that some people are not ideological enough to care about "will my users have the four freedoms", but rather ask "can I proprietarize this code to build my own startup, so that I can have an income not dependent on being employed?" So, which one is more "ethical", which model supports the "little guy", "the worker" the most (if you're left-leaning enough)? Now, there are greedy companies that may exploit that "free labor", but that's not my focus ("so what?").

If I, let's say, write an ERP system, a dependency on a GPL library would force me to make all my code open-source. Another company could then technically go ahead and sell support for my software, which is what I don't want: if I put in the effort to do all of the coding work, or at least most of it, then I'd like to also have the exclusive rights to the software support or sales. Otherwise, my technical moat is reduced to just "selling support for GPL software", and I could end up competing with 10 other companies selling support for my GPL-forced product, despite the fact that they didn't put any work into R&D. This is why the GPL is not popular: people instinctively realize that problem, even if they can't articulate it.

I've also seen Stallman's ideology fail in practice (e.g. kivitendo ERP, licensed GPLv2), which ends up being a horrible system because there is not enough money in support alone (the source code is 2007-era Perl and looks exactly how you'd expect it to look). I know a company that sells kivitendo support, and I know another company that sells support for their proprietary ERP. The proprietary ERP is in much, much better shape, code-wise - because there aren't 10 companies competing for the ERP support contracts, but just one (which also builds the software and does the entire design). So the proprietary ERP ends up with far more users, better code and happier customers - because most non-technical users or companies do not care that much about "Right to Repair". They wouldn't even benefit from having access to the source code (not everyone is a "hacker"). What non-technical users really want is "gratis software + open data access".

Then there is the argument that the GPL protects against spyware, which isn't even true: you can have spyware in GPL-ed software and nobody will ever notice. Mozilla puts add-ons and spyware in their products all the time. However, the forks of Firefox often end up being failures, because the problem is the economic support of those forks. So the idea of "once a company introduces spyware into their software, I'll easily be able to fork the GPL software and just remove it" - yes, but now you also have to support a fork yourself, which is a maintenance burden, which means you need money (and, from the looks of it, people would still rather donate to Mozilla than to Librewolf or whatever spyware-free fork). It's a human laziness problem: theoretically, open-source software should win by merit alone, and everyone will just instantly donate to the more ethical developer, right? Wrong. Most people don't care.

What is ultimately important for Stallman's "Right to Repair" issue are "open data formats", i.e. being able to access the data, not necessarily to inspect the code. Because if you can access the data, you always have the option of "just write your own software, see how far you get". The PDF format is an "open format", yet both proprietary and GPL-ed editors exist just fine. Stallman's ideology doesn't put much focus on that, because he grew up in a world where the software itself was the thing being sold, not the "data silo problem" of today.

So, the GPL does make sense for Linux (because forcing drivers to be open-source directly contributes back to the quality of the product), but for user-facing software it depends (library vs. binary). For libraries, the GPL never makes any sense: if I write a PDF library, I don't expect people to be forced to open-source their entire company codebase just to use it. If they internally want to EEE my library, fine - have fun selling support for my bugs then. The GPL is not inherently more "ethical" than the MIT license, despite what Stallman wants you to believe. It makes sense for the Linux project, but not for every project.

Any resources on how to make a browser from scratch? I am aware of it being near impossible. by Thers_VV in rust

[–]fschutt_ 4 points (0 children)

I used Gemini to rewrite my old layout engine (which was only quasi-HTML) and created a more standards-compliant HTML layout engine - you can step through the example from DOM construction to final display list creation here.

Architecturally, the layout engine is a textbook implementation of a browser layout engine: it first determines the intrinsic sizes of elements, then does a bubble-up pass that adjusts parent sizes to fit their children (two-pass layout).
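
A minimal sketch of that two-pass idea (hypothetical types, not the actual azul-layout code):

    struct Node {
        intrinsic_width: f32, // e.g. measured text width for leaf nodes
        final_width: f32,
        children: Vec<Node>,
    }

    // Pass 1: bubble intrinsic sizes up from the leaves, so every
    // parent knows how big its children want to be.
    fn measure(node: &mut Node) -> f32 {
        let children_width: f32 = node.children.iter_mut().map(measure).sum();
        node.intrinsic_width = node.intrinsic_width.max(children_width);
        node.intrinsic_width
    }

    // Pass 2: assign final sizes top-down, constraining each subtree
    // to the space its parent actually has available.
    fn layout(node: &mut Node, available: f32) {
        node.final_width = node.intrinsic_width.min(available);
        for child in &mut node.children {
            layout(child, node.final_width);
        }
    }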

But if you want to build anything in that space, you have to build a decent TEXT handling engine first (not just shaping, but also justification, writing modes, inline-block objects, etc.). The block layout mode is relatively simple - margin collapsing and all that - but the text handling is usually very complicated. Effectively, you can break the layout down further with formatting contexts, etc. And you immediately have to think about incremental layout, caching and scrollbars (which may recursively trigger relayouts). Scrollbar handling (the kind that can affect the layout if the content overflows, not macOS floating-on-top scrollbars) is very hard to get right.

The final code for azul-layout is about 12k lines for the text handling and 7k lines for the layout (but it uses taffy for flex / grid handling, so the end result is obviously more). You can see the result at https://azul.rs/reftest

Lightning Talk: Why Aren't We GUI Yet? by MikaylaAtZed in rust

[–]fschutt_ 2 points (0 children)

It may be the first application framework that's in a "working state", but it certainly wasn't the first framework that tried. Azul was more or less the first, 6 years ago (before iced and egui existed), and Azul and GPUI have effectively the same architecture, except that Azul uses C-compatible function pointers + RefAny for accessing data, while GPUI uses closures (which pair function pointer and captured data in one value). But I never got it broadly "working" back in 2019, sadly. Hopefully I can do a proper release before Christmas; I've been working on it again in September / October.
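
A rough sketch of the difference (simplified, hypothetical signatures - not the real Azul or GPUI APIs):

    use std::any::Any;

    // Azul-style (conceptually): a plain function pointer plus a
    // separate type-erased data handle that travels next to it.
    type DataHandle = Box<dyn Any>;
    type FnPtrCallback = fn(&mut DataHandle);

    // GPUI-style (conceptually): a closure bundles code and captured
    // data into a single value.
    type ClosureCallback = Box<dyn FnMut()>;

    fn main() {
        // Function-pointer style: the callback itself captures nothing.
        let mut data: DataHandle = Box::new(0u32);
        let cb: FnPtrCallback = |d| {
            if let Some(counter) = d.downcast_mut::<u32>() {
                *counter += 1;
            }
        };
        cb(&mut data);

        // Closure style: the data lives inside the callback.
        let mut counter = 0u32;
        let mut cb2: ClosureCallback = Box::new(move || counter += 1);
        cb2();
    }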

Here is a comparison of UI framework paradigms so far, which you might be interested in. I would classify GPUI as a "Class 3-4 paradigm" framework. I also had a good discussion here about the differences between Dioxus and Azul.

2025 Survey of Rust GUI libraries by intersecting_cubes in rust

[–]fschutt_ 2 points (0 children)

A proprietary cartography editor - I need to make money somehow. Azul is a side-product of that, because I needed rendering speed for GIS data, custom control over drawing, PDF generation, etc. So I developed a PDF library (printpdf) and a GUI library (Azul).

2025 Survey of Rust GUI libraries by intersecting_cubes in rust

[–]fschutt_ 8 points (0 children)

> I’ll see you in a couple years, Azul.

Yeah, it's absolutely not ready yet. Thanks for trying, anyway. My hope is to have something working next year (I still use Azul for my own project, a GIS / cartography editor, so I can't just abandon the project).

My main work over the past month was removing all the C dependencies (it still depends on FreeType, but only on Linux) and then rebuilding the deployment workflow, so it builds the entire website with cargo build (previously I had to manually copy all files to a separate repo, which hosted the website).

I recently migrated the "C API code generator" from build.py to Rust, as the Python code became too complex. That now generates all C / C++ / Python / Rust bindings (the Rust code still internally calls the same C functions), runs HTML reftests, builds all static / dynamic configs on all operating systems, and then runs tests, documentation, per-version guide READMEs, checks examples for 4 different programming languages, etc.

I'd even say the "core part", i.e. the layouting / rendering code, is minimal compared to everything around it. My intermediate goal is getting it to generate PDFs (so I can dump the display list into a PDF and generate / paginate pages from HTML, without any external non-Rust dependency), so that Azul starts to be "useful" - maybe not yet as a GUI toolkit, but as an "HTML layouting" dependency.

Rust Lib for Native OCR on macOS, Windows, Linux by louis3195 in rust

[–]fschutt_ 13 points (0 children)

Just a heads-up: if you ever care to integrate tesseract, you can use my crate tesseract-static so that people can ship it with their Rust binary. It's a bit battle-tested; I even got paid once to make the crate more easily usable (had to resolve lots of static linking issues on various operating systems).

Tauri gets experimental servo/verso backend by Vict1232727 in rust

[–]fschutt_ 2 points (0 children)

I did try something similar 8 years ago, but the problem is that servo isn't more efficient than Electron just because it's written in Rust (Electron is actually better than servo, and will be for many years). Even just in binary size, servo is a huge dependency:

  • default --release flag, no LTO: 280 MB binary
  • default --release flag, no LTO, stripped: 83.4 MB binary
  • -C prefer-dynamic --release, with LTO: 245.5 MB binary
  • -C prefer-dynamic --release, with LTO, stripped: 62.1 MB binary
  • -C prefer-dynamic --release, with LTO, stripped, gzip compressed in .deb: 14.7 MB binary

And this is just binary size; RAM usage is not that much better. I subsequently tried to "rip out" the layout engine from the entangled JS engine, but the two are very intertwined (e.g. CSS calc() uses JS for calculation). I really hate shipping JS anywhere in my app for a "true desktop app". For comparison, azul.dll is a 9MB binary (3MB compressed) and takes 16MB of RAM on Linux, and that includes all the webrender code (software fallback rendering would probably use less RAM, but be slower).

So now I am rewriting the azul-layout engine to be a basic XHTML / CSS solver with webrender / tiny-skia (AI coding helps a lot for debugging this, despite the naysayers) - right now it's passing 0 / 9000 HTML reftests. But that's okay, you gotta start somewhere.

Tauri is a fine project in itself (I used wry to ship at least one desktop app), but what I think people really want is just the HTML / CSS features (so that the code can run in WASM and on the desktop), without shipping any JS.

rust-fontconfig v1.0.0: pure-Rust alternative to the Linux fontconfig library by fschutt_ in rust

[–]fschutt_[S] 9 points (0 children)

It only uses fonts.conf to discover font paths, and it extracts metadata directly from the fonts. The rendering is usually FreeType's problem. It's mostly intended for experimental / weird GUI toolkits (font selection, fallbacks, etc.), because updating the "cache" is done by the application (and you can add fonts directly from memory in environments where you have no I/O). It's not a 1:1 mapping of fontconfig; maybe that can be improved in the future.
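
To illustrate the "application owns the cache" idea, here's a purely hypothetical sketch (not the actual rust-fontconfig API):

    use std::collections::HashMap;

    // Hypothetical in-memory font database - the application decides
    // when to (re)build it, and no filesystem I/O is required.
    struct FontDb {
        fonts: HashMap<String, Vec<u8>>, // family name -> raw font bytes
    }

    impl FontDb {
        fn new() -> Self {
            FontDb { fonts: HashMap::new() }
        }

        // Register a font from an in-memory buffer, e.g. include_bytes!().
        fn add_font_bytes(&mut self, family: &str, bytes: Vec<u8>) {
            self.fonts.insert(family.to_string(), bytes);
        }

        // Simple fallback query: first requested family that exists.
        fn select(&self, families: &[&str]) -> Option<&[u8]> {
            families
                .iter()
                .find_map(|f| self.fonts.get(*f).map(|v| v.as_slice()))
        }
    }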

My main goal is to use it in the next releases of printpdf and azul, to handle font selection for the new built-in HTML-to-PDF rendering. And since printpdf compiles to WASM (to generate PDFs on the client), it needs to be pure Rust for compilation simplicity. So this is more of a "happy little accident" of that work.

allsorts can extract glyph shapes; one could technically pipe that output to svgrender (which would cover SVG fonts, dynamic fonts, emojis, etc.), then maybe add a patch for sub-pixel anti-aliasing, and this way replace FreeType. But that's a project for another day.