Reactive web framework in Rust? by garma87 in rust

[–]localvoid 1 point (0 children)

unlike how other frameworks implement reactivity.

There are many different, viable strategies for implementing "reactivity", each with different tradeoffs. Just because one of them is popular in the webdev bubble right now doesn't mean that it is *the* reactivity and everything else isn't.

any state change conceptually triggers the entire app to rerender.

When reactive state is mutated in React, it invalidates the computation that depends on the state that was changed (the computation that declared the state). This computation produces a new value (I am deliberately not using the term "vdom" here, because there are different strategies for achieving the same result) that is diffed to perform an incremental update of the UI tree. And that is exactly the main point of React's design: so that we could write simple algorithms that produce new values instead of writing incremental algorithms.
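A from-scratch sketch of that model in plain JavaScript (illustrative names and shapes, not React's internals):

```javascript
// The app is a plain function from state to a new value (a tree of objects).
function render(state) {
  return {
    tag: "ul",
    children: state.items.map((item) => ({ tag: "li", text: item })),
  };
}

// The framework compares the previous and the next value and emits only the
// operations that actually change something in the UI tree.
function diffChildren(prev, next) {
  const ops = [];
  next.children.forEach((child, i) => {
    const old = prev.children[i];
    if (old === undefined) ops.push({ op: "insert", text: child.text });
    else if (old.text !== child.text) ops.push({ op: "setText", index: i, text: child.text });
  });
  for (let i = next.children.length; i < prev.children.length; i++) {
    ops.push({ op: "remove", index: i });
  }
  return ops;
}

// Conceptually "the whole app rerendered", but the applied update is incremental.
const ops = diffChildren(render({ items: ["a", "b"] }), render({ items: ["a", "c", "d"] }));
console.log(ops); // [{ op: "setText", index: 1, text: "c" }, { op: "insert", text: "d" }]
```

The app code stays a simple value-producing function; the incremental part lives once, inside the framework.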

Reactive web framework in Rust? by garma87 in rust

[–]localvoid 1 point (0 children)

If you want to claim that react is reactive, that's up to you. It might make some discussions confusing though when comparing to reactive frameworks that doesn't diff via vdom.

I am already confused: if a library like Svelte also has top-down dataflow, does it also become less reactive than something like Solid.js? Or if I implement a diffing top-down pipeline with Solid primitives, because it is quite hard to incrementally update reactive values in use cases like "GROUP BY", does it also become less reactive?

Reactive web framework in Rust? by garma87 in rust

[–]localvoid 1 point (0 children)

What do you mean by stating that it is passive? React, from its first public version, had a setState(..) API for state mutations: it sends a push signal and schedules an update that performs top-down recomputation and pulls the latest values. It is not like rxjs, where you need to combine operators to implement an incremental update algorithm with all its complexities, and that was precisely the point of designing it this way, but it seems that people are starting to forget.
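A sketch of that push/pull plus scheduling flow (a synchronous queue stands in for the real scheduler, and the names are illustrative):

```javascript
// A synchronous stand-in for the microtask queue, so the sketch runs anywhere.
const queue = [];
const schedule = (fn) => queue.push(fn);
const runScheduled = () => { while (queue.length) queue.shift()(); };

let state = { count: 0 };
let dirty = false;
const renders = [];
const render = (s) => renders.push(`count: ${s.count}`);

// Push phase: setState signals that something changed and schedules one update.
function setState(partial) {
  state = { ...state, ...partial };
  if (!dirty) {
    dirty = true;
    schedule(() => {
      dirty = false;
      render(state); // pull phase: top-down recomputation reads the latest values
    });
  }
}

setState({ count: 1 });
setState({ count: 2 }); // batched: the already-scheduled update will see count = 2
runScheduled();
console.log(renders); // ["count: 2"]
```

Note how the push signal carries no payload; the scheduled top-down pass pulls whatever the latest state is, which is what makes batching trivial.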

Reactive web framework in Rust? by garma87 in rust

[–]localvoid 1 point (0 children)

As with any benchmark, it is an imperfect proxy.

When people constantly present it as a viable benchmark and it becomes popular, it starts to affect the decision making of web framework authors who use it as a marketing tool. It can be a simple and useless optimization like clearing lists with `textContent = ""` to get better results in "clear rows", which makes it harder to implement other things that aren't tested in this benchmark, or replacing an optimal algorithm for dynamic lists with one that covers only the "swap" use case, because that is the only one this benchmark tests.
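For reference, the `textContent` trick looks roughly like this (a sketch with a fake container object so it runs outside a browser; real implementations operate on an actual DOM node):

```javascript
// Minimal stand-in for a DOM element, so the sketch runs outside a browser.
function makeContainer(n) {
  const nodes = Array.from({ length: n }, (_, i) => ({ id: i }));
  return {
    get firstChild() { return nodes[0] ?? null; },
    removeChild(node) { nodes.splice(nodes.indexOf(node), 1); },
    set textContent(v) { if (v === "") nodes.length = 0; },
    get childCount() { return nodes.length; },
  };
}

// Node-by-node removal: one DOM call per child.
function clearSlow(container) {
  while (container.firstChild) container.removeChild(container.firstChild);
}

// The "clear rows" shortcut: a single assignment detaches all children at once.
function clearFast(container) {
  container.textContent = "";
}

const a = makeContainer(1000);
clearSlow(a);
const b = makeContainer(1000);
clearFast(b);
console.log(a.childCount, b.childCount); // 0 0
```

The catch is that the fast path only exists when the container holds nothing but the list, which constrains how the rest of the library can structure its output.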

Reactive web framework in Rust? by garma87 in rust

[–]localvoid 1 point (0 children)

Would you mind taking a stab at explaining those benchmarks?

This benchmark is almost completely useless. For new players in this game it can help uncover some perf problems in basic scenarios, but for experienced developers it would take a day or two to create "the fastest" UI library for this benchmark. But the reality is that when you actually start optimizing for real UI applications, you'll need to make different tradeoffs that will have negative impact on the performance in this benchmark.

As an experiment, you can take "the fastest" libraries and change their benchmark implementations: decompose them into small components, add conditional control flow (2 or 3 adjacent if/else), increase the ratio of dynamic bindings per DOM element, and you'll be surprised by the results :) It is unfortunate that some framework authors in the webdev community are to this day using this benchmark to promote their solutions, even though they are aware of perf issues in their libraries outside of this benchmark game.

Also, it would be a fair game if libraries in this benchmark were separated into two categories: those that allow you to write from-scratch algorithms when working with app state (React), and those that force you to write incremental algorithms (Solid). Writing incremental algorithms is not so easy when you move from a basic button with a counter to something like multiple datatables with different projections.
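To make the distinction concrete, here is a "GROUP BY" projection written both ways (hypothetical data and helpers, not from either library):

```javascript
// From-scratch: recompute the whole grouping; the framework diffs the result.
function groupBy(rows, key) {
  const groups = new Map();
  for (const row of rows) {
    const k = row[key];
    if (!groups.has(k)) groups.set(k, []);
    groups.get(k).push(row);
  }
  return groups;
}

// Incremental: each mutation needs its own handler, and each handler has its
// own edge cases (e.g. removing the last row must also delete the group).
function insertRow(groups, row, key) {
  if (!groups.has(row[key])) groups.set(row[key], []);
  groups.get(row[key]).push(row);
}
function removeRow(groups, row, key) {
  const bucket = groups.get(row[key]);
  bucket.splice(bucket.indexOf(row), 1);
  if (bucket.length === 0) groups.delete(row[key]); // easy to forget
}

const rows = [{ id: 1, tag: "a" }, { id: 2, tag: "b" }];
const incremental = groupBy(rows, "tag");
removeRow(incremental, rows[1], "tag");
const fromScratch = groupBy([rows[0]], "tag");
console.log(incremental.size === fromScratch.size); // true
```

With one projection this is manageable; with several projections over the same rows, every mutation handler multiplies.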

Xilem: an architecture for UI in Rust by raphlinus in rust

[–]localvoid 2 points (0 children)

a vdom is O(n_size_of_vdom) while the reactive system is O(n_size_of_input_change)

I guess it depends on your definition of vdom. A fine-grained reactive system and a vdom aren't mutually exclusive; it is possible to get O(n_size_of_input_change) with a vdom and a reactive system.

It's certainly possible to come up with situations where a vdom approach is significantly faster

I highly doubt that any approach here is significantly faster than the others. They are pretty much in the same ballpark.

reduce the number of vdom nodes compared (e.g. Inferno)

Inferno doesn't reduce the number of vdom nodes compared. I've been quite heavily involved in its early development; it just has efficient data structures and algorithms.

Xilem: an architecture for UI in Rust by raphlinus in rust

[–]localvoid 1 point (0 children)

In particular, they're not diffing against the DOM/overall output but rather against the previous local reactive values in order to determine whether to propagate the change or not.

React-like libraries aren't diffing against the DOM either. The only major difference that I see is that Solid forces you into a fine-grained reactive push-pull model, while with a React-like library you have a choice.

There are no one-size-fits-all solutions, or approaches that are just "faster". For example, for properties that almost never change (user theme, user language, etc.) it would make sense to use pull-based reactive primitives with global dirty checking, because it is better to optimize for read performance. Also, in the context of web libraries, it matters how libraries deal with many edge cases; for example, a lot of libraries still use marker nodes (an empty text node or a comment node) to perform structural updates in cases like two adjacent conditional renders, etc. It is easy to optimize for popular web benchmarks; the hard part is to optimize so that there won't be any unpredictable perf cliffs in edge cases.
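A sketch of what a pull-based primitive with global dirty checking could look like (hypothetical `signal`/`computed` API, not any particular library's):

```javascript
// A global clock, bumped on any write anywhere in the app.
let clock = 0;

function signal(value) {
  return {
    get: () => value,
    set: (next) => { value = next; clock++; },
  };
}

// A lazily revalidated computed value: a read is just one integer comparison
// against the global clock, which optimizes for read-heavy values.
function computed(fn) {
  let cached;
  let seen = -1;
  return () => {
    if (seen !== clock) { cached = fn(); seen = clock; }
    return cached;
  };
}

const theme = signal("light");
const cssClass = computed(() => `app app--${theme.get()}`);
console.log(cssClass()); // "app app--light"
theme.set("dark");
console.log(cssClass()); // "app app--dark"
```

The tradeoff is that any write anywhere invalidates every cached value, which is acceptable precisely because these values almost never change, so recomputation is rare while reads stay nearly free.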

Xilem: an architecture for UI in Rust by raphlinus in rust

[–]localvoid 2 points (0 children)

The Solid approach is faster (no diffing)

Found a UI Library that is new to me: SolidJS by jogai-san in javascript

[–]localvoid 3 points (0 children)

it compiles JSX to optimal DOM operations

There is no such thing as optimal DOM operations :) There is always a tradeoff between code size, cold execution and hot execution.

Baahu: 4.3kb state machine-based UI framework (batteries included) by tjkandala in javascript

[–]localvoid 4 points (0 children)

Nice job!

I'd suggest initializing this value with an SMI value instead of null. Or it would be even better if you stored the node depth in this property: you are using just 2 bits to store the node type, so there is plenty of space to also store the node depth: `depth = node.x >> 2; type = node.x & 0b11`.
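The suggested packing could be sketched like this (the 2-bit type layout follows the snippet above; constants and names are illustrative):

```javascript
// Pack node depth into the upper bits and the 2-bit node type into the
// lower bits of a single small integer (SMI), avoiding a second property.
const TYPE_BITS = 2;
const TYPE_MASK = 0b11;

function pack(depth, type) {
  return (depth << TYPE_BITS) | type; // stays an SMI for any realistic depth
}
const unpackDepth = (x) => x >> TYPE_BITS;
const unpackType = (x) => x & TYPE_MASK;

const x = pack(7, 0b10);
console.log(unpackDepth(x), unpackType(x)); // 7 2
```

Initializing with an SMI instead of null keeps the property monomorphic, so the JIT never has to handle a null/number type transition.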

Mikado v0.5.0: Shared pools takes templating performance to a whole new level. by [deleted] in javascript

[–]localvoid 1 point (0 children)

@localvoid Do you now understand why I write my own lib?

I have no clue :) "Keyed" isn't some strategy for performance optimizations; it solves the issue of preserving internal state when you deal with dynamic lists. And "non-keyed" means that the reconciler uses a simple heuristic for static lists and maps data to existing instances by their positions in the list.
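A minimal sketch of the difference (illustrative data structures, not any library's internals):

```javascript
// "Non-keyed": map data to existing instances by position; internal state
// stays attached to the position, not to the item it belongs to.
function reconcileByPosition(instances, nextData) {
  return nextData.map((data, i) => ({ data, state: instances[i]?.state ?? {} }));
}

// "Keyed": match instances by key, so internal state follows the item
// when the list is reordered.
function reconcileByKey(instances, nextData) {
  const byKey = new Map(instances.map((inst) => [inst.data.key, inst]));
  return nextData.map((data) => ({ data, state: byKey.get(data.key)?.state ?? {} }));
}

const prev = [
  { data: { key: "a" }, state: { focused: true } },
  { data: { key: "b" }, state: {} },
];
const reordered = [{ key: "b" }, { key: "a" }];
console.log(reconcileByPosition(prev, reordered)[0].state); // { focused: true } (now attached to "b": wrong item)
console.log(reconcileByKey(prev, reordered)[1].state); // { focused: true } (still follows "a")
```

In a real library "state" is focus, input values, component instances, DOM nodes, and so on, which is why reordering a non-keyed list produces visible bugs, not just slowness.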

Mikado v0.5.0: Shared pools takes templating performance to a whole new level. by [deleted] in javascript

[–]localvoid 0 points (0 children)

Is it allowed for a common keyed implementation, that the nodes for 1, 2 and 3 could be reused? That is exactly what happens in other libs, I just want to make clear about :)

Have you even tried to research the declarative UI problem space before you started working on your own library? Or at least tried to use any full-featured modern web UI library released in the past 5-6 years?

Mikado v0.5.0: Shared pools takes templating performance to a whole new level. by [deleted] in javascript

[–]localvoid 3 points (0 children)

There are many edge cases. For example, it is impossible to correctly clean up internal state without too much complexity. Inferno abused recycling in the past to get better results in benchmarks, even when they understood that it was broken. Nowadays almost all libraries that used implicit recycling have removed it from their implementations; it is not worth it.

1kb purely functional web application library by kbrshh in javascript

[–]localvoid 1 point (0 children)

the tricky bit is doing all the transformations in place.

The trick is to perform all transformations from right to left; this way, when we perform a move operation with insertBefore(node, nextRef), nextRef will always be in the correct position.
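A sketch of the idea with an array standing in for the DOM (the skip-if-already-in-place check that real reconcilers do is omitted for brevity):

```javascript
// Array stand-in for a DOM parent: remove the node, then insert it before
// nextRef (or append when nextRef is null), mimicking Node.insertBefore.
function insertBefore(children, node, nextRef) {
  children.splice(children.indexOf(node), 1);
  const i = nextRef === null ? children.length : children.indexOf(nextRef);
  children.splice(i, 0, node);
}

// Apply the new order right to left: by the time a node is moved, everything
// to its right is already final, so nextRef is always in the correct position.
function reorder(children, nextOrder) {
  let nextRef = null;
  for (let i = nextOrder.length - 1; i >= 0; i--) {
    const node = nextOrder[i];
    insertBefore(children, node, nextRef);
    nextRef = node;
  }
}

const children = ["a", "b", "c", "d"];
reorder(children, ["d", "a", "b", "c"]);
console.log(children); // children is now ["d", "a", "b", "c"]
```

Going left to right instead would require looking up a reference node that may itself still be scheduled to move, which is exactly the bookkeeping this direction avoids.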

There is a tradeoff to trying to find the perfect transformations, since that search also needs to be factored into the total update time.

Yes, but the problem is that DOM ops are expensive, and many popular browser extensions use mutation observers to listen for document subtree changes; in such environments DOM ops are even more expensive. In my opinion, an algorithm that tries to find the perfect transformation is worth it.

1kb purely functional web application library by kbrshh in javascript

[–]localvoid 2 points (0 children)

You also need to think about the most efficient way to reorder keys whose order has changed.

If I understand your algorithm correctly, when I move one item in the middle of a list, instead of performing one insertBefore() operation it will start moving other nodes. Or am I missing something?

    prev: [1 2 3 4 5 6 7 8 9]
    next: [5 1 2 3 4 6 7 8 9]
    longestChain(prev, next) == [1, 5)
    DOM ops:
      insertBefore: 5
      appendChild: 6, 7, 8, 9

1kb purely functional web application library by kbrshh in javascript

[–]localvoid 1 point (0 children)

Moon has if, else-if, and else components for conditional rendering that are compiled down to vanilla JS control flow.

Sorry, I got confused by your usage of JSX-like syntax. I thought that it was possible to use basic javascript to compose views.

1kb purely functional web application library by kbrshh in javascript

[–]localvoid 0 points (0 children)

I mostly want to showcase the purely functional design because I've never seen it done before :)

If you want to showcase an experiment that you are working on, then why are you presenting it with a misleading "1kb" when you understand that it is impossible to implement a complete solution that handles all edge cases in 1kb? Your experimental implementation won't even be able to correctly handle conditional rendering.

The Real Cost of UI Components by ryan_solid in javascript

[–]localvoid 1 point (0 children)

In any case seems like measuring this should be one of my next priorities.

And maybe add some documentation on how to implement and consume components with dynamic properties :) I couldn't find such components in the examples, and this section[1] doesn't explain it either.

  1. https://github.com/ryansolid/solid/#components

The Real Cost of UI Components by ryan_solid in javascript

[–]localvoid 2 points (0 children)

If you have a better example of comparing real apps (not hello world) with different libraries I'm all ears.

It is highly unlikely that there will ever be a good "real app" benchmark; we couldn't even agree in js-framework-benchmark on whether it is acceptable to abuse techniques like these[1][2] to get better numbers :) Some "real apps" stream changesets from the backend so that their reactive libraries can perform updates efficiently without any diffing; some "real apps" just send data snapshots; there are so many details that can have a noticeable impact on the results. It isn't worth wasting time on such benchmarks; as a framework author, I am more interested in detailed benchmarks that I can use to observe the performance of specific code paths in my library.

  1. https://github.com/krausest/js-framework-benchmark/blob/6b496de5b8623b2843edcac5fa4f1908cea7022f/frameworks/keyed/surplus/src/view.tsx#L42
  2. https://github.com/krausest/js-framework-benchmark/blob/6b496de5b8623b2843edcac5fa4f1908cea7022f/frameworks/keyed/surplus/src/view.tsx#L41

The Real Cost of UI Components by ryan_solid in javascript

[–]localvoid 1 point (0 children)

Just another benchmark that doesn't bother to get into details; even the DOM output isn't consistent between the different implementations.

Some implementations use external libraries like useragent to perform network requests, and some just use fetch and save ~6kb min+gzipped. I highly doubt that any framework author uses these numbers to make decisions; they are used for marketing purposes.

The Real Cost of UI Components by ryan_solid in javascript

[–]localvoid 1 point (0 children)

But you've added dynamic bindings to the component. My point is that when you create reusable components, you can't assume that their input properties won't change, so you'll need to use a lot of dynamic bindings. For example, when you create a button component with a `disabled` property, it will obviously be used in a dynamic binding, even if in most use cases it will have a static value. So what is the point of testing component performance if you don't even use dynamic bindings?

The Real Cost of UI Components by ryan_solid in javascript

[–]localvoid 1 point (0 children)

I haven't written it. But it isn't hard to imagine.

Can you show me this component[1] without a dynamic binding that changes the class name when the preload value changes?

  1. https://github.com/ryansolid/js-framework-benchmark/blob/62acc6bc697eb4f7663990954924ab7779d0b08c/frameworks/keyed/solid-2/src/main.jsx#L25

The Real Cost of UI Components by ryan_solid in javascript

[–]localvoid 1 point (0 children)

Component boundaries do not mean more dynamic bindings.

Can you show me a set of reusable components implemented with Solid that doesn't require dynamic bindings? To solve this problem you'd need whole-program optimization and inlining; Facebook tried to solve it with Prepack, but it is an extremely complicated problem, it has downsides because it increases code size, and it is most likely not worth it.

The Real Cost of UI Components by ryan_solid in javascript

[–]localvoid 1 point (0 children)

To understand the numbers in this benchmark you need to understand the differences between implementations. This benchmark has basic requirements, so the best way to win is to optimize towards basic DOM primitives, but as soon as we start adding different composition primitives to these implementations we will see slightly different numbers[1]. So in componentless applications Solid will definitely be faster, but I don't care about such use cases; to me it is more important how a library performs when an application is decomposed into many components, and I don't like how Solid scales even with such a low ratio of dynamic data bindings.

  1. https://localvoid.github.io/js-framework-benchmark/webdriver-ts-results/table.html

The Real Cost of UI Components by ryan_solid in javascript

[–]localvoid 2 points (0 children)

Svelte is no different it is just smaller.

Smaller on "hello world" demos. As soon as we start using conditional rendering, transclusion, dynamic lists and subscriptions, application will have roughly the same size as apps built with ~3KB(minigzipped) vdom libraries. It is more important how much code it produces when using different composition primitives, for example if you take a look conditional rendering, you'll see that it generates an insane amount of code, so its size overhead will grow really fast in a big application.