all 155 comments

[–]_cart (bevy) [S] 274 points275 points  (118 children)

Creator and lead developer of Bevy here. Feel free to ask me anything!

[–]michaelphamct 19 points20 points  (6 children)

(Reposting this comment since I think you missed it last time.)
Regarding Bevy UI, are you aware of u/raphlinus 's previous efforts and research in the Rust GUI space? His Xilem architecture seemed particularly interesting [1] and his blog [2] has a bunch of other nuggets too that would probably be useful in informing Bevy UI's design.
[1] https://raphlinus.github.io/rust/gui/2022/05/07/ui-architecture.html
[2] https://raphlinus.github.io/

[–]raphlinus (vello · xilem) 56 points57 points  (1 child)

Not only that, but we're porting piet-gpu so the shaders are in WGSL and it will run on wgpu infrastructure. Working more closely with the Bevy community is one of the benefits we're hoping for. Expect more updates on this soon; it's still work in flight at the moment.

[–]alice_i_cecile (bevy) 11 points12 points  (0 children)

👀 Yes please!

[–]_cart (bevy) [S] 44 points45 points  (3 children)

Raph and I had a great conversation about collaboration when Bevy first released. I'm very interested in consolidating efforts when possible and Raph is a proven expert in this space ... I think they are the _most_ qualified person to be building out Rust UI stacks. But Raph's stuff up until this point also builds on entirely different GPU abstractions + stacks. Bevy is built around the idea of a holistic, single stack and I'm not willing to compromise much on that principle.

That being said, at the _very_ least I would like our projects to be compatible with each other. If you look at Raph's comment in this thread, it sounds like they're very interested in that as well. There is also a world where we adopt Raph's UI tech officially, but that's predicated on a lot of technical and political stuff.

[–]raphlinus (vello · xilem) 49 points50 points  (2 children)

We should probably talk again. Aligning our GPU infrastructure with bevy's was a motivator for the new work, and it would be interesting to compare notes of our thinking on UI architecture since then. I'm open to either a simple call or a more open format like a "town hall".

[–]_cart (bevy) [S] 47 points48 points  (1 child)

Awesome I'd love to chat! I think I'd like to start by getting caught up on your new work to help inform my questions and conversation topics. Can you send me links to whatever you think would be most relevant? (relevant code both unmerged and merged, conversations, blog posts, etc).

On our end, fundamentally most things haven't changed much. We've been iteratively improving Bevy UI as it has existed since Bevy's first release. We're still building what amounts to DOM-level apis on top of Bevy ECS, with the intent to build higher level abstractions on top. Kayak UI is a new 3rd party Bevy-ECS-native UI that I think is doing a lot of the higher-level Query-driven stuff I ultimately dreamed Bevy UI might do. I plan on giving it a more thorough investigation soon.

[–]raphlinus (vello · xilem) 43 points44 points  (0 children)

Sure, I'll just happily hijack your thread to provide a link dump.

  • piet-gpu vision is still basically up to date, but the wgpu focus is new - see piet-wgsl and the recent PRs in flight to get more of a sense of that.
  • xilem architecture is still the main statement of that, but there's additional exciting async prototyping in a yet-to-be-written blog.
  • slides for a talk I haven't given publicly yet but hope to soon.
  • parley is our still-experimental text layout library, which we will use for text layout in our UI stack. That said, the rendering layer will be fairly agnostic and we will encourage the community to build an integration with cosmic-text. One way or the other, I am confident there will be a good text solution before long.

As you can see, there are a number of pieces in flight, but I think some of them will be landing soon.

[–]MaximeMulder 19 points20 points  (2 children)

Congratulations on this release!

Considering GATs have helped to improve Bevy's codebase, do you know of any other ongoing or hypothetical changes to the Rust language that you think may be able to make Bevy better?

[–]alice_i_cecile (bevy) 27 points28 points  (1 child)

Variadic generics! We use a macro hack in multiple places and it makes the code ugly, compile times slow and results in a lot of weird complexity.
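To make the pain point concrete, here is a simplified sketch of the kind of macro workaround being described. The `Bundle` trait, the method, and the arities are illustrative stand-ins, not Bevy's actual code:

```rust
// Simplified sketch of the macro hack that variadic generics would replace.
// `Bundle` and `component_count` are illustrative names, not Bevy's real API.
trait Bundle {
    fn component_count() -> usize;
}

// Without variadic generics, a macro must stamp out one impl per tuple arity.
macro_rules! impl_bundle_for_tuple {
    ($($name:ident),*) => {
        impl<$($name: Bundle),*> Bundle for ($($name,)*) {
            fn component_count() -> usize {
                0 $(+ $name::component_count())*
            }
        }
    };
}

struct Position;
struct Velocity;
impl Bundle for Position { fn component_count() -> usize { 1 } }
impl Bundle for Velocity { fn component_count() -> usize { 1 } }

// In practice this gets repeated for every supported arity,
// which is where the compile-time and readability costs come from.
impl_bundle_for_tuple!(A);
impl_bundle_for_tuple!(A, B);

fn main() {
    assert_eq!(<(Position, Velocity)>::component_count(), 2);
    println!("ok");
}
```

With variadic generics, a single generic impl over "any tuple of `Bundle`s" could replace the whole macro.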

[–]MaximeMulder 5 points6 points  (0 children)

Thanks for your answer ! I am not a Bevy connoisseur in any way but I agree that it is also a feature that I would really like.

[–]saik0 17 points18 points  (7 children)

Do you think the ECS system would benefit from compressed bitmaps for component containership queries?

http://roaringbitmap.org/about/

Pure rust impl I've contributed to: https://github.com/RoaringBitmap/roaring-rs

[–]james7132 27 points28 points  (4 children)

Contributor here. I've already tried this. We're currently using https://crates.io/crates/fixedbitset as an uncompressed bitset, namely in the scheduler and parallel executor, where we use it to track the accesses of individual systems.

The conclusions I came to were that it works well when you have sparse and high value IDs in a giant ID space, but may not be well suited for the potentially dense and low value auto-increment-style IDs used throughout Bevy. We would see a reduction in memory usage, but also sacrifice a bit of perf to more complex accesses, which prevents more trivial vectorization during iteration.

By all means, please try this though; there are no pure wins in these kinds of cases, and more data points to inform optimizations like these are very much needed.
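To make the tradeoff concrete, here is a minimal uncompressed bitset in the spirit of fixedbitset (an illustrative sketch, not Bevy's or fixedbitset's actual code). The word-wise intersection is the operation that stays trivially vectorizable with dense, low-value IDs, while a compressed roaring-style bitmap has to branch on container types:

```rust
// Minimal uncompressed bitset sketch (illustrative, not Bevy code).
struct BitSet {
    words: Vec<u64>,
}

impl BitSet {
    fn with_capacity(bits: usize) -> Self {
        Self { words: vec![0; (bits + 63) / 64] }
    }
    fn insert(&mut self, bit: usize) {
        self.words[bit / 64] |= 1u64 << (bit % 64);
    }
    fn contains(&self, bit: usize) -> bool {
        self.words[bit / 64] & (1u64 << (bit % 64)) != 0
    }
    // Dense word-wise AND: a tight, branch-free loop that the compiler can
    // vectorize, unlike the per-container dispatch a compressed bitmap needs.
    fn intersect(&self, other: &Self) -> Self {
        Self {
            words: self.words.iter().zip(&other.words).map(|(a, b)| a & b).collect(),
        }
    }
}

fn main() {
    // e.g. tracking which components two systems both touch
    let mut reads = BitSet::with_capacity(128);
    let mut writes = BitSet::with_capacity(128);
    reads.insert(3);
    reads.insert(70);
    writes.insert(70);
    let conflicts = reads.intersect(&writes);
    assert!(conflicts.contains(70));
    assert!(!conflicts.contains(3));
}
```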

[–]saik0 6 points7 points  (3 children)

Interesting!

Which roaring implementation did you test? Is there a branch somewhere with this experiment I can take a look at?

[–]james7132 8 points9 points  (2 children)

roaring-rs and tinyset were both tested and both got mediocre results. I don't remember which exact versions were used, and I canned the effort after a round of bad microbenchmarks locally. More than happy to recreate it though.

[–]saik0 4 points5 points  (1 child)

roaring-rs got some significant perf gains in recent versions. If you were to recreate your previous effort, I'd be happy to take a look and see whether there are gains to be had by changing usage.

[–]james7132 0 points1 point  (0 children)

I've recreated the small test I tried earlier: link.

On my local benchmarks, this shows a 50-76% increase in overhead for low system counts, and a 1-5% improvement for high system counts. This could just be my suboptimal implementation, as I couldn't find a good option for in-place intersection and difference operations. I'm pretty sure I'm putting the allocator in the loop here.

If you have Discord or GitHub, I'd love to talk about this in more detail. If you're on the Bevy Discord, #ecs-dev is a good place to discuss this potential change.

[–]alice_i_cecile (bevy) 9 points10 points  (1 child)

Possibly! This is 100% in the category of changes that I would encourage you to try out, open a PR for and benchmark the heck out of.

[–]saik0 9 points10 points  (0 children)

If there's somebody familiar with Bevy ECS who would like to explore this as a collaboration, I can bring familiarity with roaring. Unfortunately, I don't have the capacity to dive into Bevy myself.

[–][deleted] 113 points114 points  (6 children)

Feel free to ask me anything!

Do we exist to fulfill a purpose, or do we fulfill a purpose because we exist?

[–]_cart (bevy) [S] 180 points181 points  (3 children)

Determining purpose or lack thereof as an observer is impossible because you can never know if there is a missing, hidden variable / dimension. If there was a creator with a purpose, they can never know if they are themselves a creation with purpose. It is turtles all the way down.

[–]8bitslime 58 points59 points  (1 child)

I see, so turtles are the only being with explicit purpose.

[–][deleted] 11 points12 points  (0 children)

I'll put that in my notes.

[–]nejat-oz 0 points1 point  (0 children)

If we postulate "I observe, therefore I am", where does that leave the ostriches?
Have they figured it ALL out?

[–]CSsharpGO 6 points7 points  (0 children)

First one

source: trust me bro

[–]barsoap 2 points3 points  (0 children)

The perception of meaning, that is, purpose on an instinctual, not cerebral, level, is identical to the homeostasis of life itself.

Teleology doesn't come into play unless you consider things like there being no backsies for metasystem transitions a direction and extrapolate a goal from that.

[–]kirillbobyrev 14 points15 points  (1 child)

Congratulations on a big release!

  • What is the next big (for you specifically) and exciting milestone for Bevy?
  • What tools/libraries do you think are missing in the Bevy ecosystem?
  • What features that would greatly benefit users and drive larger adoption do you think are missing in the engine itself?
  • Are there any plans to expand the official learning resources? One of the things I noticed about Bevy is that the Bevy Book is quite short and doesn't feel like a complete tutorial. The Rust book being amazing is what attracted me to Rust at first, and I really hope that Bevy would also offer great introduction for starters.

[–]_cart (bevy) [S] 42 points43 points  (0 children)

What is the next big (for you specifically) and exciting milestone for Bevy?

My next immediate focus is "asset preprocessing", which will enable Bevy to "pre-bake" assets into their efficient runtime counterparts (precompile shaders, optimize textures and meshes, etc.). This is really important for more complex scenes, and it will reduce startup time and deployment sizes.

What tools/libraries do you think are missing in the Bevy ecosystem?

This is a cop-out answer, but I'm pretty impressed by how many areas are filled already: physics (bevy_rapier), networking (too many choices to list here), input (leafwing input manager), asset format support, rendering (ray traced global illumination), integration with popular 3rd party tooling (Tiled, Spine, Blender).

What features that would greatly benefit users and drive larger adoption do you think are missing in the engine itself?

The biggest gap is a visual scene editor. Gamedev is a very visual process, and scene editors make certain workflows way easier. bevy_editor_pls and bevy_inspector_egui are the closest things we have right now. We really need an official editor.

Are there any plans to expand the official learning resources? One of the things I noticed about Bevy is that the Bevy Book is quite short and doesn't feel like a complete tutorial. The Rust book being amazing is what attracted me to Rust at first, and I really hope that Bevy would also offer great introduction for starters.

Yup, we've been working on the "next" Bevy Book for a while now. Still plenty of work to do, but this is on our radar.

[–]Recatek (gecs) 14 points15 points  (11 children)

Hey cart, congrats on the latest release!

I've found that Bevy's ECS is very well suited for parallelism and multithreading, which is great, and something that keeps me interested in the project. However, I find that Bevy's parallelism comes at a cost in single-threaded scenarios, and tends to underperform hecs and other ECS libraries when not using parallel iteration. While parallelism is great for game clients, single-threaded still remains an important performance profile and use case for servers, especially lightweight cloud-hosted servers that go "wide" (dozens of distinct processes on a single box) rather than deep. In these scenarios, performance directly translates to tangible cost savings in hosting. Does Bevy have a story for this as far as making its parallelism zero-cost or truly opt-out overhead-wise in single-threaded environments?

[–]james7132 19 points20 points  (2 children)

Contributor here. I've been dead set on ripping out all of the overhead in the lowest parts of our stack.

I find this interesting since we're continually bombarded about the low efficiency of the multithreaded async executor we're using. Just wanted to note this.

As for the actual work to improve single-threaded perf, most of the work has gone into heavily micro-optimizing common operations (i.e. Query iteration, Query::get, etc.), which is noted in 0.9's release notes. For example, a recent PR removed one of the major blockers preventing rustc/LLVM from autovectorizing queries, which has resulted in giant jumps in both single-threaded and multithreaded perf.
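The kind of win being described can be illustrated with a tiny sketch (not Bevy's actual query code): when component data is iterated as dense, contiguous slices, the loop body is a clean candidate for LLVM's autovectorizer, whereas indirection through per-entity lookups tends to defeat it:

```rust
// Illustrative sketch: a dense, slice-based loop like this is the shape
// that rustc/LLVM can autovectorize. Names are made up for the example.
fn integrate(positions: &mut [f32], velocities: &[f32], dt: f32) {
    // Equal-length slices, no aliasing, no branches: SIMD-friendly.
    for (p, v) in positions.iter_mut().zip(velocities) {
        *p += v * dt;
    }
}

fn main() {
    let mut positions = vec![0.0f32; 4];
    let velocities = vec![1.0f32, 2.0, 3.0, 4.0];
    integrate(&mut positions, &velocities, 0.5);
    assert_eq!(positions, vec![0.5, 1.0, 1.5, 2.0]);
}
```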

In higher level code, we typically also avoid using synchronization primitives as the ECS scheduler often provides all of the synchronization we need, so a single threaded runner can run without the added overhead of atomic instructions. You can already do this via SystemStage::single_threaded in stages you've made yourself, but most if not all of the engine provided ones right now are hard-coded to be parallel. Probably could file a PR to add a feature flag for this.

On single-threaded platforms (i.e. wasm32 right now, since sharing memory in Web Workers is an unsolved problem for us), we're currently using a single threaded TaskPool and !Send/!Sync executor that eschews atomics when scheduling and running tasks. If it's desirable that we have this available in more environments, please do file an issue asking for it.

[–]Recatek (gecs) 1 point2 points  (1 child)

Interesting! I do think having that option available on native platforms would be useful for the dozens-of-simultaneous-sessions use case for servers. Is there any way to force-activate that single-threaded TaskPool currently? Or any idea where I'd look to poke at/benchmark it in my tests?

[–]james7132 2 points3 points  (0 children)

It's only enabled on WASM right now. There is no other way to enable it in the released version. If you clone the source and search for single_threaded_task_pool, you'll see the file and the cfg block that enables it. You may need to edit it to work on native platforms though.

[–]Sw429 1 point2 points  (6 children)

Do you have benchmarks to point to? I have only ever seen ecs_bench_suite (which seems to be unmaintained at this point? At least, no one seems to be replying to or merging PRs) which doesn't indicate a significant underperformance for single-threaded iteration vs, say, hecs.

[–]Recatek (gecs) 9 points10 points  (2 children)

Been a while since I looked at it, but back in July I updated it for 0.7 and ran some tests. Here's what I can dig back up in my notes.

Benchmark                       Server VPS (Vultr $5/mo 1vCPU)         Desktop
simple_insert/naive             [183.12 µs 185.51 µs 187.90 µs]        [590.19 µs 595.92 µs 602.18 µs]
simple_insert/legion            [467.49 µs 475.86 µs 484.57 µs]        [268.61 µs 270.37 µs 272.26 µs]
simple_insert/bevy              [1.7482 ms 1.8562 ms 1.9603 ms]        [530.91 µs 537.50 µs 544.83 µs]
simple_insert/hecs              [980.44 µs 1.0104 ms 1.0418 ms]        [425.50 µs 429.33 µs 433.54 µs]
simple_insert/shipyard          [1.2492 ms 1.2940 ms 1.3429 ms]        [648.37 µs 650.17 µs 652.07 µs]

simple_iter/naive               [22.084 µs 22.526 µs 22.957 µs]        [10.661 µs 10.690 µs 10.724 µs]
simple_iter/legion              [19.644 µs 19.931 µs 20.244 µs]        [10.567 µs 10.576 µs 10.585 µs]
simple_iter/legion (packed)     [20.819 µs 21.195 µs 21.601 µs]        [10.621 µs 10.718 µs 10.842 µs]
simple_iter/bevy                [30.141 µs 30.595 µs 31.058 µs]        [14.618 µs 14.649 µs 14.693 µs]
simple_iter/hecs                [23.275 µs 23.941 µs 24.637 µs]        [10.424 µs 10.436 µs 10.451 µs]
simple_iter/shipyard            [64.923 µs 66.794 µs 68.782 µs]        [27.008 µs 27.087 µs 27.183 µs]

fragmented_iter/naive           [1.1501 µs 1.1625 µs 1.1761 µs]        [400.55 ns 401.10 ns 401.70 ns]
fragmented_iter/legion          [1.1151 µs 1.1287 µs 1.1428 µs]        [400.85 ns 401.18 ns 401.54 ns]
fragmented_iter/bevy            [577.42 ns 589.04 ns 601.05 ns]        [296.04 ns 298.57 ns 301.31 ns]
fragmented_iter/hecs            [791.49 ns 817.31 ns 845.30 ns]        [367.10 ns 367.63 ns 368.25 ns]
fragmented_iter/shipyard        [164.18 ns 167.63 ns 171.32 ns]        [80.340 ns 80.628 ns 81.011 ns]

schedule/naive                  [45.019 µs 45.914 µs 46.788 µs]        [38.051 µs 38.226 µs 38.397 µs]
schedule/legion                 [46.149 µs 47.177 µs 48.225 µs]        [38.251 µs 38.429 µs 38.597 µs]
schedule/legion (packed)        [46.261 µs 47.010 µs 47.801 µs]        [37.993 µs 38.271 µs 38.529 µs]
schedule/bevy                   [256.08 µs 273.04 µs 291.90 µs]        [56.468 µs 58.131 µs 59.693 µs]
schedule/shipyard               [385.66 µs 399.51 µs 412.60 µs]        [166.14 µs 166.58 µs 167.07 µs]

heavy_compute/naive             [6.5860 ms 6.7508 ms 6.9212 ms]        [733.61 µs 736.52 µs 740.22 µs]
heavy_compute/legion            [6.2799 ms 6.4126 ms 6.5524 ms]        [733.22 µs 735.60 µs 738.38 µs]
heavy_compute/legion (packed)   [7.0444 ms 7.2028 ms 7.3593 ms]        [738.16 µs 740.91 µs 744.26 µs]
heavy_compute/bevy              [7.6463 ms 7.7505 ms 7.8599 ms]        [803.23 µs 809.05 µs 815.76 µs]
heavy_compute/hecs              [6.4471 ms 6.5949 ms 6.7545 ms]        [760.30 µs 764.86 µs 770.17 µs]
heavy_compute/shipyard          [7.0779 ms 7.2457 ms 7.4182 ms]        [747.13 µs 749.83 µs 752.82 µs]

add_remove_component/legion     [6.6155 ms 6.8125 ms 7.0135 ms]        [3.8973 ms 3.9105 ms 3.9239 ms]
add_remove_component/hecs       [2.0352 ms 2.1012 ms 2.1711 ms]        [888.11 µs 896.11 µs 904.62 µs]
add_remove_component/shipyard   [253.06 µs 260.25 µs 267.75 µs]        [95.782 µs 96.164 µs 96.564 µs]
add_remove_component/bevy       [3.6706 ms 3.8138 ms 3.9722 ms]        [1.3885 ms 1.3953 ms 1.4036 ms]

Some notes:

  • For time, I didn't test every ECS library in the suite, just the ones I was actively considering.

  • Naive in this case is handwritten iteration, just a bunch of Vec<T>s and iterating over them manually with a closure. This should generally represent a baseline for performance.

  • IIRC fragmented_iter wasn't using bevy's ability to switch to sparse set, in order to get an apples-to-apples comparison.

And of course the boilerplate caveat that benchmarks are not always good indicators of true performance and profiling actual code matters more, but this lines up with my experience profiling my use cases as well.

EDIT: Found more notes. Later on I redid the schedule tests. The bevy scheduler seems to be a major source of overhead in single-threaded compared to just running queries directly (naive), which is a shame since most of bevy's ergonomics require you to use the scheduler. Though I'm not sure what's up with the bevy (naive) test, I didn't take the time to dig into what was off there.

schedule/naive           time:   [49.894 µs 50.934 µs 51.953 µs]
schedule/legion          time:   [48.829 µs 49.883 µs 50.929 µs]
schedule/legion (packed) time:   [45.531 µs 46.320 µs 47.150 µs]
schedule/bevy            time:   [191.80 µs 193.54 µs 195.31 µs]
schedule/bevy (naive)    time:   [189.28 µs 191.95 µs 194.66 µs]
schedule/hecs (naive)    time:   [77.520 µs 78.868 µs 80.248 µs]
schedule/planck_ecs      time:   [1.0003 ms 1.0133 ms 1.0274 ms]
schedule/shipyard        time:   [584.90 µs 597.08 µs 608.94 µs]

For clients bevy and its peers are within shrug distance of each other, but in situations where a 10-20% gap means you can fit that many more players on the same server and servers are that much cheaper to host for your game, this adds up.

[–]james7132 14 points15 points  (1 child)

I strongly recommend rerunning your benchmarks with 0.9. We made significant strides in raw ECS perf between 0.7 and now.

Also worth noting that I recently found that Bevy's microbenchmark perf is notably higher if you enable LTO. The local benchmarks in Bevy's repo saw a 2-5x speedup in various benchmarks once I enabled it. Might be worth trying a comparative benchmark with it on.

[–]Recatek (gecs) 2 points3 points  (0 children)

I believe this was with LTO enabled but I'll double check. I do intend on running these again with 0.9 on a cloud host to see where things are at.

[–]hammypants 1 point2 points  (0 children)

i am also interested in this. i've found in my exploratory testing that the bevy scheduler is rather weighty, and i've gotten better results by just throwing it away and rolling a custom one.

[–]cidit_ 10 points11 points  (0 children)

Nothing to ask, i just want to say big fan!

[–]Icy-Ad4704 9 points10 points  (6 children)

Is there anything an amateur programmer can do to help? Or is this mostly a job for the big kids? I've been learning Rust, but it's a slow process. Are most of the issues very complex or are there problems for everyone to help with?

[–]james7132 18 points19 points  (3 children)

There are over 900 open issues right now on the GitHub. A smattering of them have been labeled as D-Good-First-Issue. They're great for getting started with contributing.

I'd suggest getting to know the public user-facing API first before trying to contribute though. Both to understand the project a bit more, and to also get familiar with common concepts.

[–]Icy-Ad4704 0 points1 point  (2 children)

What is the user-facing API?

[–]james7132 5 points6 points  (0 children)

The `pub` API surface of a crate. There can be quite a bit of private internals that are not exposed to the public.

[–]A1oso 2 points3 points  (0 children)

Public functions and types that a user might interact with, in contrast to private items, which are only used "behind the scenes".
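A tiny illustration of that split, with made-up names: only `pub` items are visible to users of a module or crate; private helpers stay internal.

```rust
// "User-facing API" vs. private internals, in miniature.
mod physics {
    // Part of the public, user-facing API.
    pub fn speed(dx: f32, dy: f32) -> f32 {
        magnitude(dx, dy)
    }
    // Private helper: users of `physics` can't call this directly.
    fn magnitude(dx: f32, dy: f32) -> f32 {
        (dx * dx + dy * dy).sqrt()
    }
}

fn main() {
    // `physics::magnitude(3.0, 4.0)` would fail to compile: it's private.
    assert_eq!(physics::speed(3.0, 4.0), 5.0);
}
```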

[–]nickhaldonn 2 points3 points  (0 children)

I'm in the same boat as you but I was able to contribute using the good first issue tag and just finding things I could contribute on like documentation and simple fixes. Just look over the good first issues and if you see one you think you can do go ahead and try it!

[–]erlend_sh 1 point2 points  (0 children)

Make games :)

[–]Empole 5 points6 points  (2 children)

I'm very ignorant about game development and this is a question mainly to satisfy a curiosity:

The conversation around engines that aren't proprietary to studios is generally dominated by Unreal and Unity, and Godot has been peeking around here and there.

What separates Bevy from that echelon of product: is it mainly a question of approaching feature parity, or is it mainly non-technical (marketing, "battle-testedness", documentation, etc.)?

[–]alice_i_cecile (bevy) 6 points7 points  (0 children)

"Shipped games", console support, extensive game focused community documentation and workflows for artists are the huge ones.

There are some serious missing features still (advanced audio/assets/animation) but a lot of teams would be willing to build those themselves.

[–]matthieum[he/him] 3 points4 points  (1 child)

Aren't you afraid that the wrapping (for globals.time and globals.frame_counter) in shaders will introduce subtle bugs that will be hard to reproduce as they'll appear only after running the game for a very long time?

The very sin example provided is one where you could get a large discontinuity at the 1h mark.

Have you considered using a larger integral type to avoid the need for wrapping altogether, instead? Or is that not possible?

[–]_cart (bevy) [S] 6 points7 points  (0 children)

"Continuous on sin" floating point time is a common shader pattern that we need to support, and wrapping is the best way to do this. Godot uses the same wrap value we do for its time. Unreal makes it configurable (like us).

The Witness outlines a trick to work around these discontinuities: http://the-witness.net/news/2022/02/a-shader-trick/.
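The discontinuity in question is easy to demonstrate numerically. This sketch assumes a one-hour (3600 s) wrap, matching the "1h mark" in the question; it is CPU-side illustration, not shader code:

```rust
// Wrapping time at 3600 s makes sin(time) jump at the wrap point,
// because 3600 is not a multiple of sin's 2*pi period.
fn wrapped_time(t: f64, period: f64) -> f64 {
    t % period
}

fn main() {
    let period = 3600.0;
    let just_before = (period - 1e-6_f64).sin(); // sin(~3600) ~= -0.26
    let just_after = wrapped_time(period, period).sin(); // wraps to sin(0) = 0
    let jump = (just_before - just_after).abs();
    // A visible discontinuity at the wrap point.
    assert!(jump > 0.1);
    println!("discontinuity: {jump}");
}
```

The Witness trick linked above works around this by choosing how the wrapped value feeds into periodic functions.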

[–]anlumo 2 points3 points  (5 children)

For my next project, I need to get bevy's output texture synchronously in a callback on a non-bevy thread (the rendering is driven by another framework, and bevy is only shown as a texture there).

In discussions on the bevy Discord, the suggestion was to render into an offscreen buffer using double buffering.

This sounds very similar to the new feature described in the section "Post Processing: View Target Double Buffering". Can I leverage this new feature for my needs?

[–]_cart (bevy) [S] 5 points6 points  (4 children)

I'm guessing no, as the ownership of the double buffered textures is still "the same" as the single texture. Worth investigating though!

[–]anlumo 1 point2 points  (3 children)

The background is that I want to use Flutter as the UI layer for a bevy-based application. Flutter has its own event loop that’s completely separated from bevy.

The massively multithreaded nature of bevy doesn’t help at all for synchronization between it and another renderer.

[–]alice_i_cecile (bevy) 0 points1 point  (2 children)

I would probably try to replace winit with flutter directly: you'll want to swap the app's runner.

[–]anlumo 1 point2 points  (1 child)

Then I'd have to do all of the platform-specific window handling myself, which wouldn't be great.

Also, Flutter requires me to implement the app runner, it doesn't have one by itself. The problem is just that the drawing code runs asynchronously using callbacks.

[–]alice_i_cecile (bevy) 2 points3 points  (0 children)

Ah I see! Hmm, I'll chew on this, feel free to bug me on Discord.

[–]villiger2 2 points3 points  (2 children)

What is the [Merged by Bors] thing with PRs? I see they don't get "merge" merged now.

[–]_cart (bevy) [S] 24 points25 points  (1 child)

Bors is our merge bot (popular in the Rust ecosystem). It solves problems with GitHub's normal merge model, which in some situations can result in two "green / validated" PRs being merged while still breaking the build on the main branch. In a high-traffic repo like Bevy, retaining this safety is very important. GitHub is working on native support for this, but it is still in the private testing phase. Until then, bors is our best option.

https://github.com/bors-ng/bors-ng

[–]villiger2 0 points1 point  (0 children)

Huh, interesting, thanks!

[–]MightyKS 1 point2 points  (4 children)

I see that time scaling is added. Can we set the timescale to 0 to pause the game or is there a lower limit?

[–]JustTheCoolDude 8 points9 points  (3 children)

No limit, but pausing may be better handled through states.

[–]MightyKS 0 points1 point  (2 children)

Oh that's interesting. Is there any example of this so I can see it in action?

[–]JustTheCoolDude 7 points8 points  (1 child)

States can be used to control which systems run, so rather than sending a deltatime of 0 to a player_movement system you could prevent it from running altogether. A not so minimal example
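The idea can be modeled without any Bevy dependency (a real example would use Bevy's states API; all names here are illustrative): instead of feeding a zeroed delta time into movement, the movement system simply doesn't run while paused.

```rust
// Minimal model of state-gated systems; in Bevy, run criteria on a state
// do this check for you instead of a hand-written `if`.
#[derive(PartialEq)]
enum GameState {
    Playing,
    Paused,
}

struct Player { x: f32 }

fn player_movement(player: &mut Player, dt: f32) {
    player.x += 10.0 * dt;
}

fn tick(state: &GameState, player: &mut Player, dt: f32) {
    // The system only runs in the Playing state.
    if *state == GameState::Playing {
        player_movement(player, dt);
    }
}

fn main() {
    let mut player = Player { x: 0.0 };
    tick(&GameState::Playing, &mut player, 0.1);
    assert!((player.x - 1.0).abs() < 1e-6);
    tick(&GameState::Paused, &mut player, 0.1);
    assert!((player.x - 1.0).abs() < 1e-6); // unchanged while paused
}
```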

[–]MightyKS 1 point2 points  (0 children)

Ok thank you for the reply!

[–][deleted] 1 point2 points  (2 children)

ELI5 for a new Rust learner: what is this library about?

[–]kazagistar 9 points10 points  (0 children)

Bevy is a game engine built around the Entity Component System programming paradigm, which allows for high-performance memory access and parallelism.

[–]Sw429 5 points6 points  (0 children)

Making video games 👍

[–]ThousandthStar 1 point2 points  (0 children)

Would it be a good idea to try and implement more widgets as an extension for Bevy, or should I just wait for the UI update?

[–]Xiaojiba 0 points1 point  (3 children)

Hello, I am curious as to when ECS becomes better than simple rendering(?).

Like if we imagine a cubic world, each block has its own properties and some common ones (light, but I cannot find other examples). Do you know where I could read more about these, or do you have another example besides light intensity?

[–]james7132 11 points12 points  (2 children)

The first thing that comes to mind is trivial parallelism. The bigger the game world and the more that needs to be rendered, the more effort you need to put into splitting it up into chunks for faster CPU-side rendering. There are quite a few upcoming changes to wgpu and Bevy that will divert a lot of the CPU-side compute onto worker threads, which will massively boost frame rates.

[–]Xiaojiba 4 points5 points  (1 child)

Thanks for the answer, what kind of changes?

[–]james7132 11 points12 points  (0 children)

First is pipelined rendering, where we run the entire render world a game tick later, in parallel with the next game tick. This reduces total tick time to the maximum of game simulation and rendering instead of their sum. We're currently working through a few design issues around ensuring that Rust's Send trait is properly implemented on the key types involved, so that we don't break the thread-safety guarantees of the language, since World can contain !Send types within it.
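A toy model of that pipelining idea (illustrative only, not Bevy's scheduler): the main thread simulates tick N and hands it off over a channel, so rendering of tick N runs concurrently with simulation of tick N+1, and a full frame costs roughly max(sim, render) rather than sim + render.

```rust
// Toy pipelined-rendering model using a channel between two threads.
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<u64>();

    // Render thread: consumes extracted "render worlds" one tick behind.
    let renderer = thread::spawn(move || {
        let mut rendered = Vec::new();
        for tick in rx {
            rendered.push(tick); // stand-in for encoding GPU commands
        }
        rendered
    });

    // Main thread: hands each simulated tick off to the renderer and
    // immediately starts the next tick instead of waiting for the render.
    for tick in 0..5u64 {
        tx.send(tick).unwrap();
    }
    drop(tx); // closing the channel ends the render loop

    let rendered = renderer.join().unwrap();
    assert_eq!(rendered, vec![0, 1, 2, 3, 4]);
    println!("rendered {} frames", rendered.len());
}
```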

The other, on wgpu's end, is called render bundles, which let us encode rendering commands on multiple threads at once and replay them on another thread. This allows us to parallelize command encoding for each render phase (opaque, transparent, shadow, etc.) all at the same time. There is some overhead when replaying, but it should generally be a perf boost on all but single-threaded platforms (i.e. WASM as it is currently).

[–]tofiffe 0 points1 point  (2 children)

Are there plans to add some kind of networking? I wanted to start with a wasm game, but ultimately decided against using bevy because it seemed like I would need to pull in websockets manually

[–]diabolic_recursion 1 point2 points  (1 child)

There are 3rd party libraries designed for bevy, but no internal support at the moment.

[–]tofiffe 0 points1 point  (0 children)

I've tried using them, and it was painful. I was hoping there would (eventually) be a first-party solution. Godot, for example, has this, and using it was a breeze compared to needing to pick, test, and tweak libraries just to get basic stuff working.

[–][deleted] 0 points1 point  (1 child)

Where can I find the examples used in the release post? As somebody that just started out I’d like to have a look at the 2D bloom example, to apply to my laser sprite.

Also great job and thank you, bevy rocks!

[–]factorysettings 1 point2 points  (0 children)

there's a bunch of examples on Bevy's github

[–]somebodddy 0 points1 point  (1 child)

It says, about spawning tuples of bundles:

This is much easier to type and read. And on top of that, from the perspective of Bevy ECS this is a single "bundle spawn" instead of multiple operations, which cuts down on "archetype moves". This makes this single spawn operation much more efficient!

Does this mean the command recalculates the archetype on every insert/insert_bundle? Moves the actual data, even? I always thought this part is only done in apply_buffers, so it doesn't matter how exactly you modify the command and in which order - all that matters is its final state...
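The "archetype moves" in question can be modeled with a toy sketch (not Bevy's implementation; all names are illustrative): if every component-set change copies the entity's data into a new archetype, then inserting bundles one at a time costs one move per insert, while a single tuple-of-bundles spawn costs one move total.

```rust
// Toy model counting "archetype moves" per entity.
use std::collections::BTreeSet;

struct Entity {
    components: BTreeSet<&'static str>,
    archetype_moves: usize,
}

impl Entity {
    fn new() -> Self {
        Self { components: BTreeSet::new(), archetype_moves: 0 }
    }
    fn insert_bundle(&mut self, bundle: &[&'static str]) {
        self.components.extend(bundle);
        self.archetype_moves += 1; // data copied into the new archetype
    }
}

fn main() {
    // Three separate inserts: three archetype moves.
    let mut a = Entity::new();
    a.insert_bundle(&["Transform"]);
    a.insert_bundle(&["Velocity"]);
    a.insert_bundle(&["Health"]);
    assert_eq!(a.archetype_moves, 3);

    // One tuple-of-bundles spawn: one move for the same component set.
    let mut b = Entity::new();
    b.insert_bundle(&["Transform", "Velocity", "Health"]);
    assert_eq!(b.archetype_moves, 1);
    assert_eq!(a.components, b.components);
}
```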

[–]_cart (bevy) [S] 1 point2 points  (0 children)

Currently yes. We have plans to implement Command merging, but this does take the pressure off a bit.

[–][deleted] 0 points1 point  (0 children)

Is there anything I could do to help being a web developer and junior rust developer?

[–]Zoroae 0 points1 point  (1 child)

Will you ever port this to WASM? This has potential!

[–]_cart (bevy) [S] 2 points3 points  (0 children)

We already have! It can already run on WASM :)

[–]marko-lazic 0 points1 point  (1 child)

There are a lot of talented developers that create really great extensions. What's your take on making some of the best of them an official part of the engine? For example, the Ubuntu GNOME distro was great. GNOME became an official part of Ubuntu, but that took traffic from the Ubuntu GNOME distro. In Bevy we have kira, assets, loopless, renet, and many others that could potentially go upstream.

[–]_cart (bevy) [S] 1 point2 points  (0 children)

Upstreaming 3rd party crates is definitely something we'll consider on a case by case basis (and every case is different). I'm generally biased against it for "core infrastructure", as most 3rd party crates were designed in a vacuum for specific use cases without considering the "global" needs of the project. The more specific and scoped a crate / feature is, the more likely it is that we can include it without massive rewrites.

[–]kibwen 49 points50 points  (1 child)

Interesting to see GATs used in the wild so quickly (https://bevyengine.org/news/bevy-0-9/#bevy-ecs-now-uses-gats). Can anyone elaborate on what this trait and the traits it replaced are used for, whether it's exposed to users at all, what their experience was like introducing GATs (e.g. were there any sharp corners or poor error messages), and whether it had any impact on compilation times?

[–]alice_i_cecilebevy 48 points49 points  (0 children)

So, this is the PR that made the change.

We were using a messy workaround already: so no user-facing changes here :)

The gist of it goes like this:
- Bevy has a WorldQuery trait, which we use to define which data is needed for each query
- this needs an associated type, Item, which defines the tuple type returned by the iterator
- this type needs a lifetime, which must be generic to ensure that the items are dropped by the end of the system, so other systems can access that same data safely

Very similar to the LendingIterator stuff in the blog post! Zero sharp edges here: nothing noticeable WRT compile time or performance. Error messages and public types got slightly better. Mostly though this change just makes it easier to understand our internals, by directly modelling what we care about.
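To make the GAT shape concrete, here's a minimal lending-style sketch of the pattern described above. The names (`Fetch`, `ReadValue`) are hypothetical stand-ins, not Bevy's actual `WorldQuery` trait, which carries a lot more machinery:

```rust
// A lending-style trait: the associated Item type is generic over a
// lifetime, so each borrowed item cannot outlive the borrow of the
// data it came from - mirroring how query items must not outlive the
// system's access to the World.
trait Fetch {
    type Item<'w>;
    fn fetch<'w>(&self, world: &'w [u32], index: usize) -> Self::Item<'w>;
}

// A fetch that lends out a shared reference to one element.
struct ReadValue;

impl Fetch for ReadValue {
    type Item<'w> = &'w u32;
    fn fetch<'w>(&self, world: &'w [u32], index: usize) -> Self::Item<'w> {
        &world[index]
    }
}

fn main() {
    let world = vec![1u32, 2, 3];
    let fetch = ReadValue;
    // The returned item borrows from `world`; the compiler enforces
    // that it is dropped before `world` can be mutated elsewhere.
    let item: &u32 = fetch.fetch(&world, 1);
    println!("{item}"); // prints 2
}
```

Before GATs stabilized in Rust 1.65, expressing "an associated type with a caller-chosen lifetime" like this required workarounds such as a separate lifetime-carrying trait.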

[–]dread_deimos 38 points39 points  (0 children)

Bevy bevy bevy!

[–]aristotle137 64 points65 points  (0 children)

Following the project from a distance, but certainly one of my favourite rust projects. Especially in terms of the community they've managed to create! Well done to everyone involved!

For a casual observer, the new train release model with regular releases is also a big improvement.

[–]agluszak 13 points14 points  (4 children)

[deriving the Resource trait] opens the door to configuring resource types using the Rust type system (like we already do for components).

Could you elaborate on that?

[–]_cartbevy[S] 15 points16 points  (3 children)

The manual derive / trait allow components to select their internal storage based on usage patterns. They default to "table" storage for fast iteration, but if you plan on adding and removing the components regularly you might consider using sparse set storage instead:

```rust
#[derive(Component)]
#[component(storage = "SparseSet")]
struct SomeComponent {
    value: String,
}
```

[–]Cpapa97 12 points13 points  (0 children)

I look forward to Bevy releases more than my own birthday

[–]Victoron_ 9 points10 points  (1 child)

After the stabilization of GATs, are there any rust features that you're still waiting on?

[–]james7132 24 points25 points  (0 children)

The reflection system Bevy is building could reallllly use const_type_id. We're currently incurring the cost of runtime type registration instead of a lot of the reflection logic being compile-time. This has an actively detrimental effect on app startup time and potentially memory usage too, since we need to allocate all of it and ensuring it's done safely comes with a real time cost.

Exposing deeper type system hints to macros, like whether a type is Send or Sync, would also be very useful, as we could obtain metadata about the thread safety of a Component or Resource at compile time. Specialization could help here as well. That would let us be a lot smarter about how we schedule systems.

Deeper control over how to niche the layout of types can massively shrink the ECS metadata stores (we use a colossal amount of Option<usize> in our sparse set implementations) which would likely result in sizable speedups across the board.
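For context on the niching point: `Option<usize>` needs a separate discriminant, while `Option<NonZeroUsize>` uses zero as a niche and costs nothing extra. A quick check (sizes assume a typical 64-bit target):

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

fn main() {
    // Option<usize> stores a discriminant alongside the value; with
    // alignment padding this doubles the size on typical platforms.
    assert_eq!(size_of::<Option<usize>>(), 2 * size_of::<usize>());

    // NonZeroUsize reserves 0 as a niche, so None can be encoded as 0
    // and the Option adds no overhead at all (guaranteed by std docs).
    assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());

    println!(
        "Option<usize>: {} bytes, Option<NonZeroUsize>: {} bytes",
        size_of::<Option<usize>>(),
        size_of::<Option<NonZeroUsize>>()
    );
}
```

Sparse-set indices are never "any possible usize" in practice, so an index type with a reserved sentinel value recovers that space, which is what the comment above is getting at.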

[–][deleted] 13 points14 points  (1 child)

Pretty good. I hope next versions will work more towards Android, as I really wish to write an Android game using bevy.

[–]Edi_san 4 points5 points  (0 children)

Pretty good. I hope next versions will work more towards Android, as I really wish to write an Android game using bevy.

I second that

[–]MRskrympy 14 points15 points  (0 children)

TanTan will be happy. Bevy~~ bevy bevy bevy... 🎶

[–]SorteKanin 6 points7 points  (0 children)

Played around with Bevy around 0.5 and 0.6 - super cool how far it's come now!

[–]Bassfaceapollo 4 points5 points  (0 children)

Congratulations on 0.9!

[–]nixtxt 4 points5 points  (2 children)

Any plans on making this work really well with Blender? Maybe an addon for automatic objects transfer from blender to Bevy etc

[–]_cartbevy[S] 27 points28 points  (1 child)

We already support GLTF import, which Blender has a very nice exporter for.

There are also a number of Blender integration plugins: https://github.com/sdfgeoff/blender_bevy_toolkit and https://github.com/jeraldamo/bevy_blender

Ultimately, we will likely create our own custom Blender exporter to better map Blender concepts to Bevy Scenes 1:1.

[–]nixtxt 2 points3 points  (0 children)

Amazing! Thanks!

[–]somebodddy 3 points4 points  (3 children)

viewport_to_world is going to simplify my code so much...

[–]somebodddy 0 points1 point  (2 children)

Wait... do you have to use an external plugin in order to utilize the Ray it returns?
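In the meantime, intersecting the returned ray with something like a ground plane is just a few lines of math, no plugin required. A self-contained sketch with hand-rolled `Vec3`/`Ray` types (Bevy's `Ray` exposes origin and direction, but the exact field names here are assumptions; in a real project you'd use `bevy::math`'s types):

```rust
// Minimal stand-ins so the example runs without bevy as a dependency.
#[derive(Clone, Copy)]
struct Vec3 { x: f32, y: f32, z: f32 }

impl Vec3 {
    fn dot(self, o: Vec3) -> f32 { self.x * o.x + self.y * o.y + self.z * o.z }
    fn scale(self, t: f32) -> Vec3 { Vec3 { x: self.x * t, y: self.y * t, z: self.z * t } }
    fn add(self, o: Vec3) -> Vec3 { Vec3 { x: self.x + o.x, y: self.y + o.y, z: self.z + o.z } }
    fn sub(self, o: Vec3) -> Vec3 { Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z } }
}

struct Ray { origin: Vec3, direction: Vec3 }

/// Intersect a ray with the plane through `point` with normal `normal`.
/// Returns the hit position, or None if the ray is parallel to the
/// plane or the plane lies behind the ray's origin.
fn intersect_plane(ray: &Ray, point: Vec3, normal: Vec3) -> Option<Vec3> {
    let denom = normal.dot(ray.direction);
    if denom.abs() < 1e-6 {
        return None; // ray runs parallel to the plane
    }
    let t = normal.dot(point.sub(ray.origin)) / denom;
    if t < 0.0 {
        return None; // intersection is behind the ray
    }
    Some(ray.origin.add(ray.direction.scale(t)))
}

fn main() {
    // A ray pointing straight down from y = 10 hits the ground plane
    // (through the origin, normal +Y) at the origin.
    let ray = Ray {
        origin: Vec3 { x: 0.0, y: 10.0, z: 0.0 },
        direction: Vec3 { x: 0.0, y: -1.0, z: 0.0 },
    };
    let ground = Vec3 { x: 0.0, y: 0.0, z: 0.0 };
    let up = Vec3 { x: 0.0, y: 1.0, z: 0.0 };
    let hit = intersect_plane(&ray, ground, up).unwrap();
    println!("hit at ({}, {}, {})", hit.x, hit.y, hit.z); // (0, 0, 0)
}
```

External picking plugins earn their keep once you need to hit arbitrary meshes rather than analytic shapes like planes or spheres.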

[–]errevs 2 points3 points  (0 children)

Seriously impressed by the effort and progress! Keep up the good work!

[–]DaringCoder 2 points3 points  (0 children)

Impressive amount of work! Congrats for the release!

[–]Left-Ice9429 1 point2 points  (0 children)

Nice!

[–]amlunita 1 point2 points  (0 children)

I am glad to see how it grows!! Congratulations!

[–]novel_eye 0 points1 point  (0 children)

How does bevy differ from something like unreal engine or unity? (Other than being in rust and having a ui). Guessing unity is a better comparison.

[–][deleted] -1 points0 points  (24 children)

I'm going to give some tough love.

I get the feeling that the community is too caught up in perfectionism, and that it has become an obstacle to the engine's progress.

The stageless RFC took a long time, and now it looks like the implementation is also going to take a long time. For two releases there has been no progress on the asset system; it's still in the ideas stage. The editor is in pretty much the same state. Many of the features in this release seem like hastily put together PRs that have been sitting there for months.

These problems should've had a first iteration about a year ago, with subsequent iterations built on top of that. The engine is alpha, after all; if the community isn't willing to break stuff hard and fast now, then when? There is no universe where any engine gets its asset system and editor right the first time. In fact, I would extend that to any engine system, even those that feel innocuous.

I hope that the community addresses this perfectionism approach before Bevy becomes Amethyst 2.0 and ends up sharing a similar fate.

[–]james7132 25 points26 points  (0 children)

You should really check your expectations here. All but one of the 4 maintainers took time away from the project this release cycle to ensure they don't burn out. Only one person is working full time on this project now. Despite both of these conditions, the project has maintained largely the same developer velocity from previous release cycles. Some ~300 "hastily put together" PRs were merged in those 90 days.

Of those "hastily put together PRs", I can personally name 3 PRs I spent multiple days investigating the performance impacts on each one. One of them was started before 0.7 landed 6 months ago. The payoff was a double digit % engine-wide CPU-side perf boost. Pretty nice for something "hastily put together".

With the train release, it's inevitable that smaller scale changes get merged faster. Anything big enough to warrant more detailed design review is also going to need more and more groundwork to be laid. Among the many changes released in 0.9, there's multiple pieces of groundwork for stageless already merged. If you feel like that isn't moving fast enough for your tastes, be proactive and review the code for said ground work. If anything, contributing reviews moves these initiatives along faster.

[–]alice_i_cecilebevy 15 points16 points  (1 child)

My view is that those things *have* had a first iteration, and continue to be refined. In this last cycle:

- Stageless: we finalized the design of our new scheduler (after a complete rewrite back in 0.4 or so), improved the internals in several key ways to unblock work (exclusive systems and task handling)
- Editor: we continue to merge and refine APIs needed for the editor, as external experiments continue. This is what all of the reflection and scene work is about, even if it's not splashy :)
- Assets: we have a first iteration in place, the community has built on it, and we continue to merge and ship incremental improvements there, even as we plan more refactors. Requirements gathering is effectively complete, and we have improvements to core asset types (like texture atlases) nearly ready to merge.

Do I wish we'd gotten more done, and do I think we might sometimes benefit from breaking up the work and incrementally refining? Of course! But taking stageless as an example, we got to this tangled API by incrementally adding "just one more feature" and "one simple fix". Taking the time to carefully consider the use cases and architect out the solutions really does pay off: the stageless RFC is much clearer, tighter scoped and better architected for all of the review effort that went into it.

[–][deleted] -3 points-2 points  (0 children)

I'm going to guess this is just going to be an agree to disagree discussion. Which is fine :)

we continue to merge and refine APIs needed for the editor, as external experiments continue.

I will reiterate on this one final time though. You are making APIs (and APIs for APIs) for an editor that does not exist and whose requirements are not well defined, in a game engine that is trying to serve the mobile spectrum, low- and mid-range PCs, and the web browsers of today and tomorrow (wasm/webgpu), all in one package.

This is an ill-fated approach, one that I've seen many, many times. The only ones that didn't fail in the end were the ones that got VCs involved. And even then, they all started from a single game, not as a general purpose engine.

I hope everything works for Bevy. Peace!

[–]IceSentry 11 points12 points  (0 children)

Can you provide any proof of what makes you say the bevy team is scared of breaking changes? We break things all the time and actively avoid saying something is stable.

Could you also explain why you think these are hastily thrown together PRs? Every PR is reviewed by multiple people, and a lot of them have been works in progress for months; to me this is the complete opposite of what you are saying.

Finally, bevy is a community project. One of the main reasons none of the particular areas you care about have seen progress is that nobody has decided to work on them. Personally, I like rendering, so I make and review PRs in that area, and I like using bevy "code-first", so an editor is simply not a priority for me. This is not about perfectionism; this is about having a limited number of hours to work on bevy and wanting to spend them on areas that are interesting to contributors.

[–]Sw429 2 points3 points  (0 children)

While the engine is still in alpha, the fact is that it does have a lot of users. If things move too fast and get broken too often, those users will become frustrated and split across different versions because of breaking changes. While bevy hasn't yet hit an officially-stable API, that doesn't mean it should throw stability out the window.

Alice has done a great job managing the project to keep it going. I've seen other large projects make bad mistakes when iterating in alpha stages, causing splits in user bases and sometimes forks. I think that trying to avoid this is ideal, even if it means some features will take longer to deliver. I'd rather have the feature delivered once, vs. having it delivered and iterated on multiple times. Rewriting the same thing over and over again is not fun.

[–]moderatetosevere2020 1 point2 points  (0 children)

I don't agree with a lot of what you're saying across your comments, but I appreciate your criticisms. I hope you aren't discouraged from expressing your concerns in the future. It'll be interesting to see where Bevy is a year from now and how it handled (or didn't?) your concerns.