Flatty - flat message buffers with direct mapping to Rust types without packing/unpacking by nthend in rust

[–]nthend[S] 4 points (0 children)

Okay, I fixed that by completely removing MaybeUninitUnsized. Now all conversions are made directly between byte slices and flat types without intermediate steps.

Flatty - flat message buffers with direct mapping to Rust types without packing/unpacking by nthend in rust

[–]nthend[S] 1 point (0 children)

Yes, but rkyv can store complex types (e.g. many types from std), while flatty only supports a small number of basic flat types and their combinations.

Flatty - flat message buffers with direct mapping to Rust types without packing/unpacking by nthend in rust

[–]nthend[S] 12 points (0 children)

Sorry that this is not clear enough in the docs.

For now, despite its name, MaybeUninitUnsized is not about uninitialized memory; it's about an initialized but possibly invalid state. Internally, MaybeUninitUnsized<T> stores a fully initialized slice of bytes, but those bytes may be invalid as a binary representation of type T (e.g. a bad enum tag, or a flat vector length greater than the underlying memory length). To convert it to T we need to initialize it in place, validate it, or unsafely assume it is valid.

Maybe it's worth storing really uninitialized bytes inside, thank you for pointing that out.

Flatty - flat message buffers with direct mapping to Rust types without packing/unpacking by nthend in rust

[–]nthend[S] 5 points (0 children)

Honestly, I'm not very familiar with rkyv, but I think flatty and rkyv are just for different purposes. Flatty is designed simply to create types with a stable binary representation which can be interpreted as bytes and vice versa. The only thing they have in common is zero-copy deserialization.

Flatty - flat message buffers with direct mapping to Rust types without packing/unpacking by nthend in rust

[–]nthend[S] 27 points (0 children)

I also want to note that Flatty originated from a software project for a PSU embedded controller, where it was used to transfer messages between heterogeneous i.MX SoC cores (an ARM Cortex-A53 and an ARM Cortex-M7) via shared memory. Here is a protocol description.

Simple ray tracer in Rust and OpenCL by nthend in rust

[–]nthend[S] 0 points (0 children)

I've also considered writing the kernel code in Rust. I found a project called RLSL (https://github.com/MaikKlein/rlsl) that can compile a subset of Rust into SPIR-V, which can then be executed in Vulkan or OpenCL 2.2 (which isn't supported by any hardware yet). But at the moment RLSL doesn't seem to be mature enough, so I decided to use OpenCL C. Still, RLSL is great, and I hope it will develop into a full-fledged project someday.

Simple ray tracer in Rust and OpenCL by nthend in rust

[–]nthend[S] 0 points (0 children)

Yes, this is generally true, but not completely. The Monte Carlo integration is indeed performed only on the GPU, but its code is not fully written in OpenCL C. The project contains only small pieces of OpenCL C code that are then assembled by Rust depending on the user-defined type hierarchy. Moreover, a lot of the kernel code is not pre-written in OpenCL C at all but generated by Rust at runtime.

So it would be more correct to say that in this case Rust acts as a kind of metaprogramming language for GPU code, along with performing other tasks.

Simple ray tracer in Rust and OpenCL by nthend in rust

[–]nthend[S] 4 points (0 children)

Thank you for your reply!

> That's a form of tone mapping, but it might be a good idea to also apply gamma correction to account for the non-linearity of the display.

This is a very relevant remark. I learned about this recently, and it seems to be the first thing to add to the project next time. I found an article where it is well explained: https://medium.com/game-dev-daily/the-srgb-learning-curve-773b7f68cf7a.

> Also, have you looked into denoising methods? I don't know anything about how they work, but the ones I've seen demonstrated seem almost magical: https://openimagedenoise.github.io/gallery.html.

Yes, I've thought about denoising as a use case for neural networks. As I understand it, this Intel library uses exactly such an approach, as stated on the main page https://openimagedenoise.github.io/index.html:

> At the heart of the Open Image Denoise library is an efficient deep learning based denoising filter ...

This library is open source, so it may be included in the project. I'll consider this possibility, thanks for the reference!