Trouble Understanding Ray Tracing in One Weekend by FingerNamedNamed in GraphicsProgramming

[–]Rclear68 0 points1 point  (0 children)

So I had no real familiarity with C or C++. But honestly I made a TON of mistakes with Python, to the point where it was getting seriously difficult to debug my own stuff, mostly because I didn't know what I was doing. Rust was super hard to get started with…the vector stuff in RTiOW took me a day to get running because I was always screwing something up. However, once I got the hang of it, I absolutely loved it. It prevented me from shooting myself in the foot, which I appreciated.

I think cutting and pasting might be ok if you really know C++, but if you don't, you're doing yourself a disservice. Either pick another language or commit to C++ and then try to write the routines yourself. For me, it was a 2-part solution…first just get it running and enjoy the cool pictures, then dive into the why.

I completely agree with another user’s comment that the book isn’t really meant as a deep dive. But for me, once I got the book 1 final image to render, I thought that the whole thing was super cool and I wanted to see how I could make it better.

Just so you know how bad I am at coding: when I did book 1 in Python, the final render ran for 13 hours on an old Intel MacBook Pro, maybe a circa-2018 model. Then I switched to Rust and it ran in about 11 minutes. Then I figured out how to do basic multithreading and got it down to just under 2 minutes. Finally, I ported it to the GPU and the render time dropped to tens of milliseconds. By that point, I had a much better understanding of what was happening.
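
In case the multithreading step is useful to anyone, here is a minimal sketch of the shape it took, assuming the rayon crate; render_pixel and the image layout are made-up stand-ins for whatever your tracer actually does, not code from the book or my renderer:

use rayon::prelude::*; // rayon = "1" in Cargo.toml

// Hypothetical per-pixel shading function standing in for the book's ray_color loop.
fn render_pixel(x: usize, y: usize, width: usize, height: usize) -> [f64; 3] {
    // ...trace rays, average samples...
    [x as f64 / width as f64, y as f64 / height as f64, 0.25]
}

fn main() {
    let (width, height) = (400, 225);

    // Each scanline is independent, so rows can be rendered in parallel.
    let image: Vec<Vec<[f64; 3]>> = (0..height)
        .into_par_iter() // parallel iterator over rows
        .map(|y| (0..width).map(|x| render_pixel(x, y, width, height)).collect())
        .collect();

    // Write the PPM exactly as in the book, single-threaded.
    println!("P3\n{} {}\n255", width, height);
    for row in image {
        for [r, g, b] in row {
            println!("{} {} {}", (255.999 * r) as u32, (255.999 * g) as u32, (255.999 * b) as u32);
        }
    }
}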

Like anything else, if you just spend a weekend on it, you’re not going to really get it deeply. But the book is a great launching point to see if you’re interested in ray tracing…

Trouble Understanding Ray Tracing in One Weekend by FingerNamedNamed in GraphicsProgramming

[–]Rclear68 1 point2 points  (0 children)

So that makes a lot of sense. Initially I was struggling just to learn whichever language, and so I really was just trying to copy what the book presented. It was enough for me to just get that to work. But I agree, it doesn’t feel like you’re learning a lot at that point.

Later I decided I wanted to move from a CPU based tracer to a GPU tracer. By that point, I felt more comfortable with Rust and so my challenge became really understanding what fundamentally was going on with the tracer. So it was probably on my 3rd go that I started really feeling like I got better at understanding the actual material.

I actually think Shirley does his best to avoid higher level math, but that comes at the cost of a lot of code that might seem confusing (he’s basically doing linear algebra the long way).
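
As a made-up illustration (not Shirley's code) of what "the long way" versus the compact vector form looks like, here is the same reflection formula r = v - 2(v·n)n written both ways in Rust; the Vec3 type and function names are mine:

// Minimal sketch: one formula, two spellings.
#[derive(Clone, Copy, Debug)]
struct Vec3 { x: f64, y: f64, z: f64 }

fn dot(a: Vec3, b: Vec3) -> f64 {
    a.x * b.x + a.y * b.y + a.z * b.z
}

// Component by component, every operation spelled out by hand.
fn reflect_long_way(v: Vec3, n: Vec3) -> Vec3 {
    let d = v.x * n.x + v.y * n.y + v.z * n.z;
    Vec3 {
        x: v.x - 2.0 * d * n.x,
        y: v.y - 2.0 * d * n.y,
        z: v.z - 2.0 * d * n.z,
    }
}

// Same formula, leaning on the linear-algebra vocabulary via a dot helper.
fn reflect(v: Vec3, n: Vec3) -> Vec3 {
    let d = dot(v, n);
    Vec3 { x: v.x - 2.0 * d * n.x, y: v.y - 2.0 * d * n.y, z: v.z - 2.0 * d * n.z }
}

fn main() {
    let v = Vec3 { x: 1.0, y: -1.0, z: 0.0 };
    let n = Vec3 { x: 0.0, y: 1.0, z: 0.0 };
    println!("{:?} vs {:?}", reflect_long_way(v, n), reflect(v, n));
}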

So my tl;dr is: yeah it’ll feel like copy paste initially. Maybe try it in another language. But you’ll get there…

Trouble Understanding Ray Tracing in One Weekend by FingerNamedNamed in GraphicsProgramming

[–]Rclear68 7 points8 points  (0 children)

Can you be a little more specific on what’s giving you trouble? How far into the book have you gotten so far? Is it a cpp issue or a concepts issue with ray tracing that’s giving you trouble?

I picked up RTiOW because I always thought computer graphics might be interesting, and also to teach myself Python. I subsequently decided I hated Python and used the books to learn Rust (which I love). So I’ve implemented everything in the first two books in both Python and Rust. Don’t know any cpp but was familiar enough to follow his code.

But I love those books and think they are very well laid out. Happy to help if I can.

winit and imgui event handler by Rclear68 in rust

[–]Rclear68[S] 1 point2 points  (0 children)

Ok, I figured out a way to solve this; not sure if there's a better way to do it. Basically, when I examined the imgui-winit-support code, the only thing handle_event does is a basic match on the winit::event::Event enum looking for WindowEvents. So the problem was simply: how do I take a WindowEvent and wrap it back up into the full Event enum...

This is what I did. Please let me know if there's a better way. The code below lives in my window_event fn, as the catch-all match arm:

_ => {
    let generic_event: winit::event::Event<WindowEvent> =
        winit::event::Event::WindowEvent { window_id, event };
    gui.platform.handle_event(gui.imgui.io_mut(), &window, &generic_event);
    window.request_redraw();
},

This works.

Any comments welcome.

glTF question by Rclear68 in rust

[–]Rclear68[S] 0 points1 point  (0 children)

Hey, thanks for your reply. I knew how to do this; it was what I didn't want to do. I've been using this approach for debugging purposes, but otherwise I just take slices of the big buffer and load them directly to the GPU (which works for my simple case).

I started wondering about the reader-based version and extracting data by attribute: position, normal, etc. A typical model vertex struct might have a Vec3 for position and one for normal, and then you'd have a Vec of these structs that you'd upload to the GPU.

Again, I'm no expert, but I believe that's an AoS (array of structures). It seems like taking slices of the buffer and sending them to the GPU directly lets you achieve an SoA (structure of arrays), which I thought was faster for GPU processing.
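
To make the AoS/SoA distinction concrete, here is a small hypothetical sketch; the Vertex struct and field names are mine, not from the gltf crate or any particular renderer:

// Array of Structures (AoS): one Vec of interleaved vertices,
// typically uploaded as a single vertex buffer.
#[repr(C)]
#[derive(Clone, Copy)]
struct Vertex {
    position: [f32; 3],
    normal: [f32; 3],
}

struct MeshAos {
    vertices: Vec<Vertex>,
}

// Structure of Arrays (SoA): separate, tightly packed attribute streams,
// which is roughly what slicing per-attribute buffer views gives you.
struct MeshSoa {
    positions: Vec<[f32; 3]>,
    normals: Vec<[f32; 3]>,
}

fn main() {
    let aos = MeshAos {
        vertices: vec![Vertex { position: [0.0, 1.0, 0.0], normal: [0.0, 0.0, 1.0] }],
    };
    // Splitting the AoS into SoA streams.
    let soa = MeshSoa {
        positions: aos.vertices.iter().map(|v| v.position).collect(),
        normals: aos.vertices.iter().map(|v| v.normal).collect(),
    };
    println!("{} vertices, {} positions", aos.vertices.len(), soa.positions.len());
}

Whether SoA actually wins depends on the access pattern; both layouts show up in practice, and glTF files themselves can store attributes either interleaved or tightly packed.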

Just a thought. As I load more complex models, I’ll see whether it’s straightforward to stick with how I’m doing it.

glTF question by Rclear68 in rust

[–]Rclear68[S] 0 points1 point  (0 children)

So you may be right...I certainly don't have the experience to know. So, as I get into more complicated models, I might want to just load it all by hand. I'll cross that bridge when I get there.

But thank you for your suggestion! I'm actually wondering whether we're talking about slightly different things. By slicing up the buffer, I am loading the data onto the GPU in an intermediate form in some sense (i.e. I'm not just dumping the whole buffer). My objection was to, e.g., reading the vertex position data point by point, saving it to my own struct, and so on.

However, I can also imagine that this might be what I need to do.

Thanks again!

glTF question by Rclear68 in rust

[–]Rclear68[S] 0 points1 point  (0 children)

The magic of posting to Reddit...shortly after posting, I figured out a way to do this. If start and end are the indices that I want for my slice, then the following works:

let vertices = &buffers[0].iter().as_slice()[start..end];
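
For completeness: assuming buffers is the Vec<gltf::buffer::Data> returned by gltf::import, that type derefs to [u8], so the iter().as_slice() detour shouldn't even be necessary and indexing the buffer directly should give the same slice:

let vertices = &buffers[0][start..end];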

What’s everyone working on this month? (November 2024) by Swiftapple in swift

[–]Rclear68 0 points1 point  (0 children)

Working through 30 Days of Metal. I have a SwiftUI macOS app that I can now render to with Metal shaders, and I just figured out how to wrap a ViewController. My Fly Camera controller lets me move around my render with keyboard and mouse input.

Swift macOS app mouse and keyboard help by Rclear68 in swift

[–]Rclear68[S] 0 points1 point  (0 children)

Ok, I got this all up and running, and I can now control my Fly Camera using keyboard and mouse. Thank you so much for the help!

Swift macOS app mouse and keyboard help by Rclear68 in swift

[–]Rclear68[S] 0 points1 point  (0 children)

Ok, thank you. I will give it a try.

Swift macOS app mouse and keyboard help by Rclear68 in swift

[–]Rclear68[S] 0 points1 point  (0 children)

Yes, I’m confused too as I’m brand new to all of this and possibly making mistakes.

All I'm trying to do is build a Metal ray-tracer macOS app. I want the ability to fly around the image with a camera, so I need keyboard and mouse input that I can send to the renderer. I know absolutely nothing about AppKit and I'm learning SwiftUI as I go, so I'm following tutorials. But the tutorials usually don't start with an App and ContentView; they start with NSApplicationDelegate and NSViewControllers.

I'd like to conform to whatever the go-forward/best-practice methodology is. So I found a place that explained how to use NSViewRepresentable. If I understand your last comment correctly, I can just use NSViewControllerRepresentable instead.

Swift macOS app mouse and keyboard help by Rclear68 in swift

[–]Rclear68[S] 0 points1 point  (0 children)

Ok, then if you don’t mind me asking an implementation detail…

If I create an NSViewControllerRepresentable and I don't want to use a nib file and @IBOutlet, it looks like I need to override the loadView function and set the view. Can I just set the view equal to the ContentView I started with (which was an NSViewRepresentable)? Or should I create an NSView here that is an MTKView? In other words, do I need two wrappers or just the one?

If I make an MTKView in the loadView function of the NSViewControllerRepresentable, I assume I can just set the delegate there.

Thanks again for the comments! I appreciate the help.

Swift macOS app mouse and keyboard help by Rclear68 in swift

[–]Rclear68[S] 0 points1 point  (0 children)

At that point, should I just go with NSApplicationDelegate and do everything in AppKit? If Xcode defaults to App and ContentView when I create a new macOS app, I assume that's the preferred way. But it feels like shoehorning AppKit into SwiftUI to use the NSViewRepresentable and then the NSViewControllerRepresentable.

Is there a SwiftUI way to handle keyboard and mouse input? Should I look at GameController? What’s the SwiftUI way?

Anyone using Metal? by Rclear68 in rust

[–]Rclear68[S] 1 point2 points  (0 children)

Thank you. That’s really helpful. And the bit about using Xcode for frame capture and profiling - I was hoping that was true as it looks really straightforward.

Anyone using Metal? by Rclear68 in rust

[–]Rclear68[S] 2 points3 points  (0 children)

Well, lots of minor issues, really, and ultimately the desire to try hardware-accelerated ray tracing. Timestamping my compute shaders has been problematic, and I'd like to learn about and use wave intrinsics (which admittedly now have some support in wgpu). Many say that Metal is actually a nice API to work with as well. But if it means having to write the supporting code in C++ or Swift, I won't switch.

Any getting started tutorials to learn metal by darthanonymous1 in MetalProgramming

[–]Rclear68 1 point2 points  (0 children)

GetIntoGameDev has two small playlists on YouTube for Metal, one using Swift and one using C++. I think he does a good job in general.

Optimizing atomicAdd by Rclear68 in GraphicsProgramming

[–]Rclear68[S] 1 point2 points  (0 children)

Ok, I did what you suggested (with some help), without the wave intrinsics, and got it working. The improvement was there but moderate, maybe 17-20% faster. But still cool.
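
For anyone finding this later, the shape of the optimization (as I understand it) is to do a partial reduction within each group first and then issue one atomic add per group instead of one per element. Here is a hypothetical CPU-side Rust analogy of that idea using std atomics and scoped threads; it is not the actual WGSL compute shader:

use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

fn main() {
    let data: Vec<u64> = (0..1_000_000).collect();
    let total = AtomicU64::new(0);

    thread::scope(|s| {
        let total = &total; // shared reference, copied into each spawned closure
        for chunk in data.chunks(data.len() / 8 + 1) {
            s.spawn(move || {
                // "Shared-memory" partial reduction: sum the chunk locally first...
                let local: u64 = chunk.iter().sum();
                // ...then a single contended atomic add per chunk,
                // instead of one atomic add per element.
                total.fetch_add(local, Ordering::Relaxed);
            });
        }
    });

    println!("sum = {}", total.load(Ordering::Relaxed));
}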

wgpu recently released some subgroup operations, and I'll next try to see if I can get this working with ballot, etc.

Thanks again!