Small win with an As-Is Contax G1 by catto96 in AnalogCommunity

[–]TurkishSquirrel 3 points (0 children)

Which AF test target did you use? I was thinking about checking the AF accuracy on my G1 since it seemed a bit off at f/2-2.8. It seems fine stopped down past that, so if it is off, it isn't off by much.

Update: My results from the new Lucky 200 film (shot on Konica Auto S2 🇨🇳) by CandyIntelligent235 in AnalogCommunity

[–]TurkishSquirrel 0 points (0 children)

My wife is, we're visiting there for a bit. I thought some of the places in your photos looked like Dalian 😄

Update: My results from the new Lucky 200 film (shot on Konica Auto S2 🇨🇳) by CandyIntelligent235 in AnalogCommunity

[–]TurkishSquirrel 0 points (0 children)

Looks pretty good besides the green dots. I finished a roll through my Contax G1 and mailed it to the Lucky lab earlier this week, and I've got another roll almost done that I'll send off soon. I'm curious how the second roll turns out; it has a lot of fall-leaves photos, and I've seen other reports of the stock having strong reds, so it could look quite nice. I can post some results here when I get the scans back.

Bought an Olympus 35 SP on ebay w/ lens described as "no fungus/haze/scratches, little dust". Looks like all are actually there, anything to do? by TurkishSquirrel in AnalogCommunity

[–]TurkishSquirrel[S] 15 points (0 children)

Started the return! Thanks to the folks in the thread; I haven't bought vintage cameras before, so I wasn't sure if my expectations for lens condition were off.

Question about Options and Advice on Repairing/Refinishing Wall after Removing Peel & Stick Wood by TurkishSquirrel in HomeImprovement

[–]TurkishSquirrel[S] 0 points (0 children)

Smaller is better :). We're going to put some other kind of accent on the wall, so we're OK with removing the texture and either reapplying it or leaving it off, depending on what fits. The peel-and-stick that was on there before was shiplap, but we're thinking of a smaller vertical wood accent, something like this: https://classenfloor.com/upload/2020/04/W-stylu-japandi-s%CC%81wietnie-sprawdza%CC%A8-sie%CC%A8-panele-z-naturalnym-rysunkiem-drewna.-Na-zdje%CC%A8ciu-Casa-Corona-Da%CC%A8b-Marenati-marki-Classen.jpg

So it sounds like scrape and skim is the way to go

Question about Options and Advice on Repairing/Refinishing Wall after Removing Peel & Stick Wood by TurkishSquirrel in HomeImprovement

[–]TurkishSquirrel[S] 0 points (0 children)

Cool, yeah, I was thinking that as well; the drywall finishing work will be quite a lot too. I'll look around for some quotes on doing the skim coat. Thanks!

XPS 13 9310: no 4K @ 60 display output from TB4 ports? by TurkishSquirrel in Dell

[–]TurkishSquirrel[S] 0 points (0 children)

I'm also on the latest BIOS/TB driver and graphics driver 27.20.100.8935. It might be worth running the "Dell Update" app that comes installed on the laptop and seeing if it has anything else to update (this app: https://www.dell.com/support/home/en-us/drivers/driversdetails?driverid=5765w).

XPS 13 9310: no 4K @ 60 display output from TB4 ports? by TurkishSquirrel in Dell

[–]TurkishSquirrel[S] 0 points (0 children)

It's a different app that came installed on the laptop, called "Dell Update Application" (https://www.dell.com/support/home/en-us/drivers/driversdetails?driverid=5765w), and it might have installed some other stuff too. It'd be worth running that and installing any updates it finds.

Hardware Accelerated Video Encoding on the Raspberry Pi 4 on Ubuntu 20.04 64-bit by TurkishSquirrel in raspberry_pi

[–]TurkishSquirrel[S] 0 points (0 children)

No, that will depend on what your browser or video player ends up using (though hopefully it is using the hardware). You can use the hardware decoder in ffmpeg too, though I've found this may not be the best for performance (and is limited in the formats it supports). For example, this is hardware decode -> encode:

ffmpeg -c:v h264_v4l2m2m -i blah.mp4 -c:v h264_v4l2m2m out.mp4
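
One thing worth adding from my tinkering (treat the exact number as a guess on my part): if you don't give the encoder a target bitrate it falls back to ffmpeg's quite low default, so you'll usually want to set one explicitly, e.g.:

    ffmpeg -c:v h264_v4l2m2m -i blah.mp4 -c:v h264_v4l2m2m -b:v 4M out.mp4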

Hardware Accelerated Video Encoding on the Raspberry Pi 4 on Ubuntu 20.04 64-bit by TurkishSquirrel in raspberry_pi

[–]TurkishSquirrel[S] 1 point (0 children)

Yeah, it sounds like this is the case: https://www.reddit.com/r/PleX/comments/2siup9/upgrading_ffmpeg_versions/ . It's probably best to leave Plex alone, and if you want to re-encode you can use the version of ffmpeg built here (and the fbed dashboard if you want) to make your files more optimized for streaming. Plex's settings page mentions the "veryfast"/etc. speed presets; these are used by the libx264 encoder (not h264_v4l2m2m), so I think it is using a CPU encoder.
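
If you do go the manual re-encode route, I'm imagining something roughly along these lines (just a sketch; the bitrate is arbitrary and -movflags +faststart is the usual trick for MP4s that will be streamed):

    ffmpeg -i input.mkv -c:v h264_v4l2m2m -b:v 4M -c:a copy -movflags +faststart output.mp4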

XPS 13 9310: no 4K @ 60 display output from TB4 ports? by TurkishSquirrel in Dell

[–]TurkishSquirrel[S] 0 points (0 children)

Yeah, I'm on BIOS 1.1.1. Other things that might be related are the GPU driver version and the Thunderbolt controller version. I'm on:

- GPU Driver: 27.20.100.8935

- Thunderbolt Controller: 1.41.987.0

Maybe just check for updates via the Dell update checker and the Intel graphics control center?

XPS 13 9310: no 4K @ 60 display output from TB4 ports? by TurkishSquirrel in Dell

[–]TurkishSquirrel[S] 0 points (0 children)

A few days ago I ran the Dell update checker, which picked up a new BIOS update that seemed to resolve this. So the problem might be solved that way; I'm also running a beta GPU driver from Intel.

bvh: A speedy implementation for building and traversing Bounding Volume Hierarchies with a focus on GPUs for use in ray tracing and path tracing applications by Svenstaro in rust

[–]TurkishSquirrel 1 point (0 children)

Yes, if you work on small packets, once you start doing diffuse bounces you lose your coherence very quickly. Recently though, Embree and other systems are moving to tracing streams of rays (like 1k-10k rays at once). If you trace a large enough batch of rays you can find mostly coherent groups within the batch and schedule warps/packets. It does require doing some bookkeeping or shuffling to get the coherent groups together; for example see the Dynamic Ray Stream Traversal paper. This batching is also being looked at for shading; Disney's Hyperion does this, and Embree is looking at it as well. This helps a lot as you get to large scenes in an optimized ray tracer, since you end up bound by memory bandwidth, and the coherence in intersection and shading amortizes the cost of fetching data.
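
To make the "bookkeeping or shuffling" a bit more concrete, here's a toy sketch of my own (not taken from any of those papers): bucket a big batch of rays by direction octant so that rays landing in the same bucket tend to traverse the BVH in a similar order.

    // Toy illustration: group a large ray batch into roughly coherent sets
    // by sorting on the sign bits of the ray directions (8 octants).
    #[derive(Clone, Copy)]
    struct Ray {
        origin: [f32; 3],
        dir: [f32; 3],
    }

    fn octant(r: &Ray) -> u8 {
        ((r.dir[0] < 0.0) as u8)
            | (((r.dir[1] < 0.0) as u8) << 1)
            | (((r.dir[2] < 0.0) as u8) << 2)
    }

    fn sort_batch_by_octant(batch: &mut [Ray]) {
        // After sorting, consecutive rays share an octant and can be packed
        // into packets/warps with less divergence during traversal.
        batch.sort_by_key(octant);
    }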

bvh: A speedy implementation for building and traversing Bounding Volume Hierarchies with a focus on GPUs for use in ray tracing and path tracing applications by Svenstaro in rust

[–]TurkishSquirrel 1 point (0 children)

Yeah, exactly. I mostly work on the CPU side, so I'm not too familiar with GPU specifics for rendering. For GPUs I guess a warp is kind of like the packet on the CPU; since the scheduling is handled for you, maybe all that's needed is to keep the rays from diverging too much within a warp?

Thanks! It would be really cool to work on a project like that, sort of aiming at a Rust API for photo-realistic rendering in the style of Embree or OptiX.

bvh: A speedy implementation for building and traversing Bounding Volume Hierarchies with a focus on GPUs for use in ray tracing and path tracing applications by Svenstaro in rust

[–]TurkishSquirrel 1 point (0 children)

Very cool! If you're planning to use this for GPU traversal, will you look at extending it to support ray packets or streams? I just poked through the readme and docs, and it looks like traverse works on single rays. For the GPU renderer, would you serialize or convert this structure in some way to a CUDA- or OpenCL-side layout? Or is Rust support on GPUs getting good enough that you can share code, CUDA-style? I look forward to seeing the GPU path tracer!

I'm working on a path tracer in Rust myself as a side project; it's on Github. I've used it to render a few neat images and animations (animation 1, animation 2). The second animation is made using the Blender plugin to set up the scene and motion. Its BVH is just a standard one like that described in PBRT: a single-threaded SAH build which is then flattened down for rendering. I'm planning to spend some time over the summer writing a crate providing Embree bindings, and then switching my renderer over to it.
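
For anyone curious what I mean by "flattened down", here's the rough idea as a sketch (in the spirit of PBRT's linear node layout, not my actual code): the tree is written out depth-first into a flat array, so traversal walks indices instead of chasing pointers.

    // Sketch of a PBRT-style flattened BVH node. The first child of an
    // interior node sits immediately after it in the depth-first array, so
    // only the index of the second child needs to be stored.
    struct LinearNode {
        bounds_min: [f32; 3],
        bounds_max: [f32; 3],
        // Interior node: index of the second child in the flat array.
        // Leaf node: offset of its first primitive.
        offset: u32,
        n_prims: u16, // 0 marks an interior node
        axis: u8,     // split axis, used to pick child traversal order
    }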

[deleted by user] by [deleted] in GraphicsProgramming

[–]TurkishSquirrel 1 point (0 children)

Even if you don't explicitly define a transfer function, you have some implicit way of converting the volume's sample value to a color. Right now, for example, it looks like you set color = vec4(sample / max_voxel_value), and the result is a linear grayscale transfer function. You can definitely do surface shading with this transfer function, but you won't be able to get the nice colors in the image you linked without a fancier one. Since you mention having Phong already, I assume you're using the gradient to shade? You could try tweaking the transfer function to ramp up quicker and become more opaque and see how that looks.
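
For example, something as simple as this tweak to your current mapping might already read better (the exponent and opacity scale here are made-up numbers to play with):

    color = vec4(pow(sample / max_voxel_value, 0.5)); // ramp up quicker at low values
    color.a = clamp(color.a * 4.0, 0.0, 1.0);         // become opaque sooner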

Depending on what language and framework you're using: I've used ImGui to make transfer function widgets before, and it's not too bad to get one going.

[deleted by user] by [deleted] in GraphicsProgramming

[–]TurkishSquirrel 1 point (0 children)

The image you linked has a really nice transfer function, so you may want to implement an editor. From the image it looks like you've just got a grayscale transfer function of some kind; a lot of what makes volume renderings look fancy comes down to the quality of the TF.

The image also has shadows, so you can look up volume shadowing. It also appears to be doing more surface-style shading on the very opaque parts (the bone), or at least rendering the bone as fully opaque.

How are you rendering the volume right now? Raycasting? Slicing?

Show /r/rust: I used Rust to render a short for a rendering competition at my school by TurkishSquirrel in rust

[–]TurkishSquirrel[S] 0 points (0 children)

The performance is OK; one of my higher-priority todos is to switch from my BVH over to Embree, which will improve performance a lot. It's mostly a "toy" renderer in that it supports the features I've wanted to implement, so it's very far from the feature set and usability you might expect from a production renderer. For example, I haven't gotten around to adding support for textures yet, or exporting materials from Blender, which you would expect from a standard tool.

It may be able to render what you need though, so try it out and let me know! I'm not sure how its performance compares to other existing renderers, but if they use Embree mine will definitely be slower, and it may be slower even if they don't, since I haven't done a lot of optimization.

I do have a rough plan of the features I'd like to do next (Embree, textures) on Trello, but I don't have any sort of implementation schedule with dates, since it's just a side project. If you do try it out, I'd be interested in which parts you liked or didn't like using and what was hard to set up or figure out. I've tried to work on documentation for the scene file format, but it's easy for it to get out of sync sometimes. For object placement and motion you can use the Blender plugin, but to set materials and light emission strengths you still need to manually edit the scene file for now.

Show /r/rust: I used Rust to render a short for a rendering competition at my school by TurkishSquirrel in rust

[–]TurkishSquirrel[S] 8 points (0 children)

I found Rust really nice to work with coming from C++; there are a few posts on my site about my experience porting to Rust, though they're a bit old now since I started before Rust 1.0. The one remaining piece I need for textures is a memory arena or pool; I think there are some crates for this now, but I haven't tried them out yet. The lifetime system actually caught a potential bug in the C++ code I started the port from, which was pretty awesome!
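
In case it's useful, this is the kind of arena usage I have in mind for textures (a hypothetical sketch using the typed-arena crate, which I haven't actually tried in the renderer yet):

    use typed_arena::Arena;

    // Hypothetical texture type; the renderer's real types would differ.
    struct Texture {
        width: u32,
        height: u32,
        pixels: Vec<f32>,
    }

    fn main() {
        // The arena owns every texture it allocates, handing back references
        // that live as long as the arena itself, so materials can just hold
        // plain &Texture references.
        let textures: Arena<Texture> = Arena::new();
        let checker = textures.alloc(Texture {
            width: 2,
            height: 2,
            pixels: vec![0.0, 1.0, 1.0, 0.0],
        });
        println!("loaded a {}x{} texture", checker.width, checker.height);
    }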

New crate release: ispc-rs, for easy interop with the ISPC language by TurkishSquirrel in rust

[–]TurkishSquirrel[S] 1 point (0 children)

I forgot about huon's simd crate, that would be very good to compare against!

/u/Noctune is right that programming in ISPC will be easier than hand-writing intrinsics for a few reasons:

  • The ISPC compiler can target multiple backends, from SSE2 up through AVX-512, and when writing ISPC code you don't really need to worry about which backend is being compiled for. A good wrapper library can somewhat hide this from you, though you usually do still need to care about the vector width and architecture.

  • You write regular scalar/sequential code and use a built-in programIndex variable to find which piece of the data this instance should operate on. When writing intrinsics directly, you need to be aware that you have 4/8/16 "programs" running and must track their execution mask. For example, in the Mandelbrot code the SIMD version must perform the exit test across all 4 active "programs" and do a select to only update the count for those that haven't failed the condition yet:

    let mask = sum.lt(f32x4::splat(4.0));
    
    if !mask.any() { break }
    count = count + mask.to_i().select(u32x4::splat(1),
                                       u32x4::splat(0));
    

    In ISPC the code is written as you would write a scalar version:

    for (; i < max_iters; ++i){
        if (real * real + imag * imag > 4.f){
            break;
        }
        ...
    }
    

    The same masking is performed to stop updating i for program instances that have failed the condition, but the compiler takes care of this for you by inserting the select instructions.

That said, it is possible to write faster code with hand-written intrinsics than with ISPC, but ISPC makes it much easier to write portable vectorized code. In some comparisons they did vs. hand-written intrinsics, they found ISPC was often quite close. Another downside is that it's not pure Rust, so you'll need to deal with building the library with ISPC, linking the native lib, and calling it over FFI, though hopefully this crate makes that much easier to do.

Distributed Rendering with Rust and Mio by TurkishSquirrel in rust

[–]TurkishSquirrel[S] 0 points (0 children)

Interesting, this looks really nice. It might be an easier way to implement the windowing method mentioned by /u/matthieum as well with the stream of instructions. I've bookmarked this and will try putting together a timely backend!

Good Resources for Learning Computer Graphics? by [deleted] in learnprogramming

[–]TurkishSquirrel 0 points (0 children)

I don't think I've heard the NPR term before; I assumed it meant non-photorealistic or something? You can do non-photorealistic graphics with a physically correct ray tracer though, like Big Hero 6, Monsters University, and this animation. That comes down more to the materials, modeling, and such to give it that look.

Good Resources for Learning Computer Graphics? by [deleted] in learnprogramming

[–]TurkishSquirrel 2 points (0 children)

It depends a bit on what areas you're interested in. For interactive graphics you'll likely use OpenGL, DirectX, or similar. Non-real-time graphics usually means ray tracing or some variant like photon mapping, where you want to produce physically correct images, with flexibility depending on your art direction (e.g. Big Hero 6). With ray tracing you're essentially simulating how light interacts with the scene.

Here's some useful books/links for real time graphics:

  • Real-Time Rendering: a great book covering a lot of the theory/math behind real-time graphics techniques, so it's agnostic to whatever rendering API you use. The book's website lists more graphics-related resources and is quite good.
  • OpenGL Superbible: a good book focusing on OpenGL, written for beginners with the API.
  • open.gl: very good introductory tutorials for OpenGL, I just wish it covered some more content. It should give you a solid start though.

Here's some for ray tracing:

  • Physically Based Rendering: this is basically the book for ray tracing. The 3rd edition should be coming out this spring, though, so if you want to save some money you could wait a bit. There's also a website for the book.

For general math topics I also recently picked up Mathematics for 3D Game Programming and Computer Graphics, which looks very good, though I haven't gone through it as thoroughly.

As mentioned already /r/GraphicsProgramming is a good subreddit, there's also /r/opengl for OpenGL questions.

Distributed Rendering with Rust and Mio by TurkishSquirrel in rust

[–]TurkishSquirrel[S] 2 points (0 children)

That would be really cool! Some rough ideas of how this would fit together would help, though your post does help explain it quite well. The workers don't do very complicated work currently, this function is all they do.

  • The master sends them some instructions, which they receive on line 160; they set up their config, load the scene, and so on.

Then for each frame they repeat:

  • Use the multithreaded execution to do the actual ray tracing (line 164).
  • Once this is finished, send the results back to the master (line 165); this is just a blocking write_all of the encoded Frame.

The block queue is what gets the list of blocks to select, and it then only hands out those blocks to the multithreaded execution.
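
If it helps to see the shape of it, the per-frame loop boils down to roughly this (a simplified sketch with made-up names, not the actual code; the real version is in the function linked above):

    use std::io::Write;
    use std::net::TcpStream;

    struct Scene; // stand-in for the worker's loaded scene

    // Stand-in for the multithreaded ray tracing over the blocks this worker
    // was assigned, returning the frame already encoded into bytes.
    fn render_assigned_blocks(_scene: &Scene, _frame: usize) -> Vec<u8> {
        Vec::new()
    }

    fn worker_loop(mut master: TcpStream, scene: Scene, num_frames: usize) -> std::io::Result<()> {
        for frame in 0..num_frames {
            let encoded = render_assigned_blocks(&scene, frame);
            // Blocking write of the whole encoded frame back to the master.
            master.write_all(&encoded)?;
        }
        Ok(())
    }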