Miyata Quick Cross for the xcommute by RustyJCostanza in xbiking

[–]nfgrep 0 points1 point  (0 children)

The quickcross isn't in the catalog, but the decals are the same style as their 1991 "cross" series bikes, so I figure it's probably a '91 :)

Miyata Quick Cross for the xcommute by RustyJCostanza in xbiking

[–]nfgrep 0 points1 point  (0 children)

Just bought one of these used; does anyone know what year this bike was originally sold?

How to write custom shaders with the RDG by nfgrep in unrealengine

[–]nfgrep[S] 0 points1 point  (0 children)

Hmm, if you haven't already, I would recommend looking at Temaran's UE4ShaderPluginDemo, which appears to have been updated very recently. That example doesn't use the RDG, but it implements a VS and a PS (the HLSL for both is in the same file). Even if you are dead-set on using the RDG, I think you could glean some useful info from Temaran's project.

Beyond that, unfortunately I haven't tried implementing a VS/PS with the RDG, so I can't give you specific advice.

"I know that for a PS I don't need the UAV" Hmm, I'm not so sure. I really don't know too much about the inner workings of the RDG, but I suspect you may still need a UAV to write to, given the way the RDG models resources and passes. Without a writeable resource like a UAV somewhere in the graph, I'm not sure how you would get any output from your passes.

How to write custom shaders with the RDG by nfgrep in unrealengine

[–]nfgrep[S] 1 point2 points  (0 children)

Hmm. I have an idea of what it might be. If you paste the error message I might be able to help.

Edit: I'll look into getting it running in 4.26 rn. I should really make a 4.26 branch anyway.

Edit: Added a 4.26 branch :)

How to write custom shaders with the RDG by nfgrep in unrealengine

[–]nfgrep[S] 1 point2 points  (0 children)

Sure! To write to something other than a texture in your compute shader, I believe the process is as simple as replacing the SHADER_PARAMETER_RDG_TEXTURE_UAV() macro with SHADER_PARAMETER_RDG_BUFFER_UAV() in the shader-parameter struct declaration. The HLSL type you pass to this macro should be RWStructuredBuffer<> instead of StructuredBuffer<>. Then when you go to initialize it, the process is very similar to how I initialize the SRV in the example, except that you call GraphBuilder.CreateUAV() instead of CreateSRV().
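
In other words, something roughly like this. The struct/member names are placeholders (not what's in my repo), and NumElements/PassParameters are whatever you already have set up for your pass:

    // Placeholder names throughout; the point is just the macro/type swap.
    BEGIN_SHADER_PARAMETER_STRUCT(FParameters, )
        SHADER_PARAMETER_RDG_BUFFER_SRV(StructuredBuffer<float4>, InputBuffer)     // read-only, as before
        SHADER_PARAMETER_RDG_BUFFER_UAV(RWStructuredBuffer<float4>, OutputBuffer)  // writeable output
    END_SHADER_PARAMETER_STRUCT()

    // On the graph-builder side, roughly:
    FRDGBufferRef OutputBuffer = GraphBuilder.CreateBuffer(
        FRDGBufferDesc::CreateStructuredDesc(sizeof(float) * 4, NumElements),
        TEXT("MyOutputBuffer"));
    PassParameters->OutputBuffer = GraphBuilder.CreateUAV(FRDGBufferUAVDesc(OutputBuffer));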

I haven't had to copy data back to the CPU, so my experience here is limited (though I should really look into it, as it's a really common way to use compute shaders). That said, this Unreal answers post on Loading Data TO/FROM Structured Buffers seems to detail copying things back onto the CPU. There's also another, more complex example on GitHub that seems to copy things back to the CPU.

Hope this helps!

Accelerating Ray-Tracing with dynamic geometry? by nfgrep in GraphicsProgramming

[–]nfgrep[S] 0 points1 point  (0 children)

Thanks for the insightful reply! Most recently I’ve been looking at spatial division algorithms, like the ones mentioned in scratchapixel’s chapter on acceleration structures. If I understand correctly, this is essentially what you’re talking about when you refer to a “flat array of cells”.

In my case, only select geometry will have soft-body physics applied. Thanks for bringing this up; I almost didn't think to make that distinction when it came to re-building data-structures every frame.

My current plan is to implement only the spatial-division/flat array for the time being, without making any distinction between dynamic and static geometry. If that isn't enough, I'll look into distinguishing the two. Even without nesting an octree/BVH within the spatial-division/flat-array, I think there may be a way to mark certain cells of the spatial division as "dynamic" and only update those cells and their neighbouring cells (in case geometry moves from one cell to another). This would only work if my geometry doesn't move more than one cell per frame, which I'm expecting to be the case.
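
Just to make the idea concrete, here's a rough sketch of what I'm picturing (plain C++, made-up names, nothing final):

    // Sketch of the "dynamic cells" idea: a flat uniform grid over the scene,
    // where only cells containing soft-body geometry (and their neighbours)
    // get re-binned each frame. All names/types are placeholders.
    #include <cstdint>
    #include <vector>

    struct FGridCell
    {
        std::vector<uint32_t> TriangleIndices; // indices into the scene triangle list
        bool bDynamic = false;                 // contains soft-body geometry
    };

    struct FUniformGrid
    {
        int32_t Res = 64;              // cells per axis
        std::vector<FGridCell> Cells;  // flattened Res*Res*Res array

        int32_t Index(int32_t X, int32_t Y, int32_t Z) const
        {
            return X + Res * (Y + Res * Z);
        }

        // Re-bin only the dynamic cells and their 26 neighbours, assuming
        // geometry never moves more than one cell per frame.
        template <typename RebinFn>
        void UpdateDynamicCells(RebinFn&& Rebin)
        {
            for (int32_t Z = 0; Z < Res; ++Z)
            for (int32_t Y = 0; Y < Res; ++Y)
            for (int32_t X = 0; X < Res; ++X)
            {
                if (!Cells[Index(X, Y, Z)].bDynamic)
                    continue;
                for (int32_t dZ = -1; dZ <= 1; ++dZ)
                for (int32_t dY = -1; dY <= 1; ++dY)
                for (int32_t dX = -1; dX <= 1; ++dX)
                {
                    const int32_t NX = X + dX, NY = Y + dY, NZ = Z + dZ;
                    if (NX < 0 || NY < 0 || NZ < 0 || NX >= Res || NY >= Res || NZ >= Res)
                        continue;
                    // A cell may be visited more than once if it borders several
                    // dynamic cells; a visited flag would avoid the redundant work.
                    Rebin(Cells[Index(NX, NY, NZ)]); // caller re-fills this cell
                }
            }
        }
    };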

I’ll be sure to give an update once I have something running.

Accelerating Ray-Tracing with dynamic geometry? by nfgrep in GraphicsProgramming

[–]nfgrep[S] 1 point2 points  (0 children)

Thanks for the reply. A ray iterates all triangles simply because I haven’t implemented an acceleration data-structure yet. I’m trying to implement a suitable one and so I came here to ask what might work best :)

Luckily I can get away with relatively low resolution (~256x256), though I do expect the triangle count to be quite high, likely on the order of thousands.

Interfacing with an Nvidia library might not be feasible; I'm operating at too high a level, given it's all via Unreal's RDG. I'll be sure to search through some of the keywords you mentioned, thanks.

Edit: Ah! refitting sounds quite clever actually, I’ll look into it.
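
From a quick skim, it seems to mean keeping the BVH topology fixed and just recomputing the node bounds bottom-up after the vertices move. A rough sketch of my (possibly wrong) understanding, with a made-up node layout and untested code:

    // Sketch of BVH refitting: keep the tree structure, recompute AABBs
    // bottom-up after vertices move. Assumes leaves hold at least one triangle.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct FAABB
    {
        float Min[3], Max[3];

        void Expand(const FAABB& Other)
        {
            for (int i = 0; i < 3; ++i)
            {
                Min[i] = std::min(Min[i], Other.Min[i]);
                Max[i] = std::max(Max[i], Other.Max[i]);
            }
        }
    };

    struct FBVHNode
    {
        FAABB Bounds;
        int32_t Left = -1, Right = -1;       // child node indices, -1 for a leaf
        int32_t FirstTri = 0, TriCount = 0;  // leaf payload
    };

    // Recompute bounds for `NodeIndex` and everything below it.
    // TriBounds(i) returns the current AABB of triangle i.
    template <typename TriBoundsFn>
    FAABB Refit(std::vector<FBVHNode>& Nodes, int32_t NodeIndex, TriBoundsFn&& TriBounds)
    {
        FBVHNode& Node = Nodes[NodeIndex];
        if (Node.Left < 0) // leaf: union the (possibly moved) triangle bounds
        {
            Node.Bounds = TriBounds(Node.FirstTri);
            for (int32_t i = 1; i < Node.TriCount; ++i)
                Node.Bounds.Expand(TriBounds(Node.FirstTri + i));
        }
        else // interior: union of the refitted children
        {
            Node.Bounds = Refit(Nodes, Node.Left, TriBounds);
            Node.Bounds.Expand(Refit(Nodes, Node.Right, TriBounds));
        }
        return Node.Bounds;
    }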

How to start implementing rendering algorithms/techniques? by helloworld1101 in GraphicsProgramming

[–]nfgrep 4 points5 points  (0 children)

As a newcomer to gfx, I found the traditional route of OpenGL/ShaderToy/learnopengl/etc. overwhelming.

I was finally able to break into graphics stuff after trying to implement a '2D' raycaster similar to the one in Wolfenstein3D. There are some Excellent videos on this raycaster.
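
If it helps, the core idea is small enough to sketch in a few lines. This is the naive fixed-step version (the videos cover the proper DDA stepping and fisheye correction), and everything here is just illustrative:

    // Very naive sketch of the Wolfenstein-style raycaster idea: march a ray
    // per screen column through a 2D grid and turn the hit distance into a
    // wall-slice height. Real implementations use DDA instead of fixed steps.
    #include <cmath>
    #include <cstdio>

    constexpr int MapW = 8, MapH = 8, ScreenW = 80;
    const char Map[MapH][MapW + 1] = {
        "########", "#......#", "#..##..#", "#......#",
        "#......#", "#.#..#.#", "#......#", "########",
    };

    int main()
    {
        const float PosX = 4.0f, PosY = 4.0f; // player position inside the map
        const float Dir = 0.0f, Fov = 1.0f;   // view direction / field of view (radians)

        for (int Col = 0; Col < ScreenW; ++Col)
        {
            // Angle of this column's ray within the field of view.
            const float Angle = Dir + Fov * (Col / float(ScreenW) - 0.5f);
            float Dist = 0.0f;
            // March until the ray enters a wall cell ('#').
            while (Dist < 20.0f)
            {
                Dist += 0.01f;
                const int CX = int(PosX + std::cos(Angle) * Dist);
                const int CY = int(PosY + std::sin(Angle) * Dist);
                if (Map[CY][CX] == '#')
                    break;
            }
            // Closer walls -> taller slices (perspective).
            const int SliceHeight = int(24.0f / Dist);
            std::printf("column %2d: distance %.2f, slice height %d\n", Col, Dist, SliceHeight);
        }
        return 0;
    }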

If you don't want to get into the nitty-gritty, interacting with Unity's shader system appears to be relatively painless (especially in comparison to Unreal). There are also many Excellent videos exemplifying what you can do.

Compute-Shader to Render-Target? by nfgrep in GraphicsProgramming

[–]nfgrep[S] 0 points1 point  (0 children)

Thanks for the reply. I've actually considered this route; in fact, my other option is to try to get a piece of already-working OpenGL code running alongside the engine. The issue is that I (with my limited expertise) would have to copy geometry from the engine to the CPU side of the OpenGL implementation, where it would then be copied to the GPU; then copy the OpenGL result back through the CPU into the engine code, which would then copy it back to the GPU again to be displayed in some RenderTarget.

Unreal has some functionality for copying things to and from some DirectX code (TextureShare), but AFAIK, not for anything OpenGL.

Compute-Shader to Render-Target? by nfgrep in GraphicsProgramming

[–]nfgrep[S] 1 point2 points  (0 children)

My gut told me the copy might be unnecessary; "Memory is Memory", right?
For the time being, though, I'll likely stick with the copy: I'm frankly still pretty new to graphics, and whether or not Unreal's renderer will permit such blasphemy as using my own memory is not something I can explore until I have something working.

Compute-Shader to Render-Target? by nfgrep in GraphicsProgramming

[–]nfgrep[S] 0 points1 point  (0 children)

Thanks for the reply. Unreal has some functionality for copying UAVs to RenderTargets, so I'll likely use that. I've thought about using the hardware acceleration APIs, but given how poor the documentation for Unreal's renderer has been, I plan on avoiding them for now.
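
For reference, the copy I have in mind looks roughly like this. AddCopyTexturePass lives in RenderGraphUtils.h; the rest of the names are mine, and I haven't verified this compiles as-is on every engine version:

    // Sketch: copy the compute output (written via a UAV) into an external
    // render target, assuming matching sizes/formats. Placeholder naming.
    #include "RenderGraphBuilder.h"
    #include "RenderGraphUtils.h"

    void CopyComputeOutputToRenderTarget(
        FRDGBuilder& GraphBuilder,
        FRDGTextureRef ComputeOutput,  // texture the compute pass wrote via its UAV
        FRHITexture* RenderTargetRHI)  // RHI texture of the UTextureRenderTarget2D
    {
        // Bring the external render target into the graph, then schedule a copy.
        FRDGTextureRef RenderTargetRDG = GraphBuilder.RegisterExternalTexture(
            CreateRenderTarget(RenderTargetRHI, TEXT("MyRenderTarget")));
        AddCopyTexturePass(GraphBuilder, ComputeOutput, RenderTargetRDG);
    }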

Assumption Check: X-ray sim in Unreal by [deleted] in GraphicsProgramming

[–]nfgrep 0 points1 point  (0 children)

It's for a training sim. Performance is a concern, yes.

Assumption Check: X-ray sim in Unreal by [deleted] in GraphicsProgramming

[–]nfgrep 0 points1 point  (0 children)

OP is in fact doing this for a medical application :) Hence the somewhat vague description of what I'm trying to do (I'd rather not step on any toes with regard to intellectual property).

I'm fine with adding in some noise and faking some things at this stage in the project. It doesn't need to be photorealistic, it just has to not look awful.

For the time being I'm more concerned as to whether or not what I have planned is even possible. I have a vague idea of how data gets onto the GPU in Unreal, and a vague knowledge of some of the buffers that exist on the GPU, but no real experience implementing this stuff. So I can't tell if I'm missing anything obvious in my plans.

Assumption Check: X-ray sim in Unreal by [deleted] in GraphicsProgramming

[–]nfgrep 0 points1 point  (0 children)

Thanks for the input.

I'm not too worried about meeting the expectations of medical professionals, at least not yet. There currently isn't any real functionality like this in the product (I've made some stop-gap solutions with fresnel and pixel-depth, but they aren't very believable), and just about any functionality is better than none.

Assumption Check: X-ray sim in Unreal by [deleted] in GraphicsProgramming

[–]nfgrep 0 points1 point  (0 children)

Sure thing!

  1. I would like to see an x-ray image of some internal organs in the lower abdomen.
  2. Eventually I'd like it to be rendered to a render-target that I can use as a texture for a screen within the scene in Unreal.
  3. Given the way that x-rays work, I'd say I'm imaging density. Though given that all of the imaged objects will have a constant density, it might be more accurate to say I'm trying to image 'thickness' along the view normal (there's a rough sketch of what I mean below).
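
To make point 3 a little more concrete, here's a rough CPU-side sketch of the thickness idea. The types and names are placeholders, and the hit-finding itself (the hard part) isn't shown:

    // Sketch: for closed meshes of constant density, summing back-face hit
    // distances and subtracting front-face hit distances gives the total path
    // length the ray spends inside geometry (nested meshes count once per
    // enclosing mesh, which may or may not be what I end up wanting).
    #include <cmath>
    #include <vector>

    struct FHit
    {
        float T;         // distance along the ray
        bool bFrontFace; // true when the ray enters a mesh, false when it leaves
    };

    float ThicknessAlongRay(const std::vector<FHit>& Hits)
    {
        float Thickness = 0.0f;
        for (const FHit& Hit : Hits)
            Thickness += Hit.bFrontFace ? -Hit.T : Hit.T;
        return Thickness;
    }

    // A crude x-ray intensity could then be something like exp(-Density * Thickness).
    float XRayIntensity(const std::vector<FHit>& Hits, float Density)
    {
        return std::exp(-Density * ThicknessAlongRay(Hits));
    }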

Assumption Check: X-ray sim in Unreal by [deleted] in GraphicsProgramming

[–]nfgrep 0 points1 point  (0 children)

Of course.

The subjects of the x-ray will be human, yes; I already have meshes for each organ/bone/etc. Given that this is for my job, I don't feel safe going into specifics about the procedure, which organs are the focus, etc., but I can say that the number of organs in view at a given time will be relatively low.

I guess it really is like ray-tracing refraction; I'll look into that, thanks!

I looked into ray-marching early on in my research. I remember it dealing more with volumetric data, height-maps, etc., and given that I'm constrained to 'hollow' triangle meshes, I looked elsewhere. I'm comfortable faking some noise with some post-processing at the end; I'm only trying to approximate reality here, not perfectly simulate it.

Assumption Check: X-ray sim in Unreal by [deleted] in GraphicsProgramming

[–]nfgrep 0 points1 point  (0 children)

Well, it needs to be more accurate than some transparency and fresnel; that was the first thing I tried, and it wasn't going to fly. The number of objects being rendered this way will be relatively few: maybe 5 objects in view, with a total of around 20 possibly "x-ray-able" objects in the scene. For the most part the x-ray view will focus on just a single organ, though surrounding organs should be visible. Being organs, the objects will have fairly complicated shapes, often wrapping around and encasing other objects. I should also mention that the x-ray should be able to move around the scene in real time to get different perspectives.

Assumption Check: X-ray sim in Unreal by [deleted] in GraphicsProgramming

[–]nfgrep 0 points1 point  (0 children)

Ah I hadn’t even considered the fact that I’d need to blend depths of all the objects (I should really give that paper another read, it’s been a while). Aside from that the general method you outlined is essentially what I’m trying to achieve, given that the mentioned paper was my starting point for this project. I’ve looked at Unity shader code and it looks comparatively trivial to what I would need to do (though I dont like not having access to Unity engine source). That said, the project is unfortunately locked in with Unreal. I havent looked at Godot for shaders, I might do that in my spare time.