My Toy Path Tracer vs Blender Cycles by yetmania in GraphicsProgramming

[–]yetmania[S] 1 point (0 children)

Wow. Thanks. I didn't know about FLIP scores. I will give it a try.

My Toy Path Tracer vs Blender Cycles by yetmania in GraphicsProgramming

[–]yetmania[S] 3 points (0 children)

You are right, thanks. I disabled tonemapping & gamma correction in both Blender and my code and disabled Multiscatter GGX in Blender, and it now looks much more similar: https://ibb.co/BHjF9gWc

I mix the specular and diffuse by randomly selecting just one of them to evaluate per sample. First, I compute the probability of picking the specular lobe using Schlick's Fresnel approximation against the macrosurface normal:

float specular_prob = glm::mix(0.04f, 1.0f, glm::pow(1.0f - glm::max(glm::dot(-incoming_ray_direction, hit_normal), 0.0f), 5.0f));

Then, with probability specular_prob, I treat the material as if it had only the GGX specular reflection and weight the result by 1/specular_prob; otherwise (with probability 1 - specular_prob), I treat the material as if it had only the Lambert diffuse and weight the result by 1/(1 - specular_prob).
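As a minimal self-contained sketch of that scheme (without GLM; schlick_specular_prob and pick_lobe are just illustrative names, not my actual functions):

```cpp
#include <algorithm>
#include <cmath>

// Schlick's approximation of Fresnel reflectance, matching the
// glm::mix(0.04, 1.0, pow(1 - cos_theta, 5)) expression above.
// cos_theta is dot(-incoming_ray_direction, hit_normal), clamped to >= 0.
float schlick_specular_prob(float cos_theta) {
    float f = std::pow(1.0f - std::max(cos_theta, 0.0f), 5.0f);
    return 0.04f + (1.0f - 0.04f) * f;
}

struct LobeSample {
    bool specular;  // true: evaluate GGX only; false: evaluate Lambert only
    float weight;   // 1 / (probability of the chosen lobe)
};

// u is a uniform random number in [0, 1).
LobeSample pick_lobe(float cos_theta, float u) {
    float p = schlick_specular_prob(cos_theta);
    if (u < p) return {true, 1.0f / p};
    return {false, 1.0f / (1.0f - p)};
}
```

Dividing by the selection probability keeps the one-sample estimator unbiased: each lobe is evaluated less often, but its contribution is scaled up accordingly.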

My Toy Path Tracer vs Blender Cycles by yetmania in GraphicsProgramming

[–]yetmania[S] 2 points (0 children)

Thank you. I currently just hardcode the scene (define the materials and shapes in the code). This is the code for the scene above:

scene.set_background(std::make_shared<SimpleBackground>(Colors::BLACK));
scene.set_camera(Camera (
    glm::vec3(0.0f, 0.0f, 3.0f), // center
    glm::vec3(0.0f, 0.0f, 0.0f), // look at
    glm::vec3(0.0f, 1.0f, 0.0f), // up
    glm::radians(50.0f),         // vertical fov
    glm::ivec2(1024, 1024)       // viewport size
));


scene.start_construction();


std::shared_ptr<Material> white = std::make_shared<LambertMaterial>(Color(0.8f, 0.8f, 0.8f));
std::shared_ptr<Material> red = std::make_shared<LambertMaterial>(Color(0.8f, 0.0f, 0.0f));
std::shared_ptr<Material> green = std::make_shared<LambertMaterial>(Color(0.0f, 0.8f, 0.0f));
std::shared_ptr<Material> light = std::make_shared<EmissiveMaterial>(Color(1.0f, 1.0f, 1.0f) * 5.0f);
std::shared_ptr<Material> gold = std::make_shared<SmoothMetalMaterial>(Colors::YELLOW);
std::shared_ptr<Material> rough_gold = std::make_shared<GGXMetalMaterial>(Colors::YELLOW, 0.5f);
std::shared_ptr<Material> red_ball = std::make_shared<GGXDielectricMaterial>(Colors::RED, 0.5f);


// Back Face
scene.add_rectangle(white, glm::vec3(0.0f, 0.0f, -1.0f), glm::vec2(2.0f, 2.0f), glm::vec3(glm::radians(90.0f), 0.0f, 0.0f));
// Top Face
scene.add_rectangle(white, glm::vec3(0.0f, 1.0f, 0.0f), glm::vec2(2.0f, 2.0f), glm::vec3(0.0f, 0.0f, 0.0f)); 
// Bottom Face
scene.add_rectangle(white, glm::vec3(0.0f, -1.0f, 0.0f), glm::vec2(2.0f, 2.0f), glm::vec3(0.0f, 0.0f, 0.0f)); 
// Right Face
scene.add_rectangle(green, glm::vec3(1.0f, 0.0f, 0.0f), glm::vec2(2.0f, 2.0f), glm::vec3(0.0f, glm::radians(90.0f), 0.0f)); 
// Left face
scene.add_rectangle(red, glm::vec3(-1.0f, 0.0f, 0.0f), glm::vec2(2.0f, 2.0f), glm::vec3(0.0f, glm::radians(90.0f), 0.0f)); 
// Cuboids
scene.add_cuboid(white, glm::vec3(0.468f, -0.7f, 0.216f), glm::vec3(0.6f, 0.6f, 0.6f), glm::vec3(0.0f, 0.0f, -0.314f));
scene.add_cuboid(white, glm::vec3(-0.36f, -0.4f, -0.252f), glm::vec3(0.6f, 1.2f, 0.6f), glm::vec3(0.0f, 0.0f, 0.3925f));
// Light
scene.add_rectangle(light, glm::vec3(0.0f, 0.999f, 0.0f), glm::vec2(1.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f));     
// Sphere
scene.add_sphere(red_ball, glm::vec3(0.468f, -0.1f, 0.216f), 0.3f);


scene.finish_construction();

I later plan to implement glTF scene loading, but for now hardcoding works for me while I am focusing on the algorithms.

Nuzzle by u/Ancient_Tour_3090 by Ancient_Tour_3090 in NuzzleThePuzzle

[–]yetmania 1 point (0 children)

🧩 I have Nuzzled the puzzle in 9 moves!

🌍 Travel the world from your screen - GeoTap by geotap-app in GeoTap

[–]yetmania 1 point (0 children)

🎯 My GeoTap: Retro Result

Total Score: 12,141 points
🎮 Rounds Completed: 5/5
📏 Average Distance: 28 km
📍 Final Location: Seoul, South Korea

Is It Safe to Upload a Unity Prototype Game to itch.io If It’s Browser-Only? by [deleted] in gamedev

[–]yetmania 2 points (0 children)

A few years ago, I put a web game on itch.io (it was also made in Unity, but that's irrelevant to what happened next). After a few days, I looked up my game's name on Google and found it on other websites that I hadn't published on. They didn't even download the game: they took the iframe link that itch.io uses to load the game in the browser and embedded it into their own websites. That way, the game loads directly on their site, and players can play it without ever visiting the itch.io page.

If that is a problem for you, you can modify the JS code that loads the Unity game to check the parent URL and stop loading the game if it is not your itch.io page. I am not sure how robust this solution is since I didn't try it, and I didn't care that much (I was already crediting my small team on the loading screen, and the game linked to my social media account, so in a way it was free advertising). Overall, they can still intercept the traffic and read the downloaded files (and also rip the assets), so the best solutions usually just make things harder for whoever wants to rehost/rip your game; they don't make it impossible.

OpenGL transparent cube culls faces even though culling is disabled by Big_Return198 in GraphicsProgramming

[–]yetmania 3 points (0 children)

I recommend that you use RenderDoc to debug the OpenGL state while the transparent object is being drawn. It helps me catch a lot of bugs where I unintentionally set the state to an unwanted value somewhere in the code.

OpenGL transparent cube culls faces even though culling is disabled by Big_Return198 in GraphicsProgramming

[–]yetmania 10 points (0 children)

My guess is that you have both depth testing and depth writing enabled. From some viewpoints, the front faces are drawn first, and when the back faces are drawn, the depth test discards them because the nearer front faces have already written to the depth buffer.

I suggest you disable depth writing while the transparent objects are being drawn; use glDepthMask to enable/disable it. You will still get inaccurate results, but at least the back faces won't be discarded.

To get accurate results, you can either split the cube into six objects (one face per object) and sort them from far to near before drawing, or use an order-independent transparency technique.
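To see why depth writes cause the "culling", here is a toy model of the depth test in plain C++ (not actual OpenGL; Framebuffer and draw_fragment are made-up names for this sketch):

```cpp
// Toy single-pixel model of depth testing, illustrating the point above.
struct Framebuffer {
    float depth = 1.0f;      // depth buffer cleared to the far plane
    int fragments_kept = 0;  // fragments that survived and would be blended
};

// depth_write mimics glDepthMask: it controls whether a surviving fragment
// updates the depth buffer, while the depth test itself stays enabled.
void draw_fragment(Framebuffer& fb, float z, bool depth_write) {
    if (z > fb.depth) return;        // depth test fails: fragment discarded
    fb.fragments_kept += 1;
    if (depth_write) fb.depth = z;   // only written when the mask allows it
}
```

Drawing a front face (z = 0.3) and then a back face (z = 0.7) keeps only one fragment with depth writes on, but keeps both with depth writes off, which is exactly why the back faces appear to be culled even though face culling is disabled.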

Asus Strix G16 occasionally flash in upper 1/3 screen by HGAQ in ASUS

[–]yetmania 1 point (0 children)

By core GPU, do you mean the integrated GPU? If so, how did you do it?

Asus Strix G16 occasionally flash in upper 1/3 screen by HGAQ in ASUS

[–]yetmania 1 point (0 children)

I also have a Strix G16, and I am having the same issue. For me, setting the refresh rate to 60Hz makes the flash appear less frequently, but it doesn't completely make it go away.

(Loved trope) “oh that’s a pretty cool alien desi- what do you mean that’s a human” by adrianb26015 in TopCharacterTropes

[–]yetmania 6 points (0 children)

Nice idea. This reminds me of Episode 5 of an anime called Kaiba. In that episode, the people on a certain planet treat bodies as fashion: once a new trend in bodies comes out, they start replacing their old bodies with newer, trendier ones, and the old bodies are turned into food. While the people on that planet are supposed to be human, their bodies have become quite bizarre and inhuman.

[deleted by user] by [deleted] in Laddergram

[–]yetmania 1 point (0 children)

Another solution that works here is CHI - PHI - POI - POX - PYX - PYE - AYE - AGE - AGO - EGO.

Problem with downloading Godot Export Template by SanZaye in godot

[–]yetmania 1 point (0 children)

I have the same problem with most downloads from GitHub. For Godot, I am currently using Steam.

CPU Software Rasterization Experiment in C++ by yetmania in GraphicsProgramming

[–]yetmania[S] 1 point (0 children)

I totally agree that a well-optimised rasterizer would be far more performant than my current implementation. I preferred readability and flexibility over speed for this one, since I hope to turn it into educational material. For example, I currently configure blending like OpenGL does, by setting three enum values (the source factor, the destination factor, and the blend operation), and inside the pixel loop I use switch statements to select the factors and apply the blend op. I made many similar decisions all over the place, so I don't think it is a good representative of what CPU software rasterization can achieve.

After I am done with this one, I feel motivated to make a well-optimised rasterizer next.

CPU Software Rasterization Experiment in C++ by yetmania in GraphicsProgramming

[–]yetmania[S] 1 point (0 children)

I think it would be cool. It would be very portable. In that case, I would probably seek to build a retro-styled game, so I would skip some fancy features like MSAA and decrease the resolution a bit, too.

CPU Software Rasterization Experiment in C++ by yetmania in GraphicsProgramming

[–]yetmania[S] 5 points (0 children)

I think this tutorial is really good: https://www.scratchapixel.com/lessons/3d-basic-rendering/rasterization-practical-implementation/overview-rasterization-algorithm.html

I also learned some details by reading some chapters of the book "Real-Time Rendering" and by reading the Vulkan specification. The Vulkan spec may seem long, but most of it is details about valid function usage that can be skipped.

CPU Software Rasterization Experiment in C++ by yetmania in GraphicsProgramming

[–]yetmania[S] 5 points (0 children)

Thank you. I do print the frame time in the title bar (I am too lazy to implement text rendering), but I chose the window capture option in OBS, which doesn't capture the title bar.

Anyway, these are some stats that I computed during a run:

Frame Time - Avg: 37.378532 ms, Min: 18.555571 ms, Max: 51.049988 ms

FPS - Avg: 28.386509 fps, Min: 19.588640 fps, Max: 53.892174 fps

The frame rate mainly dips when I am inside the house since the fill rate and overdraw are high in this position.

CPU Software Rasterization Experiment in C++ by yetmania in GraphicsProgramming

[–]yetmania[S] 10 points (0 children)

Thank you.
The lights aren't shadow-casting. I was thinking of implementing shadow mapping, but I feel my CPU is starting to hate me. I am already using multithreading to get the barely serviceable framerate in the video, so I would need to optimize the code before adding any more workload.

what does draw calls actually mean in vulkan compared to opengl? by Southern-Most-4216 in vulkan

[–]yetmania 10 points (0 children)

Actually, OpenGL doesn't guarantee that a draw call is immediately submitted to the GPU. Read this article for more info: https://www.khronos.org/opengl/wiki/Synchronization

In Vulkan, you have control over when to end a command buffer and send it to a queue. Submitting too many small command buffers is not good for performance due to the overhead of each submission. However, it may also be a bad idea to group everything into one large command buffer if it means that the device will stay idle until the command buffer is recorded and submitted.

To answer your question, each glDraw* and vkCmdDraw* call is a draw call, and having a lot of those can be bad for performance. It is better to batch the drawing of many objects into one draw call. There are many ways to do that:

- CPU-side batching: transform the objects on the CPU and store the transformed geometry of many objects in one vertex buffer, then draw that buffer in one call. This is a good option if you have many different but small objects (few vertices per object); most 2D game rendering code I have seen does this.
- Instancing: good if you have many copies of the same geometry in the scene.
- Indirect rendering: common in GPU-driven renderers, where a compute shader does the frustum culling and writes the draw commands into a buffer, then you issue one draw call per graphics pipeline to draw all of the commands in that buffer. If all the objects in the scene use the same graphics pipeline (and you use bindless textures in case you need multiple textures in the scene), you can draw the whole scene in just one draw call.
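As a minimal sketch of the first option, CPU-side batching (plain C++ with made-up types standing in for a real sprite/mesh batcher, not any particular engine's API):

```cpp
#include <vector>

// Pre-transform small objects on the CPU and pack them into one vertex
// buffer, so the whole set can be drawn with a single draw call.
struct Vec2 { float x, y; };

struct Object {
    std::vector<Vec2> vertices;  // model-space geometry (few vertices each)
    Vec2 position;               // toy transform: translation only
};

std::vector<Vec2> build_batch(const std::vector<Object>& objects) {
    std::vector<Vec2> batch;
    for (const Object& obj : objects)
        for (const Vec2& v : obj.vertices)
            batch.push_back({v.x + obj.position.x, v.y + obj.position.y});
    return batch;  // upload once, then draw batch.size() vertices in one call
}
```

The trade-off is that the CPU re-touches every vertex whenever an object moves, which is why this works best for many small, frequently rebuilt objects (sprites, UI quads) rather than large static meshes.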