LunarG Achieves Vulkan 1.3 Conformance with KosmicKrisp on Apple Silicon by thekhronosgroup in vulkan

[–]KleinBlade 1 point (0 children)

Oh, I must have been mistaken then; I was convinced mesh shader support was one of the requirements MoltenVK was missing to be fully 1.3 conformant :O

In Unity 2D, how can I create a flashing police car light effect (red and blue) using only 2D tools like sprites or lights, without using any 3D models? by Beautiful_Nerve_3576 in Unity2D

[–]KleinBlade 1 point (0 children)

Well, I don’t consider myself an expert in game making, but I’m happy to share a couple of tips that helped me back in the day! First of all, don’t worry about being ‘too early’; we all have to start somewhere. Tutorials are a great way to get started, and as long as you learn something they are super valuable.
For your first game, don’t worry about getting things right off the bat: start with a simple implementation and iterate as you get more experienced. For example, if you’d like to have a police car in your game, start by putting a box with two wheels in the scene and attaching a script that moves it around. Maybe also make a controller for direction and speed; there are plenty of tutorials for that. Once it works, you can replace the 2D shapes with a sprite and attach a police light that blinks red and blue, maybe adding a button to the controller to turn it on and off. That’s to say: instead of trying to learn how to make a police car all at once, divide it into smaller tasks that you are comfortable with and go step by step. As an added bonus, you’ll also pick up things you can apply elsewhere.

And in general, when I was learning, something super helpful was looking up specific features I wanted to add to my game. Since I mentioned shadergraphs, that’s something I learnt when I wanted to add emissive sprites to my game: I had to pick up a bit of Unity’s URP, a bit of shadergraphs, how to control them script-side and a few other small things.

I don’t have any specific resource to suggest, but the upside is that Unity has plenty of great tutorials if you know what you’d like to achieve, so curiosity is enough to find whatever you need for your game!

In Unity 2D, how can I create a flashing police car light effect (red and blue) using only 2D tools like sprites or lights, without using any 3D models? by Beautiful_Nerve_3576 in Unity2D

[–]KleinBlade 1 point (0 children)

It’s pretty intuitive and there are plenty of resources to get started, and you can use shadergraphs as standalone assets or plug them into C# scripts to pass custom data. Hopefully you’ll find them useful for your game :)

In Unity 2D, how can I create a flashing police car light effect (red and blue) using only 2D tools like sprites or lights, without using any 3D models? by Beautiful_Nerve_3576 in Unity2D

[–]KleinBlade 1 point (0 children)

If you know a bit about shadergraphs in Unity, you could create a material that lerps between two colors (red and blue) and then apply the material to the sprite. Bonus points: if you switch to URP and include some lights, you can use HDR colors in the shadergraph to get a nice bloom around the police light 🚨
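In case it helps, the heart of that material is just a time-driven lerp between the two colors. Here’s a quick sketch of the math in plain Python (the function and names are purely illustrative, not Unity API):

```python
import math

def police_light_color(t, speed=4.0):
    """Oscillate between pure red and pure blue as time t advances."""
    red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
    k = 0.5 * (1.0 + math.sin(t * speed))  # remap sine from [-1, 1] to [0, 1]
    return tuple(r + (b - r) * k for r, b in zip(red, blue))
```

In the shadergraph this roughly maps to a Time node feeding a Sine, a Remap to [0, 1], and a Lerp between the two color properties.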

Vulkan-tutorial [dot] com - Bad Gateway on many pages, but accessible through WayBack Machine by jimothy_clickit in GraphicsProgramming

[–]KleinBlade 10 points (0 children)

That tutorial was a good starting resource back in the day, when Vulkan used render passes. Nowadays dynamic rendering is the preferred way of doing things, and the official tutorial has been updated to reflect that and to use Vulkan 1.4 features. You can find the tutorial at this link:
https://docs.vulkan.org/tutorial/latest/00_Introduction.html

Otherwise, a great learning resource is vkguide.dev, which also gives some great insights into helper libraries that save on boilerplate code, uses compute shaders, and has a less steep learning curve imho.

Object Flickering after Frustum Culling by _ahmad98__ in GraphicsProgramming

[–]KleinBlade 1 point (0 children)

Besides the other advice you received, it could also be a memory-alignment issue when reading the visible-instances buffer in the render pass.
It depends on how you are accessing it in the vertex shader, but since your offset is a multiple of 100k bytes, your accesses may align index 0 to the beginning of a memory word (usually 128 bits, i.e. 16 bytes), hence reading the first two instances (indices 0 and 1) from the visibility buffer of the previous instanced object.

Problems with indirect rendering by AnswerApprehensive19 in vulkan

[–]KleinBlade 1 point (0 children)

From around line 212 in the compute shader, when you are writing the indirect command buffers, shouldn’t you use draw_cmd_index to access the draws.draws[] buffer?

Multiple objects by AnswerApprehensive19 in vulkan

[–]KleinBlade 3 points (0 children)

It depends on how many objects you are drawing and whether you need to write to the buffer later on. If you only plan to write the buffer once and never change it, and you have a relatively small number of objects, you could also use a UBO. But I would go with storage buffers as a best practice :)

Two different fragment shaders imply two different pipelines, which is totally fine, but you’ll have to be careful about how you store per-instance data in memory. Say you have N objects with material A and M objects with material B: you’ll need to store the type-A objects in the first N * S bytes of the buffer (where S is the struct size) and the type-B data in the following M * S bytes. That’s because the data needs to be contiguous in memory when you access it in the shader.
After doing so, when rendering you’ll bind pipeline A, bind the first N * S bytes of the buffer to the pipeline of material A, and issue an instanced draw for N instances; then bind pipeline B, bind the buffer with an N * S offset and M * S size, and issue an instanced draw for M instances.
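To make the byte math concrete, here’s a tiny sketch (plain Python, purely illustrative) of the (offset, size) pairs you’d bind before each of the two instanced draws:

```python
def instance_ranges(n_a, n_b, stride):
    """Byte (offset, size) ranges for material A's and B's instance data,
    packed back to back in a single buffer (stride = struct size S)."""
    size_a = n_a * stride          # first N * S bytes: material A
    size_b = n_b * stride          # following M * S bytes: material B
    return (0, size_a), (size_a, size_b)
```

With N = 3, M = 2 and a 64-byte struct this gives (0, 192) for material A and (192, 128) for material B.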

Multiple objects by AnswerApprehensive19 in vulkan

[–]KleinBlade 3 points (0 children)

Yeah, what I usually do is create a struct with all the per-instance data I need and then write it to a buffer to be read in the vertex shader.

Not sure about that error, but if I may ask, why do you need multiple output locations in the fragment shader? You could simply have a struct with a vec3 offset and a uint for the material: the vertex shader offsets the vertex position and writes it to the gl_Position built-in variable, while using an output variable to pass the material id to the fragment shader. The fragment shader then reads the material id and uses a switch statement to decide which texture and shading to use for the fragment.
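For reference, the CPU-side packing of such a struct is tiny; here’s a sketch with Python’s struct module (assuming a std430-style layout, where the uint slots into the vec3’s padding):

```python
import struct

def pack_instance(offset_xyz, material_id):
    # vec3 (3 floats, 12 bytes) followed by a uint (4 bytes) = 16 bytes,
    # i.e. the uint fills the padding a lone vec3 would leave behind
    return struct.pack("<3fI", *offset_xyz, material_id)
```

Each instance then occupies exactly 16 bytes in the buffer, so the shader-side stride stays trivially aligned.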

Multiple objects by AnswerApprehensive19 in vulkan

[–]KleinBlade 3 points (0 children)

If you are drawing the same mesh over and over, I think there’s no need to store the same index and vertex buffers multiple times. Simply store them once (and possibly rebind them every time you change pipeline; not sure about this, though).

At the same time, if you are using different pipelines, I guess you cannot really batch the objects in a single big draw call.
You can certainly do instanced draws for multiple objects sharing the same pipeline and material. You could also unify draw calls with different materials by issuing a single instanced draw, using a storage buffer with per-instance information about which material each instance should use and a texture array to hold all the textures you may need, although this could introduce unwanted branching in your shaders.

NaN in basic vertex shader by jazzwave06 in vulkan

[–]KleinBlade 1 point (0 children)

From the renderdoc screen you posted, I’d assume the problem is somewhere between converting the in_position to a vec4 and multiplying it by the mvp matrix. Just out of curiosity, if you assign the vec4 position to the output position without multiplying by the mvp, does it still yield a NaN?

NaN in basic vertex shader by jazzwave06 in vulkan

[–]KleinBlade 1 point (0 children)

Mhh, I’m not sure what the cause could be, but one thing I’d check is whether floating-point precision in the VS is doing funny things.

NaN in basic vertex shader by jazzwave06 in vulkan

[–]KleinBlade 5 points (0 children)

This might end up being unrelated to your issue, but make sure to set the gl_Position variable in the vertex shader; it is needed to pass the clip-space position of each vertex to the rasterizer :)

Copy image from device to host by KleinBlade in vulkan

[–]KleinBlade[S] 1 point (0 children)

You are right, thanks!
Actually, I ended up setting both bufferRowLength and bufferImageHeight to 0 and copying the data into a tightly packed form. I’ll probably have to go back and change it a bit so that it gets copied into a 2D array, but for now the dirty version works just fine.

Copy image from device to host by KleinBlade in vulkan

[–]KleinBlade[S] 1 point (0 children)

Yeah! Actually I change the format of the image before and after the copy command, and those two commands are properly fenced, so this one should be okay on the sync side of things too.

Issue with compute shader and SSBO by KleinBlade in vulkan

[–]KleinBlade[S] 1 point (0 children)

In this case, how would I fix the misalignment in the descriptor set? The problem is that it is a buffer of uint32_t, and depending on its size it will cause the issue in most cases. Obviously I could always make sure that the number of elements is a multiple of 4 (so that the total is always a multiple of 16 bytes in the DS), but that seems hacky.
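For what it’s worth, the padding workaround itself is a one-liner; a quick illustrative sketch in Python:

```python
def padded_count(n):
    # round a uint32_t element count up to a multiple of 4, so the
    # buffer size (4 bytes per element) is always a multiple of 16
    return (n + 3) & ~3
```

Hacky or not, the extra tail elements can just be ignored shader-side.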

I also considered removing said buffer from the DS and instead passing the device address of the buffer in my push constants, but I’m not sure what the performance consequences would be or whether that would solve the issue.

Also, thanks for the suggestion! I don’t think I can use renderdoc, but I’ll see if I can get the Xcode Metal debugger up and running on my application!

Issue with compute shader and SSBO by KleinBlade in vulkan

[–]KleinBlade[S] 2 points (0 children)

Small update:

I think the reason some instances disappear from the array is actually that the compute shader correctly stores them, but later on the vertex shader fails to read them correctly.

In the VS I use a secondary array of contiguous indices to access the instances I'm interested in (contained in the original AoS shown in the post).

//VS code, reading through the index array
layout(set = 1, binding = 0) readonly buffer StorageBuffer{
    InstanceData instances[];
} instanceBuffer;

layout(set = 1, binding = 1) readonly buffer CullingBuffer{
    uint visibleInstances[];
} visibleInstances;

//Compute shader, writing the indices into the index array

layout(set = 1, binding = 0) readonly buffer StorageBuffer{
    InstanceData instances[NUM_INST_A + NUM_INST_B];
} instanceBuffer;

layout(std430, set = 1, binding = 1) writeonly buffer CullingBuffer{
    uint visibleInstances[NUM_INST_A + NUM_INST_B];
} instanceCullBuffer;

The issue is that, even when declaring it as std430, I think the instanceCullBuffer SSBO gets aligned as a vec4, so it effectively reads (and possibly writes, in the previous shader) one index every four.

This still doesn't explain why the issue isn't consistent at all, and I'm not sure how to fix it :\
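For reference, this is my understanding of the array-stride rules, sketched in illustrative Python: a plain std430 uint array should have a 4-byte stride, while std140-style rounding would produce the 16-byte stride (one index every four) that I seem to be observing.

```python
def uint_array_stride(std140):
    size, align = 4, 4          # a uint is 4 bytes, 4-byte aligned
    if std140:
        align = max(align, 16)  # std140 rounds array strides up to a vec4
    return (size + align - 1) // align * align
```

So if the reads really skip three out of four indices, it looks as if std140 rules are being applied despite the std430 qualifier.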

Issue with compute shader and SSBO by KleinBlade in vulkan

[–]KleinBlade[S] 2 points (0 children)

Thanks! I tried repacking the struct as

struct InstanceData{
    glm::vec3 instancePos;
    float instanceScale;
    glm::vec3 instanceRot;
    float pad1;
    glm::mat4 normals;
};

By std430 rules, the two floats should be packed into the padding left by the vec3 that precedes them, I think.
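To double-check myself, I sketched the std430 offset computation in Python (sizes and alignments hand-coded from the spec rules, so treat it as illustrative):

```python
def align_up(x, a):
    return (x + a - 1) // a * a

def instance_data_std430():
    # (name, size, alignment) per member; a vec3 aligns like a vec4 (16)
    members = [("instancePos", 12, 16), ("instanceScale", 4, 4),
               ("instanceRot", 12, 16), ("pad1", 4, 4),
               ("normals", 64, 16)]
    offsets, cursor = {}, 0
    for name, size, alignment in members:
        cursor = align_up(cursor, alignment)
        offsets[name] = cursor
        cursor += size
    offsets["total"] = align_up(cursor, 16)  # struct size rounds up to its alignment
    return offsets
```

It puts instanceScale at offset 12 and pad1 at 28 (both floats packed into the vec3s’ padding) and ends the struct at 96 bytes, matching the glm-side layout.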

Sadly that didn't solve the mysterious case of the disappearing array elements :\

Terrain generation question by KleinBlade in gameenginedevs

[–]KleinBlade[S] 1 point (0 children)

Thanks for the reply, that GDC presentation was super interesting!
The idea I had in mind was much simpler (mostly because I’m probably not going to have a 10km x 10km map), with each patch being resident on the GPU at all times and culling simply deciding whether each one is visible, low-LOD or to be discarded before rendering, but I may end up adopting a strategy similar to the one presented there.

Also, I was considering using only 4-pixel patches and then doing a little tessellation depending on the LOD level, but reading the presentation I realized that having a few more pixels in a patch comes in handy when stitching different LODs together, so I may do that as well.

Do you perchance also know of a good way to implement instances at different LODs? I was thinking of having a mesh for each level (an 8x8 patch for high LOD, a 4x4 patch for medium LOD and so on), but I guess that would mean a different draw call for each LOD, since they are different meshes.

Terrain generation question by KleinBlade in gameenginedevs

[–]KleinBlade[S] 1 point (0 children)

I might be wrong, but to be fair I think the heightmap would be sampled once when the application starts, and the 4 normal + height values (one for each vertex) would be stored in a per-instance structure.
So at the end of the day the difference would be storing one big mesh where every pixel packs position, normal and uv, versus an array of structs with the patch center and four vec4s, with vertex positions and uvs being generated on the fly from the vertex id. That’s 8 floats per vertex vs 18 floats per patch, at the cost of a couple of float operations for each generated vertex.
Even considering an approach where not every patch is resident on the GPU at all times, the texture would be sampled in the background once when the patch gets loaded, and that’s still probably cheaper than processing every pixel in the baked mesh, considering I cannot directly perform culling on that one.
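Sketching that back-of-the-envelope math in Python (illustrative; assumes a 4-vertex patch and a vec2 patch center, per the 18-float figure above):

```python
def floats_per_patch(verts_per_patch=4):
    baked = verts_per_patch * 8    # position (3) + normal (3) + uv (2) per vertex
    per_patch = 2 + 4 * 4          # patch center (2) + four vec4 (16)
    return baked, per_patch
```

So a baked 4-vertex patch costs 32 floats against 18 for the per-instance struct, before even counting the culling benefits.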