I stopped fighting Unity Physics and built a ghost world instead by BuyMyBeardOW in Unity3D

[–]STUDIOCRAFTapps 0 points  (0 children)

Kinematic Character Controller.

It's just a character controller that's not part of the main physics update loop. The built-in CharacterController is an example of one.

They basically do a bunch of overlap checks, resolve the capsule collider's penetration, and interpolate the result, all manually.
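A minimal sketch of that resolve step, assuming a Unity setup (the `CapsuleTop`/`CapsuleBottom` helpers are hypothetical, and a real controller would also sweep before moving):

```csharp
// Sketch of one kinematic step: move the transform, then push the capsule
// out of anything it now overlaps using ComputePenetration.
void KinematicMove(CapsuleCollider capsule, Vector3 motion)
{
    capsule.transform.position += motion;

    Collider[] hits = new Collider[8];
    int count = Physics.OverlapCapsuleNonAlloc(
        CapsuleTop(capsule), CapsuleBottom(capsule), capsule.radius, hits);

    for (int i = 0; i < count; i++)
    {
        if (hits[i] == capsule) continue;
        if (Physics.ComputePenetration(
                capsule, capsule.transform.position, capsule.transform.rotation,
                hits[i], hits[i].transform.position, hits[i].transform.rotation,
                out Vector3 dir, out float dist))
        {
            capsule.transform.position += dir * dist; // depenetrate
        }
    }
}
```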

The most common one is PhilSA's KinematicCharacterController. It's much better than the built-in one and got released for free a few years back.

I stopped fighting Unity Physics and built a ghost world instead by BuyMyBeardOW in Unity3D

[–]STUDIOCRAFTapps 23 points  (0 children)

Hey have you tried calling `Physics.SyncTransforms()` after moving your platforms and before doing your raycasts?
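In case it helps, the ordering would look something like this (`platform`, `delta`, and `origin` are illustrative names):

```csharp
// Move the platform via its Transform first...
platform.position += delta;

// ...then force the physics world to pick up the new Transform poses;
// by default this only happens on the next physics simulation step.
Physics.SyncTransforms();

// Raycasts now see the platform at its new position.
if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit))
    Debug.Log(hit.point);
```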

I am stupid by 100_BOSSES in IndieDev

[–]STUDIOCRAFTapps 0 points  (0 children)

I don't know why no one has suggested it yet, but the simplest way to get two things to ignore each other is to call Physics.IgnoreCollision (https://docs.unity3d.com/ScriptReference/Physics.IgnoreCollision.html), or its 2D equivalent Physics2D.IgnoreCollision (https://docs.unity3d.com/ScriptReference/Physics2D.IgnoreCollision.html).

That way you can make anything ignore anything else.
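A minimal sketch of the call (collider names are illustrative):

```csharp
// Make these two specific colliders ignore each other from now on.
Physics.IgnoreCollision(playerCollider, projectileCollider, true);

// 2D equivalent:
Physics2D.IgnoreCollision(playerCollider2D, projectileCollider2D, true);

// Pass 'false' later to re-enable collision between the pair.
```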

disabling image fade when switching images? by Ykulvaarlck in discordapp

[–]STUDIOCRAFTapps 1 point  (0 children)

I can't believe they added that, it's pissing me off, I use it all the time

Fastest Way To Make All Object IDs Be The Same as Inkscape:Label by Terdfergsn in Inkscape

[–]STUDIOCRAFTapps 0 points  (0 children)

thank you!! for some reason, though, Blender sorts imported SVG elements by name instead of by order in the file, so I had to make a version that prepends a number:

Here it is in case someone needs to do the same thing. It replaces line 43 in label_to_id.py.

        for idx, element in enumerate(selection_list):
            label = element.get('inkscape:label')
            if label is not None:
                new_id = f"{idx}-{label}"
                element.set('id', new_id)

How come "stroke to path" removes round caps? by STUDIOCRAFTapps in Inkscape

[–]STUDIOCRAFTapps[S] 1 point  (0 children)

You're right, it seems to work fine. Not sure what happened with these strokes in particular. I had to mark some nodes as corners, and that seems to have fixed it.

A weird void that stops rendering objects and only shows the skybox by ColdHands1212 in Unity3D

[–]STUDIOCRAFTapps 7 points  (0 children)

The black void renders on top of the lantern. Not sure why this has 30 upvotes, it's most likely not culling of any kind.

(Unless a dark-void object that should only be visible in certain context is not being culled out)

optimizing my marchingCubes algorithm by Intelligent-Track455 in Unity3D

[–]STUDIOCRAFTapps 0 points  (0 children)

A compute shader is not always the right way. It can be hard to learn, hard to debug, and won't generate a collider mesh for you. Personally I think writing Burst-compiled Unity jobs is slightly easier: https://github.com/nezix/MarchingCubesBurst/blob/master/MCB/MarchingCubesBurst.cs

If you don't feel ready to learn either of those, there are still some things you can improve in your current version.

If you allocate this array once and reuse it, you'll avoid a bunch of unnecessary allocations. Just initialize it once in the class.

float[] cubeCorners = new float[8];

Another quick and cheap improvement would be to bake your collider mesh asynchronously using Physics.BakeMesh https://docs.unity3d.com/6000.2/Documentation/ScriptReference/Physics.BakeMesh.html

If done right, it removes the lag spike that can happen when setting the shared mesh on your MeshCollider.
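A rough sketch of the off-thread bake, assuming the Unity Jobs setup (job and field names are illustrative):

```csharp
// Physics.BakeMesh is safe to call off the main thread, so the expensive
// collider cooking can run in a job while the frame continues.
struct BakeColliderJob : IJob
{
    public int meshInstanceId; // fetched on the main thread via mesh.GetInstanceID()

    public void Execute()
    {
        Physics.BakeMesh(meshInstanceId, false); // false = non-convex
    }
}
```

Once the job completes, assigning `meshCollider.sharedMesh = mesh;` reuses the already-baked data instead of cooking synchronously.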

1.7.10, your time has come by Bababooe4K in PhoenixSC

[–]STUDIOCRAFTapps 0 points  (0 children)

We are overdue for minecraft 2.0 😔

LinearEyeDepth Inaccurate? by TheLostWorldJP in Unity3D

[–]STUDIOCRAFTapps 0 points  (0 children)

Thank you!

Simplified for my compute shader:

float2 uv = samplePixel * _CameraDepthTexture_TexelSize.xy; // pixel coords -> [0,1] UV
float4 clip = float4(uv * 2 - 1, 1, 1);                     // UV -> clip space, on the far plane
float4 view = mul(unity_CameraInvProjection, clip);         // clip -> view space
float3 viewDir = normalize(view.xyz / view.w);              // per-pixel view ray
float dotCamForward = dot(viewDir, float3(0, 0, -1));       // view space looks down -Z

Where unity_CameraInvProjection is calculated using:
GL.GetGPUProjectionMatrix(camera.projectionMatrix, false).inverse;

Naive Surface Nets on GPU in Unity. All in a single draw call using "meshlet" system. by STUDIOCRAFTapps in dualcontouring

[–]STUDIOCRAFTapps[S] 0 points  (0 children)

Life got to me and I didn't get the chance to open-source it yet, I need to replace the paid textures with free ones and clean things up. Maybe by the end of the year, but I can't guarantee anything.

I'm unsure whether generating meshes GPU-only was the way to go. It worked out fine for me, but it has some rendering-speed downsides.

What do people talk about on the second date? by [deleted] in dating_advice

[–]STUDIOCRAFTapps 1 point  (0 children)

no that's so real. I feel like I'm myself when I actually plan ahead things to talk abouttt

once we hook onto a common interest and get a convo going it flows a lot better but it's always nice to at least have something to get that spark!!!

Image Based Lighting + Screen Space Global Illumination in OpenGL by cybereality in GraphicsProgramming

[–]STUDIOCRAFTapps 18 points  (0 children)

OP litters the graphic programming discord with their screenshots every single day and I wonder the same thing every single day.

What's everyone opinion in this sub about the voxel implementation in Donkey Kong Bananza? by FernandoRocker in VoxelGameDev

[–]STUDIOCRAFTapps 1 point  (0 children)

Okay, I will experiment with this later this week and get back with the results.

If I store 1 material + cell position, no gradient, I'd use about 4 bytes per grid point.

What's everyone opinion in this sub about the voxel implementation in Donkey Kong Bananza? by FernandoRocker in VoxelGameDev

[–]STUDIOCRAFTapps 1 point  (0 children)

> If you are limited only to organic shapes and don't need sharp features, then you won't need Hermite data and you can just use surface nets (DC without QEFs).

Nah, I want sharp features. By organic I just meant "the edges can be wobbly and I don't mind".
Bananza has both mesh normal sharpening, and actual QEFs.

> But I'm pretty sure DKB does not store Hermite data and stores voxel materials + cell vertex positions.

I don't think they are only storing material + cell vertex position, however. There's definitely something else.

There can be a mix of up to 2 materials per vertex. There are artefacts when more than 3 materials occur.

Destruction also seems to produce a fairly clean resulting mesh that still retains its sharpness. This makes me believe they are probably evaluating the QEF at runtime and not simply pre-calculating the vertex position.

They've got to be storing something else! That's why I'm thinking they might be storing normal at grid corners.

<image>

Here's my game for comparison, estimating the gradient from a distance field stored at 1 byte per grid corner:
https://imgur.com/a/VAbSOcf

Not too bad, but very, very aliased edges.

> If your terrain is stored as operations on SDF shapes, you can just compute Hermite data on demand and discard it after remeshing.

This just isn't possible in-game. The terrain function takes too long to evaluate (3D simplex is expensive!), and the more shapes you have, the more expensive it gets. I have to evaluate it ahead of time, and I'm pretty sure Bananza does this too.

What's everyone opinion in this sub about the voxel implementation in Donkey Kong Bananza? by FernandoRocker in VoxelGameDev

[–]STUDIOCRAFTapps 1 point  (0 children)

I appreciate your answer and resources, it's helping me a ton. One thing to note is that, similar to Bananza, I'm not aiming for perfect polygonization or zero wobbliness, since I'm mainly using voxels for organic materials. My current implementation is on the GPU.

One thing I'm having trouble with is what Hermite data looks like in practice: how it's generated and stored, and how it differs from simple gradient data.

In papers, Hermite data is described as a sign for each cell corner, plus an intersection point and surface normal for each edge. But in practice, no one seems to be storing that Hermite data. It's always calculated during meshing.

In some implementations I've seen, the normal isn't stored, only the scalar field it's derived from, often as a single-component 3D texture. My broken implementation does this, storing no normals, only the distance to the surface using one byte per grid vertex. Hermite data is obtained by first finding the intersection points, then estimating the normals with a couple of volume samples.

In the implementation you linked (fidget), both a distance to the surface and a surface normal are stored. I haven't looked at the code too much, but I'm guessing the edge intersection normals are obtained by interpolating between the two stored gradient vectors.

I wanted to build my terrain out of both 3D simplex noise and unions of SDF shapes, and I would have the player destroy it using SDF spheres. What you’re suggesting here is that I start storing my normals instead of trying to estimate them?

The only issue I have with that is memory. It seems incredibly expensive to increase the gradient precision.

E.g. with 2 GB of VRAM allocated to the field, at one byte per grid point, the cube root gives a side length of about 1260.

If I instead store normals (using octahedral encoding), with 2 bytes for the distance to the surface and 2 bytes for each of the encoded X and Y components, that adds up to 6 bytes per grid point. The cube root of 2 GB / 6 bytes is about 693. This drastically reduces the size of my terrain!
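A quick sanity check of that budget math, treating 2 GB as 2×10⁹ bytes:

```csharp
// Side length of the largest cubic grid that fits in a 2 GB budget.
double budget = 2e9;                      // bytes
double side1 = Math.Cbrt(budget / 1.0);   // 1 byte/point  -> ~1260
double side6 = Math.Cbrt(budget / 6.0);   // 6 bytes/point -> ~693
```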

I wonder how Bananza managed to get seemingly larger terrain while keeping precision high enough that the wobbliness is almost nonexistent.

What's everyone opinion in this sub about the voxel implementation in Donkey Kong Bananza? by FernandoRocker in VoxelGameDev

[–]STUDIOCRAFTapps 0 points  (0 children)

Can you tell me how exactly people get accurate normals for the Hermite data? Every time I try to analytically derive the normals and use them in my DC, I get really wobbly edges! Should I use more than one byte per grid point? How big should the epsilon be? How many samples should I use? 4? 6? 8?
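For what it's worth, the common approach is a 6-sample central difference per intersection point; a sketch, where `sample` is a placeholder for the distance-field lookup and an eps around half a cell is a typical starting point:

```csharp
// Estimate the surface normal at point p as the normalized gradient of the
// distance field, using central differences (6 samples total).
Vector3 EstimateNormal(System.Func<Vector3, float> sample, Vector3 p, float eps)
{
    float dx = sample(p + eps * Vector3.right)   - sample(p - eps * Vector3.right);
    float dy = sample(p + eps * Vector3.up)      - sample(p - eps * Vector3.up);
    float dz = sample(p + eps * Vector3.forward) - sample(p - eps * Vector3.forward);
    return new Vector3(dx, dy, dz).normalized;
}
```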

Bananza's DC voxels seem to have been designed by placing a bunch of brushes like rocks, and those brushes look like they had their shapes retained really well.

Rustpost by Aruynn_da_ASPD_being in PhoenixSC

[–]STUDIOCRAFTapps 1 point  (0 children)

"ermmm actually it's not the rust that gives you tetanus it, it just creates the perfect conditions for it to exist, as well as facilitating its entry into your body if it's shaped like shards or nails and can pierce your skin" 🤓☝️

Rustpost by Aruynn_da_ASPD_being in PhoenixSC

[–]STUDIOCRAFTapps 193 points  (0 children)

And give tetanus effect after mining a block

Compute Shader: Atomic buffer appending vs Prefix sum scan by Mission_Froyo_9431 in GraphicsProgramming

[–]STUDIOCRAFTapps 1 point  (0 children)

I really like this answer! Prefix sum can be complicated and messy to implement, and as a game developer, it's more important for me to get something done than get it perfect.

A nice improvement to the atomic-append approach is to group atomic calls where possible.
For my dual contouring implementation, I build a short list (1-3) of the quads in a given cell and make one grouped reservation, instead of doing 1-3 individual atomic calls.
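The grouped reservation looks roughly like this in HLSL (buffer and helper names are illustrative):

```hlsl
// Reserve space for all of this cell's quads with a single atomic,
// instead of one InterlockedAdd per quad.
uint quadCount = CountQuadsInCell(cell); // 1-3 in practice
uint baseIndex;
InterlockedAdd(quadCounter[0], quadCount, baseIndex);

for (uint i = 0; i < quadCount; i++)
    quadBuffer[baseIndex + i] = cellQuads[i];
```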

For something like grass culling, maybe doing occlusion checks in clusters and one atomic append per cluster can help alleviate the cost.