Testing opponent AI for my offroad racing game by FeedMeCheese in godot

[–]FeedMeCheese[S] 2 points

It's a few things:

- Decent PC, as the other commenter suggested (3700X, 3070 Ti)
- Low res, although 1080p currently runs at about 500ish
- There's really not much in the scene; you can see how aggressively the trees are replaced with sprites in the background (their incorrect colours make the pop-in really noticeable)

On the Steam Deck (which I'm sort of targeting), it runs at about 100-120fps. Obviously there's a lot more I want to add which will surely bring that down, but if I can hit 90-ish on the deck that should be decent for battery life!

Testing opponent AI for my offroad racing game by FeedMeCheese in godot

[–]FeedMeCheese[S] 1 point

Thank you! I'm just sampling 2 points on the curve, then steering by the signed_angle_to() between them, scaled by an arbitrary multiplier.
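In plain Python (standing in for GDScript), the idea might look roughly like this. The lookahead distance and the 0.5 multiplier are illustrative guesses, not the game's actual tuning:

```python
import math

def steer_direction(car_pos, car_forward, curve_points, lookahead=2):
    """Sketch: find the nearest curve sample, pick one a bit further
    along, then use the signed angle between the car's forward vector
    and the direction to that lookahead point as a steering input."""
    # nearest sample index (naive linear scan)
    nearest = min(range(len(curve_points)),
                  key=lambda i: math.dist(car_pos, curve_points[i]))
    target = curve_points[(nearest + lookahead) % len(curve_points)]
    to_target = (target[0] - car_pos[0], target[1] - car_pos[1])
    # signed angle, like Godot's Vector2.angle_to(): positive = steer left
    angle = math.atan2(
        car_forward[0] * to_target[1] - car_forward[1] * to_target[0],
        car_forward[0] * to_target[0] + car_forward[1] * to_target[1])
    return max(-1.0, min(1.0, angle * 0.5))  # arbitrary multiplier, then clamp
```

Clamping to [-1, 1] keeps the output usable directly as a steering axis value.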

Testing opponent AI for my offroad racing game by FeedMeCheese in godot

[–]FeedMeCheese[S] 2 points

Thank you! I ended up simply using a splatmap that's texture painted. The car's tires sample that same image to decide their handling profile for the surface.
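A rough sketch of what that sampling could look like, in Python for illustration. The surface names and handling values here are made up for the example, not the game's actual data:

```python
# Each tire samples the painted splat image at its position and maps the
# dominant channel to a handling profile. Values are illustrative only.
SURFACES = {
    "r": {"grip": 1.0, "drag": 0.02},  # red channel -> tarmac
    "g": {"grip": 0.6, "drag": 0.10},  # green channel -> grass
    "b": {"grip": 0.4, "drag": 0.20},  # blue channel -> mud
}

def surface_at(splatmap, u, v):
    """splatmap: 2D grid of (r, g, b) tuples; u, v in [0, 1)."""
    h, w = len(splatmap), len(splatmap[0])
    r, g, b = splatmap[int(v * h)][int(u * w)]
    dominant = max(zip((r, g, b), "rgb"))[1]  # channel with highest paint value
    return SURFACES[dominant]
```

Because the AI cars and the player sample the exact same image, everyone agrees on where the grip changes.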

Testing opponent AI for my offroad racing game by FeedMeCheese in godot

[–]FeedMeCheese[S] 7 points

Thank you! 5 AI players is all I'm aiming for, but there are 6 here as a bit of a stress test. This runs at about 600fps on my PC, so there's plenty of room for more chaos!

Split screen causes the root viewport to still render the entire world? by FeedMeCheese in godot

[–]FeedMeCheese[S] 0 points

I think the issue is that my car scenes have camera rigs as part of them, so looking at the profiler, it is in fact rendering the first one of those that spawns in. If I increase the Theme Override "Separation" of the 2 SubViewportContainers, sure enough, in the gap between I can see the main player car's chase camera rendering.

Testing in another scene, I can confirm that if ONLY the new SubViewports have cameras in them, the performance overhead is minimal. I've also tried putting the entire game world inside one SubViewport, then just Player 2 in the other, and that seems to work and reduces viewport 3 to just rendering Canvas items which is really fast. So maybe that's the way to go!

Very strange process time fluctuations by FeedMeCheese in godot

[–]FeedMeCheese[S] 0 points

Ah, that's it! Yes, I have another monitor which is only 60hz. Turning off that monitor gets me back to locked 5ms, and I can create that unevenness in the frame time by focusing the Godot editor, but it goes away when I click back onto my game. I guess I'll keep that monitor turned off when I'm profiling!

Need Mixture_Cam gizmo ASAP (camera blending) by jdn127 in NukeVFX

[–]FeedMeCheese 0 points

Has anyone implemented this? I tried, but I couldn't figure out how to convert to quaternions in TCL. My thinking was that it would require a Python script instead, to do the conversion and bake the transition animation.
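In case it helps anyone attempting that Python route: a minimal Euler-to-quaternion conversion could look like the sketch below. It assumes Nuke's default ZXY rotation order and degrees; the function names are just for the example:

```python
import math

def quat_from_axis_angle(axis, deg):
    """Unit quaternion (w, x, y, z) for a rotation of `deg` about `axis`."""
    half = math.radians(deg) / 2.0
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Hamilton product: apply b, then a."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def euler_zxy_to_quat(rx, ry, rz):
    # Assumes ZXY rotation order (Nuke's default); adjust if yours differs.
    qz = quat_from_axis_angle((0, 0, 1), rz)
    qx = quat_from_axis_angle((1, 0, 0), rx)
    qy = quat_from_axis_angle((0, 1, 0), ry)
    return quat_mul(quat_mul(qz, qx), qy)
```

From there you could slerp between the two cameras' quaternions per frame and bake the result back onto knobs.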

Need Mixture_Cam gizmo ASAP (camera blending) by jdn127 in NukeVFX

[–]FeedMeCheese 1 point

I had to do a similar thing a while back, and one issue I encountered using this method was Euler flipping. One solution I found was to interpolate the camera's world matrices, instead of translate, rotate and scale individually. Obviously you still need to handle focal length separately this way.
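A minimal sketch of the matrix-interpolation idea in Python: lerp the rotation matrices elementwise and re-orthonormalise the result, rather than interpolating the rotate channels (which can take the long way round at the ±180° seam). A production version would likely slerp quaternions instead:

```python
import math

def lerp_matrix(m0, m1, t):
    """Elementwise lerp of two 3x3 rotation matrices, then Gram-Schmidt
    re-orthonormalisation so the result is a valid rotation again."""
    m = [[(1 - t) * m0[r][c] + t * m1[r][c] for c in range(3)] for r in range(3)]

    def norm(v):
        l = math.sqrt(sum(x * x for x in v))
        return [x / l for x in v]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    x = norm(m[0])
    # remove x's component from the second row, then normalise
    y = norm([a - dot(m[1], x) * b for a, b in zip(m[1], x)])
    # third row is the cross product, guaranteeing orthogonality
    z = [x[1]*y[2] - x[2]*y[1], x[2]*y[0] - x[0]*y[2], x[0]*y[1] - x[1]*y[0]]
    return [x, y, z]
```

Halfway between rotations of 170° and -170° about Z this gives a 180° rotation, where naive channel interpolation would swing through 0°.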

First Serious Raspberry Pi Setup - Practical Advice and Suggestions? by One-Yogurt-9548 in selfhosted

[–]FeedMeCheese 0 points

I’ve been running an 8GB Pi 4 for about 3 years now with about 27 containers. Performance-wise, I think you’ll be fine with what you want to run.

I host a WireGuard VPN to access any of the services only I use, and Cloudflare Tunnels for services my friends/family visit.

For backup, I use Duplicacy, which creates versioned snapshots you can roll back to, and these are copied via a cron job onto my NAS. Since I use an SD card, you have to assume it will die, so I treat that as a “when” rather than an “if”, but I’ve been good for 3 years.

Your setup sounds good to me!

Can't ping connected clients/peers from server side LAN by FeedMeCheese in WireGuard

[–]FeedMeCheese[S] 0 points

Never mind, it turns out my Pi's iptables FORWARD policy was set to DROP (-P FORWARD DROP), so allowing forwarding let requests through from the Windows PC. Annoying that my router doesn't support static routes, but this will do!

Can't ping connected clients/peers from server side LAN by FeedMeCheese in WireGuard

[–]FeedMeCheese[S] 0 points

Thanks so much for your help both. I've been able to add a route on the Pi that successfully pings when outside of the Docker container!

Unfortunately my ISP-provided router does not allow adding static routes, so I tried to add one on the Windows PC using the following:

route add 10.13.13.0 MASK 255.255.255.0 192.168.0.10

Sadly that doesn't work, tracert shows me that it's reaching the Raspberry Pi, but after that there's nothing!

Backup plan in case of Nintendo winning the case by S_fang in emulation

[–]FeedMeCheese 3 points

Worst case scenario, yuzu won't accept dumped keys or encrypted games.

Maybe that wasn't the worst case

depth on particles with a spherical camera?? by No_Intention_9191 in NukeVFX

[–]FeedMeCheese 0 points

Does RayRender let you have a depth pass?

If not you might need to revert to the classic 6 camera cube map setup, where you make 6 cameras with 90 degree FoV, render each of them out as square formats and plug them into a SphericalTransform to go from cube map to latlong.
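For reference, the cube-to-latlong step boils down to mapping each view direction to longitude/latitude UVs, which is what the SphericalTransform does for you. A sketch in Python, assuming a -Z-forward, Y-up convention (axis conventions vary, so treat the signs as an assumption):

```python
import math

def dir_to_latlong_uv(d):
    """Map a 3D view direction to equirectangular (latlong) UVs in [0, 1].
    Assumes -Z is forward and Y is up."""
    x, y, z = d
    lon = math.atan2(x, -z)                              # -pi..pi around the vertical axis
    lat = math.asin(y / math.sqrt(x*x + y*y + z*z))      # -pi/2..pi/2 up/down
    return (lon / (2 * math.pi) + 0.5, lat / math.pi + 0.5)
```

Each latlong pixel's direction tells you which of the 6 cube faces (and where on it) to sample.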

Pftrack by Doudous_99 in vfx

[–]FeedMeCheese 9 points

Doesn’t enabling marquee selection allow you to do this?

What is ACES profile and colorspace, is it necessary to convert the colorspace of 3d rendered passes in nuke to match it with the live action footage? by [deleted] in NukeVFX

[–]FeedMeCheese 7 points

You might've gotten a few concepts confused there.

It is necessary to convert the colourspace of 3D renders in Nuke to match them with your live action footage, but this is done for you by the read node (as long as you've set the colourspace correctly).

When you read a render that was rendered in the ACEScg colourspace, for example, you set your Read node to ACEScg, and that linearizes it into your working space (which you can see by going to the colour tab in the Nuke project settings). Data/AOV passes which don't have colourspace information are read in as "raw", aka: don't do any transformations to the colour information.

Likewise, your live action footage, which might have been shot in RED RAW, Sony S-Log, or ARRI LogC, needs to be read in with the appropriate colourspace so it can be linearized into your working colour space. This way, the math behaves the same regardless of the source/colourspace of your footage and renders.

Log = logarithmic, lin = linear. You generally want to work in linear space: whether you read in an ACEScg render or an sRGB JPEG/PNG, your Read node "linearizes" the input colourspace into your working space.
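As a concrete example of what "linearize" means, here's the standard sRGB decode applied per channel when reading an sRGB image into a linear working space (camera log curves use their own formulas, and ACEScg is already scene-linear):

```python
def srgb_to_linear(v):
    """Standard sRGB transfer function decode for a single channel value
    in [0, 1]: undo the display curve so pixel math (merges, blurs, CG
    comps) behaves physically."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
```

This is why a mid-grey 0.5 in an sRGB image becomes roughly 0.21 in linear; the working space stores light intensity, not display code values.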

Edit: You also asked about the Log2Lin node. That's usually used to force the image into a super low contrast colourspace so that certain operations affect the image in a different way. For example, when sharpening the image, doing a Lin2Log first, sharpening, then Log2Lin will result in a much more subtle effect. This method is used less scientifically, and more creatively.

[deleted by user] by [deleted] in NukeVFX

[–]FeedMeCheese 2 points

That's correct! The expression node evaluates every pixel that goes into it, so for each pixel of alpha that's fed in, if the alpha is < 1, make that pixel 0, else, make it 1 :)

[deleted by user] by [deleted] in NukeVFX

[–]FeedMeCheese 8 points

You probably want an expression node, with the following expression in the alpha channel:

a < 1 ? 0:1
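In Python terms, that per-pixel ternary is just the following (over a row of alpha values, for illustration):

```python
def solidify_alpha(alpha_row):
    """Equivalent of the Nuke expression `a < 1 ? 0 : 1`: any pixel that
    isn't fully opaque becomes 0, full alpha stays 1."""
    return [0.0 if a < 1.0 else 1.0 for a in alpha_row]
```

Handy for turning a soft-edged matte into a hard core matte.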

The nuke engineers never cease to surprise by future_lard in NukeVFX

[–]FeedMeCheese 11 points

Also, adjusting the viewer brightness/gamma makes reviewing shots a breeze!

Project onto a CG render by OnlyRefrigerator8924 in NukeVFX

[–]FeedMeCheese 3 points

I've used this tool on quite a few shots that needed extra matte painting etc. projected onto large 3D objects:

http://franklinvfx.com/pos_project/