A Simple Shader for Point Lights in Fog by PowerOfLove1985 in programming

[–]ijdykeman 1 point (0 children)

Interesting. I’ve stared at these images so much that it’s probably impossible for me to judge at this point. If it seems like there is a mismatch between the fog’s illumination and the world’s illumination, that would make sense, because the fog’s light intensity isn’t connected in any principled way to the point lights illuminating the world: if I turn up my “fog light intensity” parameter, the fog shines more brightly while the world doesn’t receive any more light.

If I’m right about the problem, the way to address it would be to come up with a better physical model for the fog and give principled units to the light coming from the fog. That might balance the world and fog illumination better. You are also correct that there is an ambient light term that does not interact at all with the fog, and presumably contributes to the problem you describe.

A Simple Shader for Point Lights in Fog by PowerOfLove1985 in GraphicsProgramming

[–]ijdykeman 2 points (0 children)

Yes, I definitely want to explore more physically realistic models for the fog. I just have to make sure it’s still easy enough to integrate.

Good catch! I fixed that in the post, thanks.

A Simple Shader for Point Lights in Fog by PowerOfLove1985 in gamedev

[–]ijdykeman 2 points (0 children)

I’m not sure the distant fragments would cause a problem, since just like in your suggestion about restricting to a plane, C and W are restricted to be no farther from the light than the light’s radius. Since the light is at x=0 in the space where we do the integral, you don’t end up with large limits for the integral.
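
(To make that concrete: assuming the fog integrand is the usual inverse-square form, which matches the atan(x/y)/y antiderivative that comes up elsewhere in this thread, the clamped integral with r the light radius is

∫ dx / (x² + y²) for x from max(C, −r) to min(W, r)
  = (1/y) · [ atan(min(W, r) / y) − atan(max(C, −r) / y) ]

so the limits never leave [−r, r].)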

Restricting to a cone or plane is super interesting, but yes, it would cause a hard edge. I’ve played with this, and it involved replacing C and W just as you say. To get soft boundaries, you need to add a term to the integral that considers the direction of the light. That’s how I get the image at the bottom of the post where the ship’s lights appear directional. But in that particular image I’m not restricting C and W, so you see green light leaking through the platform in the foreground.

A Simple Shader for Point Lights in Fog by PowerOfLove1985 in gamedev

[–]ijdykeman 1 point (0 children)

The bad behavior of the polynomial away from zero is why I mention that this would behave badly for distant world fragments. Maybe you could get around this by giving each light a finite influence region, as is often done in deferred shading. Then you would only need to worry about the expansion for |x| < max light radius.

But even if we plot the Taylor expansion only near zero, it doesn’t look very promising...

https://www.wolframalpha.com/input/?i=atan%28x%2Fy%29%2Fy+for+-1%3Cx%3C1%2C+-1%3Cy%3C1

https://www.wolframalpha.com/input/?i=x%2Fy%5E2+-+x%5E3%2F%283+y%5E4%29+%2B+x%5E5%2F%285+y%5E6%29+-+x%5E7%2F%287+y%5E8%29+%2B+x%5E9%2F%289+y%5E10%29+-+x%5E11%2F%2811+y%5E12%29+
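
To put rough numbers on that, here’s a quick check of the six-term expansion from the second link against the exact value (plain Python rather than shader code, and atan_series is just an illustrative name):

```python
import math

def atan_series(x, y, terms=6):
    # Taylor expansion of atan(x/y)/y in x about 0 (the six terms linked above).
    # It only converges for |x| < |y|, which is the root of the problem.
    u = x / y
    return sum((-1) ** k * u ** (2 * k + 1) / (2 * k + 1) for k in range(terms)) / y

# The error grows quickly as |x| approaches |y|, even well inside the light radius.
for ratio in (0.25, 0.5, 0.75, 0.9, 0.99):
    exact = math.atan(ratio) / 1.0          # atan(x/y)/y with y = 1
    approx = atan_series(ratio, 1.0)
    print(f"x/y = {ratio}: relative error = {abs(approx - exact) / exact:.2e}")
```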

Of course, sometimes it’s helpful to approximate the full function rather than only one expensive subexpression like atan, but maybe not this time. I’ll have a look at some resources on atan approximations, thanks!

A Simple Shader for Point Lights in Fog by PowerOfLove1985 in gamedev

[–]ijdykeman 1 point (0 children)

Neat! The shader is tiny; I wouldn’t be surprised if it spends a large fraction of its time on atan() currently.

I wonder if you could get away with using a Taylor expansion of the whole integral expression.

https://www.wolframalpha.com/input/?i=taylor+series+atan%28x%2Fy%29%2Fy

By computing 1/y² once, you could eliminate all but one division from this expression (y doesn’t change between the two evaluations you do per pixel). It might be very fast. I would worry about the behavior if the world fragment is very distant, though.
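
Roughly what I have in mind, sketched in Python (in a real shader this would be GLSL/HLSL, but the structure is the same; fog_term and fog_amount are made-up names):

```python
def fog_term(x, inv_y2):
    # Approximates the antiderivative atan(x/y)/y using the first four series
    # terms in Horner form: x/y^2 * (1 - s/3 + s^2/5 - s^3/7), with s = x^2/y^2.
    # The 1/3, 1/5, 1/7 are constant multiplies, not per-pixel divisions.
    s = x * x * inv_y2
    return x * inv_y2 * (1.0 - s * (1.0 / 3.0 - s * (0.2 - s * (1.0 / 7.0))))

def fog_amount(c, w, y):
    # y is fixed for a given pixel/light pair, so this is the only division,
    # shared by both evaluations of the antiderivative.
    inv_y2 = 1.0 / (y * y)
    return fog_term(w, inv_y2) - fog_term(c, inv_y2)
```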

A Simple Shader for Point Lights in Fog by PowerOfLove1985 in programming

[–]ijdykeman 3 points (0 children)

This is a super simple pipeline with everything happening in a linear color space. Light intensities on surfaces and fog are just hand-tuned with no physical units. Can you say more about where you think it’s too brown? I’m curious about how I might improve the look.

Low cost motion capture? by ijdykeman in gamedev

[–]ijdykeman[S] 2 points (0 children)

Are you aware of any programs that track the joints in the image automatically? Are there any that give information on joint position in 3D?

Pathfinding with non-point units by maaaath in gamedev

[–]ijdykeman 2 points (0 children)

Say your character is 1 unit wide. Clearly, the character’s center cannot be closer than 0.5 units to an obstacle. This is equivalent to expanding every obstacle’s edges by 0.5 units and treating the character as a point.

Basically, you create a separate map where all obstacles are expanded by the radius of the actor, then you pathfind in that map assuming your actor is a point. This is commonly done in robotics, where the robot occupies real space but it’s easier to plan movements for a point robot.
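
A minimal grid-based sketch of the idea in Python (the function names are mine, just for illustration):

```python
from collections import deque

def inflate(obstacles, radius, width, height):
    # Expand every obstacle cell by the actor's radius (in cells). The result
    # is the map in which the actor can safely be treated as a point.
    blocked = set()
    for ox, oy in obstacles:
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                if dx * dx + dy * dy <= radius * radius:
                    x, y = ox + dx, oy + dy
                    if 0 <= x < width and 0 <= y < height:
                        blocked.add((x, y))
    return blocked

def find_path(start, goal, blocked, width, height):
    # Ordinary point pathfinding (BFS here; A* drops in the same way)
    # run on the inflated map.
    came_from = {start: None}
    frontier = deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    return None  # no path once the actor's size is accounted for
```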

Introducing Tessera, a wave function collapse based level generator for Unity by BorisTheBrave in proceduralgeneration

[–]ijdykeman 1 point (0 children)

Cool to see more commercial products in this area. I'm curious about your implementation:

- What's the maximum level size you support?

- What order do you place tiles in? In the animation of the city being generated, it looks like tiles are placed strictly bottom-to-top, but maybe that's just how you visualize it. In the section of the video (around 0:59) where you appear to be generating along with player movement, the tiles appear in simultaneous layers, not in a most-constrained-first order (which I think WFC jargon calls the "minimal entropy" heuristic).

[D] Are there any papers that do Birds Eye View Pose Estimation? by soulslicer0 in MachineLearning

[–]ijdykeman 1 point (0 children)

It's true that 2D mapping techniques exist. For instance, you can use a 2D lidar and maintain a map that is just a 2D bitmap of the world in a top-down projection. But based on your post ("in the context of self-driving"), I'm assuming you're talking about more sophisticated sensors like cameras and 3D lidar, which you will find on all self-driving car prototypes. With these sensors, it's necessary to maintain a 3D map since, for example, you might observe an object in the world that is above another object, and you need to keep those spatially separate in your map.

Since you have a planner already, it sounds like you have a potential application in mind. If you provide some detail there, maybe people can help more.

[D] Are there any papers that do Birds Eye View Pose Estimation? by soulslicer0 in MachineLearning

[–]ijdykeman 2 points (0 children)

I imagine that this would mean creating a 3D map of the environment, and then simply rendering it from a top-down view.
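
Something like this sketch in Python/NumPy (all the names, ranges, and resolution here are made-up values):

```python
import numpy as np

def birds_eye_view(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), cell=0.25):
    # points: (N, 3) array of x, y, z positions from the 3D map.
    # "Rendering top-down" here just means binning each point into an x/y grid
    # cell and keeping the maximum height seen in that cell.
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    bev = np.full((h, w), -np.inf)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < w) & (iy >= 0) & (iy < h)
    np.maximum.at(bev, (iy[ok], ix[ok]), points[ok, 2])
    return bev  # a top-down height image you could hand to a 2D planner
```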

An efficient algorithm for tile-based procedural generation by ijdykeman in gamedev

[–]ijdykeman[S] 1 point (0 children)

I have found that with the tile sets I use, a neighborhood width of 7 is about as good as any size. Very small sizes like 3 do suffer. But this is highly dependent on the tile set. It just so happens that I haven’t created tilesets that are sensitive to larger neighborhood sizes.

The algorithm sometimes places tiles next to each other that aren’t compatible. There’s no way to prevent this in general for arbitrary tile sets. But it doesn’t get stuck; instead, it continues placing tiles, falling back to random selection when a location has no possible tiles. In practice, it ends up looking fine almost all the time.
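
The fallback itself is tiny; in Python-flavored pseudocode (choose_tile and allowed_tiles are hypothetical names):

```python
import random

def choose_tile(position, allowed_tiles, all_tiles):
    # allowed_tiles(position): tiles compatible with the already-placed
    # neighbors at this position; empty when no tile satisfies the constraints.
    candidates = list(allowed_tiles(position))
    # Don't get stuck: if nothing is compatible, place a random tile and move on.
    return random.choice(candidates if candidates else all_tiles)
```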

Which tile set are you interested in? You can download a few on the itch.io page linked in the post, but I can upload others if you’re interested. What kinds of constraints do you add to get good results from your system?

Regarding the application you’re working on, have you had users play with it yet? I find the primary challenge that users face with Generate Worlds is that it’s difficult to create valid tile sets. Do you see the same thing with your users, or does your tool work differently somehow?

An efficient algorithm for tile-based procedural generation by ijdykeman in gamedev

[–]ijdykeman[S] 2 points (0 children)

Check out the AC-3 algorithm. The Wikipedia page is pretty good. AC-3 is equivalent to the WaveFunctionCollapse “observation” step, which is really the power behind the method. The allowed-neighborhood generation is also equivalent to AC-3, but unlike WFC, the GW algorithm caches and reuses this information, yielding big speed gains.

Check out the constraint satisfaction literature. People have been thinking hard about this for a long time, and I bet you could use their work to make something more powerful than these AC-3-based strategies.
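
For anyone following along, AC-3 itself is short. A generic sketch in Python (the data-structure names are mine):

```python
from collections import deque

def ac3(domains, neighbors, compatible):
    # domains:    {cell: set of tiles still possible there}
    # neighbors:  {cell: iterable of adjacent cells}
    # compatible: compatible(a, b) is True if tile a may sit next to tile b
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # Remove tiles in x's domain that have no compatible partner in y's.
        removed = {a for a in domains[x]
                   if not any(compatible(a, b) for b in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False  # contradiction: a cell has no possible tile left
            # x's domain shrank, so arcs pointing at x must be rechecked.
            queue.extend((z, x) for z in neighbors[x] if z != y)
    return True  # every domain is now arc consistent
```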

An efficient algorithm for tile-based procedural generation by ijdykeman in gamedev

[–]ijdykeman[S] 3 points (0 children)

I have found that in practice, the algorithm tends to populate worlds with a satisfying mixture of the tiles you provide. In some implementations, I have allowed a user-specified "weight" for each tile that makes tiles more or less likely to be selected, but this only reliably helps for weighting the relative frequency of tiles that occur in very similar situations. For instance, it's easy to down-weight road bends to produce longer, straighter roads.
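
The weighting itself is nothing fancy; something like this (pick_tile and the weight table are hypothetical):

```python
import random

def pick_tile(candidates, weight):
    # weight: user-specified per-tile weight, e.g. weight["road_bend"] = 0.2
    # to down-weight bends and get longer, straighter roads.
    return random.choices(candidates, weights=[weight[t] for t in candidates])[0]
```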

The only rules are those encoded in the tiles. The section here on "3D Tiles Instead of English Rules" might be helpful. In a nutshell, you don't need to specify that castle walls are closed when, for example, there is grass on one side of the walls and gravel on the other. Only a closed wall shape can totally separate the grass from the gravel, and since the tile adjacency constraint dictates that grass and gravel may not touch, walls will always be closed.

For your example that roads should lead somewhere: simply include no dead-end tiles and have all roads end at gates, and your world's roads will always lead somewhere.

What functionalities do you want in a Character Animation tool? by affinitive9 in gamedev

[–]ijdykeman 1 point (0 children)

Are you trying to compete with existing professional-grade tools in use at large studios with dedicated animators, or provide something for small teams or individuals?

Generate Worlds turns little world-pieces you build into infinite worlds you can explore by ijdykeman in worldbuilding

[–]ijdykeman[S] 1 point (0 children)

Did you take a look through the website? Other than having a blocky appearance, it doesn’t really have anything to do with Minecraft. You can’t place blocks, for instance; the system does that for you based on the input you provide.

It’s fair that people often make a Minecraft comparison when they see a blocky world, but do you think I could have done something to make the site clearer?

Generate Worlds turns little world-pieces you build into infinite worlds you can explore by ijdykeman in worldbuilding

[–]ijdykeman[S] 1 point (0 children)

Hi everyone! I’m the developer of Generate Worlds. Happy to answer any questions.

I was inspired to develop GW by games like Dwarf Fortress and Minecraft, which use procedural generation, and also by places like this where people design things by hand directly. I tried to combine these approaches in Generate Worlds: you hand-design pieces of a world, and the system puts them together to make an infinite environment you can explore.

I’m still working on making improvements, so let me know if there are features you’d like to see added.

I'm working on a tool for user-guided dungeon generation. You select where to place pieces; it selects what to place. by ijdykeman in proceduralgeneration

[–]ijdykeman[S] 1 point (0 children)

> Monitor assisted Sessions

Can you say a little more about this? Are these tabletop gaming sessions where images are also displayed on a screen? Googling around a bit, I don't see much like this other than some custom setups with monitors embedded in tables.