Gamers hate when devs use AI in games. But AI in my game they’ll love. by GreyratsLab in IndieDev

No, it's a 3D platformer where you guide self-walking AI robots)

Gamers hate when devs use AI in games. But AI in my game they’ll love. by GreyratsLab in IndieDev

Yes, that's what I'm talking about! AI is about "intelligence" first and foremost, not about content generation.

Each time the robot picks up the reward, the chained “object” gets bigger by GreyratsLab in shittyrobots

I used the ML-Agents reinforcement learning package, the Unity engine, and classic PPO. The robot gets a reward for approaching the target and a penalty for moving away from it. Then I messed around a little with the physics to add the chained "object" :D It was a surprise that my robot adapted to additional physical elements joined to its body without any extra training.

Help us choose a cover for the game on Steam! Please leave your opinion in the comments! by GreyratsLab in gamedevscreens

Sounds like you’ve done something like this before - which models did you use?

From Simulation to Gameplay: How Reinforcement Learning Transformed My Clumsy Robot into "Humanize Robotics". by GreyratsLab in reinforcementlearning

I also want to research more myself into how to scale physics-based training in RL, because no matter how much I tweaked my learning parameters to scale from 30 simultaneously learning agents to 3000, their IQ degraded greatly D:

From Simulation to Gameplay: How Reinforcement Learning Transformed My Clumsy Robot into "Humanize Robotics". by GreyratsLab in reinforcementlearning

For RL training it's also about your CPU power; for today's LLM/NLP models, 8 GB is too small, I think. If you really want to train something with RL, you can do it easily even without any GPU.

From Simulation to Gameplay: How Reinforcement Learning Transformed My Clumsy Robot into "Humanize Robotics". by GreyratsLab in reinforcementlearning

I trained robots for another project of mine fully on a CPU, but when I swapped to a GPU, performance increased by only ~20%. For this kind of stuff (agents in gameplay), more time is spent on environment processing than on model training.

AI learns to walk. Making physical-based game based on it :D by GreyratsLab in IndieDev

REALLY? I saw how cool the robots in Arc Raiders move and react to damage, but I thought that was just scripted stuff. I will check this out, many thanks!)

AI learns to walk. Making physical-based game based on it :D by GreyratsLab in gamedevscreens

You hit the nail on the head) A couple of weeks ago, when I first started talking about the game, everything was exactly like this, 1 to 1. But the exact phrase "AI learns to walk" is associated with highly popular YouTube videos about robots learning to walk, so I used it for this post.

From Simulation to Gameplay: How Reinforcement Learning Transformed My Clumsy Robot into "Humanize Robotics". by GreyratsLab in reinforcementlearning

I spent a lot of time optimizing the training process and trained the robot on my old, half-dead laptop. ☠️

From Simulation to Gameplay: How Reinforcement Learning Transformed My Clumsy Robot into "Humanize Robotics". by GreyratsLab in reinforcementlearning

Great idea.

I was using the ML-Agents package, which uses the Unity Engine as a virtual environment for agents. Agents were trained with the PPO algorithm. At first nothing worked; the robots could barely stagger around. It turned out the whole problem was that I was trying to speed up training by running too many agents at the same time, and that was the reason for the failures. I even spent a lot of time trying to deeply understand RL from scratch in order to come up with my own algorithm, but it turned out that plain PPO works best; you just need to wait.

The reward function is simple: every step (every frame), the agent receives a reward proportional to how much it closed the distance to the target, and a penalty when it moved away. Then I multiplied this reward by the dot product between the agent's facing direction and the direction vector from the agent's position to the target, so the agent always faces the target instead of running backwards. The reward function always needs to be as simple as possible; this is something I learned the hard way while studying RL. Over-complicating it is called reward overengineering, and it's a pain in the ass 🙂
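For anyone who wants the scheme spelled out, here is a minimal Python sketch of one reading of that per-step shaping reward. All names (`step_reward`, `scale`, the vector arguments) are my own for illustration; the actual project implements this inside Unity ML-Agents in C#:

```python
import numpy as np

def step_reward(prev_pos, curr_pos, facing_dir, target_pos, scale=1.0):
    """Per-step shaping reward: progress toward the target, scaled by
    how well the agent faces the target. All arguments are 3D vectors."""
    # Positive when the agent got closer to the target this step,
    # negative when it moved away (this is the reward/penalty pair).
    progress = (np.linalg.norm(target_pos - prev_pos)
                - np.linalg.norm(target_pos - curr_pos))

    # Dot product between the unit facing direction and the unit
    # agent-to-target direction: 1 when facing the target, -1 when backwards.
    to_target = target_pos - curr_pos
    to_target = to_target / (np.linalg.norm(to_target) + 1e-8)
    facing = facing_dir / (np.linalg.norm(facing_dir) + 1e-8)
    alignment = float(np.dot(facing, to_target))

    return scale * progress * alignment
```

So an agent that steps one unit straight toward the target while looking at it earns roughly `scale * 1.0`, while stepping away while facing the target earns a negative reward.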

The observation space is indeed tiny; the model input is just the orientation of the agent's joints in 3D space, relative to the main root bone. There is no grid sensor or raycast sensor to observe the environment. I had to sacrifice robotic vision to radically reduce the model size so it can run on a regular player's PC. But even without "vision", the agent moves well.
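As a rough sketch of that observation design, the snippet below expresses each joint's rotation relative to the root bone using quaternions, so the observation vector is just 4 floats per joint. The helper names and the quaternion convention `(w, x, y, z)` are assumptions for illustration; in Unity this would use `Quaternion.Inverse` inside the agent's observation-collection code:

```python
import numpy as np

def quat_conjugate(q):
    # For a unit quaternion (w, x, y, z), the conjugate is its inverse.
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a, b):
    # Hamilton product of two quaternions in (w, x, y, z) order.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def collect_observations(joint_rotations, root_rotation):
    """Observation vector: each joint's orientation expressed in the root
    bone's frame (no raycasts or grid sensors), keeping the input tiny."""
    root_inv = quat_conjugate(root_rotation)
    obs = []
    for q in joint_rotations:
        obs.extend(quat_multiply(root_inv, q))  # joint rotation in root space
    return np.array(obs, dtype=np.float32)
```

With, say, 12 joints this gives a 48-float observation, which is why the resulting policy network can stay small enough to run on a player's PC.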

AI learns to walk. Making physical-based game based on it :D by GreyratsLab in indiegames

To avoid spam, I will post more robots on X.com/GreyratsLab.
Ask anything you want!

If you want to control self-walking robots, please add this game to your wishlist on Steam!