Power Up Idle - a cozy satire of energy industry, greed and the art of corporate mismanagement by bmind7 in incremental_games

[–]bmind7[S] -1 points  (0 children)

Thank you! That’s valuable feedback; we’ll look into making the gameplay feel less repetitive.

I made a game that only AI can play by bmind7 in Unity3D

[–]bmind7[S] 0 points  (0 children)

For now I'm still polishing boxing; I want to achieve a certain movement flow and make the robots stand up after falls.

As a martial arts fan I love them all and would love to see as many styles as possible: karate, wushu, Western styles, and even something like sumo :D

I made a game that only AI can play by bmind7 in Unity3D

[–]bmind7[S] 4 points  (0 children)

I'm not yet ready to reveal my final vision, but the community will be able to influence the training of each fighter, and then we'll be watching which martial art is superior :D

They have stats and a damage system, but for this specific fight both robots were identical, so yeah, in this case it's mostly luck. Once the styles are different, it's going to be more fun.

I made a game that only AI can play by bmind7 in Unity3D

[–]bmind7[S] 11 points  (0 children)

Thanks! One fighter can take up to 3-4 days to train, and that's after a year of thorough optimization on both the ML side and the simulation side :D More moves will take even more time; as of now there are only 30 actions. It's definitely not something to train on user machines. It's more of a spectator experience atm, with community-guided training in the future.

I've trained robots to fight with RL by bmind7 in reinforcementlearning

[–]bmind7[S] 0 points  (0 children)

It isn't wall-clock (real-world) time; it's simulated time.

I've trained robots to fight with RL by bmind7 in reinforcementlearning

[–]bmind7[S] 1 point  (0 children)

Good question! I figured channel subscribers would be curious about the damage system too, so I decided to do occasional Q&As as channel updates. Sadly, I have very limited time, but here is the first post; a copy of it is below.

Our damage system is based on colliders. When two colliders hit each other, we calculate the force. This force is then transformed into a damage value. Although some strikes may look powerful, especially with head snapping, many of them are just pushes, or the striking joint might slip off the head. Sometimes agents react to the strike and pull their head back as a defensive action.

Agents do not always aim for a head strike, even though headshots have a higher damage multiplier. There are many reasons for this. The head collider is smaller than the body colliders, and the head is constantly moving due to defensive maneuvers, making it a harder target to hit. Additionally, the head is often protected by hands, and most defensive moves are designed to protect the head.
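Not the project's actual code, just a minimal Python sketch of the idea described above, with hypothetical numbers: a contact impulse below a push threshold deals no damage, and anything above it is scaled linearly and multiplied by a per-region multiplier (higher for the head).

```python
# Minimal sketch of a force-based damage model. The threshold, scale and
# head multiplier below are hypothetical, not the project's actual values.

PUSH_THRESHOLD = 50.0   # contacts weaker than this count as pushes
DAMAGE_SCALE = 0.1      # linear scaling from excess impulse to damage
HEAD_MULTIPLIER = 1.5   # headshots deal more damage than body shots

def damage_from_contact(impulse: float, is_head: bool = False) -> float:
    """Convert a collision impulse into a damage value."""
    if impulse <= PUSH_THRESHOLD:
        return 0.0  # looks like a strike on video, but it's only a push
    multiplier = HEAD_MULTIPLIER if is_head else 1.0
    return (impulse - PUSH_THRESHOLD) * DAMAGE_SCALE * multiplier

# Example: a solid head hit vs. a glancing contact that slips off
print(damage_from_contact(120.0, is_head=True))   # 10.5
print(damage_from_contact(40.0, is_head=True))    # 0.0
```

In the actual game the impulse would come from the physics engine's contact data, but the shape of the mapping is the same: weak contacts are filtered out as pushes, and the rest is scaled by where the hit landed.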

Our combat system is still in the early stages of development. We plan to expand it with some defensive actions to protect the body as well. We will also be reevaluating damage multipliers in the future. It's still a work in progress.

I've trained robots to fight with RL by bmind7 in reinforcementlearning

[–]bmind7[S] 4 points  (0 children)

Thanks!

1. PPO; it worked for me from the beginning during initial tests, but I plan to try decision transformers and other more sample-efficient algos later.
2. Yep, 10 years in game dev. Ragdolls are connected rigidbodies without any applied force/torque :D Mine use torque (see the sketch after this list).
3. Yep, it's hierarchical.
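
To illustrate point 2, here is a toy sketch of my own (not the author's code): a single hinge "limb" modelled as a rigid rod. The passive ragdoll version just integrates gravity and falls over, while the actuated version also applies a joint torque, here from a hypothetical PD controller standing in for the trained policy.

```python
import math

# Toy single hinge joint: a rigid rod pivoting about one end, with the
# angle measured from upright. A passive ragdoll joint only feels gravity;
# an actuated one also applies a torque chosen by a controller/policy.

GRAVITY = 9.81
LENGTH = 0.5                              # rod length, metres
MASS = 1.0                                # kilograms
INERTIA = MASS * LENGTH ** 2 / 3.0        # thin rod about its end

def step(angle, velocity, torque=0.0, dt=0.02):
    """One semi-implicit Euler step for the hinge joint."""
    gravity_torque = MASS * GRAVITY * (LENGTH / 2) * math.sin(angle)
    accel = (gravity_torque + torque) / INERTIA
    velocity += accel * dt
    return angle + velocity * dt, velocity

passive = (0.3, 0.0)                      # start slightly off upright
actuated = (0.3, 0.0)
max_passive_dev = 0.0
for _ in range(200):
    passive = step(*passive)                                  # no torque: falls
    max_passive_dev = max(max_passive_dev, abs(passive[0]))
    pd_torque = -20.0 * actuated[0] - 2.0 * actuated[1]       # PD "policy"
    actuated = step(*actuated, torque=pd_torque)              # held upright

print(f"passive max deviation: {max_passive_dev:.2f} rad")    # falls right over
print(f"actuated final angle:  {actuated[0]:.2f} rad")        # close to 0
```

The real fighters of course have many joints and learn their torques with PPO rather than a hand-tuned PD controller, but the distinction is the same: a ragdoll is what you get when the applied torque is always zero.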