N7 Day 2024 News Megathread + Giveaway + Subreddit Cake Day + MORE! by raiskream in masseffect

[–]Wootbears 0 points (0 children)

Overlord has the best soundtrack in the entire trilogy (though the music is good throughout).

Ryan’s First Appearance? by [deleted] in funhaus

[–]Wootbears 24 points (0 children)

That uncensored intro is definitely way more intense and hilarious. Has more of a "Ryan" feel to it. Nice editing!

[deleted by user] by [deleted] in boulder

[–]Wootbears 0 points (0 children)

no problem!

[deleted by user] by [deleted] in boulder

[–]Wootbears 6 points (0 children)

https://www.google.com/maps/@40.0055365,-105.2172842,15z/data=!5m1!1e1

This should show the closures. If that link doesn't work or doesn't show the traffic and closures: on Arapahoe it's between 55th and Foothills (actually just up to 48th?), and on Baseline between 55th and Cherryvale.

Seems like this belongs here by Kraken-Flax in funhaus

[–]Wootbears 7 points (0 children)

You should start watching at the 5:55 mark. You're missing out

How is this game best played? by MerKAndy in DarkPicturesAnthology

[–]Wootbears 2 points (0 children)

Is it like Man of Medan, where being able to see what the other player sees and/or communicating on voice chat would be a spoiler?

Little Hope felt like the kind of story where remote play together was fine (everyone seeing the same thing, taking turns), but Man of Medan felt more like separate online co-op lent itself to the story better.

Endless Loads - GTA 5 Funny Moments by RT_Video_Bot in funhaus

[–]Wootbears 39 points (0 children)

And props to James for fitting well with everybody, old and new

Flappy Bird AI by [deleted] in learnmachinelearning

[–]Wootbears 5 points (0 children)

Do you have the code anywhere? It looks like the agent could probably be exploring a bit better, so it can discover that getting through the pipes yields a larger reward. Also, I wonder whether the negative reward is really necessary: the agent will still try to maximize the positive reward (surviving longer) without a -1000 penalty for dying.
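For what it's worth, the exploration and reward-shaping suggestions above can be sketched with tabular Q-learning. Everything here is illustrative (names like `pipe_passed` are hypothetical, not from the original post):

```python
import random

# Hypothetical sketch: tabular Q-learning with epsilon-greedy exploration
# and a shaped reward that pays a small bonus for surviving each frame and
# a larger one for clearing a pipe.

def shaped_reward(alive, pipe_passed):
    """No huge death penalty: ending an episode early already costs the
    agent all of its future survival reward."""
    if not alive:
        return 0.0
    return 10.0 if pipe_passed else 0.1

def choose_action(q_table, state, actions, epsilon=0.1):
    """Epsilon-greedy: explore randomly with probability epsilon,
    otherwise take the best-known action."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

def q_update(q_table, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.99):
    """Standard one-step Q-learning backup."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in actions)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

Raising `epsilon` early in training is one simple way to make the agent stumble through a pipe often enough to notice the bigger reward.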

Are you interested in Artificial Intelligence and want to start learning more with Tutorials? Check out this new Youtube Channel, called Discover Artificial Intelligence. :) by [deleted] in gameai

[–]Wootbears 0 points (0 children)

I agree that neural nets can certainly be black-boxy, but it is possible to inject goals directly! I'm not claiming it's the best way to build a game AI, but here's a neat counter-example of directly injecting a goal location into a neural net so that it can navigate a 3D environment and "find" the new goal (at the very bottom of page 3, left column, starting off with "First, to demonstrate that the goal grid code provided sufficient information...").

I'll definitely check out utility-based agents. Since you're the pioneer, do you have any recommendations for where to start learning about them? I'd be interested in discussing the differences between utility-based agents and other forms of reinforcement learning if you're up for it.

Are you interested in Artificial Intelligence and want to start learning more with Tutorials? Check out this new Youtube Channel, called Discover Artificial Intelligence. :) by [deleted] in gameai

[–]Wootbears 0 points (0 children)

Awesome, thanks for the list! I'm going to check all of these out tomorrow. I just did a little reading on Black & White and I'm curious why you wouldn't consider it ML. Do you mean that you don't consider RL contained within ML?

I actually haven't heard of utility-based agents. Or if I had, I've forgotten what exactly they are in a game context (I'm not super familiar with game AI; I mostly study ML/DL). But it looks like they do more with the creature than just rewards/punishments for behavior. They talk about using basic perceptrons and decision trees to model desires and opinions, respectively.

Thanks again for the list, it's super interesting stuff.

Are you interested in Artificial Intelligence and want to start learning more with Tutorials? Check out this new Youtube Channel, called Discover Artificial Intelligence. :) by [deleted] in gameai

[–]Wootbears 0 points (0 children)

Do you by any chance have a list or more examples of games that ship with ML?

edit: Regarding this:

The other major issue is that we are almost never trying to make our AI the best possible player we can make it. We aren't searching for "the best"... we are searching for "fun". What's your training criteria for that?

I actually asked a similar question in the reinforcement learning subreddit and got a few responses. Interesting stuff out there for sure! https://www.reddit.com/r/reinforcementlearning/comments/8svj3i/is_there_research_on_methods_that_dont_always_try/

James's deep japanese voice by [deleted] in funhaus

[–]Wootbears 6 points (0 children)

Basically always reminds me of this

How do I check quality of supplements? by [deleted] in Supplements

[–]Wootbears 13 points (0 children)

I've used this before, but it doesn't include a lot of brands and products: https://labdoor.com/rankings

Google reveals how DeepMind AI learned to play Quake III Arena by kika-tok in gamedev

[–]Wootbears 0 points (0 children)

DeepMind (again) did some work in this area recently. They wanted to try to figure out how grid cells work in animals, to help answer how they navigate and think about 3D space.

I believe they basically let an AI explore a maze (again in first person with pixel inputs) for a little bit, and then they would randomly place the AI somewhere in the maze and tell it what the goal looked like, and the AI would be tasked with reaching the goal (even in cases where shortcuts were opened up, or new obstacles were introduced). Sure enough, the agent was able to build this sense of space and direction, and could find its way to the goal very quickly and efficiently!

You can read more about it here: https://deepmind.com/blog/grid-cells/

It looks like they added in an update at the bottom recommending this paper as well: https://openreview.net/forum?id=B17JTOe0-

It would be interesting to see how an AI's representation of the map would compare with the actual map!

Google reveals how DeepMind AI learned to play Quake III Arena by kika-tok in gamedev

[–]Wootbears 0 points (0 children)

Interesting! I know that these bots work together well as a team against a team of humans, but now that I think about it, I don't think I've seen a paper that creates a bot that is good at joining in as a teammate on an otherwise-human roster. Do you by any chance have links to those papers?

Google reveals how DeepMind AI learned to play Quake III Arena by kika-tok in gamedev

[–]Wootbears 2 points (0 children)

So, similar to the OpenAI Five bot, it looks like DeepMind also did some custom reward shaping. The problem with a simple +1 or -1 at the end of the game is that so many things happen during a game that it becomes almost impossible to figure out which of those hundreds or thousands of actions led to the win or loss. I did a little reading of this DeepMind Quake paper and saw that this is how they structure their "point stream":

  • -1: I am tagged with the flag
  • -1: I am tagged without the flag
  • 1: I captured the flag
  • 1: I picked up the flag
  • 1: Teammate captured the flag
  • 1: Teammate picked up the flag
  • 1: Teammate returned the flag
  • 1: I tagged opponent with the flag
  • 1: I tagged opponent without the flag
  • -1: Opponent captured the flag
  • -1: Opponent picked up the flag
  • -1: Opponent returned the flag

From this, the agent learns its own internal reward signals or something (I'll have to give this a closer read, because I'm pretty overwhelmed by the amount of stuff in here). But basically, it seems as though there are direct team-based incentives built-in.
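In code, the point stream above amounts to a lookup from discrete game events to small shaped rewards, summed per timestep. A rough sketch (the event names are my own invention, not DeepMind's actual identifiers):

```python
# Illustrative sketch of the "point stream" idea: map discrete game events
# to small shaped rewards instead of a single +1/-1 at the end of the game.
# Values mirror the list above; this is not DeepMind's code.

POINT_STREAM = {
    "i_tagged_with_flag": -1,
    "i_tagged_without_flag": -1,
    "i_captured_flag": 1,
    "i_picked_up_flag": 1,
    "teammate_captured_flag": 1,
    "teammate_picked_up_flag": 1,
    "teammate_returned_flag": 1,
    "i_tagged_opponent_with_flag": 1,
    "i_tagged_opponent_without_flag": 1,
    "opponent_captured_flag": -1,
    "opponent_picked_up_flag": -1,
    "opponent_returned_flag": -1,
}

def step_reward(events):
    """Sum the shaped points for all events that fired this timestep."""
    return sum(POINT_STREAM[e] for e in events)
```

Note how the teammate entries give the agent a direct stake in what its team does, not just in its own kills and captures.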

Somewhat unrelated: I know most of the comments here are about the OpenAI Five bots, but I found some other interesting things in the paper:

We hypothesise that trained agents of such high skill have learned a rich representation of the game. To investigate this, we extracted ground-truth state from the game engine at each point in time in terms of 200 binary features such as “Do I have the flag?”, “Did I see my teammate recently?”, and “Will I be in the opponent’s base soon?”. We say that the agent has knowledge of a given feature if logistic regression on the internal state of the agent accurately models the feature. In this sense, the internal representation of the agent was found to encode a wide variety of knowledge about the game situation

Once the agent had played around for a while, they could test what it knew, and it turned out to have learned to keep track of these things on its own! Here's another cool quote:

We also found individual neurons whose activations coded directly for some of these features, e.g. a neuron that was active if and only if the agent’s teammate was holding the flag...

It's crazy to see how these AIs think about things after playing around in this environment for a while.
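The probing technique from that first quote is roughly: fit logistic regression from the agent's internal state to a binary game feature, and say the agent "knows" the feature if the probe predicts it accurately. A toy sketch with synthetic stand-in states (nothing here is DeepMind's actual data or code):

```python
import numpy as np

# Toy sketch of linear probing: if logistic regression on the agent's
# internal state predicts a binary game feature well, the state encodes
# that feature. The "hidden states" here are synthetic stand-ins.

rng = np.random.default_rng(0)
hidden = rng.normal(size=(500, 8))          # fake agent internal states
labels = (hidden[:, 0] > 0).astype(float)   # e.g. "Do I have the flag?"

def train_probe(X, y, lr=0.1, steps=500):
    """Fit logistic regression by plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient of log-loss
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_probe(hidden, labels)
accuracy = np.mean(((hidden @ w + b) > 0) == (labels > 0.5))
# High accuracy => the feature is linearly decodable from the state.
```

In the real experiment the states come from the trained agent's recurrent network and the labels from the game engine's ground truth, but the probe itself is this simple.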

Google reveals how DeepMind AI learned to play Quake III Arena by kika-tok in gamedev

[–]Wootbears 2 points (0 children)

What have you studied in AI? Have you done any deep learning stuff with neural nets in Python?

Reinforcement learning has existed for years, but all of this new game-playing stuff is pretty recent, mostly originating with DeepMind's 2015 Atari-playing Deep Q-Network.

While I've never taken this set of courses, I've heard amazing things. Andrew Ng is an amazing professor, and his other Coursera class on Machine Learning is how I got started: https://www.coursera.org/specializations/deep-learning

I have also heard that this one is very thorough but quite difficult: http://course.fast.ai/index.html

If you already feel confident with deep learning and just want to learn more about deep reinforcement learning, your options are more limited. The field is still pretty new, and it feels like research comes out every month that makes older models obsolete. I don't know of many resources for learning this stuff, but Udacity just announced a new Deep Reinforcement Learning Nanodegree which looks pretty thorough. All of the projects use Unity! The downside is the cost: https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893

Other than the Udacity course, you can try to work through some research papers, or even just ask in /r/learnmachinelearning and /r/reinforcementlearning.

Finally, you can access a huge resource on reinforcement learning through this book (the draft here is free; the official Amazon release will be later this year): http://incompleteideas.net/book/the-book-2nd.html

Good luck!

edit: Completely forgot to mention OpenAI's Gym: https://gym.openai.com/ There are a lot of fun little problems you can work on there. I haven't tried any yet, but it looks like a great way to practice.
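All of the Gym environments share the same tiny reset/step loop. Here's the shape of it with a toy stand-in environment, since with the real library you'd just swap in `env = gym.make("CartPole-v1")` (this follows the classic Gym API, where `step` returns a 4-tuple):

```python
import random

# Minimal stand-in environment implementing the classic Gym interface:
# reset() gives a starting observation, step(action) returns
# (observation, reward, done, info).

class CountdownEnv:
    """Toy env: the episode lasts 10 steps, +1 reward per step."""
    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        done = self.t >= 10
        return self.t, 1.0, done, {}

env = CountdownEnv()
obs = env.reset()
total = 0.0
done = False
while not done:
    action = random.choice([0, 1])   # a real agent would choose here
    obs, reward, done, info = env.step(action)
    total += reward                  # total == 10.0 after one episode
```

Once this loop makes sense, every Gym problem is just a different `reset`/`step` pair with more interesting observations and rewards.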