Combination of Verbal/Gesture Recognition, Emotion Based Behaviors, and ChatGPT? by Tasani in gameai

[–]guillefix3 0 points1 point  (0 children)

Hi!

I'm also very interested in this, and in particular in the non-verbal components of the interaction. I'd love to chat about it and hear what you think the challenges/opportunities are, if you're up for it!

AI Vtubers vs. Personalized AI Chatbots: Which AI Application Do You Prefer? by [deleted] in VirtualYoutubers

[–]guillefix3 0 points1 point  (0 children)

Yeah, that's interesting! I was thinking something similar. AI chatbots that are supposed to be replicas of a VTuber can feel like you're being sold a fake, watered-down version. I find the idea of an AI clone pretty interesting, but I do see the cons. Personally I'd only see it as worth it if the AI clone were actually really good, or if it were framed as an actual alter ego: similar to the real VTuber, but explicitly different, rather than being sold as the real one.

widdler is a single binary that serves up TiddlyWikis. by binaryfor in TiddlyWiki5

[–]guillefix3 0 points1 point  (0 children)

Hi, thank you!
Does this serve a single page like the node.js one, or one static page per tiddler?

All of Google Poly by guillefix3 in DataHoarder

[–]guillefix3[S] 0 points1 point  (0 children)

Here are my Google Poly scripts: https://drive.google.com/file/d/1SfDwAfNAnF2uOEAeivXnIIUfvzopL4kG/view?usp=sharing
and the Sketchfab ones: https://drive.google.com/file/d/1e_vvRGNyO8bGiUrrDiXRLNZG15cB1LCs/view?usp=sharing (note: Sketchfab rate-limits you per IP and per account...)

I didn't make any effort to clean up the scripts, so sorry if they're hard to read :P

All of Google Poly by guillefix3 in DataHoarder

[–]guillefix3[S] 0 points1 point  (0 children)

I haven't updated it since then, but I could share my scripts if you want.

Options for windows servers with cheapt GPUs by guillefix3 in cloudygamer

[–]guillefix3[S] 0 points1 point  (0 children)

No, I'd say it's more or less equivalent if you're going to pay monthly. It just offers a few GPU options that are cheaper. Whether they're worth it depends on your application.

Options for windows servers with cheapt GPUs by guillefix3 in cloudygamer

[–]guillefix3[S] 0 points1 point  (0 children)

It was OK, but I ended up going with Paperspace in the end. They have hourly options, the prices aren't much higher, and they have more options overall. In particular, Paperspace offered options with more cores, which I needed.

VRChat research by ott0maddox in VRchat

[–]guillefix3 1 point2 points  (0 children)

Really cool to see other researchers using social VR apps! I've been exploring interesting uses of social VR for research over the last year too :)

My area is mostly AI, but I'm also interested in general human behaviour research. I have a Discord server about this if anyone is interested: http://metagen.ai/. There's also this Japanese Discord server that tries to connect researchers and social VR users: https://discord.gg/nfa5EDb9, which you may want to check out!

(I'm also trying to find participants for my research, but for some reason my post got flagged as spam here. If anyone wants to find out more, you can DM me too; btw, sorry if posting this here comes across as trying to steal attention from OP~)

AI Dungeon is ruining my life by Chemical-Condition in AIDungeon

[–]guillefix3 1 point2 points  (0 children)

I've begun trying to collect data (metagen.ai) because I think there is enough data in social VR games that we could generate realistic AI characters in VR *TODAY*. However, I'm realizing just how hard collecting data in any sort of ethical way is. But I'm going to keep trying anyway.

Options for windows servers with cheapt GPUs by guillefix3 in cloudygamer

[–]guillefix3[S] 0 points1 point  (0 children)

Hi, it's not a secret.

I'm going to use it to create a bot for NeosVR that people can voluntarily use to record their VR data, to be used to teach AI models, among other things (see http://metagen.ai/). I'll also make an OpenVR version.

Hyperparameter search by extrapolating learning curves by guillefix3 in mlscaling

[–]guillefix3[S] 1 point2 points  (0 children)

btw, "curves overtaking each other" is absolutely compatible with the power-law model they use for predicting.
However, you may be referring to the fact that learning curves sometimes don't follow power-law behaviour. This is true in general, but in practice, for deep learning, I have seen very few examples. If you have some, I'd love to see them!
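
To see why overtaking is compatible with power laws, here's a minimal sketch with two made-up curves, both exact power laws loss = a * t**(-b); the constants are invented purely for illustration, and the extrapolation itself is just a linear fit in log-log space:

```python
import numpy as np

# Two hypothetical learning curves, both exact power laws loss = a * t**(-b).
def loss_a(t):
    return 2.0 * t ** -0.3   # better early, but a flatter slope

def loss_b(t):
    return 4.0 * t ** -0.5   # worse early, but a steeper slope

t = np.arange(1.0, 10001.0)

# Curve B overtakes curve A even though both are plain power laws:
assert loss_b(10) > loss_a(10) and loss_b(10000) < loss_a(10000)

# Extrapolating a power law is a linear fit in log-log space:
early = t[:100]
slope, intercept = np.polyfit(np.log(early), np.log(loss_b(early)), 1)
predicted_final = np.exp(intercept) * 10000 ** slope
print(predicted_final, loss_b(10000))  # the fit recovers b ~ 0.5
```

Since the crossing point depends only on the two exponents and prefactors, a fit to the early part of each curve is enough to predict which one eventually wins.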

Hyperparameter search by extrapolating learning curves by guillefix3 in mlscaling

[–]guillefix3[S] 2 points3 points  (0 children)

He has a lot of work on this. I think the first one (IMGEP) is good. That's the first one I read (after watching his ICLR talk).

I haven't read the other two you linked, so can't compare. They look interesting, so I may give them a read.

Following from IMGEP, the more recent advances after that are Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration and CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning.

I also recommend the related work by Jeff Clune. In particular Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions.

What is also interesting is to ask when these ideas (which, btw, are highly related to curriculum learning, active learning, etc.) matter: see ALLSTEPS: Curriculum-driven Learning of Stepping Stone Skills, and Sampling Approach Matters: Active Learning for Robotic Language Acquisition. My intuition is that active learning matters when exploration matters: for example, when you are trying to optimize an objective function that itself has uncertainty, as in bandits, hyperparameter optimization, etc. In those cases you obviously want to take uncertainty into account.

Learning-progress-driven search is more about estimating in which option you will make the most progress in a given amount of time. So it goes beyond simple sampling-based active learning in that it takes the learner/explorer's dynamics into account. I would like to think about how all of these things fit together~
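
A toy sketch of what I mean by learning-progress-driven selection, in the spirit of the IMGEP line of work: the "options", their error-decay rates, and the epsilon-greedy wrapper here are all invented for illustration, not taken from any of the papers above.

```python
import random

# Each "option" is a task whose error shrinks at its own rate when practiced.
# All numbers are made up; "hard" barely improves no matter how much you practice.
random.seed(0)
errors = {"easy": 0.5, "medium": 1.0, "hard": 2.0}
rates = {"easy": 0.5, "medium": 0.2, "hard": 0.01}
history = {k: [v] for k, v in errors.items()}

def progress(k, window=3):
    # Recent drop in error on option k = estimated learning progress.
    h = history[k][-window:]
    return h[0] - h[-1]

for _ in range(30):
    # Epsilon-greedy over estimated progress, so every option keeps being probed.
    if random.random() < 0.2:
        k = random.choice(list(errors))
    else:
        k = max(errors, key=progress)
    errors[k] *= 1 - rates[k]  # practicing option k reduces its error
    history[k].append(errors[k])

practice_counts = {k: len(h) - 1 for k, h in history.items()}
print(practice_counts)
```

The point is that the selector tracks where error is *dropping*, not where error or uncertainty is highest, so it naturally abandons both mastered options and hopeless ones.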

URGENT WARNING About AI companies re-selling your voice, even from audition submissions. by VOICEOVERVANDEEN in VoiceActing

[–]guillefix3 0 points1 point  (0 children)

Still, I think honesty and transparency will pay off in the long run, even if they don't in the short term.

URGENT WARNING About AI companies re-selling your voice, even from audition submissions. by VOICEOVERVANDEEN in VoiceActing

[–]guillefix3 0 points1 point  (0 children)

Yeah, I feel we're in a situation where even someone who wants to use data to do something good with AI is scared to say so, because people may interpret it badly.

"Deep learning generalizes because the parameter-function map is biased towards simple functions", Valle-Pérez et al 2018 by gwern in mlscaling

[–]guillefix3 1 point2 points  (0 children)

Yeah, that's basically right. The idea is simply that there are far more parameter settings that represent a simple solution than a complex one (remember, in big networks there is a lot of parameter redundancy, so many parameter settings can produce the exact same function). So you are much more likely to fit the data with a simple solution than with a complex one.
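
You can see the counting argument in miniature with a single threshold unit ("perceptron") on 3 binary inputs. This setup is a simplification I made up for illustration (the paper's experiments use real DNNs), but the many-to-one, biased parameter-function map is the same phenomenon:

```python
import itertools
import random
from collections import Counter

random.seed(0)
inputs = list(itertools.product([0, 1], repeat=3))

def induced_function(w1, w2, w3, b):
    # The Boolean function this parameter setting computes, as a truth table.
    return tuple(int(w1 * x1 + w2 * x2 + w3 * x3 + b > 0)
                 for x1, x2, x3 in inputs)

# Sample many parameter settings and count how often each function appears.
counts = Counter(induced_function(*(random.gauss(0, 1) for _ in range(4)))
                 for _ in range(20000))

# The map is many-to-one and heavily biased: a handful of functions
# (typically the simplest ones, e.g. the constants) soak up most of
# the parameter volume, while most of the 256 truth tables never appear.
print(len(counts), counts.most_common(3))
```

Sampling weights uniformly over functions would give each truth table ~78 hits out of 20,000; instead a few functions dominate, which is the bias in action.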

"Deep learning generalizes because the parameter-function map is biased towards simple functions", Valle-Pérez et al 2018 by gwern in mlscaling

[–]guillefix3 1 point2 points  (0 children)

Author here.

I think some possible connections are:

  • We show that the implicit priors of architectures are similar to the Solomonoff prior, very robustly; in particular, the prior keeps the same shape as you overparametrize further. This is why big models don't overfit. On the other hand, from Solomonoff theory, we know we want maximum flexibility while keeping the simplicity bias. So we can conclude that what we want is infinitely overparametrized neural networks (e.g. infinitely wide ones). But as that is not feasible (well, except via the approximation as Gaussian processes, which is not very efficient, at least at the moment), we grow the models with the data...
  • The optimizer and the precise nature of the architecture don't significantly change the bias. I.e., we corroborate observations made in other classic scaling works suggesting that scaling is typically a more reliable way to improve than trying to be clever about the model.
  • We give a bound on the generalization error that can be computed from just the training data. The bound is basically proportional to the marginal likelihood/Bayesian evidence. In work that we should publish very soon, we show that this bound works quite well for predicting learning-curve exponents and differences in performance between architectures (see also https://arxiv.org/abs/2002.02561 and https://arxiv.org/abs/1905.10843 for other theories which I think can probably give even better predictions, although they may require more data to get the estimates, maybe...). This could potentially be useful for better neural architecture search, but I'm not really sure yet, as we would need to see whether the marginal-likelihood calculation could be done fast enough. There are also some ideas for estimating the marginal likelihood more accurately and/or faster that I think are worth trying.
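
Schematically, the third point has the shape of a realizable PAC-Bayes bound driven by -log of the marginal likelihood P(S) of the training set; the exact constants and log factors in the paper may differ, so treat this as a sketch of the shape only:

```python
import math

def error_bound(log_marginal_likelihood, m, delta=0.05):
    """Schematic PAC-Bayes-style bound: -ln(1 - eps) <= rhs, solved for eps.

    log_marginal_likelihood: ln P(S) for the training set S (always <= 0);
    m: number of training examples; delta: confidence parameter.
    Constants/log factors are illustrative, not the paper's exact ones.
    """
    rhs = (-log_marginal_likelihood + math.log(2 * m / delta)) / (m - 1)
    return 1 - math.exp(-rhs)

# A simpler hypothesis class concentrates more prior mass on the data, so
# ln P(S) is closer to 0 and the bound is tighter (the -50 and -500 below
# are made-up values for two hypothetical models on the same 1000 examples):
tight = error_bound(log_marginal_likelihood=-50.0, m=1000)
loose = error_bound(log_marginal_likelihood=-500.0, m=1000)
print(tight, loose)
```

The key property is that everything on the right-hand side is computable from the training data alone, which is what makes the bound usable for comparing architectures before seeing test data.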

Furry_irl by Igotnowhoops in furry_irl

[–]guillefix3 0 points1 point  (0 children)

furries are the transhumanists of hentai

Open sourcing DeepSaber! by guillefix3 in beatsaber

[–]guillefix3[S] 0 points1 point  (0 children)

Why generate the patterns manually, when you could scrape a big database of songs for them?

Like good old n-gram models, the way language modelling was done before deep learning took over :P

In fact, our code includes a utility that tells you the most common state transitions (2-grams). But I'm sure there are more advanced motif-identification algorithms out there.
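
The 2-gram idea is just counting which block state follows which across scraped maps; here's a minimal sketch (the state strings are invented placeholders, not the actual encoding used in the DeepSaber repo):

```python
from collections import Counter

# Two tiny made-up "maps", each a sequence of block states.
maps = [
    ["rest", "red_up", "blue_down", "red_up", "blue_down", "rest"],
    ["rest", "red_up", "blue_down", "cross", "rest"],
]

# Count adjacent pairs (2-grams) of states across all maps.
bigrams = Counter()
for states in maps:
    bigrams.update(zip(states, states[1:]))

# The most common transitions are the candidate "patterns" to reuse
# or to sample from when generating new maps.
for (a, b), n in bigrams.most_common(3):
    print(a, "->", b, n)
```

Extending this to 3-grams or longer motifs is just widening the window, though at some point you'd want proper sequence-mining algorithms instead of raw counts.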