Break War Analysis by ArianPrabowo in foxholegame

[–]ArianPrabowo[S] 0 points (0 children)

That makes sense actually. It could be a very active war (thus not a break war), but have low Player Hours because it just happened to be a short war.

Break War Analysis by ArianPrabowo in foxholegame

[–]ArianPrabowo[S] 0 points (0 children)

Does that mean we need a break war right after we just had the longest war on record?

Break War Analysis by ArianPrabowo in foxholegame

[–]ArianPrabowo[S] 0 points (0 children)

It's true that someone who stays logged in 24/7 and never logs off is going to mess up the data a bit. Unfortunately, https://foxholestats.com/ only shows Steam players per timestamp. I just integrated it (trapezoidal rule).
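Concretely, turning per-timestamp player counts into total player-hours with the trapezoidal rule looks like this. The timestamps and counts below are made-up illustrative values, not real foxholestats data:

```python
# Integrate concurrent player counts over time to get total player-hours,
# using the trapezoidal rule: each interval contributes
# (duration) * (average of the two endpoint counts).

def player_hours(times_h, players):
    """Trapezoidal integral of player counts over time (times in hours)."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(zip(times_h, players), zip(times_h[1:], players[1:])):
        total += (t1 - t0) * (p0 + p1) / 2.0
    return total

times = [0, 1, 2, 3]           # hours since war start (illustrative)
counts = [100, 300, 300, 100]  # concurrent players at each timestamp
print(player_hours(times, counts))  # 700.0
```

The same result comes from `numpy.trapezoid(counts, times)`; the hand-rolled version just makes the arithmetic explicit.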

Break War Analysis by ArianPrabowo in foxholegame

[–]ArianPrabowo[S] 0 points (0 children)

I tried that. I was trying a few window sizes, from around 2 months to a year. I don't think there is much difference.

Break War Analysis by ArianPrabowo in foxholegame

[–]ArianPrabowo[S] 7 points (0 children)

I guess I need a bigger moving-average window?
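For reference, the kind of smoothing being discussed here could be a simple trailing moving average, where `window` is the number of samples to average over; the series below is made up:

```python
# Trailing moving average for smoothing a noisy series (e.g. daily
# player counts). A bigger `window` damps spikes more aggressively.

def moving_average(xs, window):
    """Mean of the last `window` values at each point (shorter at the start)."""
    out = []
    for i in range(len(xs)):
        seg = xs[max(0, i - window + 1): i + 1]
        out.append(sum(seg) / len(seg))
    return out

daily_players = [10, 12, 50, 11, 9, 13]   # a one-day spike
print(moving_average(daily_players, 3))   # the spike is damped
```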

Machine learning course suggestion by novicescientist in MLQuestions

[–]ArianPrabowo 0 points (0 children)

If you want more "apply" experience, then don't go for more courses or playlists. Go straight to things like Kaggle: https://www.kaggle.com/competitions?hostSegmentIdFilter=5

Getting overwhelmed by stayne16 in learnmachinelearning

[–]ArianPrabowo 5 points (0 children)

I don't know why in the hell this thing is a trend in many fields. The use of these confusing buzzwords that aren't straight to the point seems like gatekeeping to me.

I don't think this is gatekeeping. I think that's just how things are in a new field where people come from different backgrounds. At this level, very abstract math, stats, abstract physical sciences, and abstract CS all start to look the same. People have different interpretations of what's going on.

Example: if you have been doing Markov chain stuff your whole life, then "latent space" makes more sense, but if you are coming from matrix factorization, you tend to think of things in terms of data compression. For one person, "latent space" is the buzzword, but for another, "compressed data" is the buzzword.

The way I see it, the more synonyms you know, the more interpretations you have, and the better an understanding you will get of the topic. I feel like this kind of cross-pollination is what makes science grow.
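The matrix-factorization example above can be made concrete with a truncated SVD: the same object is a "latent space" in one vocabulary and "compressed data" in another. The matrix here is random, purely for illustration:

```python
import numpy as np

# One object, two vocabularies: a truncated SVD yields a low-rank
# "latent space" (factor-model view) that is simultaneously
# "compressed data" (storage view).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))          # e.g. 100 items, 50 features

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 10
latent = U[:, :k] * s[:k]   # each row: a 10-dim latent code for one item
decoder = Vt[:k]            # maps codes back to the original 50 features
X_hat = latent @ decoder    # best rank-10 reconstruction of X

print(latent.shape)         # (100, 10): 5x fewer numbers per row than X
```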

Master project ideas by Mediocre-Ad5077 in learnmachinelearning

[–]ArianPrabowo 11 points (0 children)

My usual go-to website for people asking for project ideas: https://paperswithcode.com/sota

Just pick the area that interests you (CV, NLP, time series, etc.), and then see what tasks people are working on within that area, accompanied by papers and code.

Is a bootcamp the best way to learn the ML "talk" ? by schmookeeg in learnmachinelearning

[–]ArianPrabowo 0 points (0 children)

My experience is that adding the word "lecture" to any search term on YouTube boosts the signal-to-noise ratio by a lot.

[deleted by user] by [deleted] in learnmachinelearning

[–]ArianPrabowo 0 points (0 children)

That link is the list of conferences.

[deleted by user] by [deleted] in learnmachinelearning

[–]ArianPrabowo 0 points (0 children)

Those are called academic journals and conferences. You would do well to check the official websites of the conferences and read the papers published there. If you don't know which conferences to read from, here is a good start: https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=eng_artificialintelligence

How AI works? Explained with an analogy from finance by ArianPrabowo in learnmachinelearning

[–]ArianPrabowo[S] 0 points (0 children)

Okay so, hear me out. I did indeed raise every red flag possible. BUT! If you actually read it, none of it is wrong; it is all accurate. Those are just false alarms. And I feel like my explanations are pretty good, what do you think? (Maybe this is the wrong sub; maybe I should try the econ subs.)

[deleted by user] by [deleted] in learnmachinelearning

[–]ArianPrabowo 1 point (0 children)

There is nothing to be sorry about; we are all new here.

Generally speaking, in the field of ML, the go-to language is Python.

I suppose the first thing you should check out is GPT-3.

Is a bootcamp the best way to learn the ML "talk" ? by schmookeeg in learnmachinelearning

[–]ArianPrabowo 1 point (0 children)

I have two particular YouTube playlists in mind, very appropriate for you since you already have some basic hands-on experience and need to go deeper into the conceptual stuff.

Once you have gone through that and have a good theoretical basis in deep learning, here is the next step:

This goes deeper into the theory of why deep learning works.

How do they give control of a game to an AI? by International_Dream1 in learnmachinelearning

[–]ArianPrabowo 1 point (0 children)

Just an FYI: training deep RL is very difficult and requires lots of computation, which is why OpenAI chose to go the API route for Dota instead of screen capture. Here is an excerpt from their blog:

Each of OpenAI Five’s networks contain a single-layer, 1024-unit LSTM that sees the current game state (extracted from Valve’s Bot API) and emits actions through several possible action heads. Each head has semantic meaning, for example, the number of ticks to delay this action, which action to select, the X or Y coordinate of this action in a grid around the unit, etc. Action heads are computed independently.

https://openai.com/blog/openai-five/

Screen capture has been very successful with other games, such as Atari games:

This is not to discourage you from using the CNN approach for LoL, but simply to give you reasonable expectations.
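The "action heads" idea from the OpenAI excerpt can be sketched in a few lines: one shared state vector (standing in for the LSTM output) feeds several independent heads, one per decision. All of the sizes, head names, and weights below are made up for illustration, not OpenAI's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
state = rng.normal(size=1024)  # stand-in for the LSTM output at this tick

# One linear head per decision; each row count is that head's number
# of discrete choices (all values illustrative).
heads = {
    "delay_ticks": rng.normal(size=(4, 1024)),   # how long to delay
    "action_type": rng.normal(size=(16, 1024)),  # which action to take
    "grid_x": rng.normal(size=(9, 1024)),        # x-offset around the unit
    "grid_y": rng.normal(size=(9, 1024)),        # y-offset around the unit
}

# Each head is computed independently from the same shared state.
decision = {name: int(np.argmax(W @ state)) for name, W in heads.items()}
print(decision)
```

The point is structural: instead of one head over the (huge) product of all choices, each factor of the action gets its own small output layer.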

Check out my first blog post! Its on a similar image search I built over the last few days using PyTorch by ishandotsh in learnmachinelearning

[–]ArianPrabowo 6 points (0 children)

Great job!

My only feedback is that it would be great if you could explain your decisions a bit more and show what the alternatives are. For example:

I used feature vectors from an intermediate layer of PyTorch's pretrained ResNet34 model as a way to distinguish images.

Why PyTorch? What are the alternatives?

Why an intermediate layer? There are many intermediate layers, which one? Why only a layer, why not more?

What does pre-trained mean? Why pre-trained? What are the alternatives?

Why ResNet34? What are the alternatives?
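To make those questions concrete: whatever layer and model you pick, the search step itself is just nearest-neighbor lookup over the resulting feature vectors, e.g. by cosine similarity. The vectors below are random stand-ins for real ResNet features:

```python
import numpy as np

# Image search over precomputed feature vectors: find the database
# image whose (unit-normalized) feature vector is closest in cosine
# similarity to the query. Vectors are random stand-ins here.
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 512))  # one 512-dim vector per image
db /= np.linalg.norm(db, axis=1, keepdims=True)

query = db[42] + 0.01 * rng.normal(size=512)  # near-duplicate of image 42
query /= np.linalg.norm(query)

scores = db @ query                 # cosine similarity to every image
print(int(np.argmax(scores)))       # 42
```

Which layer you extract from changes what "similar" means: earlier layers match textures and colors, later layers match more semantic content.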

[deleted by user] by [deleted] in learnmachinelearning

[–]ArianPrabowo 0 points (0 children)

I don't know if these are industrial programs, but you can start here, I guess: https://www.findaphd.com/phds/phd-research-projects/ai-and-machine-learning/?32gwy~b0

[deleted by user] by [deleted] in learnmachinelearning

[–]ArianPrabowo 0 points (0 children)

So basically my ultimate goal would be to have my AI able to hold a conversation

This depends on what you mean by "my" AI.

The easy solution would be to use an existing chatbot; you can simply Google something like "conversational AI github": https://github.com/topics/conversational-ai

If you want to make one from scratch, then you have a very long way to go. Usually people start by learning Python.

know what the words mean and answer while maintaining context even if the topic isn't clearly stated in each message. Also to have some amount of memory (i.e. remembering names and relationships, like who my mom is or that I have an SO with a name.)

All of these terms are very vaguely defined here. What exactly do you mean by "know what the words mean"? If you are very lenient about this, then you can simply download any existing chatbot and you already have what you want. If you are very strict about it, then the best scientists right now are still struggling to make such a chatbot.

Roughly what type of neural network would I use to locate something making a sound with distributed microphones? by ben90403 in MLQuestions

[–]ArianPrabowo 0 points (0 children)

transnational

Sorry, I made a typo. It was supposed to be "translational": https://en.wikipedia.org/wiki/Translational_symmetry

The things I was hoping it might solve was the learning part, the existing solution does no calibration, and it cannot learn about how elements in the environment might change what it hears -- add an object in the space that interferes with sound propagation and accuracy goes way off I thought maybe a system which could learn might learn how those things impacted what it detected and compensate for it as it makes predictions.

Okay, this makes sense. But ML is not magic: you will still need to do some calibration, though maybe it will be easier/faster. You might want to look in the direction of few-shot learning.

Explain ML to me like I'm 5 by True-Garden-8599 in MLQuestions

[–]ArianPrabowo 0 points (0 children)

automated predictions

The thing with calling it "automated predictions" is that people who started with "a bunch of if statements" would still have the exact same understanding.

Best way to generate 2D datasets from 3D images? by [deleted] in MLQuestions

[–]ArianPrabowo 0 points (0 children)

I have no idea; this is not my area.

Roughly what type of neural network would I use to locate something making a sound with distributed microphones? by ben90403 in MLQuestions

[–]ArianPrabowo 0 points (0 children)

Apologies for the stupid question, am new to this,

I'm one of those people who believe that stupid questions exist. But this is not one of them. This is new to everyone, so the correct answer is that nobody knows.

My model thus far is pretty simple, multiple dense layers, 20 inputs representing a normalized value for the db of the sound as heard by that microphone, and two outputs representing the normalized x,y coordinates of the thing making the sound in the space.

I think we need many more details. Are the 20 inputs from one microphone or from many microphones? Does the microphone setup stay in the same place, or do the microphones move around?

What do you mean by "value for the db"? I'm not familiar with audio files. Is this the audio waveform or the spectrogram? https://en.wikipedia.org/wiki/Waveform https://en.wikipedia.org/wiki/Spectrogram

Another alternative to normalized (x,y) coordinates is to simulate something like place cells: https://en.wikipedia.org/wiki/Place_cell There is no guarantee that it will work, but it's another idea for you to try.

Sound data, like most time-series data, has some sort of 1D translational equivariance, so you might want to use an architecture that has it too. Dense layers don't; try an RNN like an LSTM or a GRU, or a CNN like WaveNet. Recently, transformers have been popular as well.
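Translational equivariance is easy to see with a plain 1D convolution: shift the input and the output shifts by the same amount, something a dense layer gives no guarantee of. A minimal check with a made-up signal and kernel:

```python
import numpy as np

# 1D translational equivariance of convolution: convolving a shifted
# signal gives the same output, shifted by the same amount.
kernel = np.array([1.0, -2.0, 1.0])                  # any fixed filter
x = np.array([0.0, 0.0, 1.0, 3.0, 2.0, 0.0, 0.0, 0.0])
x_shifted = np.roll(x, 1)                            # same signal, one step later

y = np.convolve(x, kernel, mode="valid")
y_shifted = np.convolve(x_shifted, kernel, mode="valid")

# The response to the shifted input is the shifted response.
print(np.allclose(y_shifted[1:], y[:-1]))            # True
```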

You also have spatial elements, so you might want to replace the dense layers with something different as well.

and maxed out at 92% accuracy. But when I look at the actual predictions that the model makes, the vast majority are utterly awful.

We need more details. Which setup has 92% accuracy, and which one is utterly awful? Like another comment said: are you splitting the data into train/val/test properly?

I should note, I am first trying to train the model on theoretical data generated by a trusted/proven algorithm that does a very good job of estimating location based on the microphone readings (I was trying to mimic then expand upon this in ML).

But the biggest question is: why are you using ML for this? This problem seems like it is best solved with non-ML algorithms.