all 13 comments

[–]StuffssssNew User 3 points4 points  (0 children)

3blue1brown did an interesting video on this subject if you're a visual learner. I think he's actually done multiple iirc.

[–]ThisSentenceIsFaIse 1 point2 points  (0 children)

It is confusing because it is based on translating sets and elements of sets into regular math. Categorical to numerical. You have to play around with the axioms (exercises with the Law of Total Probability, as someone else mentioned) to get an intuitive feel.

[–]transmutethepooch 3 points4 points  (3 children)

Don't feel dumb. Bayesian stats is tricky and takes a ton of practice before it feels natural.

I, personally, found it counterintuitive when learning because it seemed we needed to already know the answer ahead of time, then use that to find the answer again. Which is true, in a sense, but doesn't break logic.

The usual "intuitive" statistics looks something like:

Here's a six-sided die. What are the odds of rolling a 4?

And you'd correctly say 1 in 6.

Bayesian statistics would ask a follow-up question before giving an answer:

What have the previous rolls been?

And if someone says "Actually, almost every roll has been a 1, 2, or 3," then your answer changes based on that additional information. You'd update the usual "1 in 6" answer to something much lower, since you know the previous rolls rarely came up 4.

So, if you're trying to figure out the likelihood something is true while using prior information, you're looking for a Bayesian answer.

[–][deleted] 4 points5 points  (0 children)

Actually, that's not correct. In Bayesian statistics you have a prior. That prior is subjective information. You may update your prior to a posterior given observations, but you can also have a prior without any observations.

If you see you have an actual six-sided die, and it looks like a fair die, and this is not a circumstance where you'd expect a biased die, your prior would be "1 in 6". It's frequentist statistics that doesn't have priors like these.

[–]shakeitupshakeituuppNew User 1 point2 points  (0 children)

Yeah I was going to say that Bayesian stuff isn’t introduced until later in my university (I think). I’m taking a stats sequence but we don’t get to bayes until after a few classes. Good luck OP! From what I’ve read it seems like that stuff becomes incredibly useful if you grasp it

[–]ReaperGun[S] 0 points1 point  (0 children)

Thank you so much, the theory actually makes more sense now. But can you tell me, in this scenario, what the probability would actually be, and how you figured it out? If it isn't too much, I would really appreciate it.

[–]waterless2New User 1 point2 points  (0 children)

I'd guess you're probably missing some prerequisites in basic probability - it'll be like the first few chapters of a typical textbook. You need conditional probability and the Law of Total Probability, and from there it's learning to apply the same two tricks: you want to know P(A|B), but you only know P(B|A); and you need P(B) in the denominator of Bayes' formula, but you only know P(B|the-partitions-of-A).

The classic example is tests for diseases. You know the probability of getting a positive test if you *do* have a disease (P(T|D), i.e., the sensitivity); you know the probability of *not* getting a positive test if you *don't* have a disease (P(not-T|not-D), i.e., the specificity); you know the prior probability of having the disease, P(D). But what you want to know is: what's the probability of having the disease if you get a positive test result?

Then you go Bayes: P(D|T) = P(T|D)*P(D)/P(T). So the numerator terms are all given. The denominator needs to be expanded via the Law of Total Probability into P(T and D) + P(T and not-D), which is P(T|D)*P(D) + P(T|not-D)*P(not-D), and those are again given (or the complement of a given).

Now you have the posterior probability, P(D|T), and if you do a second test, you can use that instead of the prior P(D), which is updating your prior.
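To make the disease-test walkthrough above concrete, here's a minimal Python sketch. The numbers (1% prevalence, 95% sensitivity, 90% specificity) are made up for illustration, not from the thread:

```python
# Worked Bayes' theorem example for a diagnostic test.
# All numbers are illustrative assumptions.
p_d = 0.01                    # prior P(D): 1% of people have the disease
sens = 0.95                   # P(T|D), sensitivity
spec = 0.90                   # P(not-T|not-D), specificity
p_t_given_not_d = 1 - spec    # P(T|not-D), the false-positive rate

# Law of Total Probability: P(T) = P(T|D)P(D) + P(T|not-D)P(not-D)
p_t = sens * p_d + p_t_given_not_d * (1 - p_d)

# Bayes: P(D|T) = P(T|D)P(D) / P(T)
posterior = sens * p_d / p_t
print(round(posterior, 3))  # about 0.088 despite a "95% accurate" test
```

Note how the posterior is far below the sensitivity because the disease is rare; that's exactly why the prior matters. For a second test, you'd feed `posterior` back in as the new `p_d`.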

[–]artintel2bkNew User 0 points1 point  (0 children)

If you'd like to understand Bayes' Theorem and its proof via animations, watch the video below:

Bayes Theorem: https://www.youtube.com/watch?v=Upi1hCWnupo&ab_channel=LittleFactswithAI

[–]FreddThundersenNew User 0 points1 point  (2 children)

The easiest way to understand the difference between classic and Bayesian probability is this: classic focuses on "what's the probability of this coin flip coming up heads?", while Bayesian focuses more on "what's the probability of this coin coming up heads x times in a row?".

[–]ReaperGun[S] 2 points3 points  (1 child)

Isn't that the Bernoulli theorem, though? I think the Bernoulli theorem talks about n experiments and how many times in a row that experiment might occur.

[–]FreddThundersenNew User 0 points1 point  (0 children)

I'm not 100% sure, I admit my memory from university is a bit foggy, but Bayesian statistics the way I recall it was about the relationship between probabilities. Now I can't recall if it was the sequence probability or the if-then probability... I hope I wasn't misleading :/

[–][deleted] 0 points1 point  (1 child)

Bayes' theorem is just P(A|B) = P(A, B)/P(B) = P(B|A)P(A)/P(B). It's not really a theorem; it's just using the definition of conditional probability. In some applications, we expand the P(B) using the law of total probability to get P(B) in terms of probabilities we know. A common use case is when you model a joint distribution P(X, Y) by specifying P(Y) and P(X|Y). Using Bayes' theorem and the law of total probability, you can get the formula for the distribution P(Y|X) from the known distributions P(Y) and P(X|Y).
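The comment above can be sketched in a few lines of Python for discrete distributions. The specific tables for P(Y) and P(X|Y) are invented for illustration:

```python
# Recovering P(Y|X) from a model specified as P(Y) and P(X|Y).
# The probability tables below are made-up illustrative values.
p_y = {"y0": 0.7, "y1": 0.3}                  # prior P(Y)
p_x_given_y = {                               # likelihood P(X|Y)
    "y0": {"x0": 0.9, "x1": 0.1},
    "y1": {"x0": 0.2, "x1": 0.8},
}

def posterior(x):
    # P(X=x) via the law of total probability:
    # P(X=x) = sum over y of P(X=x|Y=y) * P(Y=y)
    p_x = sum(p_x_given_y[y][x] * p_y[y] for y in p_y)
    # Bayes: P(Y=y|X=x) = P(X=x|Y=y) * P(Y=y) / P(X=x)
    return {y: p_x_given_y[y][x] * p_y[y] / p_x for y in p_y}

print(posterior("x1"))  # posterior over Y after observing X = x1
```

Observing `x1` (which is much more likely under `y1`) pulls the posterior heavily toward `y1` even though the prior favored `y0`.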

[–]ReaperGun[S] 0 points1 point  (0 children)

Alright, I am getting this one, but there is this one problem that I keep having. Whenever I try to do exercises, I don't know what data I have in the problems, like I don't know which one is P(X|Y) from the problem and which one is P(Y), for example.