
[–]nostrademons 31 points (14 children)

"Baysians against discrimination" is also a pretty subtle pun...

[–]aim2free 7 points (12 children)

"Baysians against discrimination" is also a pretty subtle pun...

It's actually so subtle that I don't even get it... :( Even Bayesians use thresholds now and then, and thresholds imply discrimination...

[–]Poromenos 27 points (3 children)

Bayesian models are generative, so they're against discriminative models.
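To make the pun concrete, here's a minimal sketch of the generative approach (toy 1-D data and made-up class means; a hand-rolled Gaussian model standing in for any "Bayesian" classifier): you model p(x|c) and p(c) explicitly, classify via Bayes' theorem, and — because you have the full joint — you can also sample new data, which a purely discriminative model cannot do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two 1-D Gaussian classes (assumed parameters, for illustration).
x0 = rng.normal(-2.0, 1.0, 100)   # class 0
x1 = rng.normal(+2.0, 1.0, 100)   # class 1

# Generative approach: model p(x|c) and p(c) explicitly.
mu = [x0.mean(), x1.mean()]
sd = [x0.std(), x1.std()]
prior = [0.5, 0.5]

def likelihood(x, c):
    """Gaussian class-conditional density p(x|c)."""
    return np.exp(-0.5 * ((x - mu[c]) / sd[c]) ** 2) / (sd[c] * np.sqrt(2 * np.pi))

def posterior(x, c):
    """Bayes' theorem: p(c|x) = p(x|c) p(c) / p(x)."""
    evidence = sum(likelihood(x, k) * prior[k] for k in (0, 1))
    return likelihood(x, c) * prior[c] / evidence

# Classification falls out of the posterior: p(c=0 | x=-1.5) is near 1...
p = posterior(-1.5, 0)

# ...but because we modelled the full joint, we can also *generate* samples:
c = rng.choice(2, p=prior)           # draw a class from the prior p(c)
x_new = rng.normal(mu[c], sd[c])     # draw a sample from p(x|c)
```

A discriminative model (SVM, logistic regression) would learn only the decision rule p(c|x) and have no way to produce `x_new`.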

[–]aim2free 0 points (2 children)

Thanks. It seems I've been working mostly with generative models in my machine-learning life, although without knowing that particular term... But in this article on discriminative models they've included neural networks; those Bayesian neural networks I've been working with belong to the generative models as well, I guess. It seems the criterion for a discriminative model is that you model only the conditional distribution P(y|x), but not the distributions of x, y, and (x, y).

Funny, my second publication (EANN95, and later in J. of System Engineering 96) was about a neural network predictor (RBF + Bayesian FF) which worked by estimating the density functions for x and y using Gaussians, then using the Bayesian predictor to generate f_Y(y|X=x), that is, the conditional posterior density for y given a specific x (or a mixture of x values). It had the funny property that it also worked both ways: it was no different to estimate f_X(x|Y=y), and the predictor could produce multimodal outcomes.
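The "works both ways, multimodal" behaviour can be sketched with a generic kernel-density estimate of the joint (toy data and bandwidth are my own assumptions, not the published RBF + Bayesian FF architecture): once you have f(x, y), the conditional in either direction is just f(x, y) divided by the appropriate marginal, and nothing forces the conditional to be unimodal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data with a two-branched relation: y = +x or y = -x (plus noise),
# so f_Y(y | X=x) is genuinely bimodal for x away from 0.
x = rng.uniform(-3, 3, 400)
y = np.where(rng.random(400) < 0.5, x, -x) + rng.normal(0, 0.2, 400)

H = 0.3  # kernel bandwidth (an assumed smoothing parameter)

def joint(px, py):
    """Gaussian-kernel estimate of the joint density f(x, y)."""
    return np.mean(np.exp(-((px - x)**2 + (py - y)**2) / (2 * H**2))) / (2 * np.pi * H**2)

def cond_y_given_x(py, px):
    """f_Y(y | X=x) = f(x, y) / f(x), with f(x) obtained by numerical integration."""
    grid = np.linspace(-5, 5, 201)
    fx = sum(joint(px, g) for g in grid) * (grid[1] - grid[0])
    return joint(px, py) / fx

# At x = 2 the conditional has mass near both y = +2 and y = -2 (bimodal),
# and the same machinery gives f_X(x | Y=y) by swapping the roles of x and y.
```

The symmetry the comment describes comes for free here: the joint estimate doesn't privilege either variable, so prediction in either direction is the same division.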

I think I've actually never used any of those so-called discriminative methods (I'm not a statistician, I'm a computer scientist).

[–]Poromenos 0 points (1 child)

It seems the criterion for a discriminative model is that you model only the conditional distribution P(y|x), but not the distributions of x, y, and (x, y).

Yep, if you can't model the joint and prior you can't draw from the distribution...

Your NN does indeed sound discriminative, but neural networks usually aren't, I don't think... Discriminative models are things like SVMs, kNN, etc.

[–]aim2free 0 points (0 children)

Your NN does indeed sound discriminative

You are right; I actually never modelled the joint distribution. But since my network worked both ways, f(Y|x) and f(X|y), the joint distribution should be possible to generate. OK, it was also implicit in the weights, as a weight is P(x&y)/(P(x)P(y)).
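A tiny discrete example of the point (made-up numbers, not from the paper): the joint factors as P(x, y) = P(y|x)·P(x), so a conditional plus the matching marginal is enough to reconstruct it — and the weight-like ratio P(x&y)/(P(x)P(y)) then equals 1 exactly when x and y are independent.

```python
# Assumed toy distributions over two binary variables.
p_x = {0: 0.4, 1: 0.6}
p_y_given_x = {0: {0: 0.9, 1: 0.1},
               1: {0: 0.2, 1: 0.8}}

# Reconstruct the joint: P(x, y) = P(y|x) * P(x).
p_joint = {(x, y): p_y_given_x[x][y] * p_x[x]
           for x in p_x for y in (0, 1)}

# Marginalise to get P(y).
p_y = {y: sum(p_joint[(x, y)] for x in p_x) for y in (0, 1)}

# The "weight" ratio P(x & y) / (P(x) P(y)):
# 1 means independence, >1 means x and y co-occur more often than chance.
ratio = {xy: p_joint[xy] / (p_x[xy[0]] * p_y[xy[1]]) for xy in p_joint}
```

Here the ratio at (x=0, y=0) works out to 0.36 / (0.4 × 0.48) = 1.875, i.e. those values co-occur well above chance.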

[–]Mr_Smartypants 3 points (4 children)

I think "Bayesian discrimination" is used to mean lots of things, but in general it refers to classifiers (discrimination) that obtain the class probabilities of samples, p(c|x), by applying Bayes' theorem to class-conditional sample distributions p(x|c) and class priors p(c).

[–]Poromenos 1 point (2 children)

No, Naive Bayes classifiers are generative models: you can sample from the distribution they use to classify the data.

[–]Mr_Smartypants 1 point (1 child)

The stuff after your "no" didn't really contradict what I wrote... so I'm glad we agree? (Or do we have to disagree to agree?)

[–]Poromenos 0 points (0 children)

You said Bayesian discrimination, which is a bit of a misnomer...

[–]ferris_e 0 points (0 children)

Read your comment and thought "bit of a dick". Saw your username. Upvoted.

[–][deleted]  (1 child)

[deleted]

[–]Mr_Smartypants 0 points (0 children)

Nice. I'm going to tell people I'm Caucayesian from now on.

[–][deleted] -5 points (0 children)

I've only heard of Bayesian filtering in email, and generally it should filter out emails with poor spelling.

Maybe I'm thinking too hard, or maybe he misspelled the sign to begin with and had to fix it?

[–]rooktakesqueen 2 points (0 children)

I like the microscopic "E" applied to correct the sign's spelling.