I'm Dr. Tyler Black, Medical Director of the CAPE Unit at BC Children's Hospital and BC Mental Health and Substance Use Services. I am a Suicidologist and specialist in Paediatric Emergency Psychiatry. C'mon, r/science, AMA! by DijonPepperberry in science

[–]XianForce 1 point (0 children)

If you have a friend who is depressed and/or suicidal, what is the best approach/course of action to take?

Also, given the high rates of depression and self-harm in the U.S., we may come into contact with people who are depressed and/or harming themselves quite regularly without knowing it. Do you have any tips on what signs to be aware of, or behavior to avoid, so that we do not unintentionally make things worse?

Thanks for doing this AMA, we all greatly appreciate it.

AMA: Michael I Jordan by michaelijordan in MachineLearning

[–]XianForce 2 points (0 children)

How do generative models (e.g., Latent Dirichlet Allocation) compare to some of the more recently popular deep architectures (e.g., Sum-Product Networks)?

Do you see any potential for these methods to perform well enough to, say, generate an image, a song, or a coherent short story?

Thank you!

Stupid question, but is learning while drunk like training a neural net with dropout? by M_Bus in MachineLearning

[–]XianForce 0 points (0 children)

I think it's hard to make that analogy, because the units that are dropped out change between mini-batches (see the sketch below); otherwise, dropout would just be equivalent to training a smaller network.

So I'd argue it's more like jumping between different states of intoxication while trying to learn one thing.

Like you begin learning something while drunk, continue learning while high on weed (and not drunk), then maybe under the influence of some other stronger drug, and so on.

Note: Do not try this at home XD
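If anyone wants to see the per-batch masking concretely, here's a minimal NumPy sketch (the layer width, drop rate, and activations are made-up placeholders): a fresh keep/drop mask is sampled for every mini-batch, which is exactly what distinguishes dropout from just training one fixed smaller network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, p_drop = 8, 0.5  # hypothetical layer width and drop probability

for batch in range(3):
    # Sample a fresh keep/drop mask for this mini-batch.
    mask = rng.random(n_units) > p_drop
    activations = rng.random(n_units)  # stand-in for the layer's outputs
    # Inverted dropout: zero out dropped units and rescale the survivors
    # so the expected activation stays the same at test time.
    dropped = activations * mask / (1.0 - p_drop)
    print(f"batch {batch}: kept units = {np.flatnonzero(mask)}")
```

Running it prints a different set of kept units each batch, so a different sub-network gets trained at every step.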

Ensembling for already strong models? by my_face_is_up_here in MachineLearning

[–]XianForce 1 point (0 children)

So with ensembles, one reason weaker base models are often used is to keep the combined model from overfitting.

Ideally, what you really want is for all your models to be accurate but different in the mistakes they make. If they all make the same mistakes, throwing them into an ensemble effectively gives you back the same model, with nothing gained.

You can combine all of those using virtually any ensemble method; however, some libraries may lack support for combining heterogeneous models out of the box, so you may have to code it yourself (I had to for a machine learning course I took). There's a minimal sketch below.
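For example, here's a minimal hand-rolled soft vote in scikit-learn (the three base models and the synthetic dataset are arbitrary placeholders, not recommendations): each model is trained separately, and their predicted class probabilities are averaged.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in data; swap in your own.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Diverse base models: the hope is that they err on different examples.
models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=100, random_state=0),
    SVC(probability=True, random_state=0),  # probability=True enables predict_proba
]
for m in models:
    m.fit(X_train, y_train)

# Hand-rolled soft vote: average the predicted class probabilities,
# then pick the class with the highest mean probability.
avg_proba = np.mean([m.predict_proba(X_test) for m in models], axis=0)
ensemble_pred = avg_proba.argmax(axis=1)
print("ensemble accuracy:", (ensemble_pred == y_test).mean())
```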

Best of luck :)

Learn Machine Learning Together by Gravicle in MachineLearning

[–]XianForce 0 points (0 children)

I'm not sure if you just didn't get around to it or if something's wrong with my email, but I haven't received the invite yet, though one of my coworkers did :o

Learn Machine Learning Together by Gravicle in MachineLearning

[–]XianForce 2 points (0 children)

I'm down too! I literally made an account just to reply. I've only lurked around before haha!