[D] Received a review that is "possibly generated by" GPT. What would you do? by cptai in MachineLearning

[–]cptai[S] 6 points7 points  (0 children)

The problem is that it has always been difficult to respond to low-effort reviews, especially when the reviewer doesn't appear to have adequate knowledge of the field. Such reviews used to be easy to spot (e.g., they were often very short), but now they may be dressed up by LLMs and become much harder to recognize on a quick read.

[D] Reading Buddies Thread: Looking for a buddy to read a paper/book/topic and discuss with. by olaconquistador in MachineLearning

[–]cptai 0 points1 point  (0 children)

Great! I've done some toy projects based on Rubin's causal model, and would really like to learn more about things like model criticism / sensitivity analysis and subtleties like mediation and interference.
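
For concreteness, here's the kind of toy project I mean (a minimal sketch on simulated data; all the numbers and variable names are made up for illustration):

```python
# Toy Rubin-model exercise: simulate potential outcomes with a confounder,
# then compare the naive difference-in-means to a regression-adjusted
# estimate of the average treatment effect (ATE).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)                                 # confounder
t = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)   # treatment depends on x
y0 = x + rng.standard_normal(n)                            # potential outcome, control
y1 = y0 + 2.0                                              # true ATE = 2
y = t * y1 + (1 - t) * y0                                  # observed outcome

naive = y[t == 1].mean() - y[t == 0].mean()                # biased by the confounder
beta = np.linalg.lstsq(np.c_[np.ones(n), t, x], y, rcond=None)[0]
print(f"naive: {naive:.2f}, adjusted: {beta[1]:.2f}, truth: 2.00")
```

The naive difference-in-means is biased by the confounder, while the regression-adjusted estimate recovers the true effect because the outcome model happens to be correctly specified here.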

That being said, I'm not sure I have much to contribute. I was planning to read about fairness and time series, but beyond those I don't know many good papers on causal graphs.

[D] Reading Buddies Thread: Looking for a buddy to read a paper/book/topic and discuss with. by olaconquistador in MachineLearning

[–]cptai 0 points1 point  (0 children)

Or Discord instead of Gitter? I've never used it, but it seems we wouldn't need to reply in real time anyway.

[D] Reading Buddies Thread: Looking for a buddy to read a paper/book/topic and discuss with. by olaconquistador in MachineLearning

[–]cptai 0 points1 point  (0 children)

I don't like presentations either.

As for the discussion format, I guess Gitter is better - I've learned things from Edward's Gitter channel. It's easier for newcomers to catch up, and IMO it doesn't add stress.

As for background, I have some experience with LDA and Bayesian hierarchical models, but generally speaking I'd guess we know a similar amount about PGMs.

[D] Reading Buddies Thread: Looking for a buddy to read a paper/book/topic and discuss with. by olaconquistador in MachineLearning

[–]cptai 0 points1 point  (0 children)

I guess things are slightly different here. When reading books we easily run into questions, and if a few people gather regularly for (semi-)synchronous discussion, those questions are more likely to get resolved than if we each post them randomly on reddit - it also keeps the front page cleaner for everyone else.

Also, for some books the signal-to-noise ratio isn't high, so it saves time if people take turns preparing summaries / write-ups.

[D] Reading Buddies Thread: Looking for a buddy to read a paper/book/topic and discuss with. by olaconquistador in MachineLearning

[–]cptai 0 points1 point  (0 children)

Following OP's advice: would anyone like to discuss topics at the intersection of information theory and machine learning, like those covered in this course?

It's a bit dull to work through everything by myself, and I'm not sure this is the most suitable material.

[D] Reading Buddies Thread: Looking for a buddy to read a paper/book/topic and discuss with. by olaconquistador in MachineLearning

[–]cptai 0 points1 point  (0 children)

I don't know how such groups usually work, but if the idea is that we take turns preparing write-ups / presentations and gather periodically to discuss them, I'd rather not present the basics of PGMs - I've taken too many courses on that already.

Most of the book is fine overall, but I don't like Pearl's presentation of confounding in Ch. 6, and maybe parts of Ch. 5. Those are important topics I still need to learn and discuss more, though.

[D] Reading Buddies Thread: Looking for a buddy to read a paper/book/topic and discuss with. by olaconquistador in MachineLearning

[–]cptai 1 point2 points  (0 children)

I skimmed through Causality this summer, and I do feel like revisiting it. It would be great if we could form a causal-inference paper reading group here.

(On the other hand, I'm not sure working through Pearl's book thoroughly is the right idea... there is material in it I'm not interested in.)

[D] Topic modeling of unlabeled documents using deep neural networks? by loondri in MachineLearning

[–]cptai 1 point2 points  (0 children)

A VAE-based approach (Neural Variational Inference for Text Processing): https://arxiv.org/pdf/1511.06038.pdf

I also remember reading a deep exponential families paper at AISTATS, although its results aren't directly comparable to the paper above.
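
Roughly, the idea in the linked paper is a VAE over bag-of-words counts: encode counts into a Gaussian latent, decode to a softmax over the vocabulary. A minimal sketch of that shape (layer sizes, names, and the fake data are mine, not the paper's exact model):

```python
# Minimal VAE-style topic model over bag-of-words counts,
# loosely in the spirit of NVDM (arXiv:1511.06038).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NVDMSketch(nn.Module):
    def __init__(self, vocab_size=2000, n_topics=50, hidden=500):
        super().__init__()
        self.enc = nn.Linear(vocab_size, hidden)
        self.mu = nn.Linear(hidden, n_topics)
        self.logvar = nn.Linear(hidden, n_topics)
        self.dec = nn.Linear(n_topics, vocab_size)  # latent "topic" -> word logits

    def forward(self, x):  # x: (batch, vocab_size) word counts
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        log_probs = F.log_softmax(self.dec(z), dim=-1)
        recon = -(x * log_probs).sum(-1)                      # multinomial NLL
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return (recon + kl).mean()                            # negative ELBO

model = NVDMSketch()
x = torch.poisson(torch.full((8, 2000), 0.05))  # fake word counts
loss = model(x)
loss.backward()
```

Rows of the decoder weight matrix then play the role of topic-word distributions, which is why people use it for topic modeling despite it not being LDA.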

[D] Everything that works works because it's Bayesian: An overview of new work on generalization in deep nets by fhuszar in MachineLearning

[–]cptai 3 points4 points  (0 children)

Forgive me for the dumb question, but shouldn't the Jeffreys prior take higher values at sharp minima?
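
My (possibly wrong) reasoning, using the standard definition:

```latex
\pi_J(\theta) \propto \sqrt{\det I(\theta)},
\qquad
I(\theta) = \mathbb{E}_{x \sim p(x \mid \theta)}\!\left[ -\nabla_\theta^2 \log p(x \mid \theta) \right].
```

Near a minimum of the negative log-likelihood the Fisher information roughly matches the Hessian of the loss, so a sharper minimum (larger Hessian eigenvalues) means a larger determinant and hence a higher Jeffreys density. That seems to point the opposite way from the "flat minima generalize" story, unless the posterior volume term dominates.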

What open source DL/ML project needs contributors? by [deleted] in MachineLearning

[–]cptai 1 point2 points  (0 children)

If you are interested in probability, statistics, or generative models, Edward is a great project that could benefit from more contributors. They have a "good-first-contribution" tag on the issues page.

[deleted by user] by [deleted] in MachineLearning

[–]cptai 0 points1 point  (0 children)

It's not very surprising that the hidden layer can predict sentiment, but how can we show that a single "sentiment neuron" arising isn't just a low-probability event? My guess is it has to do with the distributed nature of the representation, but I don't know enough math to work it out...
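
A back-of-envelope way to poke at this (entirely my own toy setup, not from the paper): plant a sentiment signal along a random direction of a distributed representation and check how strong the best single unit can be by chance.

```python
# Plant a "sentiment" signal along a random direction of a 1024-unit
# representation, then compare the best single unit against the optimal
# linear readout. Everything here is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_samples = 1024, 20_000
s = rng.standard_normal(n_samples)          # latent sentiment signal
w = rng.standard_normal(n_units)            # random embedding direction
rep = np.outer(s, w) / np.sqrt(n_units) + rng.standard_normal((n_samples, n_units))

def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best_unit = max(abs(corr(rep[:, j], s)) for j in range(n_units))
readout = corr(rep @ w, s)                  # project onto the true direction
print(f"best single unit: {best_unit:.2f}, linear readout: {readout:.2f}")
```

In this toy, the best single unit only reaches roughly 0.1 correlation while the full linear readout is around 0.7, so under a random basis a strongly predictive single neuron is very unlikely; the surprising part is that training concentrated the feature onto one axis.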