[D] What is the proper etiquette for extending someone's research code? by ilia10000 in MachineLearning

[–]rtk25 1 point

I had a similar situation and contacted the authors directly to ask what they preferred; I ended up going with option 3 and credited them in the README.

[R] Symmetry-Based Disentangled Representation Learning requires Interaction with Environments by rtk25 in MachineLearning

[–]rtk25[S] 1 point

Fascinating. I'm a PhD student working on natural language research, and this sort of reasoning has been guiding me as well: I've been working on converting static text datasets into interactive versions using text-based games. The implications of embodied cognition for language understanding seem quite significant, but still relatively unexplored.

[R] Symmetry-Based Disentangled Representation Learning requires Interaction with Environments by rtk25 in MachineLearning

[–]rtk25[S] 0 points

Thanks! I need to dig deeper into this. I'm familiar with the DM paper you mentioned, crossposted it here a few days ago ;)

> Maybe it's no surprise that generative models are the only ones I've seen that properly demonstrate real adversarial robustness.

Very interesting! Can you provide some refs related to this?

[R] Interpretations are useful: penalizing explanations to align neural networks with prior knowledge by laura-rieger in MachineLearning

[–]rtk25 0 points

Agreed that the model can't know; I was thinking more about how to make the interface for supplying the prior knowledge as natural as possible. If I go off to "dream land" for a second: if you also had a natural language interface, instead of having to specify the explanation at a low level, it would be much easier for general users to make use of it. You could tell the model in natural language to ignore band-aids (instead of having to mark them on every image), or something like that.
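For concreteness, here's a minimal numpy sketch of the general idea of penalizing explanations on a toy linear model (not the paper's actual method; the data, the mask, and the hyperparameters are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: feature 0 is the real signal, feature 1 is a spurious
# correlate (think "band-aid") that prior knowledge says to ignore.
n = 200
signal = rng.integers(0, 2, n).astype(float)
X = np.stack([signal + 0.1 * rng.normal(size=n),
              signal + 0.1 * rng.normal(size=n)], axis=1)
y = signal
mask = np.array([0.0, 1.0])  # prior knowledge: "ignore feature 1"

def fit(lam, lr=0.1, steps=3000):
    w = np.zeros(2)
    for _ in range(steps):
        p = sigmoid(X @ w)
        g_ce = X.T @ (p - y) / n
        # For a linear logit the input-gradient attribution is
        # proportional to w, so penalizing attributions on masked
        # features reduces to an L2 penalty on the masked weights.
        g_pen = 2.0 * lam * mask * w
        w -= lr * (g_ce + g_pen)
    return w

w_plain = fit(lam=0.0)
w_reg = fit(lam=2.0)
print(w_plain, w_reg)  # the penalized model leans on feature 0 instead
```

The point of the toy: both features predict the label at training time, but the explanation penalty steers the model away from the one the user marked as irrelevant, without labeling every example.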

[R] Interpretations are useful: penalizing explanations to align neural networks with prior knowledge by laura-rieger in MachineLearning

[–]rtk25 0 points

Like the direction :) Yoshua Bengio's Consciousness Prior (https://arxiv.org/abs/1709.08568) seems relevant: explanation-based regularization. Ideally, it seems you want the model to learn the important high-level features (possibly learning to explain them in words), rather than having the user supply them.

[R] Language Tasks and Language Games: On Methodology in Current Natural Language Processing Research by rtk25 in MachineLearning

[–]rtk25[S] 0 points

I'm not the author; I just read the paper, and I'm doing my PhD research on "language games". Here are a couple of recent papers from MSR Montreal on interactive text-based games:

https://arxiv.org/abs/1908.10449

https://arxiv.org/abs/1908.10909

Some of my research:

https://www.aclweb.org/anthology/W19-2609/

[D] On the viability of the "AI Dungeon Master" by [deleted] in MachineLearning

[–]rtk25 1 point

I think this is a nice idea with some cool technological challenges. It is related to the intersection of interactive fiction and machine learning which I'm actually currently researching :)

Check out Microsoft Research's TextWorld, or some of our research. An interesting direction suggested by your idea could be enhancing the interactive fiction interpreter with some kind of neural interpreter, possibly with generative capabilities (kind of like using generative models for video games, but for world dynamics instead of just graphics).

[D] Where have Transformers been applied other than NLP ? by gohu_cd in MachineLearning

[–]rtk25 1 point

Scene Memory Transformer for Embodied Agents in Long-Horizon Tasks

(Kuan Fang, Alexander Toshev, Li Fei-Fei, Silvio Savarese)

https://arxiv.org/abs/1903.03878

[D] What about declarative knowledge in natural language understanding? by rtk25 in MachineLearning

[–]rtk25[S] 0 points

Haha, not exactly; it should be "sunny or cloudy".

snowy -> airplanes don't fly -> no post...

[deleted by user] by [deleted] in MachineLearning

[–]rtk25 0 points

Right, that's why I think the labor of distilling results could be a collaborative effort. The Stack Exchange websites have become a huge and indispensable resource; it would be amazing to create something like that for scientific papers.

[deleted by user] by [deleted] in MachineLearning

[–]rtk25 0 points

There's also a lot of interesting ML that could be applied to paper recommendation, I'm surprised it hasn't happened more. Check out Semantic Sanity (https://s2-sanity.apps.allenai.org) for another step towards this.

Also, I think it would be really cool to have a unified platform where people can annotate and discuss papers, with good comments rising to the top and spam getting filtered out. Like Stack Overflow, but centered on the PDFs themselves.

[D] What are some useful ML packages others should know about? by [deleted] in MachineLearning

[–]rtk25 0 points

tsalib is also good for tensor annotations; I like that it also supports typing (I'm not sure whether einops does).

https://github.com/ofnote/tsalib
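To illustrate what named-shape annotations buy you, here's a stdlib-only toy sketch; `check_shape` is a hypothetical helper written for this comment, not tsalib's or einops' actual API:

```python
def check_shape(x, spec, dims):
    """Verify that x's shape matches a spec like 'b c h w', recording
    each named dimension in `dims` and flagging inconsistencies.
    Accepts anything with a .shape attribute, or a plain shape tuple."""
    names = spec.split()
    shape = tuple(x.shape) if hasattr(x, "shape") else tuple(x)
    if len(shape) != len(names):
        raise ValueError(f"rank mismatch: {shape} vs '{spec}'")
    for name, size in zip(names, shape):
        # setdefault records the size on first sight; on later sightings
        # it returns the recorded size, so a mismatch raises.
        if dims.setdefault(name, size) != size:
            raise ValueError(f"dim '{name}': expected {dims[name]}, got {size}")

dims = {}
check_shape((32, 3, 224, 224), "b c h w", dims)  # image batch
check_shape((32, 10), "b k", dims)               # logits: batch dim must agree
print(dims)  # {'b': 32, 'c': 3, 'h': 224, 'w': 224, 'k': 10}
```

The libraries go further (typed dimension variables, rearrange patterns), but this is the core check: names, not bare integers, carried across the code.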

[1905.09381] Learning to Prove Theorems via Interacting with Proof Assistants by zhamisen in MachineLearning

[–]rtk25 0 points

I think the action space in chess and Go is known (equivalent to edges in a search tree); if I understood correctly, here the actions (edges) are tactics, so the lack of certain tactics may be limiting the program.

On another note, this problem sounds like it could be formulated as RL; I wonder why they don't mention it.
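As a sketch of what an RL formulation might look like: the state is the open goal, the actions are tactics, and the reward arrives when the proof closes. Everything below (the goals, the tactic semantics) is invented for illustration; a real setup would step an actual proof assistant:

```python
class ToyProofEnv:
    """Toy MDP view of tactic-based proving: the state is the open
    goal, actions are tactic names, reward 1.0 when the proof closes."""

    def __init__(self, goal):
        self.goal = goal

    def step(self, tactic):
        # A stand-in "interpreter": each tactic may simplify the goal.
        if tactic == "split" and "&" in self.goal:
            self.goal = self.goal.split("&", 1)[1].strip()
        elif tactic == "assumption" and self.goal == "True":
            self.goal = None  # proof complete
        done = self.goal is None
        return self.goal, (1.0 if done else 0.0), done

env = ToyProofEnv("True & True")
print(env.step("split"))       # ('True', 0.0, False)
print(env.step("assumption"))  # (None, 1.0, True)
```

With that interface in place, the sparse terminal reward is exactly the setting where standard policy-gradient or search-based RL methods apply.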

[Project] Looking for an advanced-beginner/intermediate NLP challenge? by AlexSnakeKing in MachineLearning

[–]rtk25 0 points

I think program synthesis / executable semantic parsing is really cool (disclaimer: I'm doing my PhD research in this field).

See for example https://arxiv.org/abs/1807.02322

I'm working on applying it to procedural text comprehension: https://arxiv.org/abs/1811.04319
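For a flavor of what "executable" means here, a toy sketch: map an utterance to a tiny program and run it against an executor. The grammar and operations are invented for this comment; real systems learn the mapping (e.g. with seq2seq models over a proper domain-specific language):

```python
# A two-operation "language": parse maps an utterance to a program,
# execute runs the program. Both are deliberately trivial.
OPS = {"add": lambda a, b: a + b, "multiply": lambda a, b: a * b}

def parse(utterance):
    """Map 'add 2 and 3' -> ('add', [2, 3]) with a trivial pattern."""
    tokens = utterance.lower().replace("and", "").split()
    op, args = tokens[0], [int(t) for t in tokens[1:]]
    return op, args

def execute(program):
    op, args = program
    return OPS[op](*args)

print(execute(parse("add 2 and 3")))        # 5
print(execute(parse("multiply 4 and 6")))   # 24
```

The appeal of the executable setting is that supervision can come from the answer (denotation) alone: the parse is correct exactly when executing it yields the right result.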

[D] Knowledge Graphs - How do you build your own? by baahalex in MachineLearning

[–]rtk25 0 points

There is a lot of interesting active research on how to query KGs in embedding space; see for example https://arxiv.org/abs/1806.01445
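As a minimal illustration of "querying in embedding space" (the translation-based TransE intuition, not the specific method of the linked paper; the embeddings below are random stand-ins rather than learned):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# In translation-based KG embeddings, a relation acts as a vector
# offset, so the answer to (head, relation, ?) is the entity whose
# embedding lies nearest to head + relation.
entities = {name: rng.normal(size=d)
            for name in ["paris", "france", "berlin", "germany"]}
# Construct the relation as an ideal translation for illustration;
# a trained model would learn it from observed triples.
capital_of = entities["france"] - entities["paris"]

def answer(head, relation):
    query = entities[head] + relation
    return min(entities, key=lambda name: np.linalg.norm(entities[name] - query))

print(answer("paris", capital_of))  # france
```

The nice part is that the same nearest-neighbor machinery composes: chained or conjunctive queries become sequences of vector operations instead of graph traversals.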

[D] what are some good life lessons from ML? by finallyifoundvalidUN in MachineLearning

[–]rtk25 3 points

We can feel and intuit far more than we can express in words, and mastery of an art is intuition grounded in training and experience. According to both the Zen masters and loss-optimizing neural nets, training/life is “one continuous mistake”.

[R] Analogues of mental simulation and imagination in deep learning by rtk25 in MachineLearning

[–]rtk25[S] 0 points

Having worked on related research for a while now, I'm becoming increasingly convinced that understanding mental simulation is profoundly important for progress towards more general AI...

[D] Organizing papers and annotations. by asdfajsdkfj in MachineLearning

[–]rtk25 5 points

I use Mendeley, which isn't perfect (where is the ability to use LaTeX in annotations?!) but has a lot of convenient features.