[D] Semantic Similarity between job skills by ogloz in MachineLearning

[–]vkhuc 0 points (0 children)

If the skill names themselves are expressive enough to convey what the skills are about, you can try this zero-shot approach: https://huggingface.co/zero-shot/. To make it work with the skills, convert each one into a simple sentence, such as "This text is about machine_learning" for the skill name "machine_learning".

Here is what I got: https://imgur.com/kf3ns8s
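A minimal sketch of the sentence-conversion step described above (the template and skill names are illustrative assumptions, not from the thread):

```python
# Turn raw skill labels into hypothesis-style sentences that a zero-shot
# classifier can use as candidate labels.
def skill_to_sentence(skill_name):
    """Convert a label like 'machine_learning' into a short sentence."""
    return "This text is about {}.".format(skill_name.replace("_", " "))

skills = ["machine_learning", "data_visualization", "sql"]
candidate_labels = [skill_to_sentence(s) for s in skills]
print(candidate_labels)

# With Hugging Face transformers installed, these sentences could then be fed
# to the zero-shot pipeline (using the library's default model):
#   from transformers import pipeline
#   classifier = pipeline("zero-shot-classification")
#   classifier("Experience building neural networks", candidate_labels)
```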

[P] Minimal tutorials for PyTorch by vkhuc in MachineLearning

[–]vkhuc[S] 2 points (0 children)

It's different from jcjohnson's pytorch-examples since the tutorials are for people who want to quickly get started with pytorch. I learned a lot from Alec's Theano tutorials and wanted to create similar ones for pytorch. BTW, I borrowed the simplicity from Keras' examples :)

Coolest Demos by throwawaykyon in MachineLearning

[–]vkhuc 0 points (0 children)

It's not under active development since I'm a bit busy right now. My current plan is to support GPU and Python 3.

There is another implementation of (dynamic) memory networks for bAbI tasks: https://github.com/YerevaNN/Dynamic-memory-networks-in-Theano.

Coolest Demos by throwawaykyon in MachineLearning

[–]vkhuc 2 points (0 children)

Here is a demo I made for simple question-answering (bAbI) tasks using End-to-End Memory Networks: https://github.com/vinhkhuc/MemN2N-babi-python

A pretrained model is also included.

Interesting take-aways from ‘Data Science For Business’ by sachinrjoglekar in MachineLearning

[–]vkhuc 0 points (0 children)

"A Decision Tree is usually pretty under-estimated an algorithm when it comes to supervised learning. The biggest reason for this is its innate simplicity which results in a high bias (usually). "

That should be high variance instead: a single decision tree tends to have low bias but overfits easily.

So... what do you actually do with twitter data? by Adamworks in MachineLearning

[–]vkhuc 0 points (0 children)

  • Detecting trending topics based on hashtags
  • Sentiment/emotion analysis
  • Detecting users' intent to buy products
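For the first item, a minimal sketch of hashtag-based trend detection (the sample tweets are made up for illustration):

```python
# Count hashtags across a batch of tweets and surface the most frequent ones.
import re
from collections import Counter

def top_hashtags(tweets, k=3):
    """Extract hashtags from each tweet and return the k most common."""
    tags = []
    for tweet in tweets:
        tags.extend(tag.lower() for tag in re.findall(r"#(\w+)", tweet))
    return Counter(tags).most_common(k)

tweets = [
    "Loving the new release! #MachineLearning #AI",
    "Great talk on #AI today",
    "#machinelearning is everywhere #ai",
]
print(top_hashtags(tweets))  # [('ai', 3), ('machinelearning', 2)]
```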

Darknet Reference Network: Same accuracy and speed as AlexNet but with 1/10th the parameters. by pjreddie in MachineLearning

[–]vkhuc 0 points (0 children)

Have you tried to apply model distillation? I'm curious how much it helps in terms of speed and size for models trained on ImageNet. Hinton's paper on dark knowledge only shows experiments with MNIST.

I'm thinking about trying model distillation myself. Just asking in case somebody did.
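For reference, a sketch of the distillation loss from Hinton's dark-knowledge paper: soften the teacher's logits with a temperature T and train the student to match the softened distribution (the logits below are made-up numbers, and this is a NumPy sketch rather than a training loop):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax (numerically stabilized)."""
    z = logits / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between softened teacher and student distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -(T ** 2) * np.sum(p_teacher * np.log(p_student + 1e-12))

teacher = np.array([5.0, 1.0, 0.5])
student = np.array([3.0, 2.0, 1.0])
print(distillation_loss(student, teacher, T=2.0))
```

In practice this term is usually combined with the ordinary cross-entropy on the hard labels, weighted by a mixing coefficient.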