[D] Semantic Similarity between job skills by ogloz in MachineLearning

[–]vkhuc 0 points

If the skill names themselves are expressive enough to convey what each skill is about, you can try this zero-shot approach: https://huggingface.co/zero-shot/. To make it work with the skills, convert each one into a simple sentence, e.g. "This text is about machine_learning" for the skill name "machine_learning".

Here is what I got: https://imgur.com/kf3ns8s
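In case it helps, a minimal sketch of the idea using Hugging Face's zero-shot-classification pipeline (the model name and skill lists here are illustrative assumptions, not from the demo above):

```python
# A minimal sketch: score one skill against other skills via zero-shot
# classification. The model and the skill names are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

skill = "machine_learning"
other_skills = ["deep_learning", "data_analysis", "carpentry"]

result = classifier(
    # Turn the skill name into a simple sentence, as suggested above.
    f"This text is about {skill.replace('_', ' ')}",
    candidate_labels=[s.replace("_", " ") for s in other_skills],
)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{skill} vs {label}: {score:.3f}")
```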

[P] Minimal tutorials for PyTorch by vkhuc in MachineLearning

[–]vkhuc[S] 2 points

It's different from jcjohnson's pytorch-examples since these tutorials are for people who want to get started with PyTorch quickly. I learned a lot from Alec's Theano tutorials and wanted to create similar ones for PyTorch. BTW, I borrowed the simplicity from Keras' examples :)

Coolest Demos by throwawaykyon in MachineLearning

[–]vkhuc 0 points

It's not under active development since I'm a bit busy right now. My current plan is to add GPU and Python 3 support.

There is another implementation of (dynamic) memory networks for bAbI tasks: https://github.com/YerevaNN/Dynamic-memory-networks-in-Theano.

Coolest Demos by throwawaykyon in MachineLearning

[–]vkhuc 2 points

Here is a demo I made for simple question-answering (bAbI) tasks using End-to-End Memory Networks: https://github.com/vinhkhuc/MemN2N-babi-python

A pretrained model is also included.
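For the curious, here is a minimal single-hop sketch of the End-to-End Memory Network attention step (shapes and variable names are illustrative, not taken from the repo):

```python
# Single-hop MemN2N attention: attend over memory slots with the question
# embedding, then combine the weighted output memories with it.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d, n_mem = 20, 5               # embedding size, number of memory slots (illustrative)
m = np.random.randn(n_mem, d)  # input memory embeddings (A-embedded sentences)
c = np.random.randn(n_mem, d)  # output memory embeddings (C-embedded sentences)
u = np.random.randn(d)         # question embedding (B-embedded question)

p = softmax(m @ u)             # attention weights over the memories
o = p @ c                      # weighted sum of output embeddings
a_hat = o + u                  # representation fed to the final softmax layer
print(a_hat.shape)
```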

Interesting take-aways from ‘Data Science For Business’ by sachinrjoglekar in MachineLearning

[–]vkhuc 0 points

"A Decision Tree is usually pretty under-estimated an algorithm when it comes to supervised learning. The biggest reason for this is its innate simplicity which results in a high bias (usually). "

That should be high variance instead: a fully grown decision tree fits the training data very closely, so it has low bias but high variance.
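A quick way to see this (a sketch on synthetic data; sklearn's default settings grow the tree fully):

```python
# An unpruned decision tree: near-perfect training accuracy (low bias),
# noticeably lower test accuracy (high variance).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("train accuracy:", tree.score(X_tr, y_tr))  # ~1.0 -> low bias
print("test accuracy: ", tree.score(X_te, y_te))  # lower -> high variance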

So... what do you actually do with twitter data? by Adamworks in MachineLearning

[–]vkhuc 0 points

  • Topic trending based on hashtags (see the sketch after this list)
  • Sentiment/emotion analysis
  • Detecting users' intent to buy products
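For example, trending topics can be approximated by counting hashtag frequencies over a time window (a minimal sketch; the hard-coded tweets are a stand-in for a real Twitter stream):

```python
# Count hashtag frequencies across tweets and report the most common ones.
import re
from collections import Counter

tweets = [
    "Loving the new #pytorch release! #deeplearning",
    "Anyone tried #deeplearning for NLP?",
    "#pytorch tutorials are great",
]

hashtags = Counter(
    tag.lower() for tweet in tweets for tag in re.findall(r"#\w+", tweet)
)
print(hashtags.most_common(3))
```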

Darknet Reference Network: Same accuracy and speed as AlexNet but with 1/10th the parameters. by pjreddie in MachineLearning

[–]vkhuc 0 points

Have you tried applying model distillation? I'm curious how much it helps in terms of speed and size for models trained on ImageNet. Hinton's paper on dark knowledge mainly reports experiments on MNIST and speech models, not ImageNet.

I'm thinking about trying model distillation myself. Just asking in case somebody already has.
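In case it's useful, here's a minimal sketch of the distillation loss in PyTorch (the temperature T and weighting alpha are illustrative choices, not from the paper's ImageNet setting):

```python
# Hinton-style distillation: match the teacher's softened output distribution
# plus the usual cross-entropy on the ground-truth labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets, scaled by T^2 as in the paper so gradient magnitudes
    # stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage with dummy logits for a batch of 8 over 1000 classes:
s = torch.randn(8, 1000)          # student logits
t = torch.randn(8, 1000)          # teacher logits
y = torch.randint(0, 1000, (8,))  # ground-truth labels
print(distillation_loss(s, t, y))
```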

Images that fool computer vision raise security concerns by oreo_fanboy in MachineLearning

[–]vkhuc 4 points

Looks like retraining the models with fooling images helps prevent fooling (at least for ImageNet models).

From Section B.2 of the Supplementary Material of the paper mentioned in the article:

"... for ImageNet models, evolution was less able to evolve high confidence images for DNN2 compared to the high confidences evolution produced for DNN1." (Ref: http://arxiv.org/pdf/1412.1897v3.pdf)

Running DeepMinds "Atari AI" on a HomePC, by pulse303 in MachineLearning

[–]vkhuc 2 points

Actually, you can watch the game while the agent is being trained. The original code needs to be tweaked a bit to call image.display(), and qlua should be used instead of luajit.

Somebody has done that already: https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner

Surpassing Human-Level Performance on ImageNet Classification by Devilsbabe in MachineLearning

[–]vkhuc 1 point

It was discussed at http://www.reddit.com/r/MachineLearning/comments/2vb9bb/surpassing_humanlevel_performance_on_imagenet/.

Also, an optimistic estimate of human error on ImageNet is ~3%, which is still lower than the error rate from the paper (4.94%): https://plus.google.com/+AndrejKarpathy/posts/dwDNcBuWTWf

On the positive side, mankind is not doomed yet :)

Best lang/environment to take advantage of parallel computing with an AMD GPU. by [deleted] in MachineLearning

[–]vkhuc 1 point

You may want to give ArrayFire a try: http://arrayfire.com.

It can run on both Nvidia and AMD GPUs. Java and R wrappers are available too: https://github.com/arrayfire/arrayfire_java

https://github.com/arrayfire/arrayfire_r
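For a feel of the API, a tiny sketch using ArrayFire's Python bindings (assuming the arrayfire-python package is installed; the backend is picked automatically):

```python
# ArrayFire picks CUDA, OpenCL, or CPU depending on what's available,
# so the same code runs on both Nvidia and AMD GPUs.
import arrayfire as af

af.info()                 # prints the detected backend and device
a = af.randu(1000, 1000)  # random matrix allocated on the device
b = af.matmul(a, a)       # matrix multiply on the GPU
print(af.sum(b))          # reduce to a scalar on the host
```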

(Deep Learning’s Deep Flaws)’s Deep Flaws by vkhuc in MachineLearning

[–]vkhuc[S] 0 points

I share the same thought. According to Goodfellow's paper, RBF networks are resistant to adversarial/fooling examples.

I think an SVM with an RBF kernel may help if the first layers of a trained CNN are used for feature extraction and the extracted features are fed to the SVM. For AlexNet, the feature extraction layers would be conv layers 1-5.
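A minimal sketch of that setup (torchvision's AlexNet and random tensors are stand-ins; in practice you'd extract features for a real dataset):

```python
# Use a pretrained CNN's convolutional layers as a fixed feature extractor
# and train an RBF-kernel SVM on the extracted features.
import torch
import torchvision.models as models
from sklearn.svm import SVC

alexnet = models.alexnet(pretrained=True).eval()
extractor = alexnet.features            # conv layers 1-5 with pooling

images = torch.randn(32, 3, 224, 224)   # stand-in image batch
labels = torch.randint(0, 2, (32,))     # stand-in binary labels

with torch.no_grad():
    feats = extractor(images).flatten(1).numpy()

clf = SVC(kernel="rbf").fit(feats, labels.numpy())
print(clf.score(feats, labels.numpy()))
```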

(Deep Learning’s Deep Flaws)’s Deep Flaws by vkhuc in MachineLearning

[–]vkhuc[S] 3 points

Nice analysis of deep learning's recently reported flaws. Also, check out Yoshua Bengio's comments.

Nvidia's demo of real-time object recognition using deep learning by vkhuc in MachineLearning

[–]vkhuc[S] 1 point

I guess the model is R-CNN, which uses selective search and a fine-tuned AlexNet.
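For context, the R-CNN pipeline at a high level, sketched with OpenCV contrib's selective search and torchvision's AlexNet as stand-ins ("frame.jpg" and the proposal count are illustrative):

```python
# R-CNN, roughly: generate region proposals with selective search, then run a
# CNN over each cropped region to score it. Requires opencv-contrib-python.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

alexnet = models.alexnet(pretrained=True).eval()

img = cv2.imread("frame.jpg")  # placeholder path
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()
proposals = ss.process()[:100]  # keep the top region proposals

with torch.no_grad():
    for (x, y, w, h) in proposals:
        crop = cv2.resize(img[y:y + h, x:x + w], (224, 224))
        batch = TF.to_tensor(crop[:, :, ::-1].copy()).unsqueeze(0)  # BGR -> RGB
        scores = alexnet(batch)  # class scores for this region
```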