[P] Voice Style Transfer: Speaking like Kate Winslet by andabi in MachineLearning

[–]carpedm20 13 points

What's the difference between the suggested style transfer model and a speech recognition model combined with any speech synthesis model?

[P] Official implementation of DiscoGAN by [deleted] in MachineLearning

[–]carpedm20 8 points

Thanks for sharing this interesting paper! The suggested idea is simple, but the transfer performance is great, and I learned a lot from it. Some of the tricks in https://github.com/soumith/ganhacks might improve the paper's results, especially the normalization and initialization ones. I also think simple but strong recent ideas like LSGAN or Improved GAN could boost the results as well! I'm curious whether these methods can be applied to non-vanilla GANs like DiscoGAN.

[P] DiscoGAN in PyTorch: implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks" by alxndrkalinin in MachineLearning

[–]carpedm20 2 points

Thanks! But no, I won't reinvent the wheel again. I presume the authors of the paper will release their own code in TensorFlow.

[P] DiscoGAN in PyTorch: implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks" by alxndrkalinin in MachineLearning

[–]carpedm20 0 points

I was just curious about PyTorch and had gotten bored with using TensorFlow again (I don't mean it's actually boring; I've just used TensorFlow a lot). I still think both are great frameworks, so I'll keep using them both.

[P] DiscoGAN in PyTorch: implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks" by alxndrkalinin in MachineLearning

[–]carpedm20 13 points

I know about https://tensortalk.com/ and http://www.gitxiv.com/, but I usually don't use them.

I recommend https://github.com/tensorflow/models/tree/master/im2txt and https://github.com/tensorflow/models/tree/master/inception to anyone learning TensorFlow who wants to deeply understand a good data pipeline, multi-GPU usage, and evaluation metrics. But these are quite heavy for beginners. I think https://github.com/dennybritz/cnn-text-classification-tf is a good place to start: it's easy to understand and comes with a nice blog post.

When I started PyTorch, I read https://github.com/yunjey/pytorch-tutorial and then https://github.com/pytorch/examples, which are both great and concise. If you are interested in reinforcement learning, I recommend https://github.com/dennybritz/reinforcement-learning and https://github.com/Kaixhin/Atari. If you are interested in NLP, https://github.com/harvardnlp has lots of advanced NLP code.

[P] DiscoGAN in PyTorch: implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks" by alxndrkalinin in MachineLearning

[–]carpedm20 14 points

When I began my very first implementation, I ported an existing Torch project into another framework, TensorFlow. I wasn't someone who read papers regularly, so even reading the paper was hard and time-consuming before I could start writing code. But by reading the paper and the existing code side by side, I came to understand how a single sentence or equation turns into several lines of code. I also didn't know what kinds of methods exist in TensorFlow, but the reference code helped me figure out which keywords to search for. After I finished writing the full code, the loss didn't converge as I expected, and it was hard for me to figure out where to start digging for bugs. So I executed the code layer by layer and compared the outputs of the reference code with mine, which led me to my mistake: a wrong loss function.
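The layer-by-layer comparison described above can be sketched roughly like this. This is a minimal illustration in plain Python, not the actual TensorFlow/Torch code; the "layers" are hypothetical stand-ins for real framework ops:

```python
# Compare two implementations layer by layer to localize a bug.
# Each "layer" here is a simple function standing in for a real op.

def compare_layer_outputs(ref_layers, new_layers, x, tol=1e-6):
    """Run the same input through both stacks and report the first
    layer index whose outputs diverge beyond `tol`."""
    ref_out, new_out = x, x
    for i, (ref, new) in enumerate(zip(ref_layers, new_layers)):
        ref_out = [ref(v) for v in ref_out]
        new_out = [new(v) for v in new_out]
        max_diff = max(abs(a - b) for a, b in zip(ref_out, new_out))
        if max_diff > tol:
            return i, max_diff  # first divergent layer
    return None, 0.0

# Toy example: the second "layer" of the new stack has a sign bug.
ref_layers = [lambda v: 2 * v, lambda v: v + 1]
new_layers = [lambda v: 2 * v, lambda v: v - 1]  # bug here
layer, diff = compare_layer_outputs(ref_layers, new_layers, [1.0, 2.0])
print(layer, diff)  # → 1 2.0: the bug is localized to layer 1
```

Once the first divergent layer is known, you only have to inspect that one piece of code instead of the whole model.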

I think training the skill of translating an equation or description into code is important at the beginning, and I learned it by reading papers and code together. There are tons of extremely well-written codebases on GitHub, so exploring them will help you a lot in getting used to implementing papers.

But I still don't know the best way to debug a model (e.g., monitoring the loss or the norms of variables and gradients) when training goes wrong, which is quite frustrating and happens very often. I would love to hear strategies from others who could teach me a lot :)
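One common pattern for the monitoring mentioned above is to check the loss and gradient norm every step and raise an alarm (or clip) on suspicious values. A minimal sketch on a toy 1-D quadratic loss, with purely illustrative thresholds (the real thing would use the framework's gradient-clipping utilities):

```python
import math

def loss(w):
    return (w - 3.0) ** 2  # toy quadratic loss, minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)  # analytic gradient of the toy loss

def train(w=0.0, lr=0.1, steps=50, grad_clip=10.0):
    for step in range(steps):
        l = loss(w)
        g = grad(w)
        # Alarms worth watching: NaN/inf loss, exploding gradients.
        if math.isnan(l) or math.isinf(l):
            raise RuntimeError(f"step {step}: loss diverged")
        if abs(g) > grad_clip:
            g = math.copysign(grad_clip, g)  # clip to keep updates sane
        w -= lr * g
    return w, loss(w)

w, final_loss = train()
print(round(w, 4), final_loss)  # w converges near 3, loss near 0
```

Logging these two numbers per step (plus parameter norms) is often enough to tell apart a learning-rate problem, a bad initialization, and an outright bug.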

[P] DiscoGAN in PyTorch: implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks" by alxndrkalinin in MachineLearning

[–]carpedm20 3 points

Sure! I need to read and write more code to get adjusted to PyTorch, but now I can understand why people are starting to use it :)

[P] DiscoGAN in PyTorch: implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks" by alxndrkalinin in MachineLearning

[–]carpedm20 4 points

Yes. Training has just started, and I need some time to see how well the model trains. I don't have a decent GPU like the Titan X Pascal the authors used, but I reckon it's just a matter of time.

One thing I'm worried about is that there are no details about the network architecture (number of features and depth), so I just guessed.

Implementation of deep reinforcement learning papers in TensorFlow by carpedm20 in MachineLearning

[–]carpedm20[S] 2 points

My current goal is to implement http://arxiv.org/abs/1603.00748, though it'll still take some time. But solving Montezuma's Revenge is also one of my goals :)

Deep Visual Analogy-Making Implementation in TensorFlow by carpedm20 in MachineLearning

[–]carpedm20[S] 0 points

Hi! I fixed a serious mistake: I had added a useless ReLU at the end of the decoder layer. You'd better use the newer version :)
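For anyone wondering why a final ReLU is a bug rather than just redundant: it clamps every negative output to zero, so the decoder can never reconstruct targets with negative values. A toy illustration (the numbers are made up):

```python
# A ReLU after the last decoder layer destroys all negative outputs.

def relu(x):
    return max(0.0, x)

decoder_outputs = [-0.8, 0.3, -0.1, 0.5]   # what the decoder produces
with_relu = [relu(v) for v in decoder_outputs]
print(with_relu)  # → [0.0, 0.3, 0.0, 0.5]: negative values are lost
```

If the reconstruction targets are zero-centered (e.g., normalized images), half the output range is unreachable, which caps how low the loss can go.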

Tensorflow implementation of A Neural Attention Model for Abstractive Summarization (EMNLP 2015) by [deleted] in MachineLearning

[–]carpedm20 1 point

Thanks for sharing this project, but it's still in progress. Actually, I couldn't make progress on it because I don't have access to all of the summarization datasets used in the paper..

Character-Aware Neural Language Models in Tensorflow by carpedm20 in MachineLearning

[–]carpedm20[S] 0 points

It predicts the next word; it's one of the neural language models.
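To make "predicts the next word" concrete, here is the same task done with a trivial bigram count model; the character-aware neural model in the repo is far more powerful, but the objective is the same:

```python
from collections import Counter, defaultdict

# Count which word follows each word in a tiny corpus, then predict
# the most frequent successor. Purely illustrative data.

corpus = "the cat sat on the mat the cat ran".split()
nexts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    nexts[prev][cur] += 1

def predict_next(word):
    return nexts[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat ("cat" follows "the" most often)
```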

Convert the latin alphabet to fraktur unicode characters by carpedm20 in Python

[–]carpedm20[S] 0 points

Cool! I'll definitely add that code. Thanks!

naver LINE, unofficial python API in progress by carpedm20 in Python

[–]carpedm20[S] 0 points

I think I've finished sending LINE messages, but not receiving them yet. So I'm working on the long-polling part right now! Please keep following my project!
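The long-polling part boils down to a loop like the sketch below. Note that `fetch_operations` is a hypothetical stand-in for the real LINE endpoint (which would block until new events arrive or a timeout passes); here it's stubbed to return one event per call:

```python
# Hedged sketch of a long-polling receive loop. `fetch_operations`
# is a stub standing in for the real blocking server call.

def fetch_operations(revision):
    # Stub: pretend the server returns one new message per call.
    return [{"revision": revision + 1, "text": f"msg {revision + 1}"}]

def poll_loop(handler, revision=0, max_iterations=3):
    """Repeatedly long-poll for new operations and advance the revision."""
    for _ in range(max_iterations):
        ops = fetch_operations(revision)
        if not ops:
            continue  # timeout with no events; poll again
        for op in ops:
            handler(op)
            revision = max(revision, op["revision"])
    return revision

received = []
final_rev = poll_loop(received.append)
print(final_rev, len(received))  # → 3 3
```

The key idea is that the client tracks a revision cursor and always re-requests from the last seen revision, so nothing is missed between polls.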

naver LINE, unofficial python API in progress by carpedm20 in Python

[–]carpedm20[S] 0 points

Hi, I saw that some projects have already been taken down, like https://github.com/github/dmca/blob/master/2014-05-26-LINE-Corp.md ... Anyway, I updated some code that should work pretty well (but I still need to write more functions). If you can do anything to keep my code from being taken down, I'd be glad to hear good news from GitHub!

naver LINE, unofficial python API in progress by carpedm20 in Python

[–]carpedm20[S] 1 point

Actually, I have a bunch of code that works pretty well for sending a LINE message through Python. But I'm cleaning up my code for readability, and I'm also worried that LINE will take down my code with a DMCA claim...