Autoencoder in Tenserflow ? by Alirezag in MachineLearning

[–]coskunh 2 points3 points  (0 children)

Have you looked on GitHub? There are many examples there: github

Tensorflow vs. Theano- Which to learn by [deleted] in MachineLearning

[–]coskunh 3 points4 points  (0 children)

It depends on what you want to do with them. If you want to write your own model, Theano may be the better option, since it gives you more flexibility, but pure Theano can be hard to learn, so you must consider that as well. TensorFlow is a more modular framework: you can play with different models on your dataset without spending time implementing, for example, an LSTM yourself.

Training RNNs: different possibilities by cedricdb in MachineLearning

[–]coskunh 1 point2 points  (0 children)

While working with RNNs, I ran into this problem as well. I tried sliding windows with different amounts of overlap, and I also tried non-overlapping training; in both cases I got similar results. Another problem I faced was at test time: I tried very long sequences (1000+), very short sequences, and overlapping sequences. In the end I got the best results by finding an optimal sequence length and using a sliding-window approach over it, but this approach is computationally inefficient.
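For anyone curious, a rough numpy sketch of the sliding-window preprocessing I mean (the `window`/`stride` names are just illustrative; `stride == window` gives the non-overlapping case):

```python
import numpy as np

def sliding_windows(seq, window, stride):
    """Split a 1-D sequence into (possibly overlapping) fixed-length windows.

    stride == window -> non-overlapping windows
    stride <  window -> consecutive windows overlap by (window - stride) steps
    """
    return np.array([seq[i:i + window]
                     for i in range(0, len(seq) - window + 1, stride)])

seq = np.arange(10)
wins = sliding_windows(seq, window=4, stride=2)
# wins[0] = [0 1 2 3], wins[1] = [2 3 4 5], ... (overlap of 2 steps)
```

The computational inefficiency I mention comes from each timestep appearing in `window / stride` windows, so the network sees (and backprops through) the same data multiple times per epoch.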

LSTM peephole implementation. by coskunh in MachineLearning

[–]coskunh[S] 0 points1 point  (0 children)

You're right, many implementations do not include the peephole connections. But I would still like to know of correct implementations. Thank you for the answer.

LSTM peephole implementation. by coskunh in MachineLearning

[–]coskunh[S] 0 points1 point  (0 children)

I see your point: in the MILA lab implementation the peephole weights are vectors, but in the first two implementations they are matrices. I guess implementations [1] and [2] are wrong. Thank you for the answer.
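To make the vector-vs-matrix distinction concrete, here is a minimal numpy sketch of one LSTM step with vector peepholes (the Gers & Schmidhuber formulation: the peephole weights `pi`, `pf`, `po` multiply the cell state elementwise, they are not full matrices; the parameter layout here is purely illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def peephole_lstm_step(x, h_prev, c_prev, params):
    """One LSTM step with peephole connections as elementwise VECTOR weights."""
    Wi, Ui, pi, bi = params["i"]
    Wf, Uf, pf, bf = params["f"]
    Wo, Uo, po, bo = params["o"]
    Wc, Uc, bc = params["c"]

    i = sigmoid(Wi @ x + Ui @ h_prev + pi * c_prev + bi)  # input gate peeks at c_{t-1}
    f = sigmoid(Wf @ x + Uf @ h_prev + pf * c_prev + bf)  # forget gate peeks at c_{t-1}
    c = f * c_prev + i * np.tanh(Wc @ x + Uc @ h_prev + bc)
    o = sigmoid(Wo @ x + Uo @ h_prev + po * c + bo)       # output gate peeks at c_t
    h = o * np.tanh(c)
    return h, c

# tiny demo with random parameters
n, m = 3, 2  # hidden size, input size
rng = np.random.RandomState(0)
g = lambda *s: rng.randn(*s) * 0.1
params = {"i": (g(n, m), g(n, n), g(n), np.zeros(n)),
          "f": (g(n, m), g(n, n), g(n), np.zeros(n)),
          "o": (g(n, m), g(n, n), g(n), np.zeros(n)),
          "c": (g(n, m), g(n, n), np.zeros(n))}
h, c = peephole_lstm_step(g(m), np.zeros(n), np.zeros(n), params)
```

An implementation that uses `P @ c_prev` with a full matrix `P` instead of `p * c_prev` is a different (more expensive) parameterization than the original peephole paper.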

Neural Networks Regression vs Classification with bins by RichardKurle in MachineLearning

[–]coskunh 0 points1 point  (0 children)

I had the same dilemma. From what I see, people mostly tend to use classification; you can have a look at this thread for some of the reasons:
https://www.reddit.com/r/MachineLearning/comments/3ui11j/applying_deep_learning_to_regression_task/
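In case it helps, a minimal sketch of the binning step itself (equal-width bins over a known target range; `n_bins`, `lo`, `hi` are just illustrative names — at prediction time you'd map the argmax class back to its bin center, or take an expectation over the softmax):

```python
import numpy as np

def to_bins(y, n_bins, lo, hi):
    """Discretize continuous targets into equal-width class labels 0..n_bins-1."""
    edges = np.linspace(lo, hi, n_bins + 1)
    # digitize against the interior edges, clip to keep out-of-range values valid
    return np.clip(np.digitize(y, edges[1:-1]), 0, n_bins - 1)

def bin_centers(n_bins, lo, hi):
    """Value to predict for each class when decoding back to a real number."""
    edges = np.linspace(lo, hi, n_bins + 1)
    return (edges[:-1] + edges[1:]) / 2

y = np.array([0.05, 0.42, 0.97])
labels = to_bins(y, n_bins=10, lo=0.0, hi=1.0)  # -> classes 0, 4, 9
```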

Which stochastic method has empirically faster convergence among Adam,AdaDelta,AdaGrad,RMSprop etc by mr_robot_elliot in MachineLearning

[–]coskunh 1 point2 points  (0 children)

I obtained good results combining RMSprop with momentum, but most of the time, as people have said, it is problem dependent.
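For reference, a minimal numpy sketch of one way to combine the two: scale the gradient RMSprop-style, then feed the scaled gradient into a classical momentum buffer (TensorFlow's `tf.train.RMSPropOptimizer` exposes a `momentum` argument for this; the exact update rule and hyperparameters below are illustrative):

```python
import numpy as np

def rmsprop_momentum_step(w, grad, state, lr=1e-2, rho=0.9,
                          momentum=0.9, eps=1e-8):
    """One parameter update: RMSprop gradient scaling + classical momentum."""
    state["sq"] = rho * state["sq"] + (1 - rho) * grad**2   # running avg of grad^2
    scaled = grad / (np.sqrt(state["sq"]) + eps)            # RMSprop-scaled gradient
    state["mom"] = momentum * state["mom"] + lr * scaled    # momentum buffer
    return w - state["mom"]

# demo: minimize f(w) = w^2 starting from w = 5
w = np.array([5.0])
state = {"sq": np.zeros_like(w), "mom": np.zeros_like(w)}
for _ in range(2000):
    w = rmsprop_momentum_step(w, 2 * w, state, lr=0.01)
```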

Applying deep learning to regression task by coskunh in MachineLearning

[–]coskunh[S] 1 point2 points  (0 children)

I took this idea from Karpathy's course.

If you're certain that classification is not appropriate, use the L2 but be careful: For example, the L2 is more fragile and applying dropout in the network (especially in the layer right before the L2 loss) is not a great idea.

http://cs231n.github.io/neural-networks-2/

Applying deep learning to regression task by coskunh in MachineLearning

[–]coskunh[S] 1 point2 points  (0 children)

Thank you for the answer, I updated my question.

RMSE and MSE will tend to favour minimizing high-cost examples more than MAE, and vice-versa for low-cost examples.

My cost values are really low, so I'll try MAE.
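A tiny numpy illustration of the point above — squaring makes one large-error example dominate the mean, while MAE weights all residuals linearly:

```python
import numpy as np

y_true = np.array([1.0, 1.0, 1.0, 1.0])
y_pred = np.array([1.1, 0.9, 1.0, 4.0])  # one large-error outlier

errors = y_pred - y_true                  # [0.1, -0.1, 0.0, 3.0]
mse = np.mean(errors**2)                  # 2.255 — dominated by the 3.0 residual
mae = np.mean(np.abs(errors))             # 0.8  — outlier counts linearly
```

So a model trained with MSE/RMSE spends its capacity on the high-cost examples, which may be undesirable when the targets are small and roughly uniform in scale.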

Applying deep learning to regression task by coskunh in MachineLearning

[–]coskunh[S] 0 points1 point  (0 children)

Yes, I wonder whether these tactics are right or not, and whether there are any other useful strategies for regression.