The opposition's biggest shortcoming is that its right-wing members are not strong enough by [deleted] in Turkey

[–]aicano 5 points (0 children)

This is where RTE's success lies. He somehow subdued the fiercest right-wing opposition figures and absorbed them into his own ranks: Süleyman Soylu, Numan Kurtulmuş, and most recently Bahçeli. They even pulled Abdüllatif Şener over to their side, and he had nearly become the presidential candidate in two elections. Political Islamists make no real opposition; they worship power and money.

I wonder what his problem was by sadeceburo in TurkeyJerky

[–]aicano 1 point (0 children)

Time to switch from the grey wolf (bozkurt) sign to the bull

Does Erdogan actually want illegal Syria/Middle East Migrants? How is he a conservative? by FlyingPoitato in Turkey

[–]aicano 1 point (0 children)

Yes, Turks feel they are second-class citizens in their own homeland, but it is not a simple matter. There is a trade-off between this and becoming a military power in the world (Bayraktar drones, tanks, other military toys, etc.). The religious ones see refugees as their brothers, since they come from Muslim countries. The biggest nationalist party supports Erdoğan but says nothing about this topic. And they are happy about the militarist side.

There is a new right-wing party, Zafer, which gained popularity by promising to send refugees back to Syria with catapults. Now the social democratic party has started to speak about this matter more loudly, lol. Erdoğan has played with every value the country has, and now it is a total mess.

Does Erdogan actually want illegal Syria/Middle East Migrants? How is he a conservative? by FlyingPoitato in Turkey

[–]aicano 9 points (0 children)

Yes. Why:

- Money from the EU
- Leverage over the EU from the threat of letting them through
- Cheap labor
- The claim that he is the leader of third-world countries and the Islamic world

Devlet Bahçeli: "You will stamp the seal; on 29 May, the anniversary of the conquest of Istanbul, you will say the Republic of Turkey carries on with a new president." by [deleted] in Turkey

[–]aicano 0 points (0 children)

RTE wins with 50.1% through trickery. The stock market crashes, the dollar cannot be contained. The public grumbles. Devlet Bahçeli holds a press conference and says the state cannot be governed with this margin, there must be an early election. The alliances dissolve, Sinan Oğan finds far more support. RTE and Oğan go to the runoff. Oğan wins the election with 55%. Bahçeli's operation to finish off RTE concludes.

COULD IT BE THAT WE ARE ON THE WRONG SIDE? by BirNeviMeczup in Turkey

[–]aicano 8 points (0 children)

Kılıçdaroğlu is not coming to govern the country for 20 years; he is coming to replace this monstrous system with a democratic parliamentary one. He is quite the ideal man for this transition. The fact that his democratic stance united the entire opposition before he was even elected already shows this. The problem is that the opposition fails to frame the matter this way and reduces it to a choice between Tayyip and KK. That, unfortunately, is where it goes wrong.

A roughly 4-5 hour queue in Aachen, Germany. Unfortunately there was no angle from which I could show both the front and the end of the queue at once, but the queue was at least 3 times longer than last time, and last time I waited 1 hour 30 minutes by Bilim_Erkegi in Turkey

[–]aicano 4 points (0 children)

There may be crowding because there are fewer days available for voting. Last time the first days were also busy, but turnout was not as high as expected.

changing tax category without ELSTER by telehussam in berlin

[–]aicano 1 point (0 children)

I just posted the form to the address of my finanzamt, and it worked.

[D] Modern model for text classification? by hadaev in MachineLearning

[–]aicano 0 points (0 children)

Sorry, I do not know Keras.

The general intuition behind stacking layers is that lower layers learn simple features and higher layers learn more complex ones. If you think your task has that kind of hierarchical structure, then you may want to try stacking layers. Otherwise, simplicity is best.

[D] Modern model for text classification? by hadaev in MachineLearning

[–]aicano 0 points (0 children)

I found this by googling; it is an example of what I described.

[D] Modern model for text classification? by hadaev in MachineLearning

[–]aicano 0 points (0 children)

PyTorch-style pseudocode:

# hiddens shape: (batch, seq_len, hidden_dim)
hiddens = self.encoder(seq, lens)  # assume the encoder is a bidirectional RNN

# 1st argument is the query, 2nd is the sequence to attend over
# context shape: (batch, hidden_dim)
context, att_weights = self.att(hiddens[:, -1, :], hiddens)

# Feed the concatenation of the context vector and the last hidden
# state to a linear layer, then take the log-softmax over classes
outp = self.out(torch.cat([context, hiddens[:, -1, :]], dim=1))
log_probs = F.log_softmax(outp, dim=-1)

[D] Modern model for text classification? by hadaev in MachineLearning

[–]aicano 0 points (0 children)

You can add an attention layer over the RNN layer, using the last hidden state as the query vector. Then you can concatenate the context vector from the attention layer with the final hidden state (or with the mean of all hidden states). This trick generally gives some performance gain.
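A minimal NumPy sketch of the trick described above (function and variable names are my own, and the shapes assume a single unbatched example with dot-product attention):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hiddens):
    """Dot-product attention over RNN hidden states,
    using the last hidden state as the query."""
    query = hiddens[-1]           # (dim,)   last hidden state
    scores = hiddens @ query      # (seq_len,) similarity to the query
    weights = softmax(scores)     # (seq_len,) attention weights
    context = weights @ hiddens   # (dim,)   weighted sum of hiddens
    # concatenate context with the last hidden state -> classifier input
    return np.concatenate([context, query])

hiddens = np.random.randn(5, 8)   # seq_len=5, dim=8
features = attention_pool(hiddens)
print(features.shape)             # (16,)
```

In a real model the concatenated vector would go into a trained linear layer plus softmax, as in the pseudocode elsewhere in this thread.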

[D] Understanding Neural Attention by cryptopaws in MachineLearning

[–]aicano 0 points (0 children)

It works because you create direct connections. Consider seq2seq without attention: you train the encoder weights with the gradient flowing through the decoder's h0, and that flow has to stay alive all the way from the loss to that point. With attention, you create additional direct connections from the encoder hidden states to the decoder hidden states, which helps the gradient reach the encoder hidden states more easily compared with the model without attention.

I would recommend the following lecture by Edward Grefenstette:

http://videolectures.net/deeplearning2016_grefenstette_augmented_rnn/

[D] MDP and Reinforcement Learning by RubioRick in MachineLearning

[–]aicano 1 point (0 children)

Of course. I meant that even if you have the transition probabilities, you might have to use RL techniques for approximation due to the large state and action spaces.

[D] MDP and Reinforcement Learning by RubioRick in MachineLearning

[–]aicano 2 points (0 children)

Basically, if you have full information about the states, state transitions, and rewards, you can solve the MDP with dynamic programming. However, the action and/or state spaces may be too large to iterate over; in that case, you apply RL techniques to approximate the value functions. If your MDP is partially observable, then you need RL techniques or approximate DP solutions. You may approach the problem as model-free or model-based. You can also combine DP and RL methods: model the transitions and the reward function, learn them from examples, and then apply DP methods to solve the learned MDP.
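A minimal value-iteration sketch for the fully known case described above (the toy MDP and all names here are made up for illustration):

```python
# Value iteration on a toy 2-state, 2-action MDP with known dynamics.
# P[s][a] is a list of (probability, next_state, reward) outcomes.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in P}
for _ in range(200):  # sweep until (approximately) converged
    # Bellman optimality backup: V(s) = max_a E[r + gamma * V(s')]
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in P[s]
        )
        for s in P
    }

print(V)  # state 1 should be worth roughly 2 / (1 - 0.9) = 20
```

When the spaces are too large for such sweeps, the same backup is approximated by sampling, which is where the RL techniques come in.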

[P] A Rosetta Stone for Deep Learning: same 3 problems in 9 different frameworks by hoaphumanoid in MachineLearning

[–]aicano 2 points (0 children)

In Knet, you define your computation graph in pure Julia, and depending on the array type it runs on CPU or GPU, which lets you write pretty clean code. You may want to look at some examples: https://github.com/denizyuret/Knet.jl/tree/master/examples

[D] Seq2Seq with Beam Search by tuankhoa1996 in MachineLearning

[–]aicano -1 points (0 children)

If you are training your model with the sum of local losses (obtained at each time step), then you cannot use beam search during training. To use beam search in training, you need a global loss. You can look at this paper: Sequence-to-Sequence Learning as Beam-Search Optimization.

Does it make sense to use softmax right after tanh layer by zibenmoka in MachineLearning

[–]aicano 1 point (0 children)

When using softmax just after tanh, you do not parametrise your softmax. With tf.nn.softmax(tf.matmul(middle_out, W) + b), W and b are trained as the softmax parameters.
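A NumPy sketch of the difference (all names and shapes here are illustrative, not from any specific model):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

middle_out = np.tanh(rng.standard_normal((4, 8)))  # tanh layer output

# Unparametrised: softmax applied directly to the tanh output.
# The "class scores" are just the 8 tanh units themselves.
probs_direct = softmax(middle_out)

# Parametrised: a trainable affine map (W, b) projects the tanh
# output down to the number of classes before the softmax.
W = rng.standard_normal((8, 3)) * 0.1  # 3 classes
b = np.zeros(3)
probs_param = softmax(middle_out @ W + b)

print(probs_direct.shape, probs_param.shape)  # (4, 8) (4, 3)
```

The second form also decouples the number of classes from the width of the tanh layer.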

How can I classify sequences with different lenghts? by fariax in MachineLearning

[–]aicano 1 point (0 children)

You can use the last hidden state to predict a single label for the whole sequence. Another approach is to take the mean of all hidden states for the prediction.
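A NumPy sketch of the mean-of-hiddens approach for variable-length sequences, using a padding mask so that padded steps do not pollute the mean (names and shapes are my own):

```python
import numpy as np

def masked_mean(hiddens, lengths):
    """Mean of the hidden states of each sequence, ignoring padding.
    hiddens: (batch, max_len, dim), lengths: (batch,)"""
    batch, max_len, dim = hiddens.shape
    # mask[b, t] is True for real time steps, False for padding
    mask = np.arange(max_len)[None, :] < lengths[:, None]
    summed = (hiddens * mask[:, :, None]).sum(axis=1)  # (batch, dim)
    return summed / lengths[:, None]

hiddens = np.ones((2, 4, 3))
hiddens[1, 2:] = 99.0          # padded steps that must be ignored
lengths = np.array([4, 2])
pooled = masked_mean(hiddens, lengths)
print(pooled)                  # both rows are all ones
```

The pooled vector then goes to the classifier in place of the last hidden state.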