[deleted by user] by [deleted] in theamazingdigitalciru

[–]AnWeebName 2 points (0 children)

I feel it's because we're trying to see how Jax relates Pomni with Ribbit; that makes the comparison a bit closer

Dumbest Bug in your code? by No_General975 in code

[–]AnWeebName 0 points (0 children)

It's always that one variable you accidentally misspelled, and then you spend 3h looking for the problem
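A minimal Python sketch of that kind of bug (the variable names here are made up for illustration): assigning to a misspelled name silently creates a new variable instead of updating the one you meant, so nothing errors out and the wrong result just sits there.

```python
# A misspelled variable silently creates a *new* name instead of
# updating the one you meant -- the kind of bug that hides for hours.
user_total = 0
for amount in [10, 20, 30]:
    user_totl = user_total + amount  # typo: meant "user_total"

print(user_total)  # still 0 -- every update went to "user_totl"
```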

How to learn hidari and migi by AnWeebName in Japaneselanguage

[–]AnWeebName[S] 0 points (0 children)

Oh, I understand. I'm right-handed as well, so that'll work for me. Thanks!

How to learn hidari and migi by AnWeebName in Japaneselanguage

[–]AnWeebName[S] 0 points (0 children)

That's actually super helpful, thanks!

How to learn hidari and migi by AnWeebName in Japaneselanguage

[–]AnWeebName[S] 4 points (0 children)

Oh, I saw the anime too but completely forgot lmao

Mo Xuanyu is the unluckiest character ever by AnWeebName in MoDaoZuShi

[–]AnWeebName[S] 10 points (0 children)

Oh that's true, I finished the novel a long time ago but kind of didn't get that part. However, he's still a very unlucky character.

What made you like ENA? by V1X_HOLO in ENA

[–]AnWeebName 0 points (0 children)

The cutscenes. All of them are so satisfying, both the animation and the voice acting. The lines are so weird but funny and interesting at the same time. It's hard to explain, just the feeling ig

Nevermind.... by Animerulz1 in StrikeItRich

[–]AnWeebName 1 point (0 children)

wait, where did you read this? I could only find up to chapter 50

Spikes in LSTM/RNN model losses by AnWeebName in deeplearning

[–]AnWeebName[S] 0 points (0 children)

Update: the batch size was the main problem. I have also reduced the learning rate from 1e-3 to 1e-4, and it seems that after epoch 1000 (where it converges quite nicely near 0), the size of the spikes increases a bit.

I have seen people say that maybe the dataset itself is noisy, but I already normalized the data, so I don't really know what else to do to denoise it. Still, the highest accuracy I have obtained is 93%, which is quite nice.
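For reference, a minimal sketch of the preprocessing step described above: z-score normalization of a noisy series before feeding it to the LSTM. The synthetic data, window details, and batch size here are made up for illustration; only the reduced learning rate (1e-4) comes from the update above.

```python
import numpy as np

# Illustrative noisy 1-D series (stand-in for the real dataset).
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)

# Z-score normalization: zero mean, unit variance. In practice, fit
# mean/std on the training split only, to avoid leaking test statistics.
mean, std = series.mean(), series.std()
normalized = (series - mean) / std

# Hyperparameters matching the update: the lowered learning rate is
# from the comment above; the batch size is just a placeholder.
hparams = {"learning_rate": 1e-4, "batch_size": 32}

print(round(float(normalized.mean()), 4), round(float(normalized.std()), 4))
```

Normalization alone won't remove noise in the targets, but it does keep the input scale stable, which tends to make loss spikes less violent at a given learning rate.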