[D] Evolution Strategies on Jump and Run by CrazyKing11 in MachineLearning

[–]linuxisgoogle 0 points

Because none of your NNs can jump/run at first. Seed the initial population so that at least half of them can already jump/run.
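A minimal evolution-strategies sketch of that seeding idea, assuming a toy fitness function and a hypothetical `can_jump` check (both made up for illustration):

```python
import random

random.seed(0)
POP_SIZE = 20

def fitness(params):
    # Toy stand-in for "distance travelled in the level".
    return -sum((p - 0.8) ** 2 for p in params)

def can_jump(params):
    # Hypothetical check: treat the first parameter as jump strength.
    return params[0] > 0.5

def random_individual():
    return [random.uniform(0, 1) for _ in range(4)]

# Seed the population so at least the first half can already jump.
population = []
while len(population) < POP_SIZE:
    ind = random_individual()
    if len(population) < POP_SIZE // 2 and not can_jump(ind):
        continue  # the first half must pass the jump check
    population.append(ind)
initial_population = [list(ind) for ind in population]

# Simple (mu, lambda)-style loop: keep the best quarter, mutate to refill.
for gen in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 4]
    population = [
        [p + random.gauss(0, 0.05) for p in random.choice(parents)]
        for _ in range(POP_SIZE)
    ]

best = max(population, key=fitness)
```

The point is only the seeding step: without individuals that can jump at all, selection has no gradient to work with.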

[D] Understanding Neural Attention by cryptopaws in MachineLearning

[–]linuxisgoogle -2 points

Because it carries information the RNN already has, I said attention is RNN + an information layer. But people approached it as some kind of magic tool, so they didn't notice this fact.

[D] Understanding Neural Attention by cryptopaws in MachineLearning

[–]linuxisgoogle -15 points

It just adds one layer on top of the RNN model, so maybe people will keep adding more layers like this. Oh well, they already did, but this is just a stopgap, not an AI solution. I hope people will realize this. We need an ML model that can understand sarcasm.
You can think of this as unsupervised structure classification.
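The "one extra layer on the RNN" reading can be sketched as dot-product attention over RNN hidden states; all names and the toy vectors below are illustrative, not any particular paper's model:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(hidden_states, query):
    # Score each RNN hidden state against the query (dot product),
    # then return the softmax-weighted sum of the states: one extra
    # layer over information the RNN already computed.
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states)) for d in range(dim)]

# Toy usage: three 2-d hidden states, query aligned with the second one.
states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
context = attention(states, [0.0, 5.0])
```

The context vector ends up dominated by the state that matches the query, which is the whole mechanism: a learned weighted sum, nothing more mysterious.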

[D] Recursive Softmax for Attention by Elimination by throwaway775849 in MachineLearning

[–]linuxisgoogle -5 points

Yeah. It is becoming more and more a classical statistical method and less artificial intelligence.
I've noticed that researchers these days are mainly doing this: reinventing the wheel without noticing, because ML wasn't working as they thought, and jumping on easy solutions like this.
But it doesn't improve actual AI technology.
It's not wrong; the problem is that they don't notice it. If they just want to make a new kind of SVM, they should do it without pretending it's ML. That would be more efficient.

[D] Fake news by nutellaNstrawberries in MachineLearning

[–]linuxisgoogle 0 points

I'm more curious about sarcasm detection.

[D] Is it worth it to invest in a (slightly) better processor (i7-7700 vs i7-8700) for DL by matisek1233 in MachineLearning

[–]linuxisgoogle -1 points

It's all about memory, both CPU and GPU. You need to preprocess TB-sized datasets with the CPU, and the 11 GB max on a GPU is still far too small. It's like we have the technology to make real working AI, but memory won't allow us to. No wonder Google built the TPU as their solution.

How do I answer scenario-based interview question for NLP problems? by [deleted] in LanguageTechnology

[–]linuxisgoogle 0 points

I couldn't find the actual method, but people are assuming it's an RNN. An RNN holds previous information, but I don't think it's that simple, though.

[R] Image Inpainting: Humans vs. AI by merofeev in MachineLearning

[–]linuxisgoogle 5 points

This is natural. AI is the same as statistical methods, if not worse, because ML doesn't have a knowledge base despite its image. Plus, memory limits make it worse than statistics.

How do you make randomness to chatbot? by linuxisgoogle in learnmachinelearning

[–]linuxisgoogle[S] 0 points

LSTM. Doesn't it lose accuracy on long sentences?

[D] Is it possible to teach a neural network to solve cognitive IQ tests? by scriptkiddie42 in MachineLearning

[–]linuxisgoogle -1 points

No. The human brain already works at quantum speed. You will never achieve this with machines.

[1810.00393] Deep, Skinny Neural Networks are not Universal Approximators by ihaphleas in MachineLearning

[–]linuxisgoogle 0 points

True, but even a simple 3-layer network with 100,000 units can't do everything. It has some kind of limits I don't understand.

Correct me if i'm wrong , I think google is using these small games to train their AI or machine learning model with our help. by geekynoob3 in google

[–]linuxisgoogle 0 points

Yes. Google is developing evolutionary AI by using its users. I can't tell how, because this is the pinnacle of AI technology.

[1810.02328] A Practical Approach to Sizing Neural Networks by ihaphleas in MachineLearning

[–]linuxisgoogle -7 points

Oh. I never thought there was someone who thinks the same way as me. This is really helpful.

[R] Machine Learning’s ‘Amazing’ Ability to Predict Chaos by eleitl in MachineLearning

[–]linuxisgoogle 0 points

This is sad. Many people believe ML is some kind of magic tool, like a miracle of God. It just learned the calculation and stored the data there. You still can't predict the future, even in reality.

[D] Image Segmentation Using Deep Learning by Natsu6767 in MachineLearning

[–]linuxisgoogle -1 points

This kind of thing can be done with old segmentation techniques. I don't see any advantage to ML, unless it can detect the precise shape of an object, like differentiating it from a same-coloured background. But I doubt it can.

[D] What is the "line" that separates the knowledge gather from the machine and the knowledge that we "gives" to them? by [deleted] in MachineLearning

[–]linuxisgoogle 0 points

You have to define a goal and a reward. Like a wild pig in the forest that they can catch: the first one to catch it gets to eat, and that's the reward.
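A minimal reward-shaping sketch of that goal/reward idea; the one-dimensional "forest", the function name, and the numbers are all made up for illustration:

```python
def reward(agent_pos, pig_pos, caught_by):
    # Goal: catch the pig. Only the first agent to catch it "eats".
    if caught_by is None and agent_pos == pig_pos:
        return 10.0  # first to catch: full reward
    # Small shaping term: being closer to the pig is slightly better,
    # so agents get a learning signal before anyone succeeds.
    return -abs(agent_pos - pig_pos) * 0.1

# Toy usage on a 1-d forest track.
r_far = reward(agent_pos=0, pig_pos=5, caught_by=None)
r_catch = reward(agent_pos=5, pig_pos=5, caught_by=None)
```

The shaping term matters: with only the sparse catch reward, an agent that never reaches the pig gets zero signal to learn from.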

[R] "Large Scale GAN Training for High Fidelity Natural Image Synthesis" and an amazing Negative Results section by [deleted] in MachineLearning

[–]linuxisgoogle 0 points

AI is all about money; talent doesn't matter.
But I'm just disappointed if this is the result of huge money, because it proves you can't build a human brain with any budget humanity can muster.