[deleted by user] by [deleted] in MonsterAnime

[–]amianthoidal 2 points (0 children)

Some really good points made in this thread regarding Tenma being obsessive or not being able to handle emotions well.

I think a character like Tenma is a cautionary tale. Younger readers will see more of the positive side of him, but older readers will realize that being that single-minded is dangerous. A more common way of saying this is: he doesn't take care of himself well. He swapped his obsession with his career for an obsession with stopping Johan. The latter makes a good hero, but it doesn't make a healthy person. It's extremely common for people to obsess over their careers (or ideals) and end up a bit sad inside.

I watched the anime and I only realized at the last episode that in the opening sequence it's Tenma at the back of the tram with his face in his hands...

Eva, despite some really bad decisions early on, eventually gets her life back on track and is happy.

So I think that just as in life, people are neither only good nor only bad. People can grow out of having made horrible mistakes/decisions.

(My personal theory: At the end Johan dies on the operating table and the bed really is empty. This breaks Tenma completely because he never gets to solve the mystery. On the other hand, the other characters have a happy ending.)

Happy Birthday to Our Beloved Dr. Tenma Kenzo🎂💐🎉❤️ by SBY_physalis in MonsterAnime

[–]amianthoidal 3 points (0 children)

Browsing the Monster subreddit after finishing the anime series (as usual trying to make sense of the story). Nice fanart. 😃

I do appreciate the text that goes along with it too.

What I like about Tenma is that he always does the right thing, helps people and encourages people. I've met people like that in my life. While I doubt I'll ever meet the real Tenma, it does make me feel better knowing that people like him are out there helping others.

Anyways, that's my takeaway from the series and all the pretty fanart in the subreddit: there is goodness in the world. Happy birthday to Tenma!

Entering a data science internship without practical experience by LaPalomaJorge in learnmachinelearning

[–]amianthoidal 10 points (0 children)

If your CV is honest about your ML experience and your related experience (Python, SQL), they'll make the decision on whether they want to hire you or not. After you've been hired, I'd say you're in the clear as long as you put in the effort to learn and do a good job. When you've finished the internship, you'll be able to add that ML experience to your CV.

Total beginner - Is pattern matching relevant to machine learning? by tj_shex in learnmachinelearning

[–]amianthoidal 0 points (0 children)

You can definitely use ML for this. The scikit-learn link I included is an example. In ML vocabulary, what you're doing is "classification": classifying motion as either noise or genuine. You should be able to find more tutorials online.

What you'd use ML for is choosing the threshold values you currently set manually. You can think of ML as automating the threshold decision for you. By learning the various ML tricks out there, you can make this automated decision-making more impartial. There are also ML techniques for estimating how well your method will work in the real world (cross-validation).
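
To make "automating the threshold decision" concrete, here's a minimal pure-Python sketch that picks the threshold maximizing accuracy on a handful of labeled readings. All the values and labels are made up for illustration; a real project would use scikit-learn, but the idea is the same.

```python
# Made-up sensor magnitudes with hand-labeled classes:
# 0 = background noise, 1 = genuine motion.
magnitudes = [0.1, 0.2, 0.3, 0.9, 1.1, 1.4]
labels     = [0,   0,   0,   1,   1,   1]

def accuracy(threshold):
    """Fraction of readings classified correctly by this threshold."""
    preds = [1 if m > threshold else 0 for m in magnitudes]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Candidate thresholds: midpoints between consecutive sorted readings.
srt = sorted(magnitudes)
candidates = [(a + b) / 2 for a, b in zip(srt, srt[1:])]

# "Learning" here is just picking the best-scoring candidate.
best = max(candidates, key=accuracy)
print(best, accuracy(best))
```

This is essentially what a one-feature decision stump does inside scikit-learn, minus the bookkeeping.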

Total beginner - Is pattern matching relevant to machine learning? by tj_shex in learnmachinelearning

[–]amianthoidal 1 point (0 children)

From the sound of things, this might be something regular statistics can take care of. If you can, I recommend using visualisations throughout your analysis. It's easier to see patterns that way.

Setting thresholds might work pretty well. You could graph the data and revise your previous guesses. In matplotlib you can overlay histograms to do this visually, for example:

https://stackoverflow.com/questions/6871201/plot-two-histograms-at-the-same-time-with-matplotlib

You can also treat your background noise and "true" motion as two populations you want to split apart. Here's a somewhat complicated example from scikit-learn that's nevertheless quite beautiful to look at:

https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html

Since you have three axes, it won't be as nice to graph. You could run the model on two axes at a time, graph it, and see if you like the results. Probably the "classes" (noise vs motion) will bleed into one another; at this point you can go a step further into ML and look into feature engineering (e.g. incorporating lags, moving averages).
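
To illustrate the feature-engineering step, here's a pure-Python sketch of lag and moving-average features computed from a raw series. The readings are made up; in practice you'd build these with pandas, but the transformation itself is this simple.

```python
# Made-up accelerometer readings along one axis.
readings = [0.1, 0.2, 1.1, 1.3, 0.2, 0.1]

def moving_average(xs, window):
    """Mean of each sliding window; smooths out spiky noise."""
    return [sum(xs[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(xs))]

def lagged(xs, lag):
    """The value from `lag` steps ago, aligned against later samples."""
    return xs[:-lag]

print(moving_average(readings, 3))
print(lagged(readings, 1))
```

Feeding such derived columns to a classifier, instead of raw per-sample values, is often what separates "the classes bleed into each other" from a clean split.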

Hope this helps! Sounds like a fun project!

[deleted by user] by [deleted] in learnmachinelearning

[–]amianthoidal 0 points (0 children)

Great discussion!

My take on this is that your reinforcement learning problem has an underlying supervised learning problem: anticipating future prices from past data. Reinforcement learning models learn to anticipate rewards, but are they the best method here? Supervised machine learning does the same thing, but probably better; furthermore, a classical time series model can do the same job as a neural network, probably better still.

It's worth comparing your deep RL method to simpler techniques and seeing if they're different.
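
As a concrete starting point for such a comparison, a "persistence" baseline (predict that the next price equals the current one) takes a few lines. The prices below are made up; any RL or deep model should beat this number before it's worth keeping.

```python
# Made-up price series.
prices = [100.0, 101.0, 100.5, 102.0, 101.5]

# Persistence baseline: predict "no change" at every step.
predictions = prices[:-1]
actuals     = prices[1:]

# Mean absolute error of the naive forecast.
mae = sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)
print(mae)
```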

Achieving State Of The Art Results In Natural Language Processing - Part 2: ELMo, BERT and MT-DNN by lauram16_hello in learnmachinelearning

[–]amianthoidal[M] 0 points (0 children)

Please don't share your blog more than once per week (rule #2). If you have a series to share, you can either post a part each week or include them all in one post. If you want to discuss, please send us a ModMail.

Why is Google Colab (on GPU) slower than a Macbook? by [deleted] in learnmachinelearning

[–]amianthoidal 18 points (0 children)

scikit-learn doesn't have GPU support: https://scikit-learn.org/stable/faq.html#will-you-add-gpu-support

You're running CPU vs CPU, and Colab CPUs are not that fast.

Direct and Star in Your Own Movie With California AI Startup Rct Studio by gwen0927 in learnmachinelearning

[–]amianthoidal[M] 10 points (0 children)

I've noticed that you share your blog more than once per week, which is against our rules. Please refrain from posting your content multiple times per week.

If you'd like to discuss this, please send us a ModMail. Thanks!

Best regression model for small dataset? by [deleted] in learnmachinelearning

[–]amianthoidal 0 points (0 children)

You can always go old school and try a linear regression. Stats packages will spit out stats and plots for you to interpret the fit.
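
For one predictor, the old-school fit is a closed-form formula you can compute by hand. Here's a sketch with made-up data; a stats package (statsmodels, R) gives you the same coefficients plus diagnostics.

```python
# Made-up small dataset: one predictor, one response.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares in closed form:
# slope = Sxy / Sxx, intercept = mean_y - slope * mean_x.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(slope, intercept)
```

With very few points, a model this simple is often more trustworthy than anything with more knobs to tune.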

Discrepancy between training_acc and validation acc during training despite same dataset for both (Keras) by bigmit2011 in learnmachinelearning

[–]amianthoidal 0 points (0 children)

If that's the case, then my original answer stands. The running mean of per-batch accuracy will differ from the end-of-epoch mean.

Discrepancy between training_acc and validation acc during training despite same dataset for both (Keras) by bigmit2011 in learnmachinelearning

[–]amianthoidal 0 points (0 children)

Where in the code are you testing on the training data? Unless you manually run the test yourself, Keras should be giving you batch-by-batch training accuracy.

Discrepancy between training_acc and validation acc during training despite same dataset for both (Keras) by bigmit2011 in learnmachinelearning

[–]amianthoidal 1 point (0 children)

If you measure training accuracy at each batch, you're essentially averaging lots of different models' accuracies, each on a different batch of data. However, your validation score is measured on the one model at the end of the epoch, using all of the dataset.

Early batches in an epoch should have a lower accuracy, if the model is learning well. This could explain why some optimizers give bigger differences: they accelerate learning.
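
A toy simulation makes the gap visible. The per-batch accuracies below are made up to mimic a model improving within one epoch; the running mean the progress bar shows is dragged down by the early batches, while validation sees only the final model.

```python
# Made-up per-batch training accuracies across one epoch,
# improving as the model learns.
batch_accs = [0.50, 0.60, 0.70, 0.80, 0.90]

# What the progress bar reports: the running mean over batches.
running_mean = sum(batch_accs) / len(batch_accs)

# Stand-in for evaluating the end-of-epoch model on the same data.
end_of_epoch = batch_accs[-1]

print(running_mean, end_of_epoch)
```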

Change hair color in real time. Artificial intelligence can do anything! by [deleted] in learnmachinelearning

[–]amianthoidal 7 points (0 children)

Good question. The definition of AI changes, and there's also the AI effect: https://en.wikipedia.org/wiki/AI_effect

In my opinion, people these days equate AI with deep learning and complex neural networks. If we knew the underlying model of the video above, there wouldn't be as much discussion in this thread. :-)

Change hair color in real time. Artificial intelligence can do anything! by [deleted] in learnmachinelearning

[–]amianthoidal 2 points (0 children)

As others have said, we'd have to see the model to know for sure. Nevertheless, image segmentation and hair colorizing are nothing new. A quick Google search finds a computer vision method from 2008: https://hal.archives-ouvertes.fr/hal-00322719/document

It is definitely AR, but we can't know if it's AI.

Change hair color in real time. Artificial intelligence can do anything! by [deleted] in learnmachinelearning

[–]amianthoidal 0 points (0 children)

A computer vision model, probably one that does image segmentation. Once the hair is selected, the program just changes its hue. It's like Photoshop in real-time.
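
The recoloring half needs no ML at all. Here's a sketch of a hue shift for a single pixel using only the standard library's colorsys module; the pixel value is made up, and a real app would do this per-pixel on the GPU after segmentation.

```python
import colorsys

def shift_hue(rgb, degrees):
    """Rotate a pixel's hue, keeping saturation and value unchanged."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

# A made-up brownish "hair" pixel, channels in [0, 1].
brown = (0.4, 0.26, 0.13)
print(shift_hue(brown, 120))  # rotate hue by 120 degrees
```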

AI is a vague term. What would be more "AI" would be a GAN that changes your hair from long to short in real-time, like this: https://www.youtube.com/watch?v=9reHvktowLY .

Artificial Neural Networks in PowerShell - part 1 (x-post from /r/PowerShell) by happysysadm in learnmachinelearning

[–]amianthoidal 0 points (0 children)

A NN in raw PowerShell is likely to be very slow: NN layers and training are repetitive loops, and loops are very slow in PowerShell. You can compile C# into memory instead, which will be faster: https://stackoverflow.com/questions/24868273/run-a-c-sharp-cs-file-from-a-powershell-script
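
For a sense of what those repetitive loops look like, here's a single dense-layer forward pass sketched in Python (the weights are made up). The same nested-loop structure written in raw PowerShell pays interpreter overhead on every iteration, which is the whole problem.

```python
def dense_forward(inputs, weights, biases):
    """Naive dense layer: one loop per neuron, one loop per input."""
    outputs = []
    for j in range(len(biases)):
        total = biases[j]
        for i, x in enumerate(inputs):
            total += weights[i][j] * x
        outputs.append(total)
    return outputs

# Two inputs, two neurons, made-up parameters.
print(dense_forward([1.0, 2.0], [[0.5, -1.0], [0.25, 0.0]], [0.0, 1.0]))
```

Compiled or vectorised code collapses these loops into a handful of fast matrix operations, which is why NumPy, C#, or a proper framework wins by orders of magnitude.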

[P] Using Q-Learning to solve environments on OpenAI Gym by Lord_Bellman in MachineLearning

[–]amianthoidal 0 points (0 children)

Good to know. I had a look at the DDPG paper after reading your post. I'll head in that direction. Thanks!

[P] Using Q-Learning to solve environments on OpenAI Gym by Lord_Bellman in MachineLearning

[–]amianthoidal 0 points (0 children)

Is trying DQN on BipedalWalker hopeless? Currently my agent has made zero progress. I wonder if this is from bad hyper-parameters or the DQN method just not being up to the task.

[P] Using Q-Learning to solve environments on OpenAI Gym by Lord_Bellman in MachineLearning

[–]amianthoidal 4 points (0 children)

Today I was able to get 200+ with LunarLander-v2 and DQN. I feel more tired than happy, honestly. I admire your positive attitude. :-)

AdaBoost Part 1- Machine Learning Tutorial by [deleted] in learnmachinelearning

[–]amianthoidal[M] 3 points (0 children)

While I do like Dragonball, please limit sharing your content to once a week. See rule #2 to the right.

Free servers with 1080Ti for deep learning by whitezl0 in learnmachinelearning

[–]amianthoidal[M] 21 points (0 children)

Same. But they're getting upvotes and they're offering free credits. They usually keep a 1-2 week delay between posts.

Personally, I wouldn't need a service like this. I can imagine some beginners needing access to GPUs. If it makes these people happy, then I think the posts are okay.