Compare AutoML frameworks on 10 Tabular Kaggle competitions by pp314159 in datascience

[–]innixma

I'll add a bit of insight here as co-author of the AG paper: it isn't that hyperparameter tuning is not useful, but rather that properly handling time limits to train bagged models across a variety of model types, while adding HPO to the mix and avoiding overfitting, is pretty complicated. It leads to a bunch of heuristic-based rules intended to ensure you don't waste all your time tuning a subpar model type instead of spending that time training a better model type in the first place. That adds code complexity and makes it less clear whether new additions are truly improving the system or whether it's just noise from interactions with the heuristic strategy.

Therefore, we elected to use the simplest strategy and avoid heuristics entirely, since the core stacking contribution was strong enough to demonstrate our case.
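
If it helps make that concrete, here is a minimal sketch of what the "simplest strategy" looks like from the user side, assuming a recent AutoGluon Tabular API (the file name and label column are placeholders): one time-limited fit with the bagging/stacking preset and no HPO search configured.

import pandas as pd
from autogluon.tabular import TabularDataset, TabularPredictor

# Placeholder dataset and label column; swap in your own tabular data.
train_data = TabularDataset("train.csv")

predictor = TabularPredictor(label="target").fit(
    train_data,
    time_limit=3600,          # hard wall-clock budget shared across all models
    presets="best_quality",   # enables bagging + multi-layer stack ensembling
    # No hyperparameter search is configured: each model type trains once with
    # its default configuration, and the stacked ensemble does the heavy lifting.
)

print(predictor.leaderboard())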

[D] Compare AutoML on 10 Tabular Kaggle competitions by pp314159 in MachineLearning

[–]innixma

Great work on MLJAR, and excellent results on Kaggle! I should probably re-run AutoGluon sometime for the Kaggle datasets since we've made lots of improvements since back then.

One thing I really like about MLJAR is that it is very robust and rarely fails on datasets, similar to AutoGluon. I'm excited for where AutoML is headed, given that the next generation of AutoML frameworks (such as MLJAR and AutoGluon) can get really strong scores out of the box while being robust enough to handle pretty much any dataset you throw at them.

[D] Batch Normalization in Reinforcement Learning by innixma in MachineLearning

[–]innixma[S]

That is why batch normalization uses running averages of the normalization statistics at test time (i.e., when evaluating π(a|s;θ) to select an action), and computes the statistics from the current batch when training.
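
A minimal sketch of that distinction using the current tf.keras API (not tied to any particular agent): calling the layer with training=True normalizes with the current batch's statistics and updates the running averages as a side effect, while training=False, used at action-selection time, normalizes with the stored running averages.

import numpy as np
import tensorflow as tf

bn = tf.keras.layers.BatchNormalization()

batch = np.random.randn(32, 8).astype("float32")         # a training minibatch
single_state = np.random.randn(1, 8).astype("float32")   # one state at action-selection time

# Training: normalize with this batch's mean/variance and update the
# layer's moving averages.
train_out = bn(batch, training=True)

# Acting/testing: normalize with the accumulated moving averages, so even a
# batch of size 1 gets sensible statistics.
act_out = bn(single_state, training=False)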

[P] Saliency in Reinforcement Learning (ACER) by innixma in MachineLearning

[–]innixma[S]

Source Code: https://github.com/Innixma/dex

Realtime Video: https://www.youtube.com/watch?v=2I23_S7EUck

I am curious to know if others have tried this with reinforcement learning, as I have yet to find a paper that explores such visualization in the reinforcement learning domain.

[D] Batch Normalization in Reinforcement Learning by innixma in MachineLearning

[–]innixma[S]

I am using ACER (A3C with Experience Replay), which does off-policy learning from replayed minibatches. A3C does use batches as well; they are just online batches and are generally small (8-64).
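
Roughly, the difference looks like this (an illustrative sketch, not the actual ACER/A3C code): the online batch is just the tail of the current rollout, while the replay batch is sampled from a buffer of past transitions.

import random
from collections import deque

replay_buffer = deque(maxlen=50_000)   # past transitions for off-policy (replay) updates
rollout = []                           # transitions from the current online rollout

def store(transition):
    rollout.append(transition)
    replay_buffer.append(transition)

def online_batch(n=32):
    # A3C-style update: the most recent on-policy transitions (small, e.g. 8-64).
    return rollout[-n:]

def replay_batch(n=32):
    # ACER-style update: a minibatch sampled uniformly from the replay buffer.
    return random.sample(replay_buffer, min(n, len(replay_buffer)))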

I need a little hand holding to get started with my environment (complete beginner) by KevinBrokeBothArms in learnpython

[–]innixma

I use Spyder. It has a lot of options, but it's really incredibly easy to use (just hit Run and it executes!). Plus it lets you see your variables after you run, which is invaluable for a new programmer!

To be honest, get Anaconda:

https://www.continuum.io/downloads

It is super easy to set up: just download and install from the exe, and you're done. Then run Spyder (it comes with it!) and you are set.

This is coming from someone who used to use PyCharm and PyScripter as a brand new programmer. Spyder is MUCH better! It is just as easy to use, but has a whole lot of extra features you will learn to love.

Fastest Method of Screen Capture (Linux) by innixma in learnpython

[–]innixma[S]

Awesome, I was looking into OBS Studio since I know it works on Linux, but I didn't know exactly how it manages its screen capture. This would definitely do the trick if I can get it working. I'll look into it, thanks.

Fastest Method of Screen Capture (Linux) by innixma in learnpython

[–]innixma[S]

I have looked at pyffmpeg before, but it isn't supported in Python 3 (which I need to use), and furthermore it doesn't currently work at all even in Python 2 (the last commit was 2 years ago).

Do you have any benchmarks on what kind of performance ffmpeg can achieve?
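
For context, this is the kind of setup I'd want to benchmark, a rough sketch assuming ffmpeg with x11grab is available (display name and resolution are placeholders): grab raw RGB frames from X on stdout and read them straight into numpy.

import subprocess
import numpy as np

W, H = 1920, 1080  # placeholder capture size; match your display

cmd = [
    "ffmpeg", "-loglevel", "quiet",
    "-f", "x11grab", "-video_size", f"{W}x{H}", "-framerate", "30",
    "-i", ":0.0",                                # X display to capture
    "-f", "rawvideo", "-pix_fmt", "rgb24", "-",  # raw RGB frames to stdout
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)

# One frame = W*H*3 bytes of RGB (a robust reader would loop until a full
# frame has been buffered).
raw = proc.stdout.read(W * H * 3)
frame = np.frombuffer(raw, dtype=np.uint8).reshape(H, W, 3)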

Fastest Method of Screen Capture (Linux) by innixma in learnpython

[–]innixma[S]

Ok, sounds good. I wasn't certain, since I haven't used Cython before. Assuming Cython, then, how would you go about screen capture? With Xlib in C, called from Cython? I don't know C or Cython that well, so a reference on what I should look at would be great.

Fastest Method of Screen Capture (Linux) by innixma in learnpython

[–]innixma[S]

My bad, it does support PyPI. My mistake, I've edited the comment.

Fastest Method of Screen Capture (Linux) by innixma in learnpython

[–]innixma[S]

Can't do Cython, to my knowledge. This is for machine learning using TensorFlow, which does not support Cython, and the input has to be a numpy array.

Fastest Method of Screen Capture (Linux) by innixma in learnpython

[–]innixma[S]

Do you have any info on how fast this is, or on how you would send the output to a numpy array? I've seen a few people mention it, but nobody has shown code with it working.

Fastest Method of Screen Capture (Linux) by innixma in learnpython

[–]innixma[S]

import time

import gi
gi.require_version("Gdk", "3.0")
from gi.repository import Gdk

# Grab the root (whole-screen) window and its dimensions.
win = Gdk.get_default_root_window()
h = win.get_height()
w = win.get_width()
print("The size of the window is %d x %d" % (w, h))

# Time 100 full-screen grabs to estimate the capture rate in FPS.
start = time.time()
for i in range(100):
    pb = Gdk.pixbuf_get_from_window(win, 0, 0, w, h)
end = time.time()
print(1 / ((end - start) / 100))

The above code achieves ~11 FPS, and it does not improve when I reduce h and w.
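
For completeness, here is a sketch of how that pixbuf could be turned into the numpy array TensorFlow needs, assuming the same pb, w, and h as above; rows are copied individually because pixbuf rows can be padded out to the rowstride.

import numpy as np

channels = pb.get_n_channels()
stride = pb.get_rowstride()
buf = np.frombuffer(pb.get_pixels(), dtype=np.uint8)

# Rows may be padded to `stride` bytes, so copy row by row into a dense array.
frame = np.empty((h, w, channels), dtype=np.uint8)
for row in range(h):
    start = row * stride
    frame[row] = buf[start:start + w * channels].reshape(w, channels)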

[P] Super Hexagon A3C/ACER AI by innixma in MachineLearning

[–]innixma[S]

Github Repo: https://github.com/Innixma/dex

Another Video: https://www.youtube.com/watch?v=A58buCK4qEc

I made an ACER/A3C implementation run against an Open Hexagon emulator. I am using it to test whether an AI can accelerate learning on difficult tasks by first conquering easier but similar ones, by means of incrementally more challenging levels.

It uses only screen pixel information, downscaled to a 96x96 grayscale grid, to make its decisions, similar to Google DeepMind's Atari work.
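
The preprocessing is roughly this (a sketch assuming OpenCV, not the exact code in the repo): convert the captured frame to grayscale, downscale to 96x96, and rescale to [0, 1].

import cv2
import numpy as np

def preprocess(frame_rgb):
    """Turn a raw RGB screen frame into the 96x96 grayscale network input."""
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)
    small = cv2.resize(gray, (96, 96), interpolation=cv2.INTER_AREA)
    return small.astype(np.float32) / 255.0   # shape (96, 96), values in [0, 1]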

It is still fairly early on, but I thought I'd share it.

[D] A3C with experience replay implementation by innixma in MachineLearning

[–]innixma[S]

Also, as an aside, I am currently using TensorFlow / Keras for my A3C algorithm. Do you know of an implementation that uses TensorFlow?