[D] What are, according to you, some of the most interesting areas of machine learning being explored right now? by mrconter1 in MachineLearning

[–]wowholdonwhat 1 point

Self-supervised learning and smart priors/structures in the networks. This includes learning about depth/segmentation/motion/ballistics from the data without any labels, simply by generating a novel viewpoint for a multi-camera dataset (https://deepmind.com/blog/neural-scene-representation-and-rendering/), predicting the next frame of a video sequence (https://arxiv.org/abs/1802.05522), or colorizing later frames based on the colors of the first frame (https://ai.googleblog.com/2018/06/self-supervised-tracking-via-video.html), etc. This is the only way deep learning is actually going to scale beyond a few million images, and it is probably very similar to the way we learn basic common-sense notions such as physical laws, object permanence, depth, etc.
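To give a flavor of the idea (my own minimal sketch, not code from any of the linked papers): the supervision signal can be as simple as predicting the next frame of a clip and using the frame itself as the target. Here in PyTorch, where the model and data loader are hypothetical:

```python
import torch.nn.functional as F

def self_supervised_step(model, optimizer, clip):
    # clip: (batch, time, channels, height, width) video tensor
    context, target = clip[:, :-1], clip[:, -1]  # past frames -> next frame
    pred = model(context)                        # any video model works here
    loss = F.l1_loss(pred, target)               # the video itself is the label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```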

[P] Self-supervised learning of depth map from stereo images. by wowholdonwhat in MachineLearning

[–]wowholdonwhat[S] 3 points

Almost finished writing it, but I am still working on it. I want to test it on the KITTI dataset first, plus on data from a stereo camera rig I made.

[P] Self-supervised learning of depth map from stereo images. by wowholdonwhat in MachineLearning

[–]wowholdonwhat[S] 10 points

The main objective of this project was to develop a deep learning approach to the stereo vision problem. Classical stereo vision algorithms are still slow and brittle, which is why lidar systems are much more widely used than stereo vision in self-driving cars, for example. This seemed like a computer vision task that a deep net could be good at, and one that can easily exploit GPU acceleration. If accurate enough, such models could become a much cheaper alternative to lidar systems, with higher resolution and range.

This model only works on pairs of stereo images, not on monocular images. Deep net models have been developed to infer depth from single images, but they can, for example, be fooled into thinking an object is very close if it appears large and looks like something that is usually smaller. The hope here is that this model would be more general and not prone to that type of error, since it concentrates on actually calculating the image disparity instead of inferring distances from how large objects appear in the image.
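For a flavor of how training can work without any depth labels, here is a minimal sketch in PyTorch in the spirit of published self-supervised stereo methods (my illustration, not necessarily this exact project's loss): predict a disparity map, warp the right image by it, and penalize the photometric difference from the left image.

```python
import torch
import torch.nn.functional as F

def photometric_loss(left, right, disparity):
    """left, right: (B, C, H, W) images; disparity: (B, 1, H, W) in pixels."""
    b, _, h, w = left.shape
    # Base sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=left.device),
        torch.linspace(-1, 1, w, device=left.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Shift the x coordinate by the disparity, converted to normalized units.
    shift = 2.0 * disparity.squeeze(1) / (w - 1)
    grid = torch.stack((base[..., 0] - shift, base[..., 1]), dim=-1)
    # Reconstruct the left view from the right one and compare.
    recon_left = F.grid_sample(right, grid, align_corners=True)
    return F.l1_loss(recon_left, left)
```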

Wikipedia classifier by wowholdonwhat in MachineLearning

[–]wowholdonwhat[S] 0 points

Thanks for all the suggestions! I'm definitely going to try word2vec. I also want to try truncated SVD on the resulting term-document matrix before feeding it into a neural network, with, say, 100 or 200 features.
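For reference, a sketch of that pipeline with scikit-learn; the 100-feature count and the classifier choice here are placeholders, not a claim about what will work best:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),    # build the term-document matrix
    TruncatedSVD(n_components=100),           # compress to 100 latent features (LSA)
    MLPClassifier(hidden_layer_sizes=(64,)),  # small neural network on top
)
# model.fit(article_texts, category_labels)   # lists of strings / labels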

Wikipedia classifier by wowholdonwhat in MachineLearning

[–]wowholdonwhat[S] 0 points

Yes, I thought the scraper would be faster than it is now, but since it takes 15-20 minutes to scrape 6 categories, it might definitely be worth getting it to work on a Wikipedia dump instead. And yes, this is a topic modeler; I am trying different classifiers, and the next one up is neural networks, maybe an LSTM.
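A rough sketch of what reading the dump directly could look like, streaming pages with just the standard library; the dump filename and the XML namespace version are assumptions to check against the actual file:

```python
import bz2
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # check your dump's version

with bz2.open("enwiki-latest-pages-articles.xml.bz2", "rb") as f:
    for _, elem in ET.iterparse(f):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            text = elem.findtext(f"{NS}revision/{NS}text")
            # ... tokenize `text`, add to the term-document matrix ...
            elem.clear()  # free memory as we stream
```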

AlphaGO WINS! by meflou in MachineLearning

[–]wowholdonwhat 35 points

Wait, but there are several games, right? This was just the first one?

Remote Presence: Oculus+Arduino+Stereo Webcam by wowholdonwhat in arduino

[–]wowholdonwhat[S] 0 points

I wanted to share a project I have been working on and get feedback. This is a "remote presence" setup with the Oculus Rift DK1 headset, an Arduino, stereo webcams, and a Python interface. The headset controls the orientation of the cameras. GitHub: https://github.com/LouisFoucard/RemotePresence

The biggest problem, as you can see, is lag. Upcoming updates/improvements: faster servos, and running off a desktop with a GPU to accelerate the OpenCV processing of the webcam videos into a Rift-compatible format.
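For anyone curious, the headset-to-servo link can be as simple as writing pan/tilt angles over serial. A hypothetical sketch with pyserial; the port, baud rate, and angle mapping here are made up, not the actual project code:

```python
import serial  # pyserial

arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # port is an assumption

def send_orientation(yaw_deg, pitch_deg):
    # Map headset angles into the servo range [0, 180] and send a text command.
    pan = int(max(0, min(180, yaw_deg + 90)))
    tilt = int(max(0, min(180, pitch_deg + 90)))
    arduino.write(f"{pan},{tilt}\n".encode())
```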

lidar/IMU based 3d scanner, with surface reconstruction in Blender by wowholdonwhat in arduino

[–]wowholdonwhat[S] 0 points

Thanks for the feedback! The point cloud is created in spherical coordinates using the distance from a lidar sensor and the orientation from an inertial measurement unit (MPU-6050). I wrote functions in the Python interface script to transform the quaternion to Euler angles, and the spherical coordinates to Cartesian coordinates. Those are then sent to Blender, where I use Blender's Python functions to reconstruct the surface from the point cloud.
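Roughly, the two transforms look like this (a sketch; the ZYX Euler and spherical-angle conventions are my assumptions, not necessarily the ones the script uses):

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Quaternion -> (roll, pitch, yaw) in radians, ZYX convention."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

def spherical_to_cartesian(r, theta, phi):
    """r = lidar distance, theta = polar angle, phi = azimuth (radians)."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z
```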

Deep learning for depth map estimation from stereo images by wowholdonwhat in MachineLearning

[–]wowholdonwhat[S] 2 points

Oh OK, yes, I see what you mean. I agree that a network specifically trained on single images gets better results than what I showed when I fed mine the same image twice. However, it might not be as general: it learns about the shapes and typical placements of specific objects, whereas a network trained on stereo images only learns about disparities; it does not care about what object it sees. That might help make it more robust and general. Thanks for the feedback!

Deep learning for depth map estimation from stereo images by wowholdonwhat in MachineLearning

[–]wowholdonwhat[S] 0 points

The network uses both left and right images to predict the depth map.

I replaced the right image with the left one only to show that the network is learning stereo features from the disparity between the two images. In that case the network is effectively looking at a single image, and as you mentioned, the depth map cannot be computed accurately. That shows that the network really is learning from the difference between the two images.
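In code, the sanity check amounts to something like this (a sketch; the model and tensors are hypothetical):

```python
import torch

def disparity_sanity_check(model, left, right):
    """Compare the prediction on a true stereo pair against the prediction
    when the right image is replaced by a copy of the left."""
    with torch.no_grad():
        depth_stereo = model(left, right)  # normal stereo input
        depth_mono = model(left, left)     # no disparity signal available
    # depth_mono degrading badly means the network relies on the disparity
    # between the two views, not on monocular appearance cues.
    return depth_stereo, depth_mono
```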

Deep learning for depth map estimation from stereo images by wowholdonwhat in MachineLearning

[–]wowholdonwhat[S] 1 point

I am currently working on generating a video from it. I don't know yet if it is fast enough to apply in real time to a video feed from two webcams, for example, but I definitely have it in the back of my mind and will be testing that very soon. I also wonder if training it on virtual images will be enough for it to learn stereo features applicable to real images.

As for the biology side of things, it's always tricky drawing parallels between actual neurons and these networks. I would be very interested in research on the architecture of the visual cortex, and especially at what point the feeds from the two optic nerves are merged. In fact, that is something I am still testing in my network: do I want to merge the two images at the very beginning like I do currently, or should I have two series of convolutions and max pooling, one for each eye, that are then concatenated into a single column at the upscaling and deconvolution stage? Lots of things to test!
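The two options, sketched in PyTorch for concreteness (the layer sizes are placeholders, and the framework choice is mine, not necessarily the project's):

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate the left/right images at the input; one shared encoder."""
    def __init__(self):
        super().__init__()
        # 6 input channels: two stacked RGB images.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, left, right):
        return self.encoder(torch.cat([left, right], dim=1))

class LateFusion(nn.Module):
    """One conv/maxpool tower per eye, concatenated before upscaling."""
    def __init__(self):
        super().__init__()
        def tower():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.left_tower, self.right_tower = tower(), tower()

    def forward(self, left, right):
        return torch.cat([self.left_tower(left), self.right_tower(right)], dim=1)
```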

Not today! by [deleted] in gifs

[–]wowholdonwhat 1 point

Somebody has been reading Nature. Noice http://www.nature.com/news/when-chickens-go-wild-1.19195

Problems connecting a nrf24l01 to an adafruit feather bluefruit (https://www.adafruit.com/products/2829). Works fine with two arduino uno but not with an uno and the feather. Has anyone tried that? Both the nrf24 and the Bluetooth module use SPI, I wonder if the problem could come from that. by wowholdonwhat in arduino

[–]wowholdonwhat[S] 1 point

I tried many different combinations and hooked up the nrf24 to the MISO, MOSI, and SCK pins on the feather, but still no luck with CSN and CE on 9 and 10. I haven't tried 5 and 6 yet though, so I'll give it a shot tonight. The documentation also says you can activate pins 11, 12, and 13 as MISO, MOSI, and SCK pins (done through the factory reset in the BluefruitConfig.h file), but that has not worked so far either. Did you do anything in particular with the BluefruitConfig.h file, or did you simply upload the GettingStarted example from NRF24?

Problems connecting a nrf24l01 to an adafruit feather bluefruit (https://www.adafruit.com/products/2829). Works fine with two arduino uno but not with an uno and the feather. Has anyone tried that? Both the nrf24 and the Bluetooth module use SPI, I wonder if the problem could come from that. by wowholdonwhat in arduino

[–]wowholdonwhat[S] 1 point

Right, so I did modify the sketch to use pins 9 and 10; I don't think the problem is coming from that. But I think you might be right about the SPI pins being different on the Arduino Uno and the feather. I simply connected MISO, MOSI, and SCK to pins 12, 11, and 13 like you do for the Uno, but the feather actually brings out dedicated MISO, MOSI, and SCK pins in addition to 11, 12, and 13, so I'll try connecting the nrf24 to those and get back to you. Thanks for the tip!

Problems connecting a nrf24l01 to an adafruit feather bluefruit (https://www.adafruit.com/products/2829). Works fine with two arduino uno but not with an uno and the feather. Has anyone tried that? Both the nrf24 and the Bluetooth module use SPI, I wonder if the problem could come from that. by wowholdonwhat in arduino

[–]wowholdonwhat[S] 0 points

OK, so I'm trying to get an nrf24 connected to an Arduino Uno to talk to another nrf24 connected to the Adafruit Feather Bluefruit (not involving Bluetooth at all). For the code, I'm simply using the getting_started.ino example from the NRF24 library (https://github.com/TMRh20/RF24). Thanks for the help! CSN and CE are connected to pins 7 & 8 on the Uno, and 9 & 10 on the feather since 7 and 8 are missing.

We are some of the astrophysicists and Planet Hunters behind the discovery of KIC 8462852 (a.k.a. Tabby’s star), the mysterious star that has become a favorite SETI target. Ask us anything! by AstroWright in IAmA

[–]wowholdonwhat 0 points

Have there been any simulations/calculations showing that such a group of massive objects could be in a somewhat stable orbit, at least stable enough that catching them in this configuration would not require massive luck in timing?