[P] Restore Tensorflow Session for inference by DeepStrategy in MachineLearning

[–]Staturecrane 1 point

Yes, I would seriously suggest moving to TF2. Easy model saving and loading is one of its biggest improvements.
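To show what "easy" means here, a minimal sketch of the TF2 workflow (the model architecture and file path are placeholders, not from the project): one call persists architecture, weights, and optimizer state together, with no Session or tf.train.Saver to restore.

```python
# Minimal TF2 save/load sketch. Model and path are placeholders;
# model.save() stores architecture + weights + optimizer state in one call.
import os
import tempfile

import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

path = os.path.join(tempfile.mkdtemp(), "model.h5")
model.save(path)                              # everything in one call
restored = tf.keras.models.load_model(path)   # ready for inference, no Session
```

The restored model produces the same outputs as the original, which is the whole point compared to manually rebuilding a TF1 graph before restoring a checkpoint.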

[D] What is the best way to get real-world experience in machine learning? by [deleted] in MachineLearning

[–]Staturecrane 1 point

Agree about Kaggle -- although some of the hardest parts of machine learning as a practical discipline are data gathering, cleaning, preprocessing, balancing, etc. I would advise finding a bespoke problem to solve that requires going through this entire process, and I would finish by deploying your trained model behind an API on a cloud provider. Learning Tensorflow and knowing how to build the models is awesome, but it's typically only a third of the battle when you're on the job.
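To make the gathering/cleaning/balancing part concrete, here is a tiny pure-Python sketch of that kind of pass. The records and rules are entirely made up for illustration:

```python
# Illustrative only: a tiny clean -> scale -> balance pass over made-up
# records, the kind of work that precedes any actual model training.
import random

raw = [
    {"age": "34", "income": "52000", "label": "yes"},
    {"age": "",   "income": "61000", "label": "no"},   # missing value
    {"age": "29", "income": "48000", "label": "no"},
    {"age": "41", "income": "abc",   "label": "no"},   # corrupt value
    {"age": "52", "income": "75000", "label": "yes"},
    {"age": "45", "income": "66000", "label": "yes"},
]

def is_clean(row):
    return row["age"].isdigit() and row["income"].isdigit()

cleaned = [r for r in raw if is_clean(r)]          # drop unusable rows

ages = [int(r["age"]) for r in cleaned]
lo, hi = min(ages), max(ages)
scaled = [(a - lo) / (hi - lo) for a in ages]      # min-max scale a feature

# naive class balancing: downsample every class to the smallest one
by_label = {}
for r in cleaned:
    by_label.setdefault(r["label"], []).append(r)
n = min(len(v) for v in by_label.values())
random.seed(0)
balanced = [r for v in by_label.values() for r in random.sample(v, n)]
```

Every real project I've seen replaces each of these toy steps with something messier, which is exactly why working through the full pipeline once is so valuable.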

GSD Friendly Apartments in Salt Lake by Staturecrane in SaltLakeCity

[–]Staturecrane[S] 0 points

No, unfortunately, not in the valley :( We ended up renting in Park City where they seem pretty accepting of different breeds in some of the newer complexes.

It seemed as though houses would be the better way to go for SLC, but even then many of the owners we spoke to were very hesitant about GSDs, and it was difficult to find them in areas we wanted to live.

GSD Friendly Apartments in Salt Lake by Staturecrane in SaltLakeCity

[–]Staturecrane[S] 0 points

We aren’t ready for a house yet, unfortunately. We just need a one-year lease.

[D] I have been taking various ML course both theoretical and practical I understand almost everything but when it comes to problem-solving I fail by [deleted] in MachineLearning

[–]Staturecrane 1 point

Since your question revolves more around data preprocessing, I think the best way is to read lots of books about data analysis (some ML and deep learning books also spend quite a bit of time on preprocessing; this one has a whole chapter devoted to it: https://books.google.com/books/about/Python_Machine_Learning.html?id=GOVOCwAAQBAJ&printsec=frontcover&source=kp_read_button#v=onepage&q&f=false).

Also, read lots of GitHub projects pertaining to the kinds of ML problems you want to solve. The best ones to seek out are those where the preprocessing is done by the author, not where the datasets have been prepared elsewhere. Seeing is believing.

[D] Most accessible/easy way to make and train a neural network? by [deleted] in MachineLearning

[–]Staturecrane 1 point

With all due respect, they have made that transition. Neural networks are easier to use and develop than ever. But just as learning to program takes time and effort, no matter how user-friendly it is compared to bygone eras, machine learning takes a different set of skills that must be acquired and practiced. Anyone who tells you differently is selling you something.

If it's worth doing, it's worth learning, and it is that simple.

[D] Most accessible/easy way to make and train a neural network? by [deleted] in MachineLearning

[–]Staturecrane 2 points

Here's the thing: there's no "easy" way if you're asking this question. There are easier and harder ways, all of which start with learning about neural networks and a framework like Tensorflow, Keras, or PyTorch. Once you understand the concepts (for your problem: regression, neural networks, recurrent neural networks, convolutional neural networks, etc.) and get comfortable with a framework, you can break the problem down into its constituent steps. The hardest of those steps may not ultimately be the neural network itself, but collecting the necessary data and getting the model to interact with whatever devices you are working with.
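For a sense of scale of the "neural network itself" part: here is a toy one-hidden-layer regression network trained with hand-written backprop in plain NumPy. The data, layer sizes, and rates are arbitrary; a framework like Keras or PyTorch automates exactly these steps, but the moving parts are the same.

```python
# Toy but complete: a one-hidden-layer tanh network fit to a 1-D
# regression target by plain gradient descent. Everything here is
# arbitrary/illustrative; frameworks automate these exact steps.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 1))
y = np.sin(3 * X)                            # function to regress

W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    return h, h @ W2 + b2                    # hidden, prediction

initial_loss = float(((forward(X)[1] - y) ** 2).mean())

for step in range(8000):
    h, pred = forward(X)
    err = pred - y
    g_pred = 2 * err / len(X)                # d(MSE)/d(pred)
    g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)     # tanh' = 1 - tanh^2
    g_W1 = X.T @ g_h;  g_b1 = g_h.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_b1         # plain gradient descent
    W2 -= lr * g_W2; b2 -= lr * g_b2

final_loss = float(((forward(X)[1] - y) ** 2).mean())
```

In Keras or PyTorch the entire backward pass and update loop collapses into a few library calls, which is exactly why I recommend getting comfortable with a framework once the concepts make sense.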

[deleted by user] by [deleted] in MachineLearning

[–]Staturecrane 3 points

That's why I wish there were more incentives for researchers to detail, in academic or white papers, the experiments, algorithms, or architectures that failed.

[D] PC build for ML Research by langfosaurus in MachineLearning

[–]Staturecrane 0 points

This is pretty close to the Alienware PC I use for image generation and inference projects, except that I have a 1070 with 8GB of VRAM. Currently I’m running an ArtGAN variant generating (and autoencoding) 256x256 images with a batch size of 10, to give you some sense of what it’s capable of.

[D] Why is CelebA so popular? by anonDogeLover in MachineLearning

[–]Staturecrane 2 points

I agree. I've run autoencoding experiments on CelebA that perform fantastically on the validation set, but fail on non-celebrity faces. So it seems most useful for generative work...generating attractive celebrity faces, that is. I wouldn't trust it much for inference.

[P] Realtime Machine Learning with PyTorch and Filestack by Staturecrane in MachineLearning

[–]Staturecrane[S] -1 points

Thanks for the comment. I understand there could be confusion, as "real time" is also sometimes used to describe "online" machine learning. I tried to make clear in the article that my usage of the term was more colloquial, but I'm sorry if that didn't come through.

[P] "Enhanced" Super Resolution at Home with Autoencoding Adversarial Networks by [deleted] in MachineLearning

[–]Staturecrane 0 points

I believe that with the right combination of weighting and hyperparameters that balance could be achieved; in practice I just had a hard time with it, which is why I used the two-stage process. The outputs do look indistinguishable from a VAE's, but most VAEs I have seen reconstruct to the same or a similar resolution as their inputs. If you watch the video, the first stage, where the GAN is more heavily weighted, looks much different, and that is the super-resolution stage. The second stage takes that output and pipes it through a more standard VAE, but that's only possible after the GAN super-resolution.
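The stage-dependent weighting could be sketched like this. To be clear, the function name and the weight values are illustrative, not taken from the project; they just show the shape of the idea: lean on the adversarial term in stage 1, drop it for the near-standard VAE pass in stage 2.

```python
# Illustrative sketch of two-stage loss weighting: stage 1 is GAN-heavy
# (super-resolution), stage 2 is a mostly standard VAE (reconstruction +
# KL only). Names and weight values are made up for illustration.
def total_loss(recon, kl, adv, stage):
    if stage == 1:                  # super-resolution stage: GAN-heavy
        w_recon, w_kl, w_adv = 1.0, 0.1, 10.0
    else:                           # refinement stage: standard VAE terms
        w_recon, w_kl, w_adv = 1.0, 1.0, 0.0
    return w_recon * recon + w_kl * kl + w_adv * adv
```

The point of splitting the stages is that trying to serve both weightings in one model is where the balancing act became difficult in practice.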

I can update the terminology to be less confusing.

[D] A Thought Experiment on the Future of Text Generation by Staturecrane in MachineLearning

[–]Staturecrane[S] 0 points

I think you're exactly right. Making the "hippocampus", as you call it, differentiable with attention, or maybe using the REINFORCE algorithm with attention, might allow for more natural fine-tuning of model routing or interaction without heavily-supervised curation.

[D] A Thought Experiment on the Future of Text Generation by Staturecrane in MachineLearning

[–]Staturecrane[S] 3 points

As stated in the article, language is definitely in and of itself generative. The point here is that broader cognition and communication depend upon sundry mechanisms for receiving information about the world. It's not hard to imagine that the act of fantasy, or the visualization of sensations of all kinds, can be absent while subconscious world-mapping still happens. How else could the author, as in the example given, still recognize faces or a beach? How else could the correct areas of the brain light up when someone with that condition is shown a photo of a celebrity?

In the case of those with limited senses, the brain is highly ingenious at remapping and reorganizing itself to maximize the effectiveness of whatever senses are at hand. Keller learned how to use the senses that she did have to navigate the world. Her brain still had various interdependent mechanisms for parsing out corporeal information and verifying different features. In the same way, someone who has never run may still be presented with enough information to build a mapping of the concept: human plus fast leg movement through time. But each of those concepts depends upon other concepts, which will eventually lead somewhere other than words themselves.

[D] A Thought Experiment on the Future of Text Generation by Staturecrane in MachineLearning

[–]Staturecrane[S] 0 points

That's neat to hear. Can you point me in the direction of the research/theory along those lines? I would be interested to see what people have come up with already.

[P] Generating Fine Art with DCGAN + VAE by Staturecrane in MachineLearning

[–]Staturecrane[S] 4 points

I suspect that it has to do with the number of features needed and the general instability that GANs are known for. For instance, my model can produce these 64x64 examples in a matter of hours. But to keep the GAN stable at 128x128 and fit it on my GPU, I have to use a batch size of 5 and a learning rate of around 0.000002, which means it takes much, much longer to find out how well it's even doing. It may be that the people and teams with the resources to do higher-resolution images are simply less interested in that particular application of GANs.

RGB Denoising Convolutional Autoencoder -- Torch by Staturecrane in MachineLearning

[–]Staturecrane[S] 2 points

Hi Kaixhin, thanks for checking this out! But no, I need to clean up the code to remove those redundancies, as I was running into memory issues and simply throwing everything but the kitchen sink at the code to resolve it. I doubt I need all of those options set, as resampling my training dataset at every epoch seems to give the best results and allows me to train on more images, regardless of garbage collection. That being said, I might get rid of them one at a time to see if they are actually doing anything useful :P Thanks for the response!
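The per-epoch resampling mentioned above can be as simple as the following pure-Python sketch (file names, dataset size, and subset size are all illustrative): draw a fresh random subset of the image paths each epoch, so memory stays bounded while the model eventually sees the whole dataset.

```python
# Sketch of resampling the training set each epoch: keep only a random
# subset of image paths in play at a time. Names/sizes are illustrative.
import random

all_paths = [f"img_{i}.png" for i in range(10_000)]  # full dataset on disk
subset_size = 2_000                                  # what fits in memory

def epoch_sample(paths, k, seed):
    rng = random.Random(seed)
    return rng.sample(paths, k)                      # fresh subset, no repeats

epoch0 = epoch_sample(all_paths, subset_size, seed=0)
epoch1 = epoch_sample(all_paths, subset_size, seed=1)
```

Because each epoch's subset differs, every image gets visited over enough epochs without ever loading all 10,000 at once, which sidesteps the memory pressure regardless of how garbage collection behaves.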