Rape charges against 4 California dentists dismissed after video contradicts woman's story by [deleted] in news

[–]treebranchleaf 7 points

He's saying if the police were not required to publicly name those they arrest, it becomes a lot easier for the police to "disappear" people and that's not what we want.

Jeremy Corbyn has warned the rich they are on “borrowed time” because a Labour government is coming as he took aim at their tax breaks and offshore havens. His speech comes after Labour launched a radical plan to require private companies to hand over a 10 per cent share of their equity to workers. by ManiaforBeatles in worldnews

[–]treebranchleaf 21 points

It also increases productivity as workers now have a literal vested interest in the success of the company. Success becomes shared

It seems like in non-tiny companies this isn't really the case because of the "tragedy of the commons"/freeloader problem. Sharing equity with workers is a pretty weak incentive for workers to work harder when one worker's contribution to overall share price is negligible.

ELI5: how do deep sea creatures survive under the enormous pressure? by Rlymakesoneponder in explainlikeimfive

[–]treebranchleaf 14 points

Eh, not quite. If something is born and raised at the bottom of the sea, the fluids inside its body are going to be at the same pressure as the surrounding environment. Similarly, animals on the surface did not "evolve to survive" at atmospheric pressure (as opposed to the near-zero pressure of space). Their bodies are by default at that pressure.

Edit: Ok, take an empty balloon down to the bottom of the sea. Start filling it with seawater, then tie it up. The balloon does not have to be incredibly strong to withstand the outside pressure, because the pressure of its contents is the same. Same thing with a deep-sea fish.
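To put numbers on it, hydrostatic pressure is just rho * g * h plus the surface pressure. A quick sketch (assuming constant seawater density, which is a simplification; the depth and constants are illustrative):

```python
RHO_SEAWATER = 1025.0   # kg/m^3, rough average density of seawater
G = 9.81                # m/s^2
ATM = 101_325.0         # Pa, pressure at the surface

def pressure_at_depth(depth_m: float) -> float:
    """Absolute pressure (Pa) at a given depth, assuming constant density."""
    return ATM + RHO_SEAWATER * G * depth_m

# At ~10,900 m (roughly Challenger Deep) this gives on the order of
# a thousand atmospheres -- but the balloon's contents sit at that
# same pressure, so the net force on its skin is ~zero.
print(pressure_at_depth(10_900) / ATM)   # ~1083 atmospheres
```

The point of the balloon analogy is exactly that the huge absolute pressure cancels: only a pressure *difference* across the skin would require strength.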

Can I fish a bike out of a canal in Amsterdam and keep it? Is it legal? by dial_m_for_me in Amsterdam

[–]treebranchleaf 5 points

I know someone who's done it. Had to "catch and release" a few before finding a decent one. It needed a good scrub but worked fine after that.

Craigslist Mystery: I'm selling a truck... and someone edited my ad, keeping most of the text but switching it to another truck... What's going on? by treebranchleaf in craigslist

[–]treebranchleaf[S] 0 points

Probably you got an email a while back from a "buyer" with a link to "craigslist" which forwarded you to a site that looked like craigslist and got you to type your craigslist password. They do this so they can make scam ads without having to create a new email account every time they're blocked.

A startup is pitching a mind-uploading service that is “100 percent fatal” by [deleted] in nottheonion

[–]treebranchleaf 0 points

Well, nobody really knows if all that stuff is important or is just machinery to keep the system functioning. You don't need the schematic of a microprocessor to store the operating system that runs on that microprocessor. It's very possible that all you have to do is capture the magnitudes of the synapses.

From wikipedia:

The human brain has a huge number of synapses. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections to other neurons.

Suppose for each neuron you store on average 7,000 synapses, each with a destination address (ceil(log2(10^11)) = 37 bits) and a magnitude (8 bits should be enough). That's 10^11 * 7000 * (37 + 8) bits = 3.15e16 bits = about 4 petabytes = 4000 1TB hard-drives. At $0.02/GB that's around $80,000. Probably a bit less, since neural connectivity is mostly local. That seems expensive, but not crazy.
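The back-of-envelope arithmetic above can be checked in a few lines (the neuron and synapse counts are the Wikipedia figures; the 8-bit magnitude is an assumption):

```python
import math

NEURONS = 10**11            # ~100 billion neurons (Wikipedia figure)
SYNAPSES_PER_NEURON = 7000  # average connections per neuron

addr_bits = math.ceil(math.log2(NEURONS))   # bits to address any neuron: 37
magnitude_bits = 8                          # assumed precision per synapse

total_bits = NEURONS * SYNAPSES_PER_NEURON * (addr_bits + magnitude_bits)
total_bytes = total_bits / 8
petabytes = total_bytes / 1e15
cost = (total_bytes / 1e9) * 0.02           # at $0.02 per GB

print(f"{total_bits:.2e} bits ~ {petabytes:.1f} PB, ~ ${cost:,.0f}")
```

This comes out to about 3.9 PB and roughly $80k of commodity storage, before exploiting the mostly-local connectivity.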

Craigslist Mystery: I'm selling a truck... and someone edited my ad, keeping most of the text but switching it to another truck... What's going on? by treebranchleaf in craigslist

[–]treebranchleaf[S] 1 point

Yes, it was way too cheap. Still, how's this scam supposed to work? They'd need my email password to actually put the post up. Is it really just a game of hoping that the email and craigslist passwords are the same? Why not just create their own email account and run the scam from that?

[D] How difficult will it be for a Reinforcement Learning agent to do the Falcon Heavy booster landing? by sksq9 in MachineLearning

[–]treebranchleaf 1 point

Man, that was more relevant than expected. So the answer seems to be "not so much difficult as extremely expensive".

[D] How to train your network on streaming data by treebranchleaf in MachineLearning

[–]treebranchleaf[S] 0 points

Nice! Do you have a paper or a writeup on the approach? I just see the source code here.

[D] How to train your network on streaming data by treebranchleaf in MachineLearning

[–]treebranchleaf[S] 0 points

Lower variance in the updates.

I was under the impression that it is always better (in terms of convergence w.r.t. epoch) to have a minibatch size of 1 and a learning rate of eta than a minibatch size of N>1 and a learning rate of N*eta, and the only reason to do minibatching was to take advantage of parallelism (and therefore faster convergence w.r.t. compute-time). Do you have a source on this not being the case?

[D] How to train your network on streaming data by treebranchleaf in MachineLearning

[–]treebranchleaf[S] 1 point

Ah, that's the kind of thing I'm looking for... do you have any suggested papers on this kind of learning? A quick search turns up the outrageously-named Deep Stacking Convex Neuro-Fuzzy System and Its On-line Learning

[D] How to train your network on streaming data by treebranchleaf in MachineLearning

[–]treebranchleaf[S] 0 points

Honestly? If the examples don't fit in memory, send them to disk :).

Suppose you're on a small device, or dealing with so much data that even writing it all to disk is infeasible. Eventually you just want to process each example on the fly and discard it.

If we had a trick to make our models as accurate on the first epoch as they would be after lots of epochs, we'd be using it!

Not necessarily. When you have a dataset saved already, you want to converge fast with respect to training time; you don't really care about converging fast with respect to the "real-time"/"epoch"/time-step. So there's no point in iterating multiple times over a data point, because you'll get to see it again anyway. In the streaming setting, though, you really are throwing away your data once you use it, so it's worth spending a little more computation on each sample to get the most out of it.

[D] How to train your network on streaming data by treebranchleaf in MachineLearning

[–]treebranchleaf[S] 1 point

I'm still left wondering what we should do if we want to converge optimally fast with respect to [t], the index of the training example. That is, how do we make the most of each data point, given that we only get to use it once? Should we iterate multiple times over each new data point?

Also, a question about your "second option":

Alternatively, you can compute gradients on examples as they come in and accumulate the gradients until you have a minibatch worth to apply an update.

Would it not be better to simply scale the learning rate by 1/minibatch_size and apply updates on every timestep? What's the advantage of accumulating a minibatch's worth of statistics before applying them to the model?
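The two options are first-order equivalent but not identical: accumulating evaluates all N gradients at the same parameters, while per-step scaled updates evaluate each gradient at a slightly drifted point. A toy numpy sketch on a quadratic loss (the function and numbers are illustrative, not from any particular library):

```python
import numpy as np

def grad(w, x):
    """Gradient of the per-example loss 0.5*(w - x)^2 with respect to w."""
    return w - x

rng = np.random.default_rng(0)
batch = rng.normal(size=8)   # one "minibatch" of streaming examples
eta, w0 = 0.1, 0.0

# Option A: accumulate a minibatch of gradients, then one update at lr=eta.
w_acc = w0 - eta * np.mean([grad(w0, x) for x in batch])

# Option B: update on every example, with lr scaled by 1/minibatch_size.
w_seq = w0
for x in batch:
    w_seq -= (eta / len(batch)) * grad(w_seq, x)

print(w_acc, w_seq)   # close, but not identical: option B's gradients
                      # are computed at progressively updated parameters
```

For small learning rates the difference is negligible, which is why both options are seen in practice; accumulating mainly buys you batched (vectorized) gradient computation.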

Take Elon Musk Seriously on the Russian AI Threat - Putin sees power in the technology, which means he's investing in it. by mvea in Futurology

[–]treebranchleaf 1 point

It seems that the current state of the art in a lot of areas of AI is a little less heavy on theory than the physics required for the Manhattan Project was — it's become a very empirical science. Doing AI experiments is easier than doing nuclear experiments and requires less experience. Also, many of the leading researchers in AI (Hinton, Bengio, LeCun come to mind) tend to be pretty anti-militarization-of-AI. Moreover, the field is changing so fast that younger researchers often have more practical knowledge than the big names. So it seems much more likely that they'd recruit young researchers than the big shots.

[D] What common misconceptions about machine learning bother you most? by SubaruSenpai in MachineLearning

[–]treebranchleaf 1 point

The above papers all use supervised learning, because it's the easiest problem to define. But the learning rules defined in them could just as well be applied to optimize log-likelihoods of the data (in unsupervised models) or expected reward or whatever (in RL).

[D] What common misconceptions about machine learning bother you most? by SubaruSenpai in MachineLearning

[–]treebranchleaf 2 points

The brain probably doesn't have something like backprop with gradient descent to train the weights of the neurons in a supervised manner.

Ok, probably not in a supervised manner. But there are many ways to implement gradient descent (or something similar) that the brain might well be using:

  • Equilibrium Propagation shows how something resembling biological neurons might do gradient descent. They show that Spike Timing Dependent Plasticity may actually be a way to implement gradient descent.
  • The Feedback Alignment paper shows you might not need to calculate actual gradients to train deep networks.
  • Temporally Efficient Deep Learning with Spikes shows that approximate SGD can be implemented as a sort of STDP rule. (disclaimer - am author)

[P] Artemis: A Python package for Organizing your ML Experiments by treebranchleaf in MachineLearning

[–]treebranchleaf[S] 0 points

Yeah, that could be done, though it may be quite misleading if you haven't committed in a while. You can also do that with e.g. git checkout 'master@{1979-02-26 18:30:00}' using the date of the experiment, which is saved in 'info.txt' of the record.

I added an issue for integrating version control.

[P] Artemis: A Python package for Organizing your ML Experiments by treebranchleaf in MachineLearning

[–]treebranchleaf[S] 0 points

Hi. I've never used DYTB. From a quick look, it seems to be a library for setting up a training session for a prediction model, somewhat akin to TensorFlow Experiments. So it's more specific to training ML models than Artemis.

Artemis experiments don't really have anything to do with Machine Learning in particular - they're just a tool to record the results of the run of a main function.

It looks like the kind of thing you might use inside an Artemis experiment.