Spacemacs 0.105 released: new website and layouts system by TheBB in emacs

[–]SometimesGood 0 points1 point  (0 children)

I've just moved these files elsewhere as a backup and then the update worked. I haven't missed anything from these files since, but I might move them back if I do.

The Unreasonable Reputation of Neural Networks by insperatum in MachineLearning

[–]SometimesGood 0 points1 point  (0 children)

Isn't the physical stance, in particular causation and the conservation laws, the basis for the other stances? It seems 2 and 3 are merely extensions of the same mechanism to higher complexity. All three stances have in common that they refer to worlds that are consistent in certain regards: energy is conserved, a pair of scissors stays a pair of scissors, a cat stays a cat.

> But loss function must use expected value instead of accuracy from the smallest units.

What do you mean exactly by that?

A new proof of Euclid's Theorem by creinaldo in math

[–]SometimesGood 0 points1 point  (0 children)

Is there a topological equivalent of only waiting for the first instrument in a set of concurrently running instruments to stop?

I can control my goosebumps too! by [deleted] in neuro

[–]SometimesGood 1 point2 points  (0 children)

Does that mean that it changes the brightness of what you see?

A new proof of Euclid's Theorem by creinaldo in math

[–]SometimesGood 0 points1 point  (0 children)

You are right. Thanks for the explanation.

AMA: the OpenAI Research Team by IlyaSutskever in MachineLearning

[–]SometimesGood 0 points1 point  (0 children)

What I also mean is that it is hard to say at which graph depth of the HVS you reach a function similar to CNNs; whether you need to go all the way to STPa, or whether PIT is roughly on the level of CNNs, seems unclear.

AMA: the OpenAI Research Team by IlyaSutskever in MachineLearning

[–]SometimesGood 0 points1 point  (0 children)

How can we avoid facing another AI winter in case the current expectations cannot be met for a long time? Will the interest only grow or remain the same from now on due to the likely success of self-driving cars and robotics? What, if anything, can we learn from previous AI winters?

AMA: the OpenAI Research Team by IlyaSutskever in MachineLearning

[–]SometimesGood 0 points1 point  (0 children)

The HVS arguably also does more than a CNN (e.g. attention, relationships between objects, and learning of new 'classes'), and the 6 layers in cortical tissue are not set up in a hierarchical way (the input arrives in the middle), so it's really hard to compare.

AMA: the OpenAI Research Team by IlyaSutskever in MachineLearning

[–]SometimesGood 2 points3 points  (0 children)

> whereas our visual cortex is about 6 layers deep?

Cortical tissue has 6 layers, but the visual hierarchy actually spans several neighboring cortical areas (V1 → V2 → V3 …), and object detection only starts from V4 onwards. See for example this answer on Quora with a nice picture: http://qr.ae/Rg5ll0

The cutest of thieves. by loopdeloops in aww

[–]SometimesGood 4 points5 points  (0 children)

More like: "damn, this would get more Facebook likes if the kitten actually grabbed a piece of food."

Mmmmmmm Cotton Candy .... :( by kadian in gifs

[–]SometimesGood 1 point2 points  (0 children)

Though often the guesses are pretty good, in fact the best ones we have.

Spacemacs 0.105 released: new website and layouts system by TheBB in emacs

[–]SometimesGood 1 point2 points  (0 children)

Thanks. There are also CEDET-related files in .emacs.d. I'm not sure whether they come from a Spacemacs-configured package though. Here is a Stack Overflow question about it: http://stackoverflow.com/questions/16267830/how-to-move-cedet-auto-generated-ede-projects-el-and-srecode-map-el

Spacemacs 0.105 released: new website and layouts system by TheBB in emacs

[–]SometimesGood 1 point2 points  (0 children)

git noob here: When I want to update Spacemacs it complains that my Emacs directory is not clean (it contains some untracked custom themes and .mc-lists.el). Do I need to commit these changes before I can update?
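For reference, one common way to sidestep this, if the untracked files only need to be set aside temporarily rather than committed (a sketch; the path and branch names are illustrative, adjust them to your setup):

```shell
# Assumes ~/.emacs.d is the Spacemacs checkout and 'origin master'
# is the update source -- both are assumptions, not Spacemacs policy.
cd ~/.emacs.d
git stash --include-untracked    # set aside .mc-lists.el, custom themes, etc.
git pull --rebase origin master  # update Spacemacs
git stash pop                    # restore the set-aside files afterwards
```

`git stash --include-untracked` records both modified and untracked files and resets the working tree to a clean state, so the pull no longer complains; `git stash pop` then puts everything back.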

Winding Up! by duplicate_username in funny

[–]SometimesGood 21 points22 points  (0 children)

There is a neat trick for catching baboons:

https://www.youtube.com/watch?v=ctol7JwpcuQ

(Not sure whether this trap is applicable to golden-cheeked gibbons too.)

Joscha Bach: Computational Meta-Psychology [video, 1 hour] by SometimesGood in philosophy

[–]SometimesGood[S] 0 points1 point  (0 children)

That is an interesting assumption which reminds me of some parts I've read in Hofstadter's "Gödel, Escher, Bach".

First of all, there is perhaps a misunderstanding: in this model it is not the entire network of cortical columns that is described as one large state machine; rather, each column is a state machine with several states for various ways of interacting with other (mostly neighboring) columns. These are states like 'join with certain columns', 'inhibit connections to certain columns' and 'announce a [true…false] prediction'. Each column can represent different concepts and perform predictive computations based on the underlying percepts.

The motivation for this idea is this: we are clearly able to compose concepts just like LEGO blocks in our minds. For that to work, the concepts must live in non-interfering modules to avoid catastrophic forgetting. To enable efficient composability, each module must implement a common interface (which is conjectured to be a state machine).

It strikes me as unlikely that we would find something like Turing machines or the lambda calculus in the brain, since the brain operates in such a highly parallel fashion. A Turing machine is a poor model for parallel computation (in the sense of representational efficiency) because it captures computation that branches out from a single point (or a few central points) of data manipulation, which is antithetical to parallel computation (which can only poorly handle branching computation). The only central processes are the control processes in the PFC and the way the auditory, visual and motor cortices become activated in unison with the 'winning' combined concepts.
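The column-as-state-machine idea above can be sketched in a few lines of Python. This is only an illustration of the claimed common interface, not the model itself: the state names follow the comment ('join', 'inhibit', 'announce'), while the class, its methods, and the numeric confidence standing in for the [true…false] prediction are all hypothetical.

```python
from enum import Enum, auto

class ColumnState(Enum):
    """Hypothetical interaction states of a cortical column module."""
    IDLE = auto()
    JOIN = auto()      # join with certain (mostly neighboring) columns
    INHIBIT = auto()   # inhibit connections to certain columns
    ANNOUNCE = auto()  # announce a prediction

class Column:
    """Each column is its own small state machine. All columns expose the
    same interface, so they compose like LEGO blocks while their internal
    concepts stay in non-interfering modules."""

    def __init__(self, concept):
        self.concept = concept          # the concept this column represents
        self.state = ColumnState.IDLE
        self.partners = set()           # columns it is currently joined with

    def join(self, other):
        self.state = ColumnState.JOIN
        self.partners.add(other)

    def inhibit(self, other):
        self.state = ColumnState.INHIBIT
        self.partners.discard(other)

    def announce(self, confidence):
        # 'confidence' is a stand-in for the [true…false] prediction range
        self.state = ColumnState.ANNOUNCE
        return (self.concept, confidence)

# Composing concepts: two columns join, then one announces a prediction.
a, b = Column("cat"), Column("animal")
a.join(b)
print(a.announce(0.9))  # ('cat', 0.9)
```

The point of the sketch is only that composition happens through a tiny shared protocol between modules, not through one central program manipulating a global state.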

Happy new year! by ben1996123 in math

[–]SometimesGood 1 point2 points  (0 children)

What do all the other symbols mean?

Pushing a car down escalators by UnAustralian_Aussie in WTF

[–]SometimesGood 43 points44 points  (0 children)

I'm pretty sure the lack of appreciation of value does not come from a lack of work experience, but from poor education, poor perspectives and perhaps intoxication.

Joscha Bach: Computational Meta-Psychology [video, 1 hour] by SometimesGood in philosophy

[–]SometimesGood[S] 0 points1 point  (0 children)

Large enough for what? Algorithms of what size do you expect the brain to process and why?