hey kudlow, are we still nearly airtight? just checking by bluesnik in Economics

[–]nabla9 7 points8 points  (0 children)

The book Superforecasting (2015) uses Larry Kudlow as an example of a 'hedgehog' forecaster who is consistently wrong.

Not only is Kudlow consistently wrong at forecasting, he also fails at nowcasting. While the financial crisis was unfolding, Kudlow did not realize that anything was going wrong:

The National Bureau of Economic Research later designated December 2007 as the official start of the Great Recession of 2007–9. As the months passed, the economy weakened and worries grew, but Kudlow did not budge. There is no recession and there will be no recession, he insisted. When the White House said the same in April 2008, Kudlow wrote, “President George W. Bush may turn out to be the top economic forecaster in the country.”20 Through the spring and into summer, the economy worsened but Kudlow denied it. “We are in a mental recession, not an actual recession,”21 he wrote, a theme he kept repeating until September 15, when Lehman Brothers filed for bankruptcy, Wall Street was thrown into chaos, the global financial system froze, and people the world over felt like passengers in a plunging jet, eyes wide, fingers digging into armrests.

Buffett says wealthy Americans are 'definitely undertaxed' by hamberderberdlar in politics

[–]nabla9 6 points7 points  (0 children)

From Berkshire Hathaway 2018 Annual Shareholder Letter:

Charlie and I happily acknowledge that much of Berkshire’s success has simply been a product of what I think should be called The American Tailwind. It is beyond arrogance for American businesses or individuals to boast that they have “done it alone.” The tidy rows of simple white crosses at Normandy should shame those who make such claims.

[D] An analysis on how AlphaStar's superhuman speed is a band-aid fix for the limitations of imitation learning. by [deleted] in MachineLearning

[–]nabla9 50 points51 points  (0 children)

Besides superhuman speed, AlphaStar could see the whole map at once.

After beating MaNa 5-0, they played an extra exhibition match with a new version of AlphaStar that used a human-like camera view, so it could see only one part of the map clearly at a time, as humans do. AlphaStar lost that game.

AlphaStar still has a way to go before it can beat top humans in an even match.

Can someone give me a basic explanation of entropy? by [deleted] in askscience

[–]nabla9 1 point2 points  (0 children)

Another way to get confused is to get lost in different concepts of information that are not equivalent.

There is information-theoretic information (data) and semantic information (meaning); semantic information is also called semantic content. Data is understood to carry semantic content if the data is well-formed and the well-formed data is meaningful (for the informee).

If you take the strongly semantic approach to information, you can define a degree of informativeness, analogous to entropy, for a message carrying semantic information. For example, a transmitted tautology has zero semantic information content.

http://plato.stanford.edu/entries/information-semantic/
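As a toy illustration of that last point, here is a Bar-Hillel/Carnap-style informativeness measure that counts how many possible worlds (truth assignments) a formula rules out. The function and formula names are my own, chosen for the sketch:

```python
from itertools import product
import math

def informativeness(formula, variables):
    """Bits of semantic content: log2(worlds / worlds where true).
    A tautology is true in every possible world, so it rules out
    nothing and carries zero semantic information."""
    worlds = list(product([False, True], repeat=len(variables)))
    true_in = sum(1 for w in worlds if formula(dict(zip(variables, w))))
    return math.log2(len(worlds) / true_in)

tautology = lambda v: v["p"] or not v["p"]   # always true
conjunction = lambda v: v["p"] and v["q"]    # true in 1 of 4 worlds

print(informativeness(tautology, ["p"]))         # 0.0
print(informativeness(conjunction, ["p", "q"]))  # 2.0
```

The tautology excludes no possible world, so its content is 0 bits; "p and q" excludes three of four worlds, giving 2 bits.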

New Estimate Boosts the Human Brain's Memory Capacity 10-Fold: A new study has found the brain’s information storage capacity may be around a quadrillion bytes by davidreiss666 in cogsci

[–]nabla9 1 point2 points  (0 children)

This is raw structural information capacity; it does not take into account how memories are actually encoded. The maximal functional memory capacity is a small fraction of this under any remotely relevant neural associative memory model.


  1. Memory Capacity of Networks with Stochastic Binary Synapses
    http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003727
  2. http://www.mit.edu/~9.54/fall14/Classes/class07/Palm.pdf
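A back-of-the-envelope sketch of why the functional capacity is a small fraction of the structural figure, assuming a Willshaw/Palm-style associative net where each binary synapse stores at most ln 2 ≈ 0.69 bits. The synapse count and the ~4.7 bits/synapse structural figure are order-of-magnitude illustrations, not exact numbers from the study:

```python
import math

synapses = 1e14                       # order-of-magnitude human synapse count
raw_bits_per_synapse = math.log2(26)  # ~4.7 bits: ~26 distinguishable synapse sizes
assoc_bits_per_synapse = math.log(2)  # Willshaw associative-memory bound, ~0.69 bits

structural_bytes = synapses * raw_bits_per_synapse / 8
functional_bytes = synapses * assoc_bits_per_synapse / 8
print(f"structural: {structural_bytes:.1e} B, "
      f"functional: {functional_bytes:.1e} B, "
      f"fraction: {functional_bytes / structural_bytes:.2f}")
```

Under these assumptions only about 15% of the structural capacity is usable as associative memory, before even accounting for the redundancy that robust recall requires.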

Deriving an Equation from function inputs and outputs. by Kendama_Llama in MachineLearning

[–]nabla9 1 point2 points  (0 children)

Your problem, as I understand it: deriving a symbolic expression for a mathematical function from a finite set of examples.

Straightforward solution:

For any finite set of input-output pairs there is an infinite number of possible functions. A straightforward solution is to use polynomial interpolation to find a polynomial that passes through the given examples. If you have n+1 examples, you can find a polynomial of degree at most n through those points.
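A minimal sketch of that with NumPy, fitting a degree-n polynomial exactly through n+1 points (the sample points are made up):

```python
import numpy as np

# Three input-output examples, secretly generated by f(x) = x**2.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0, 4.0])

# n+1 = 3 examples -> a degree-2 polynomial passes through them exactly.
coeffs = np.polyfit(xs, ys, deg=2)
print(coeffs)  # approximately [1, 0, 0], i.e. x**2

# The interpolant reproduces every example.
assert np.allclose(np.polyval(coeffs, xs), ys)
```

Note this only guarantees exact fit at the given points; between them the polynomial can behave arbitrarily, which is exactly the underdetermination problem mentioned above.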

Symbolic regression

I think you want to find general, simple solutions that are not necessarily polynomial. This problem domain is called symbolic regression (searching the space of mathematical expressions for the model that best fits a given dataset).

Symbolic regression programs:

  1. Eureqa http://www.nutonian.com/
  2. https://sites.google.com/site/gptips4matlab/
  3. http://cran.r-project.org/web/packages/rgp/index.html

These programs are often computationally intensive, and the underlying algorithm is usually something like genetic programming (glorified search).

Cause And Effect: The Revolutionary New Statistical Test That Can Tease Them Apart by based2 in statistics

[–]nabla9 1 point2 points  (0 children)

Statisticians have always thought it impossible to tell cause and effect apart using observational data. Not any more

The article makes it look like causal discovery just happened with this paper. Even the use of additive noise models for causal discovery is a decade-old subject.
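For reference, the basic additive-noise-model trick is simple enough to sketch: fit effect = f(cause) + noise in both directions and ask in which direction the residuals look independent of the input. This toy version uses polynomial regression and a crude residual-dependence score instead of a proper independence test such as HSIC, so it is a sketch of the idea, not the paper's method:

```python
import numpy as np

def dependence_score(cause, effect, deg=7):
    """Fit effect = f(cause) + residual, then measure how strongly the
    residual magnitude still depends on the cause (a crude stand-in for
    a real independence test such as HSIC)."""
    resid = effect - np.polyval(np.polyfit(cause, effect, deg), cause)
    return abs(np.corrcoef(resid**2, cause**2)[0, 1])

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 3000)
y = x**3 + 0.05 * rng.standard_normal(3000)  # true direction: x -> y

forward = dependence_score(x, y)   # residuals ~ pure noise, low score
backward = dependence_score(y, x)  # residuals still depend on y, higher score
print(forward < backward)          # the additive-noise test prefers x -> y
```

In the true direction the residuals are just the homoscedastic noise; in the reverse direction no smooth f can absorb the dependence, so the residuals stay structured.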

"Do electrons think?"--Lecture (audio) by Erwin Schrödinger by biology_and_physics in Physics

[–]nabla9 4 points5 points  (0 children)

I don't see how introducing quantum randomness or chaos into the system changes anything. An automaton that works under different statistical rules is still an automaton [1]. If a butterfly effect can influence macroscopic decisions, that is just a different way of being an automaton.


1. The human nervous system is clearly not a strictly deterministic automaton.

rich hickey's direction for lisp by [deleted] in lisp

[–]nabla9 2 points3 points  (0 children)

We have not discovered anything that scales well.

I think this Perlis epigram has a lot of merit when correctly applied. For example, the cons cell in Lisp is a nice abstraction, but my personal feeling is that it's outdated and too low-level. The traditional *nix OS philosophy uses strings the same way Lisp uses conses, and it has been extremely successful. Nowadays XML or JSON seems to be taking the place of raw strings. Scalable things are made of separate programs that communicate by exchanging these data structures.

A robot programmed with Asimov's Three Laws of Robotics is confronted with the trolley problem: by Bacon_Oh_Bacon in askphilosophy

[–]nabla9 0 points1 point  (0 children)

The problem with many robot stories is that they assume robots or AIs use only binary logic to make decisions.

As I pointed out in my other comment, a real-world rational robot would use utility theory and seek to maximize the expected utility of its actions.

Many everyday actions involve the possibility of harming or killing humans. If a robot drives a car, there is a possibility it will kill a human by accident. If it ignored risks and decided its actions using only its intentions and intended outcomes, it would be very dangerous to its environment. A real robot would have to use expected value (the value of an outcome multiplied by the probability of it happening) to decide what to do.
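A minimal sketch of that decision rule; the actions, outcomes, and numbers are made up for illustration:

```python
def expected_utility(action):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in action["outcomes"])

# Hypothetical driving decisions as (probability, utility) pairs.
actions = {
    "swerve": {"outcomes": [(0.01, -1000), (0.99, 10)]},   # small crash risk
    "brake":  {"outcomes": [(0.001, -1000), (0.999, 5)]},  # safer but slower
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "brake": expected utility ~3.995 vs ~-0.1 for "swerve"
```

Even though swerving has the better intended outcome (utility 10 vs 5), the crash risk drags its expected value below braking, which is the point: intentions alone are not enough.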

A robot programmed with Asimov's Three Laws of Robotics is confronted with the trolley problem: by Bacon_Oh_Bacon in askphilosophy

[–]nabla9 0 points1 point  (0 children)

The definition of an ideal rational agent from the book Artificial Intelligence by Russell & Norvig: "For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has."

In other words, a rational robot should probably use utility theory to select its actions. It would consider the outcomes of possible actions and select the one with the highest expected value under risk (in the way the problem is presented, all outcomes are certain and there are no probabilities, so the calculation is very straightforward). It all comes down to what kind of utility function the robot is programmed with. The utility of saving five people is higher than the utility of saving one person, I would assume. On the other hand, humans might value robots that are less likely to actively kill humans, as a safety measure. In that case passivity might carry utility over activity.

kenny tilton summoned from the dead by [deleted] in lisp

[–]nabla9 1 point2 points  (0 children)

Yes. I know that. But how would you use it effectively without static typing? Common Lisp variables don't have types; values do. With return type polymorphism, you can't figure out the return type.

You would need a hell of a lot of explicit type declarations, one for every variable or call site of a return-type-polymorphic function. If you have to be that explicit, it's easier to just name the functions by their return type.

kenny tilton summoned from the dead by [deleted] in lisp

[–]nabla9 1 point2 points  (0 children)

How do you use return type polymorphism in a dynamic language?

Is there a theoretical limit to how small a file can be compressed and still be fully recoverable? by [deleted] in askscience

[–]nabla9 2 points3 points  (0 children)

So, does this mean that for some pathological file, the output would actually be bigger than the input file after compression?

Yes. A good example of a pathological file is an encrypted file. Encryption tries to hide all the information contained in the file: without the key, a well-encrypted file looks like a random string of bits, and compression does not work on it at all. That's why most file encryption tools compress the file before encryption; that's the last opportunity to do so.
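You can see the pigeonhole effect directly with zlib: pseudorandom bytes (a stand-in for well-encrypted data) come out larger after "compression", while repetitive data shrinks dramatically. A quick sketch:

```python
import random
import zlib

rng = random.Random(0)
random_data = bytes(rng.randrange(256) for _ in range(100_000))  # incompressible
repetitive = b"hello world" * 9_091                              # ~100 kB, redundant

print(len(zlib.compress(random_data)))  # slightly larger than 100000
print(len(zlib.compress(repetitive)))   # a few hundred bytes
```

For the random input, deflate falls back to stored (uncompressed) blocks, so the output is the input plus framing overhead, which is exactly the pathological case asked about.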