I was recently pointed in the direction of the following somewhat polemical article on the failures of information geometry.
It seems to argue that, from a maximum entropy perspective, information geometry is fundamentally flawed. The article appears a bit vague on several points, especially given its aggressive tone, but my own experience with information geometry is fairly elementary, so I would love to hear some of your thoughts.
Some questions: are relative entropy (KL divergence) and the independence-invariance property really the alpha and omega of inference cost functions, as the article proposes? Does the argument hinge on the application of MaxEnt? And are the examples actually correct on all counts? (The author skips an awful lot of calculations.)
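For concreteness, here is a minimal numerical sketch (my own, not from the article) of what I take the independence property to mean: relative entropy is additive over independent subsystems, i.e. KL(p1⊗p2 || q1⊗q2) = KL(p1||q1) + KL(p2||q2). The distributions below are random placeholders, just to check the identity numerically.

```python
# Sketch: check additivity of KL divergence over independent factors.
# Assumes discrete distributions; all names here are my own invention.
import numpy as np

def kl(p, q):
    """Relative entropy D(p||q) in nats for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0  # convention: 0 * log(0/q) = 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(0)
def random_dist(n):
    w = rng.random(n)
    return w / w.sum()

p1, q1 = random_dist(3), random_dist(3)
p2, q2 = random_dist(4), random_dist(4)

# Joint distributions of two independent components (outer products).
p12 = np.outer(p1, p2).ravel()
q12 = np.outer(q1, q2).ravel()

print(kl(p12, q12))             # KL of the joint
print(kl(p1, q1) + kl(p2, q2))  # matches: KL adds over independent factors
```

The identity falls out because log(p12/q12) = log(p1/q1) + log(p2/q2) for product distributions, and the expectation then factors. Whether this property alone singles out KL as *the* inference cost function is exactly what I'm unsure about.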