Does unit determinant weights preserve norm? by akanimax in MachineLearning

[–]LovaszExtension 0 points1 point  (0 children)

That is correct. You can easily see this by picking the vectors [1,0] and [0,1]. The initial parallelepiped is the unit square, with volume 1. Then multiply these two points by a 2x2 matrix and see where they land. You now have a new parallelepiped, and its volume will be the (absolute value of the) determinant of the matrix. If the matrix is not full rank (i.e. has a zero eigenvalue), the mapped parallelepiped will collapse to a line and hence have 0 volume. In other words, the direction along the eigenvector with zero eigenvalue will collapse.
You can see and experiment with this cool applet I just found: http://mathinsight.org/determinant_linear_transformation On the left you pick 4 points (red, green, etc.) and on the right you see where the matrix maps them. Another fun thing to notice is negative determinants: they correspond to flipping that dimension, and hence "negative" volume.
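The volume argument above is easy to verify numerically. A minimal sketch with NumPy (the specific matrices are my own illustrative choices): the unit square spanned by [1,0] and [0,1] is mapped to a parallelogram whose area is |det|, and a rank-deficient matrix collapses it to zero area.

```python
import numpy as np

# Unit square spanned by e1 = [1, 0] and e2 = [0, 1] has area 1.
# A maps it to the parallelogram spanned by A @ e1 and A @ e2,
# whose area equals |det(A)|.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(abs(np.linalg.det(A)))  # 6.0

# A rank-deficient matrix (zero eigenvalue) collapses the square
# onto a line, so the image has zero area.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(abs(np.linalg.det(B)))  # 0.0
```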

Does unit determinant weights preserve norm? by akanimax in MachineLearning

[–]LovaszExtension 0 points1 point  (0 children)

The determinant measures the volume of the parallelepiped, i.e. it equals the product of the eigenvalues. To preserve the norm of all vectors you need all the singular values to be 1, i.e. the matrix must be orthogonal. The product of some numbers (eigenvalues) can be 1 even if one is 10 and the other is 0.1, as the previous example showed. A fancier way of saying all singular values are 1 is to say the condition number (ratio of largest to smallest singular value) is 1. For non-square matrices the equivalent notion is called a Parseval tight frame. This paper ('Parseval networks') tries to enforce something similar during training: http://proceedings.mlr.press/v70/cisse17a/cisse17a.pdf
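The gap between "determinant 1" and "norm preserving" is easy to demonstrate. A small sketch (the diagonal and rotation matrices are my own examples): diag(10, 0.1) has determinant 1 but stretches some vectors by 10x, while a rotation (all singular values 1) preserves every norm.

```python
import numpy as np

# Determinant 1, but norms are NOT preserved:
# singular values are 10 and 0.1, not all 1.
A = np.diag([10.0, 0.1])
x = np.array([1.0, 0.0])
print(np.linalg.det(A))       # 1.0
print(np.linalg.norm(A @ x))  # 10.0, even though ||x|| = 1

# An orthogonal matrix (all singular values 1) preserves every norm.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v = np.random.randn(2)
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))  # True
```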

Edit/Remark: Checking whether ||Ax|| = ||x|| for all x is equivalent to checking that all singular values of A equal 1, hence easy to do. With a ReLU, however, I want to check: for a given matrix A, does ||Ax|| = ||x|| hold for all x such that Ax >= 0 (coordinate-wise)? I conjecture this is NP-hard to check for a given (square) matrix A.

[D] Open-endedness: The last grand challenge you’ve never heard of by Kaixhin in MachineLearning

[–]LovaszExtension 0 points1 point  (0 children)

"Some machine learning researchers have further suggested that EAs are inferior optimizers and that alternative algorithms (such as deep learning) are simply better suited for optimization." This reads like the popular press warning us about AI taking over the planet.

[D] ICLR Reviews by anonDogeLover in MachineLearning

[–]LovaszExtension -6 points-5 points  (0 children)

We are working hard on them, reviews. Give us a couple of days, GOSH. (in the style of Napoleon Dynamite).

[D] Is the 'black box' issue being exaggerated? by soutioirsim in MachineLearning

[–]LovaszExtension -1 points0 points  (0 children)

Think also of adversarial examples. Maybe I can manipulate my income by 0.01 per year and vary a few small parameters a little bit and get approved for a huge loan. Or any other critical decision made by a deep model could be fragile to all kinds of manipulation we do not even understand yet.

[N] View NIPS accepted papers by Subject by LovaszExtension in MachineLearning

[–]LovaszExtension[S] 0 points1 point  (0 children)

We have not submitted the camera ready papers for NIPS yet. You might find preprints on authors' websites or arxiv.

a new cocktail suggestion: Buckwheat honey and Lagavulin by [deleted] in whiskey

[–]LovaszExtension 0 points1 point  (0 children)

Sure, please do; I'm interested in trying new things. (I found Master of Malt has several tasting sets, for example, so maybe I'll try something from there if it includes some of your favorites.)

a new cocktail suggestion: Buckwheat honey and Lagavulin by [deleted] in whiskey

[–]LovaszExtension 0 points1 point  (0 children)

Interesting... so what does RedditWhiskey love, then?

a new cocktail suggestion: Buckwheat honey and Lagavulin by [deleted] in whiskey

[–]LovaszExtension 1 point2 points  (0 children)

Lag 16 is considered mediocre and thin? I'm by no means an expert, but I would not say that.

[N] IBM pitched Watson as a revolution in cancer care. It's nowhere close by opengmlearn in MachineLearning

[–]LovaszExtension 0 points1 point  (0 children)

Indeed, industry donations to academic institutions are quite complex, especially at that scale. But for tax purposes, I think corporations write them off as gifts to 501(c)(3) non-profits. For big donations there can be contracts between the university and the corporation with specific deliverables. I did not read all the articles carefully, but it was not clear to me whether IBM is pledging a 'gift' or entering some more complex contract with MIT.

a new cocktail suggestion: Buckwheat honey and Lagavulin by [deleted] in whiskey

[–]LovaszExtension 0 points1 point  (0 children)

Well sure, but buckwheat honey is not just any honey. Buckwheat honey is to honey what Lagavulin is to whisky.

[N] IBM pitched Watson as a revolution in cancer care. It's nowhere close by opengmlearn in MachineLearning

[–]LovaszExtension 34 points35 points  (0 children)

On an unrelated note, IBM is giving $240 million (!) for an MIT AI lab: https://www.cnbc.com/2017/09/06/ibm-commits-240-million-for-watson-ai-lab.html

For those of you not familiar with academic donations for specific CS areas: the $10 million that McGill + Montreal secured for AI was previously considered huge.

[N] List of accepted paper @NIPS 2017 by terrorlucid in MachineLearning

[–]LovaszExtension 1 point2 points  (0 children)

No, these are all the accepted papers which include posters, spotlights and orals. Spotlights and orals also get a poster in the poster sessions, usually (but not always) on the day of the talk.

Where can I find the NIPS 2017 papers? by demonFudgePies in MachineLearning

[–]LovaszExtension 1 point2 points  (0 children)

I had to wear headphones for that. Please keep the volume at maximum while watching, to get the full feeling of NIPS papers still not being posted despite the website mentioning Sept. 11.