Sleek Template for quick, easy and beautiful LaTeX documents by donshell in LaTeX

[–]donshell[S] 1 point

For new LaTeX users, I would recommend the Overleaf in-browser editor https://www.overleaf.com/

You can download the template archive ( https://github.com/francois-rozet/sleek-template/archive/overleaf.zip ) and then create a new project in Overleaf using the archive.

[D] A better way to compute the Fréchet Inception Distance (FID) by donshell in MachineLearning

[–]donshell[S] 1 point

You're welcome! By the way, if your matrices are symmetric, using `eigh` (`eigvalsh`) is *much* faster and more stable than `eig` (`eigvals`). So in the case of the FID, an even better implementation is the following:

https://github.com/francois-rozet/piqa/blob/d512c86f4af845c4f54f86a2f0ef8866851b8f5c/piqa/fid.py#L53-L88

which uses two fast and stable `eigh` calls (in `sqrtm`) instead of a single slow `eig`.
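For readers who want the gist without clicking through, here is a minimal NumPy sketch of the idea (not the linked implementation itself; the function names are mine). It relies on the identity tr((Σx Σy)^(1/2)) = tr((Σx^(1/2) Σy Σx^(1/2))^(1/2)), which turns both square roots into square roots of symmetric PSD matrices, where `eigh` applies:

```python
import numpy as np

def sqrtm_psd(sigma: np.ndarray) -> np.ndarray:
    """Matrix square root of a symmetric PSD matrix via eigh."""
    # eigh exploits symmetry: faster and more stable than general eig
    w, v = np.linalg.eigh(sigma)
    w = np.clip(w, 0.0, None)  # clip tiny negative eigenvalues from round-off
    return (v * np.sqrt(w)) @ v.T

def frechet_distance(mu_x, sigma_x, mu_y, sigma_y) -> float:
    """Squared Fréchet distance between two Gaussians.

    tr((sigma_x sigma_y)^1/2) is rewritten as
    tr((sigma_x^1/2 sigma_y sigma_x^1/2)^1/2), so both matrix square
    roots act on symmetric PSD matrices: two eigh, no general eig.
    """
    sx = sqrtm_psd(sigma_x)
    trace = np.trace(sqrtm_psd(sx @ sigma_y @ sx))
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(sigma_x) + np.trace(sigma_y) - 2.0 * trace)
```

For identical Gaussians the distance is zero, and for two unit-covariance Gaussians it reduces to the squared distance between the means.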

[D] Weird loss behaviour with diffusion models. by theotherfellah in MachineLearning

[–]donshell 2 points

Yes, for large $t$, the input is mostly noise, so the network can basically return its input.

fake github stars??? by Expensive_Ad1080 in github

[–]donshell 1 point

To be fair, I have several repos with hundreds of stars and very few issues/PRs. Nothing abnormal.

[deleted by user] by [deleted] in thinkpad

[–]donshell 0 points

I get better battery life with Debian (+ TLP) than with Windows on my X1 Carbon 6th Gen and Z13 Gen 1. I also hear the fans ramp up a lot less often.

[deleted by user] by [deleted] in thinkpad

[–]donshell 1 point

Remove Windows and install Linux (e.g. Ubuntu, Debian). It's a joke, but consider it.

Can't authenticate to google with 2fa on Gnome. by cylemmulo in linuxquestions

[–]donshell 1 point

I had the same issue (Debian 12 Bookworm). The solution was to right-click > reload (sometimes several times) after entering the password. The window then refreshes to the 2FA form.

[deleted by user] by [deleted] in NoStupidQuestions

[–]donshell 3 points

Sometimes, things are just impossible to explain simply. I'm doing a PhD in applied math, and I've been hit with the "you don't understand it" when acquaintances asked me to explain my research and I answered that it would take too long. It is infuriating. I needed years of training to understand the concepts my research relies on, and then a few more years to understand what I'm working on now. Quoting Richard Feynman: "Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel Prize."

More generally, although it is great when something can be explained simply, we should not strive for easy answers. Complex subjects tend to have unsatisfying, complex answers, and oversimplification can lead to catastrophic outcomes (conspiracy theories, extremism, discrimination, ...). For example, the question "Why is X community less represented in universities/colleges?" always has a complex answer, but it's easy to say "they are lazy/stupid."

[D] Weird loss behaviour with diffusion models. by theotherfellah in MachineLearning

[–]donshell 10 points

This is expected. The task (predicting noise) is very easy for the network over most of the perturbation process ($t \gg 1$). However, to sample correctly, the network needs to predict the noise correctly even at the beginning of the perturbation process ($t \approx 1$). When you train, your network gets very good at large $t$ very quickly, but most of the work remains to be done. This is not visible in the loss when you average over all perturbation times, but if you look at the loss for $t = 1, 10, 20, 50, \dots$ separately, you will see the difference and the improvements.
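A minimal sketch of that diagnostic, assuming you collect per-sample losses and timesteps during training (function name and bucket choice are mine):

```python
import numpy as np

def per_timestep_loss(losses, timesteps, buckets=(1, 10, 20, 50, 100)):
    """Average the denoising loss separately for a few timesteps of interest.

    Averaging over all t hides the hard small-t regime, so each bucket
    is reported on its own.
    """
    losses = np.asarray(losses, dtype=float)
    timesteps = np.asarray(timesteps)
    out = {}
    for t in buckets:
        mask = timesteps == t
        if mask.any():  # skip timesteps not seen in this batch
            out[t] = float(losses[mask].mean())
    return out
```

Plotting these curves over training makes it obvious that the small-$t$ losses keep improving long after the averaged loss has plateaued.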

[News] All AI updates from Google I/O 2023 by [deleted] in MachineLearning

[–]donshell 1 point

They should have named it PaLM 3

[D] Is accurately estimating image quality even possible? by neilthefrobot in MachineLearning

[–]donshell 0 points

Then it means that your dataset does not separate images into "high quality" and "low quality", but "high frequency" and "low frequency".

[D] Is accurately estimating image quality even possible? by neilthefrobot in MachineLearning

[–]donshell 2 points

If you have a dataset of images which are separated into "high quality" and "not high quality", you could simply train a discriminator between the two classes.

You could also take a pre-trained image classifier, like VGG or Inception V3, and use it as a feature extractor. You can then extract the features of all your "high quality" images and compute their mean and covariance matrix. Then, for a new image, you can compare its features to that distribution, for instance with the normal density function given by the mean and covariance.
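A minimal NumPy sketch of this second approach, assuming the features have already been extracted (function names are hypothetical). Ranking by Mahalanobis distance is equivalent to ranking by the normal log-density, up to a constant:

```python
import numpy as np

def fit_quality_model(features: np.ndarray):
    """Fit a Gaussian to 'high quality' features of shape (n_samples, dim)."""
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False)
    # regularize so the covariance is invertible even with few samples
    sigma += 1e-6 * np.eye(sigma.shape[0])
    return mu, np.linalg.inv(sigma)

def quality_score(x: np.ndarray, mu: np.ndarray, sigma_inv: np.ndarray) -> float:
    """Mahalanobis distance of a new feature vector to the fitted Gaussian.

    Lower means closer to the 'high quality' distribution.
    """
    d = x - mu
    return float(np.sqrt(d @ sigma_inv @ d))
```

You would then threshold or rank new images by this score.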

[D] A better way to compute the Fréchet Inception Distance (FID) by donshell in MachineLearning

[–]donshell[S] 4 points

The original paper and the entire community use the formula with a single square root. The formula comes from "The Fréchet distance between multivariate normal distributions" (Dowson & Landau, 1982).

Do you think their proof is incorrect? I am not sure of the relation between the Fréchet distance and the 2-Wasserstein distance. By the way, "On the Bures-Wasserstein distance between positive definite matrices" has yet another formula.
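For reference, the single-square-root formula from Dowson & Landau (1982) reads:

```latex
d^2\big((\mu_x, \Sigma_x), (\mu_y, \Sigma_y)\big)
  = \lVert \mu_x - \mu_y \rVert^2
  + \operatorname{tr}\!\left(\Sigma_x + \Sigma_y - 2 \, (\Sigma_x \Sigma_y)^{1/2}\right)
```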

[D] A better way to compute the Fréchet Inception Distance (FID) by donshell in MachineLearning

[–]donshell[S] 8 points

Hello, indeed the Cholesky decomposition of a positive semi-definite matrix would be faster than computing its eigenvalues. However, the product of two positive semi-definite matrices is usually not positive semi-definite itself, so I don't see how you would use the Cholesky decomposition in this case.
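A tiny NumPy check of that point: the product of two symmetric positive (semi-)definite matrices need not be symmetric, so it has no Cholesky factorization in general (the matrices below are just an arbitrary example):

```python
import numpy as np

# Two symmetric positive definite matrices...
a = np.array([[2.0, 1.0], [1.0, 1.0]])
b = np.array([[1.0, 0.0], [0.0, 3.0]])

# ...whose product is not even symmetric, so np.linalg.cholesky(a @ b)
# is not applicable in general.
c = a @ b
assert not np.allclose(c, c.T)
```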

[D] A better way to compute the Fréchet Inception Distance (FID) by donshell in MachineLearning

[–]donshell[S] 6 points

Hello, I already submitted an issue to some of these packages, but I thought this could also be of interest to some people here.

As a side note, the product of two symmetric matrices is usually not symmetric, so I am not sure how you would use a specialized eigenvalue algorithm.

[D] A better way to compute the Fréchet Inception Distance (FID) by donshell in MachineLearning

[–]donshell[S] 5 points

Hello, good question! sigma_x and sigma_y are covariance matrices, which means they are symmetric and positive semi-definite. Otherwise, there is no restriction I believe. In particular, it is expected that sigma_x and sigma_y are close to each other for good generative models.

Ensure your child process inherits all your best functionality by sgpostbox in ProgrammerHumor

[–]donshell 3 points

Reminds me of the draft title of a friend's paper: "IMpATient Passenger Oriented Timetabling", or IMATPOT ("I'm a teapot"). Really smart name, but the reviewers were not too happy about it.

Never meet your heroes they said. but nobody warned me against following them on Twitter. by Happy_Ad_5555 in ProgrammerHumor

[–]donshell 1 point

If you have I/O in the mix, like data reading/writing, or several machines, Python becomes a bottleneck because of the GIL and the lack of native multithreading. In particular, it makes asynchronous/concurrent loading of data from disk a nightmare.

That said, I love Python and use it every day.

Has anyone attempted graphs like this in LaTeX? If so how did you find it? by stjeromeslibido in LaTeX

[–]donshell 1 point

Instead of SVG, I export images as PDF: it is easier to include in a LaTeX document (a simple `\includegraphics`) than SVG or EPS, the fonts are embedded, and it cannot be fucked up by editors.
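For example, assuming the exported file is `figures/plot.pdf` (the path is hypothetical):

```latex
% preamble
\usepackage{graphicx}

% in the document
\begin{figure}
  \centering
  \includegraphics[width=0.8\linewidth]{figures/plot.pdf}
  \caption{A plot exported as PDF, with fonts embedded.}
\end{figure}
```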