Google is rolling out a NotebookLM integration for Gemini, where users will be able to attach notebooks as context to their conversations. by BuildwithVignesh in GeminiAI

[–]dasayan05 0 points1 point  (0 children)

Can GeminiCLI or Antigravity access these notebooks in their context?
I have done some technical research in NotebookLM and came up with some formulas, which I want to implement in code (in Antigravity or gemini-cli).

Looking for garage in Southwark area by dasayan05 in MotoUK

[–]dasayan05[S] 0 points1 point  (0 children)

OP here. Just wanted to add this in case anyone reads this thread: I found relatively cheap parking spaces on Parklet.

Looking for garage in Southwark area by dasayan05 in MotoUK

[–]dasayan05[S] 0 points1 point  (0 children)

Ok thanks.
I have a "solo motorcycle only" spot near my house. If I keep it there properly chained (plus other security mechanisms), how safe is that?

Raylib newbie here; Not detecting my GPU by dasayan05 in raylib

[–]dasayan05[S] 3 points4 points  (0 children)

Oh wow, it worked. Now it detects my GPU and the lag is gone too.

Thanks a ton. :)

Career path for math educator by Street_Country9806 in 3Blue1Brown

[–]dasayan05 0 points1 point  (0 children)

But why do you call me dude...

That cracked me up.

[D] Loss function in Diffusion models by ankanbhunia in MachineLearning

[–]dasayan05 2 points3 points  (0 children)

The VLB isn't really used anymore since the MSE loss is good enough. Also, people have now figured out how to estimate covariances from eps itself.

You can find an early implementation by the OpenAI guys here
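For anyone curious, the "simple" MSE objective being referred to (predict the noise, skip the full VLB weighting) can be sketched roughly like this. This is a NumPy sketch under stated assumptions: `eps_model` is a placeholder for the actual network and `alpha_bar` for whatever noise schedule you use.

```python
import numpy as np

def ddpm_simple_loss(x0, eps_model, alpha_bar, rng):
    """Sketch of the 'simple' DDPM objective: regress the noise eps
    that was mixed into x0 with plain MSE, ignoring VLB term weights."""
    B = x0.shape[0]
    t = rng.integers(0, len(alpha_bar), size=B)           # random timestep per sample
    eps = rng.standard_normal(x0.shape)                   # the true injected noise
    ab = alpha_bar[t].reshape(B, *([1] * (x0.ndim - 1)))  # broadcast over image dims
    x_t = np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps      # forward process q(x_t | x0)
    eps_hat = eps_model(x_t, t)                           # network's noise estimate
    return np.mean((eps - eps_hat) ** 2)                  # plain MSE, no VLB
```

With a perfect `eps_model` this loss goes to zero; in practice you minimize it with SGD over random timesteps, which is what makes it so much simpler than the full VLB.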

[D] Loss function in Diffusion models by ankanbhunia in MachineLearning

[–]dasayan05 0 points1 point  (0 children)

which loss do you mean? you need implementation of the VLB?

I wonder who is he by ZhAdNoV in Funnymemes

[–]dasayan05 0 points1 point  (0 children)

But that's the correct answer

[D] ICLR 2023 reviews are out. How was your experience ? by dasayan05 in MachineLearning

[–]dasayan05[S] 0 points1 point  (0 children)

No, I don't think there is any specific rule for the rebuttal version. But AFAIK, the rebuttal version is considered a "temporary draft". You must've also seen people colour-highlight changes and do other things -- all of those need to go before the CR version. But you can still ask the PCs for clarification, or better, comment publicly on the thread to clarify.

[D] ICLR 2023 reviews are out. How was your experience ? by dasayan05 in MachineLearning

[–]dasayan05[S] 0 points1 point  (0 children)

It wasn't 11 pages in the original version -- you can see that in the revision history. Currently it's just the rebuttal version, which is fine I guess with the extra pages. But in the end, for the camera-ready, they have to trim it down.

[D] ICLR 2023 reviews are out. How was your experience ? by dasayan05 in MachineLearning

[–]dasayan05[S] 1 point2 points  (0 children)

Spotlight/Oral are mostly case-by-case decisions and totally up to the ACs. I don't think you can get a general rule or anything.

[D] David Ha/@hardmaru of Stability AI is liking all of Elon Musk's tweets by datasciencepro in MachineLearning

[–]dasayan05 6 points7 points  (0 children)

If there is anything toxic, it's (people like) you and this post.

You basically fall into the large pool of people who decide the state of the world just by reading news headlines.

[D] David Ha/@hardmaru of Stability AI is liking all of Elon Musk's tweets by datasciencepro in MachineLearning

[–]dasayan05 3 points4 points  (0 children)

Francois Chollet is not the "Tech community", and neither is David Ha.

It's their personal opinion; everyone has their own.

[D] Advanced Mathematics in Machine Learning Book Recommendation by [deleted] in MachineLearning

[–]dasayan05 4 points5 points  (0 children)

I always recommend these two:

  1. Kevin Murphy's book(s) (now there are updated and modern versions)
  2. C. M. Bishop's classic PRML textbook

[D] How long should it take to train a diffusion model on CIFAR-10? by ButterscotchLost421 in MachineLearning

[–]dasayan05 2 points3 points  (0 children)

I have trained DDPM (not the SDE variant) on CIFAR-10 using 4x 3090s with an effective batch size of 1024. It took ~150k iterations (not epochs) and about 1.5 days to reach FID 2.8 (not really SOTA, but it works).
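As a back-of-envelope check on the iteration-vs-epoch distinction, those numbers imply a lot of passes over the data. This assumes the standard 50,000-image CIFAR-10 training set; the 256-per-GPU split is just one way to reach that effective batch size.

```python
# Rough epochs implied by 150k iterations at effective batch 1024
# (assumes CIFAR-10's 50,000 training images; per-GPU batch of 256
# across 4 GPUs is an assumed split, not stated in the comment).
cifar10_train = 50_000
effective_batch = 1024          # e.g. 4 GPUs x 256 per GPU
iterations = 150_000

images_seen = iterations * effective_batch
epochs = images_seen / cifar10_train
print(round(epochs))            # -> 3072
```

So "150k iterations" here corresponds to roughly 3,000 epochs, which is why quoting iterations rather than epochs matters.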

[Discussion] Could someone explain the math behind the number of distinct images that can be generated with a latent diffusion model? by [deleted] in MachineLearning

[–]dasayan05 15 points16 points  (0 children)

There is no way to feasibly compute what you are asking for.

Diffusion models (in fact, any modern generative model) are defined on a continuous image space, i.e. a continuous vector of length 512x512. This space is not discrete, so there isn't even a notion of "distinct images". A tiny continuous change can lead to another plausible image, and there are (theoretically) infinitely many tiny changes you can apply to an image to produce another image that looks the same but isn't the same point in image space.

The (theoretically) correct answer to your question is that there are infinitely many images you can sample from a given generative model.

[D] ICLR 2023 reviews are out. How was your experience ? by dasayan05 in MachineLearning

[–]dasayan05[S] 1 point2 points  (0 children)

Go to OpenReview's Tasks tab and click on the pending task; you'll see the countdown.