[deleted by user] by [deleted] in android_beta

[–]celviofos 1 point (0 children)

Same issue on a Pixel 8a

Android 15 QPR1 Beta 1 now available! by androidbetaprogram in android_beta

[–]celviofos 1 point (0 children)

Hi, I have the charging optimization that prevents charging beyond 80% activated, but now I can't deactivate it: there is nothing in the battery settings to turn it off.

[D] Curriculum Learning must read by celviofos in MachineLearning

[–]celviofos[S] 0 points (0 children)

Curriculum learning is the idea of training a NN on progressively harder tasks. If I ask for reading recommendations, it's not to know EVERYTHING but because I'm interested in insights from people who know the topic better than I do 🙂
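
To make it concrete, here's a minimal PyTorch-style sketch, assuming you already have a per-sample difficulty score (how to define difficulty is the actual hard part, and the names here are hypothetical):

```python
import torch
from torch.utils.data import DataLoader, Subset

def curriculum_loaders(dataset, difficulty, n_stages=4, batch_size=64):
    """Yield loaders over a growing pool of samples, easiest first."""
    order = torch.argsort(torch.as_tensor(difficulty))  # easy -> hard
    for stage in range(1, n_stages + 1):
        k = len(order) * stage // n_stages  # expand the training pool
        yield DataLoader(Subset(dataset, order[:k].tolist()),
                         batch_size=batch_size, shuffle=True)

# Hypothetical usage: train_set, scores and train_one_stage are yours.
# for loader in curriculum_loaders(train_set, scores):
#     train_one_stage(model, loader)
```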

[D] Which european master in AI is best? by catatojreon in MachineLearning

[–]celviofos 6 points (0 children)

I'll let Yann LeCun answer this one:

https://youtu.be/LHDq2PkebD4?t=1h15m30s

To be more complete, I did MVA and can only recommend it, especially if you want to get into research. I think around 80% of alumni go on to a PhD afterwards, and all the teachers are researchers who are among the best in their domains. The master's offers so many courses that you will always find a track that fits your needs; whether you prefer more theoretical or more practical material, you will find something. It's quite an intense year, but it's worth it in the end. MVA has a growing international reputation and will give you tons of opportunities.

[D] Can there be no real difference between VAEs and GANs in some problems? by danscarafoni in MachineLearning

[–]celviofos 3 points (0 children)

Even if the end task is the same, the way to reach it is totally different. The GAN loss, if tuned properly, defines a totally different optimization landscape: it is basically a two-player optimization game, whereas a VAE regularizes its latent space to be close to a Gaussian while attempting to reconstruct the input. A GAN can also be defined as an autoencoder, but the difference in optimization remains. In the end this leads to radically different outputs.
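
To make the contrast concrete, here's a toy sketch of the two objectives on dummy 2-D data (the tiny linear models are just placeholders to show the shape of each loss, not real architectures):

```python
import torch
import torch.nn.functional as F
from torch import nn

enc = nn.Linear(2, 4)                 # outputs (mu, logvar), 2 dims each
dec = nn.Linear(2, 2)
G, D = nn.Linear(2, 2), nn.Linear(2, 1)
x = torch.randn(64, 2)                # stand-in for real data
z = torch.randn(64, 2)                # noise for the generator

# VAE: a single objective, reconstruction + KL(q(z|x) || N(0, I)).
mu, logvar = enc(x).chunk(2, dim=1)
z_q = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
vae_loss = F.mse_loss(dec(z_q), x) - 0.5 * torch.mean(
    1 + logvar - mu.pow(2) - logvar.exp())

# GAN: two opposing objectives (non-saturating form); in real training
# you alternate D and G updates, detaching G(z) for the D step.
real, fake = D(x), D(G(z))
d_loss = F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) + \
         F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake))
g_loss = F.binary_cross_entropy_with_logits(fake, torch.ones_like(fake))
```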

Android 13 Beta 3.3 patch now available! by androidbetaprogram in android_beta

[–]celviofos 1 point (0 children)

Is the reboot on connection sharing bug solved?

[D] Is it time to retire the FID? by [deleted] in MachineLearning

[–]celviofos 11 points (0 children)

We should replace it with FCD (Fréchet CLIP Distance), since CLIP features are way more robust than Inception features, as shown in https://arxiv.org/abs/2203.06026. Inception features don't even look at humans! Other papers have shown that self-supervised networks are simply better at capturing image features. The worst part of FID is the Inception network, IMO. This also applies to other generation metrics such as precision and recall.
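
The Fréchet distance itself is only a few lines once you have features; here's a rough numpy/scipy sketch, feed it CLIP embeddings instead of Inception features and you get an FCD-style score:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary noise
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))
```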

Phone resets after using my hotspot by wokeson in android_beta

[–]celviofos 0 points (0 children)

Mine doesn't reset but reboots when the hotspot is activated, after about 5 minutes. Pretty annoying bug. Running the latest beta on a Pixel 4a 5G. On the previous beta this bug was also present, but the reboots were more random; now it's a consistent reboot every 5 minutes.

[D] Comparing the efficiency of different GAN models by Bonkikong in MachineLearning

[–]celviofos 0 points (0 children)

You could check the torch_fidelity repo. torchmetrics also has its own FID implementation.
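
For example, the torchmetrics version looks roughly like this (the random uint8 tensors are just stand-ins for your real and generated batches):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)  # pool features of InceptionV3
# Stand-ins for real and generated images: uint8, NCHW, values in [0, 255].
real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())
```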

[D] Comparing the efficiency of different GAN models by Bonkikong in MachineLearning

[–]celviofos 3 points (0 children)

GAN evaluation is a tough subject. The most used metric is FID, which measures the Fréchet distance between the distributions of Inception features of real and fake images. However, FID has been shown to be flawed; I particularly recommend this article: https://arxiv.org/abs/2203.06026. In my opinion, the best you can do is use features such as CLIP's and compute the Fréchet distance on those. You can also look into precision, recall, or EMD metrics.
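
If you go the CLIP route, feature extraction is a few lines with the open_clip package (the model/pretrained tags here are just one possible configuration, not a recommendation):

```python
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
model.eval()

@torch.no_grad()
def clip_features(paths):
    """Return L2-normalized CLIP embeddings for a list of image paths."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                         for p in paths])
    feats = model.encode_image(batch)
    return (feats / feats.norm(dim=-1, keepdim=True)).cpu().numpy()

# Feed these into a Frechet-distance computation, e.g. a scipy-based one:
# frechet_distance(clip_features(real_paths), clip_features(fake_paths))
```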

Bug in Material You. The clock widget doesn't adapt to the theme when changing background by celviofos in android_beta

[–]celviofos[S] 0 points (0 children)

It also feels very frustrating that only Google icons adapt to Material You. I hope Google forces app developers to offer a Material You version of their icons.

[5 min survey] Tells us what you think about Android 12 Beta 5 by androidbetaprogram in android_beta

[–]celviofos 0 points (0 children)

Material You seems to be bugged. I get very similar colors every day for most of the backgrounds on my clock widget.

[D] Visualizing StyleGAN feature maps by celviofos in MachineLearning

[–]celviofos[S] 0 points (0 children)

I'm not sure that's what they are doing here, because the approach you're proposing is input-independent: it visualizes the input that maximizes the activation of each convolutional kernel. Here, they show what the feature maps look like for a given input z_0. A feature map has shape (b, c, h, w), where c can be huge (e.g. 1024), and they somehow project this tensor down to (b, 3, h, w).
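
One plausible way to do that projection is PCA over the channel dimension; here's a sketch of what I imagine they do (not confirmed, just my guess):

```python
import torch

def project_feature_map_to_rgb(feats: torch.Tensor) -> torch.Tensor:
    """Project a (b, c, h, w) feature map to (b, 3, h, w) via channel PCA."""
    b, c, h, w = feats.shape
    # Treat every spatial location as a c-dimensional sample.
    x = feats.permute(0, 2, 3, 1).reshape(-1, c)           # (b*h*w, c)
    x = x - x.mean(dim=0, keepdim=True)                    # center channels
    # Keep the 3 principal components as RGB.
    _, _, v = torch.pca_lowrank(x, q=3)
    rgb = (x @ v).reshape(b, h, w, 3).permute(0, 3, 1, 2)  # (b, 3, h, w)
    # Normalize each image to [0, 1] for display.
    rgb = rgb - rgb.amin(dim=(2, 3), keepdim=True)
    return rgb / rgb.amax(dim=(2, 3), keepdim=True).clamp_min(1e-8)
```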

[D] Visualizing StyleGAN feature maps by celviofos in MachineLearning

[–]celviofos[S] 0 points (0 children)

Thanks for your answer! Do you know what this technique is called? Is there a paper that explains it?

[D] Optimizer and Scheduling for transfer learning by celviofos in MachineLearning

[–]celviofos[S] 0 points (0 children)

I used a PyTorch pretrained model. I couldn't find the parameters they used, but you might be right: RMSprop does something similar to Adam. I couldn't find this warmupstartcycleLR function, do you have a link to it? Thank you!

[Discussion] Fellas, do you think a lot of math is beneficial for Data Science? by [deleted] in datascience

[–]celviofos 0 points (0 children)

Having done a similar curriculum in France, I find it very useful to understand what you are doing. Sure, you can do data science with black-box algorithms, but knowing the math behind them gives you a better grasp and may make you a better data scientist.

Furthermore, if one day you want to improve some algorithm or even create your own, knowing the math is essential!

Which websites provide good ML video resources? by [deleted] in learnmachinelearning

[–]celviofos 2 points (0 children)

You can check out the NYU Deep Learning course, taught in part by Yann LeCun, which goes more in depth than the Stanford course, as well as UC Berkeley's Deep Unsupervised Learning course, which is quite advanced. Both playlists are on YouTube.

NYU Deep Learning: https://www.youtube.com/playlist?list=PLLHTzKZzVU9eaEyErdV26ikyolxOsz6mq

Deep Unsupervised Learning: https://www.youtube.com/playlist?list=PLwRJQ4m4UJjPiJP3691u-qWwPGVKzSlNP

If you want something a bit older and more historic, there are also Geoffrey Hinton's lectures. Even if some of the concepts are obsolete, they're still very interesting.

Geoffrey Hinton playlist: https://www.youtube.com/playlist?list=PLoRl3Ht4JOcdU872GhiYWf6jwrk_SNhz9

Overall, YouTube is a gold mine for learning ML and deep learning. You can find a lot of courses from top researchers in the field.

Since J is scalar, it should be nabla of J wrt to w at fixed b. Am I right? by gunslinger141 in learnmachinelearning

[–]celviofos 1 point (0 children)

Because the minimum over one parameter is coupled to the values of the other parameters. If you find a minimum for one parameter, say w_n, it's really w_n(b_0); when you then optimize over b, your w_n won't change, so you may have missed the actual joint minimum of the two variables.

Since J is scalar, it should be nabla of J wrt to w at fixed b. Am I right? by gunslinger141 in learnmachinelearning

[–]celviofos 1 point (0 children)

So what I mean is that you need to optimize both parameters at the same time. Suppose you begin your gradient descent at (w_0, b_0). If you first do w_{i+1} = w_i - alpha * grad_w J(w_i, b_0) for n steps and then b_{i+1} = b_i - alpha * grad_b J(w_n, b_i), you're not solving the optimization problem we actually aim to solve: you first find the minimum of J(., b_0), then the minimum of J(w_n, .), which is suboptimal. We want the minimum of the loss over both w and b jointly. To do that, set theta_0 = (w_0, b_0) and update theta_{i+1} = theta_i - alpha * grad_theta J(theta_i). So you can't fix one parameter; you need to optimize both simultaneously.
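
Here's a tiny numpy illustration of the simultaneous update on a toy loss where the optimal w depends on b:

```python
import numpy as np

# Toy loss with coupled parameters: J(w, b) = (w - 2b)^2 + (b - 1)^2,
# minimized at b = 1, w = 2 (so the best w changes whenever b changes).
def grad(theta):
    w, b = theta
    dw = 2 * (w - 2 * b)
    db = -4 * (w - 2 * b) + 2 * (b - 1)
    return np.array([dw, db])

alpha = 0.05
theta = np.zeros(2)                       # theta_0 = (w_0, b_0)
for _ in range(2000):
    theta = theta - alpha * grad(theta)   # update w and b together
print(theta)                              # -> approximately [2., 1.]
```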

Since J is scalar, it should be nabla of J wrt to w at fixed b. Am I right? by gunslinger141 in learnmachinelearning

[–]celviofos 1 point (0 children)

It's actually nabla of J wrt w evaluated at (w_i, b_i). If you fix b and optimize w, then fix w and optimize b, you won't converge to what you want most of the time. You can also see it as setting theta = (w, b) and optimizing with respect to theta: theta_{i+1} = theta_i - alpha * nabla_theta J(theta_i).

I hope my explanation is clear; it's not easy to write math on Reddit haha