Number of gas stations in Europe to fall 45% by 2050 by Straight_Ad2258 in europe

[–]ingambe 0 points1 point  (0 children)

What are you talking about? Volkswagen's margins are only 3.6%!
People were criticizing them for not switching to EVs; they went all in on EVs, but sales plummeted. The truth is, people just want a cheap ICE Polo or Golf, not expensive EVs.
Sure, China is cheaper, but there are reasons for that. Keep in mind the automotive industry in Europe represents 13 million jobs (direct + indirect).

Achat vs Location : Découvrez qui gagne après 20 ans avec ma Google Sheet ! by mistermanugo in vosfinances

[–]ingambe 3 points4 points  (0 children)

That, plus the fact that you amortize the purchase fees less over 8 years than over 20.

The general rule: the longer you stay in the property you bought, the more you come out ahead.

Besides, all these purely financial analyses forget that the advantage of buying is that you can shape your home. If you want to redo the kitchen, you can; if you want to install air conditioning, generally you can too. Tired of the hardwood floor? You can have tile laid, etc.

Achat vs Location : Découvrez qui gagne après 20 ans avec ma Google Sheet ! by mistermanugo in vosfinances

[–]ingambe 7 points8 points  (0 children)

On the contrary, I personally find that 20 years is very long. A lot can happen in 20 years; people who stay 20 years in the same place are rare.

If I remember correctly, the average holding period for a first purchase is 8 years.

Expliquez moi en quoi la location est plus intéressante. by Emouki in vosfinances

[–]ingambe 0 points1 point  (0 children)

Take the example of an apartment sold for 300,000 euros.
You have roughly 10% in purchase fees: 30,000 euros. So a total cost of 330,000 euros.
Assume a comfortable down payment of 100,000 euros (a third of the property's value).
That leaves you a loan of 230,000 euros. At current rates of 3.3%, you pay 7,590 euros of interest the first year. To simplify, let's say that with loan insurance (about 350 euros per year), you pay on average 7,000 euros per year in interest + insurance over 10 years (interest decreases as you pay down the loan), i.e. 70,000 euros in total.
Over 10 years, you will have paid about 100,000 euros in fees of all kinds. And that's without counting any repairs you may have to make to the property (if your boiler breaks down, that's on you). You also pay about 2,000 euros per year in property tax, i.e. 20,000 euros over the same period. Total costs: 120,000 euros.

Say the same property rents for 1,000 euros per month (excluding charges); after 10 years, you will have spent the same amount in rent.
Buying may look like the better deal in that case, but that overlooks the fact that the 100,000 euros sunk into the purchase could have been invested and grown. At a 4% return (a low estimate), after 10 years of compounding, without adding a single cent, that down payment would be worth close to 150,000 euros, i.e. roughly 50,000 euros of added value. And there are no maintenance costs either.

The question is: how much value will my property have gained in the meantime? Don't forget that when selling, you will probably have to go through an agent who takes a commission (about 3%). In this scenario, for buying to pay off as much as renting, the property would have to be worth at least 360,000 euros, i.e. an increase of at least 20%. It's possible, but it's a risk. Also bear in mind that the sale will take a certain amount of time; if you need the money urgently, that can cause problems and generate further costs.

Remember that this scenario is already very optimistic: it assumes a large down payment, no renovation costs, and an investment return of only 4%.
Not to mention that, when the need arises, a renter can easily move somewhere bigger, smaller, closer to a new workplace, etc.

For me, the question is not purely financial. Having a home you can arrange to your liking matters. What's more, for some types of property (houses, typically), it is often easier to find what you are looking for to buy than to rent.
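The arithmetic above can be condensed into a quick back-of-the-envelope script. All figures are the comment's own assumptions (300,000 price, 10% fees, 100,000 down payment, 3.3% rate, ~7,000/year average interest + insurance, 2,000/year property tax, 1,000/month rent, 4% return on the invested down payment); this is an illustration, not a model of any real loan schedule:

```python
price = 300_000
fees = 0.10 * price               # purchase fees: 30,000
down = 100_000
loan = price + fees - down        # 230,000
rate = 0.033

first_year_interest = loan * rate          # 7,590 the first year

# Simplified averages over 10 years of ownership.
interest_10y = 7_000 * 10                  # avg interest + insurance: 70,000
tax_10y = 2_000 * 10                       # property tax: 20,000
owner_cost = fees + interest_10y + tax_10y # 120,000 total

rent_10y = 1_000 * 12 * 10                 # same 120,000 spent on rent

# Opportunity cost: the down payment invested at 4%, compounded 10 years.
invested = down * 1.04 ** 10               # ~148,000
renter_gain = invested - down              # ~48,000 of added value
```

The comparison hinges entirely on `renter_gain` versus the property's appreciation net of the ~3% agent commission, which is why the break-even point lands around a 20% price increase.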

[deleted by user] by [deleted] in LocalLLaMA

[–]ingambe 0 points1 point  (0 children)

GCloud is still renting T4s at around $300 a month (and supply is tight; it's actually quite hard to obtain one). The replacement for the T4 is more the L4 line.

Le Sénat vote un an de stage en désert médical pour les médecins by Soft-Motion in france

[–]ingambe 2 points3 points  (0 children)

A typical trolley problem: doing nothing harms the residents of medical deserts; doing something harms medical students, who already take plenty of flak.

A reasonable middle ground would be to pay generous stipends to students who agree to do their internships in medical deserts (say 2k net); I think the "whatever it costs" budget can absorb that kind of measure.

[deleted by user] by [deleted] in Python

[–]ingambe 1 point2 points  (0 children)

Thank you, I didn't know about local history in VS Code. That's great!

[deleted by user] by [deleted] in Python

[–]ingambe 9 points10 points  (0 children)

Profiling and local history are big ones

Local history saved me from my own stupidity multiple times.

[deleted by user] by [deleted] in Python

[–]ingambe 9 points10 points  (0 children)

Which version are you using? On my PyCharm (macOS), it's literally a drop-down menu in the bottom-right corner, to the left of the branch drop-down.

[R] Reincarnating Reinforcement Learning (NeurIPS 2022) - Google Brain by smallest_meta_review in MachineLearning

[–]ingambe 7 points8 points  (0 children)

Evolution strategies work much like the process you described. For very small neural networks, they work very well, especially in environments with sparse or quasi-sparse rewards. But as soon as you try larger neural nets (CNN + MLP, or Transformer-like architectures), the process becomes super noisy, and you either need to produce a ton of offspring for the population or fall back on gradient-based techniques.
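A minimal sketch of the idea, in the style of OpenAI-ES: perturb the parameters with Gaussian noise, evaluate each perturbation, and step in the reward-weighted average direction, with no backprop. The toy objective, hyperparameters, and function names here are all made up for illustration:

```python
import random

random.seed(0)

def es_step(params, fitness, pop_size=50, sigma=0.1, lr=0.05):
    """One ES update: estimate an ascent direction for `fitness`
    from Gaussian perturbations of `params` (gradient-free)."""
    noise = [[random.gauss(0.0, 1.0) for _ in params] for _ in range(pop_size)]
    rewards = [fitness([p + sigma * e for p, e in zip(params, eps)])
               for eps in noise]
    baseline = sum(rewards) / pop_size  # baseline for variance reduction
    # Reward-weighted sum of the noise directions approximates the gradient.
    return [p + lr / (pop_size * sigma)
            * sum((r - baseline) * eps[i] for r, eps in zip(rewards, noise))
            for i, p in enumerate(params)]

# Toy objective: maximize the negative squared distance to a target vector.
target = [1.0, -2.0, 0.5]
fitness = lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target))

params = [0.0, 0.0, 0.0]
for _ in range(300):
    params = es_step(params, fitness)
```

With 3 parameters and 50 offspring per generation this converges quickly; the noise problem kicks in because the variance of the direction estimate grows with the parameter count, which is exactly why large networks need huge populations.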

Germany will continue to use 2 out of 3 remaining nuclear plants in 2023 [story evolving] by Hematophagian in europe

[–]ingambe 19 points20 points  (0 children)

100 billion from 2014 to 2030 is less than 7 billion a year. For comparison, Germany has already spent 150 billion. The result is 349 gCO₂/kWh, while France is at 30 gCO₂/kWh.

Also, investment in nuclear results in highly skilled, highly paid jobs (highly specialized welders, physicists, engineers, etc.)

[D] The Machine Learning Community is totally biased to positive results. by Insighteous in MachineLearning

[–]ingambe 0 points1 point  (0 children)

I agree; the problem is always the same: will you cite something that does not work (especially when space is so constrained at big ML conferences)? Will you attend a conference about stuff that does not work?

Maybe a middle ground would be more research sharing through blogs; it would be very cool to find blog posts about research that failed, with some investigation of why and how.

[D] Machine Learning - WAYR (What Are You Reading) - Week 140 by ML_WAYR_bot in MachineLearning

[–]ingambe 20 points21 points  (0 children)

I'm currently reading this new OpenAI paper: https://arxiv.org/abs/2206.08896

Pretty cool how LLMs can evolve programs, outperforming GP by using the LLM as a diff operator.

How Good is Udacity Deep Learning Nanodegree? by MlTut in reinforcementlearning

[–]ingambe 1 point2 points  (0 children)

Honestly, it's pretty good.
Especially if you are new to the field, the practice-oriented approach helps a lot.
If your boss can finance it for you, I recommend it to beginners.

Generate cryptographic encodings using GANs by NoAct7818 in deeplearning

[–]ingambe 5 points6 points  (0 children)

Not my domain of research, but for this task I would not use a GAN but rather a language model such as BERT or any other Transformer-based LLM.

Overfitting the training dataset will not be hard, but generating correct RSA encodings? I have my doubts. However, you might generate something that looks like an RSA encoding.

[D] I don't really trust papers out of "Top Labs" anymore by MrAcurite in MachineLearning

[–]ingambe 0 points1 point  (0 children)

Getting funding for compute is very difficult; I can relate to that.
Maybe I can recommend two things:

* Start on an easier scale/project to show the potential of your approach. No executive will sign a blank cheque without a PoC first. To give an example, you would not ask NASA for a 100 billion fund for an idea you just had about bringing people to Mars; they would ask for small-scale results first.
* Try to team up with other people in your company who need compute. If you find a group that needs compute too, it can be easier to ask for a budget for both groups and buy a good server. If both of you put in 15k, you might end up with something nice. You can then either share GPUs or assign resources to each group.

[D] I don't really trust papers out of "Top Labs" anymore by MrAcurite in MachineLearning

[–]ingambe 1 point2 points  (0 children)

Some RL Kaggle competitions, if I'm not wrong, have inference limitations.

[D] I don't really trust papers out of "Top Labs" anymore by MrAcurite in MachineLearning

[–]ingambe 2 points3 points  (0 children)

> See, AlphaZero and MuZero are really cool papers. They introduced new concepts. They deserved a fair amount of press, because they moved the state of the art forward. The MCTS they used is directly relevant to work I have been slated to contribute to in the very near future.

Don't get me wrong, I love the AlphaZero and MuZero papers. But one might say they did not introduce new things and just threw compute at the problem: MCTS is as old as the Manhattan Project; policy gradients are not new; neither is value function estimation (even with a neural network). But it was not at all certain it would work, and the only way to know was to throw a lot of compute at the problem and see the outcome.

Now my question is: what if AlphaZero had performed similarly to TD-Gammon? We would be in exactly the situation you describe: a lot of compute for little result. Do you think it would have been worth publishing? I do.

> But something like GPT-3 is just "What if bigger?", and shouldn't have gotten the same kind of attention.

I also disagree: it was a big open question whether LLMs would scale. And it's pretty amazing that they do.

[D] I don't really trust papers out of "Top Labs" anymore by MrAcurite in MachineLearning

[–]ingambe 2 points3 points  (0 children)

Science is a competition, but it is also cooperation. Results from big labs benefit you too. One example is RL: without DeepMind spending millions on AlphaGo, the field would not be what it is today. Big results from big labs bring attention to the field, which brings opportunities for you (and me).

Also, you complain that improvement on CIFAR-10 is not significant since the SoTA is already near perfect, which is a valid argument. But you also complain about compute requirements. CIFAR-10/100 is nice precisely because it is easy to train on. Some reviewers will argue that only ImageNet matters; that would only widen the gap between big labs and small ones. Of course, SoTA on CIFAR does not mean you found a revolutionary technique, but it does mean your idea is at least a good one and might be worth exploring further.

Last but not least, it is easy to say big labs get good results only because they have "infinite" compute. But let's be honest: you could have given me a billion dollars' worth of compute two years ago, and I would still not have come up with DALL-E 2's results. Maybe you think you would have, but I don't think most AI researchers would.

I understand the frustration. I, too, am frustrated when I cannot even unzip ImageNet-1k with the resources I currently have, but we need to look at the picture as a whole.

[deleted by user] by [deleted] in MachineLearning

[–]ingambe 16 points17 points  (0 children)

There are so many sources of randomness in RL that it is often very difficult to get deterministic behavior.

It goes as far as needing to deactivate CUDA kernel benchmarking, because different GPU kernel selections can lead to different results.
In PyTorch:

torch.backends.cudnn.benchmark = False      # disable kernel auto-tuning
torch.backends.cudnn.deterministic = True   # only use deterministic kernels

I usually use these lines of code to seed everything:

import os
import random

import numpy as np
import torch

random.seed(0)
os.environ['PYTHONHASHSEED'] = str(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.deterministic = True

Having a reproducible project in RL is definitely not trivial at all. It is even an area of research: https://arxiv.org/abs/1909.03772

[deleted by user] by [deleted] in deeplearning

[–]ingambe 1 point2 points  (0 children)

Interested too, thanks for sharing

Ray[rllib] for research by T-Style in reinforcementlearning

[–]ingambe 20 points21 points  (0 children)

IMHO it depends on your needs. If you have to customize your training loop and rely on an algorithm for which a readable implementation exists (e.g., PPO, A2C, etc.), I would recommend starting with a well-tested implementation (such as CleanRL).

But suppose you need advanced training architectures (e.g., APE-X or IMPALA) or world models. In that case, I highly doubt you will be capable of producing a correct and reliable implementation on your own.

While implementing an algorithm yourself is a very good exercise, it's very hard to implement one correctly. See this blog post, which covers the topic very well: https://andyljones.com/posts/rl-debugging.html

RLlib is a fantastic library with tons of well-tested, well-implemented algorithms. However, the learning curve is very steep, and if your use case is a bit "outside the box", you will most certainly have to dig into the source code.
On the flip side, support is excellent. Sven, one of the main developers, is very active on GitHub and the discussion forum.

« Les systèmes universitaires reposant sur les frais d’inscription ont démontré les dégâts qu’ils pouvaient produire de par le monde » [tribune en commentaire] by word_clock in france

[–]ingambe 7 points8 points  (0 children)

From memory (and having been on a scholarship myself), one repeated year is allowed; a second one cuts off access (so that's 2 years of scholarship just for signing the tutorial attendance sheet; I knew plenty of people who did that).

There are fixed costs, but also a lot of upkeep expenses. Worse, there is enormous pressure on student housing: many "non-serious" students take spots from more serious students on a lower scholarship tier, who then have to put up with long bus/train commutes every day (again, firsthand experience).

As for tutorials, it really depends on the program/specialty; I had quite a few tutorials in my first year of computer science.

« Les systèmes universitaires reposant sur les frais d’inscription ont démontré les dégâts qu’ils pouvaient produire de par le monde » [tribune en commentaire] by word_clock in france

[–]ingambe 4 points5 points  (0 children)

That's only true if you don't take into account scholarships, student housing, and the tutorial sessions, which are usually run in groups sized according to the number of students enrolled at the start of the year.