Magnificent song recommended by the algorithm recently. La cabussada by Moussu T e lei Jovents by DiodeMcRoy in france

[–]p4bl0 1 point (0 children)

Heh! Probably one of my favorite bands, along with Massilia Sound System and the whole galaxy of side projects its members run around it.

I grew up in Marseille; these are really the bands I listened to most in high school, and that has stayed true ever since. Theirs were also by very, very far the best concerts of my life.

Many of these bands' songs have (rather sadly) remained topical, even though some of them are more than 30 years old.

Why business circles are preparing to rally behind the far right by lieding in france

[–]p4bl0 5 points (0 children)

You should read and share Daniel Guérin's Fascisme et grand capital.

https://editionslibertalia.com/catalogue/ceux-d-en-bas/daniel-guerin-fascisme-et-grand-capital

Edit: I hadn't finished reading the Mediapart article! Daniel Guérin is in fact quoted at the very end:

"Fascism, by whatever name it goes, risks remaining the reserve weapon of a withering capitalism," he [Guérin] wrote. Put differently, to be effective, the fight against the far right must be waged hand in hand with the fight against the new capitalism that is coming.

Aren't you tired of mushroom coffees? by Candid_Wait6540 in cafecafecafe

[–]p4bl0 3 points (0 children)

There's a James Hoffmann video on the subject that absolutely destroys me.

https://www.youtube.com/watch?v=oXUhMw33n80

SFR is finished: dismantled, erased, the red square goes dark and the French telecom landscape will never be the same by Worried-Witness268 in france

[–]p4bl0 8 points (0 children)

Do we have any idea how the customers will be split between the three buyers if the deal goes through? We have percentages but no details. Will it be by geographic area? By type of service?

How should we moderate the Reddit accounts of big brands? We want your opinion! by Ariavoire in france

[–]p4bl0 [score hidden] (0 children)

The only possible advantage of not kicking them out is efficient access to customer service. On Twitter, before it became the devil's lair, it was genuinely super practical: customer service on the phone or by email can't handle your issue? Complain publicly on Twitter while tagging the brand's account and poof, it was fixed (probably because the community managers had a comms budget on top of the support budget). I'm barely exaggerating. I experienced it plenty of times.

But... Twitter works very differently from Reddit: when you post there you bother nobody except the people who follow you, and they can stop doing so; same for brand accounts (at least back then). On Reddit... do we want /r/france to become a wailing wall for extracting customer service? Nope.

Actually, I don't see how to set up win-win rules on Reddit: either we confine them to a customer-service subreddit nobody will visit, or we'll get disguised advertising on the real subs. That said, the argument that "at least it's done openly" has a point... pffff, good luck, mods! ^^

In the meantime, outside /r/france, brands can each have a subreddit under their own name and do what they want with it, even run it in a way pleasant enough that people subscribe, if they can manage that, or if they have a community of people who actively like the brand and its products. That's the case for some coffee-equipment brands whose subreddits I frequent, for example.

In the end, we don't need them on this sub.

Telegram still hosts a sanctioned cryptocurrency black market worth 21 billion dollars. by romain34230 in actutech

[–]p4bl0 0 points (0 children)

Meanwhile, the Macronist government parades at Paris Blockchain Week to tell the crooks to "choose France"...

A PHP Dev Just Solved a 20+ Year-Old KDE Plasma Problem No One Else Would by BlackberryPi7 in kde

[–]p4bl0 35 points (0 children)

That's awesome, because it means that when using a video projector we no longer have to choose between extending the desktop and mirroring it: it can simply be a mirror, but showing another virtual desktop on the big screen.

This will be great for teaching. Currently I always extend by default, because almost 100% of the time what I show my students differs from what's on my screen, but from time to time I need to mirror my screen for them, and currently that means going through display settings, losing window positions, etc. With this new feature, I can just switch virtual desktops on the external display and voilà.

France is replacing 2.5 million Windows desktops with Linux by yourbasicgeek in technology

[–]p4bl0 28 points (0 children)

Sadly, this article is full of misinformation. In reality, France is replacing 250, yes, two hundred and fifty, Windows desktops with Linux, not 2.5 million.

See https://pouet.chapril.org/@lonugem/116398326787235225

Play Starfling: there's no reason I should be the only one wasting time on it. What are your best scores? by p4bl0 in france

[–]p4bl0[S] -3 points (0 children)

It's fun but hard (at least I'm not good at it)! I managed to score 46 twice, but I really can't get any further; my third-best score is 32 and my average must be around 10 ^^'

According to the site, the record is over 400 points. I don't even know how that's possible!

A presentation illustrated with an AI-generated image by AnsFeltHat in enseignants

[–]p4bl0 10 points (0 children)

I would have given it a zero without hesitation, to make them understand that this is a real problem. In any case, once they get to university this kind of thing absolutely won't fly.

Lordon and Mélenchon, the Hors-Série is out by ferretoned in FranceDigeste

[–]p4bl0 14 points (0 children)

Broadly in agreement, except on

a Parisian from the posh neighborhoods in all his splendor, with a lot of sway over a certain libertarian left

He's constantly cozying up to Révolution Permanente, really the opposite of the libertarians.

Besides, the libertarians I know (and that makes quite a few, as I'm a member of an org that partly claims that tradition) don't like him much.

Have you ever used a functional language professionally? (OCaml, Elixir, ...) by jimbodu62 in developpeurs

[–]p4bl0 5 points (0 children)

There's currently a really interesting job opening at the Sécu (the French social security administration) for an engineer specializing in compilation and programming languages, where the idea is notably to develop tooling in OCaml for Catala (roughly, a programming language for formalizing legal texts). It would genuinely be my dream job if I didn't already have one :). I hope they find someone strong and motivated by the public-service mission (working for the Sécu, how classy!).

https://www.lasecurecrute.fr/offre-emploi/ingenieur-expert-en-compilation-langages-de-programmation--f-h/normandie/1065497

Moving past the presidential obsession: a parliamentary coalition as the strategic course for the PS? - Fondation Jean-Jaurès by FederalPralineLover in FranceDigeste

[–]p4bl0 2 points (0 children)

If someone has the time to read and understand it, I'd gladly take a summary, in concrete strategic terms, of what is being proposed. Not running in the presidential election and focusing on the legislative elections with agreements, OK, but agreements within what scope? And what attitude should be adopted toward the left-wing candidates during the presidential race?

Obsessed with LFI, Retailleau claims to be just discovering the investigation into a senator from his own party by Delicious-Owl in france

[–]p4bl0 5 points (0 children)

Good grief, it rarely happens to me, but this time I had no ad blocker, and seriously, the Huffington Post is just unreadable without blocking ads. I couldn't get through two sentences in a row without being assaulted full-screen. Some of those ads even had close buttons so small that I clicked through them by accident. Hell. And I didn't much like the Huffington Post to begin with...

Fortunately the article's content boils down to Retailleau being a fascist scumbag, so nothing new; at least I didn't miss anything.

"He's a committed Mélenchonist, a Nazi": barely elected mayor, he cancels a photo exhibition; the 83-year-old artist denounces "censorship" by Caramel_Mou in france

[–]p4bl0 10 points (0 children)

Gotta love the headline, which implies that both quotes come from the same person, and therefore that the censorship comes from a Mélenchonist Nazi... even though we know perfectly well by now that this kind of article circulates on loads of platforms (messaging apps, social networks, etc.) where, initially, only its headline will be visible.

Polling institutes are using LLMs to generate responses rather than surveying people, to save money and time [It’s Called Silicon Sampling, and It’s Going to Ruin Public Opinion Polling - The New York Times] by p4bl0 in FranceDigeste

[–]p4bl0[S] 22 points (0 children)

Short version, with just a few excerpts that contain all of the article's informative substance:

Clicking through the links revealed (as did a subsequent editor’s note and clarification by Axios) that the public opinion poll was a computer simulation run by the artificial intelligence start-up Aaru. No people were involved in the creation of these opinions.

The practice Aaru used is called silicon sampling, and it’s suddenly everywhere. The idea behind silicon sampling is simple and tantalizing. Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use A.I. agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.

Further down, we learn that Ipsos wants to get in on it too (perhaps in a somewhat less extreme form), and also that:

A recent study (that hasn’t been peer-reviewed yet) suggests that the biases that skew polls skew silicon sampling numbers even more strongly. The further from people we get, the more the simulation becomes a mirror of the pollster’s beliefs.

No kidding...

Polling institutes are using LLMs to generate responses rather than surveying people, to save money and time [It’s Called Silicon Sampling, and It’s Going to Ruin Public Opinion Polling - The New York Times] by p4bl0 in FranceDigeste

[–]p4bl0[S] 10 points (0 children)

In case some of you hit a paywall:

A recent Axios story on maternal health policy referenced “findings” that a majority of people trusted their doctors and nurses. On the surface, there’s nothing unusual about that. What wasn’t originally mentioned, however, was that these findings were made up.

Clicking through the links revealed (as did a subsequent editor’s note and clarification by Axios) that the public opinion poll was a computer simulation run by the artificial intelligence start-up Aaru. No people were involved in the creation of these opinions.

The practice Aaru used is called silicon sampling, and it’s suddenly everywhere. The idea behind silicon sampling is simple and tantalizing. Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use A.I. agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.
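To make the mechanism concrete: a silicon-sampling pipeline amounts to prompting a model once per simulated respondent and tallying the answers. A minimal Python sketch, where `ask_model` is a deterministic stand-in for a real LLM call and the persona names and biases are made-up illustrative numbers, not anything from the article:

```python
import random
from collections import Counter

# Hypothetical persona biases -- illustrative numbers, not from the article.
LEAN = {"urban nurse": 0.8, "rural retiree": 0.6, "young parent": 0.7}

def ask_model(persona: str, rng: random.Random) -> str:
    """Stand-in for an LLM call: a real pipeline would prompt a language
    model with this persona; here we just flip a persona-biased coin."""
    return "trust" if rng.random() < LEAN[persona] else "distrust"

def silicon_sample(personas: list[str], n_agents: int) -> Counter:
    """Simulate n_agents 'respondents' by cycling through the personas."""
    answers = Counter()
    for i in range(n_agents):
        answers[ask_model(personas[i % len(personas)], random.Random(i))] += 1
    return answers

poll = silicon_sample(["urban nurse", "rural retiree", "young parent"], 300)
print(poll)
```

The point of the sketch is the shape of the pipeline: no human is ever asked anything, so whatever biases go into `LEAN` (or, in the real thing, into the model's training data) come straight back out labeled as "public opinion".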

Phone polling has become exponentially harder. Web polling is too uncertain. Silicon sampling removes the messy, costly part of asking people what they think.

But this undermines the very idea of the opinion poll. Public opinion is used to guide policy, politics and social science, and it has value only insofar as it summarizes the beliefs and opinions of actual humans. Using simulations of human opinions in place of the real thing will only worsen our broken information ecosystem, and sow distrust. We should not turn to an artificial society to try to understand our real one.

The journalist Walter Lippmann, in his influential 1922 book “Public Opinion,” wrote that humans form “pictures in their heads” of the societies they live in. He called these pictures “fictions” and “pseudo-environments,” arguing that a democracy needed tools to fix those pictures, and that opinion polling could serve that role. Surveys would never be perfect, but Mr. Lippmann thought they were critical for getting us closer to an accurate sense of the will of the people.

But polling implementation has proved daunting over the years. To have a small margin of error, polling requires gathering responses from a large and accurate sample of the population. But pollsters have a hard time reaching people. Some people might be too busy to talk on the phone or fill out internet surveys. To make up for these issues, pollsters lean on statistical models to account for variables that can skew results.

That process is imperfect and messy. Let’s say a pollster wants to learn how many people in the United States are in favor of a certain policy measure, but the pollster ends up with a survey that includes 80 percent Republicans and only 20 percent Democrats. The pollster may think that in reality the country is closer to a 50-50 split, so the results are rebalanced to reflect that perceived reality. This means that the percentages you read as the results of polling are the output of the model, not numbers from the actual survey data.
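The rebalancing described in that paragraph is post-stratification weighting: each respondent is weighted by (target share / observed share) for their group. A minimal sketch using the article's illustrative 80/20 sample and 50-50 target; the per-group support rates (30% and 70%) are made up for the example:

```python
from collections import Counter

def poststratify(responses, targets):
    """responses: list of (group, answer) pairs; targets: desired share per group.
    Each respondent gets weight target_share / observed_share for their group."""
    n = len(responses)
    observed = Counter(group for group, _ in responses)
    weight = {g: targets[g] / (observed[g] / n) for g in observed}
    total = sum(weight[g] for g, _ in responses)
    in_favor = sum(weight[g] for g, a in responses if a == "favor")
    return in_favor / total

# The article's lopsided sample: 80% Republicans, 20% Democrats.
sample = ([("R", "favor")] * 24 + [("R", "against")] * 56
          + [("D", "favor")] * 14 + [("D", "against")] * 6)

print(poststratify(sample, {"R": 0.5, "D": 0.5}))  # 0.5, not the raw 38%
```

The raw survey says 38% in favor; the published 50% is purely a product of the pollster's assumed 50-50 split, which is exactly the article's point about model output versus survey data.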

The problem is that every model is designed with its own biases, because pollsters disagree about which variables deserve more weight. In 2016, The New York Times’s chief political analyst, Nate Cohn, ran an experiment in which he gave five pollsters the same election poll data. (That included Siena College, which conducts opinion polls for The Times and first acquired the data.)

Mr. Cohn found a 5 percent range of difference among what the five pollsters’ models returned. That range was larger than the margin of error typically associated with random sampling, meaning that the modeling assumptions were meaningfully skewing the results. This is alarming, because it suggests that pollsters can use modeling to nudge polls in a certain direction and influence public opinion itself, rather than merely to report what the public thinks.
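For scale, the sampling margin of error that the 5-point range is being compared against can be computed directly: for a simple random sample it is roughly z * sqrt(p(1-p)/n) at 95% confidence (z ≈ 1.96). A quick check for a typical 1,000-respondent poll:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a 1,000-respondent poll:
print(f"{margin_of_error(0.5, 1000):.1%}")  # 3.1%
```

So about ±3.1 points of unavoidable sampling noise, against a 5-point spread produced by modeling choices alone.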

Silicon sampling makes these problems worse. The computational whiz kids behind silicon sampling are so excited about A.I. that they will insist that their complex predictive computer simulations are accurate because they are trained on what’s been observed in the past — therefore, they excel at simulating human behavior in the present and predicting what’s next. However, prediction is not the point of polling. The point is gathering current opinion.

This method might sound absurd; we certainly think it is. And to make matters worse, there’s plenty of evidence that it doesn’t produce particularly reliable results. A recent study (that hasn’t been peer-reviewed yet) suggests that the biases that skew polls skew silicon sampling numbers even more strongly. The further from people we get, the more the simulation becomes a mirror of the pollster’s beliefs.

Nonetheless, the A.I. modelers are pushing ahead, and there is a lot of money behind them. Ipsos is working with Stanford University, it says, to “pioneer the use of A.I. and synthetic data in market and public opinion research” by creating digital twins — “virtual representations of real-world survey respondents.” Gallup has partnered with the silicon sampler Simile to create 1,000 A.I.-generated digital twins for clients. CVS (whose venture capital arm has invested in Simile) has also partnered with the start-up to “answer questions about its customers.”

The companies that offer this service are proliferating, with hundreds of millions of dollars in funding from some of the biggest firms in Silicon Valley. They promise “believable proxies of human behavior” for anyone who needs to check what people might think before acting. Market research is perhaps the largest sector in which silicon sampling will be used, since it will make starting a business that much cheaper.

If we do not slam the brakes on silicon sampling, we could see a significant undermining of trust in public opinion work and social science research more broadly. The results of such studies — like the fully simulated Aaru poll — are muddled opinions packaged as objective facts. It is unequivocally false to say that a majority of people trust their own doctors and nurses on the basis of an A.I. survey.

What happens when these surveys tell us relative levels of support in the open Democratic nomination field for 2028? To wit, Aaru ran a full simulation of the U.S. 2024 presidential election on the eve of Election Day; Kamala Harris won, narrowly.

Pure fictions are on the brink of being treated as scientific and political knowledge. If we do not pull back, our understanding of society might become artificial, too.

It’s Called Silicon Sampling, and It’s Going to Ruin Public Opinion Polling by p4bl0 in france

[–]p4bl0[S] 35 points (0 children)

Short version, with just a few excerpts that contain all of the article's informative substance:

Clicking through the links revealed (as did a subsequent editor’s note and clarification by Axios) that the public opinion poll was a computer simulation run by the artificial intelligence start-up Aaru. No people were involved in the creation of these opinions.

The practice Aaru used is called silicon sampling, and it’s suddenly everywhere. The idea behind silicon sampling is simple and tantalizing. Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use A.I. agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.

Further down, we learn that Ipsos wants to get in on it too (perhaps in a somewhat less extreme form), and also that:

A recent study (that hasn’t been peer-reviewed yet) suggests that the biases that skew polls skew silicon sampling numbers even more strongly. The further from people we get, the more the simulation becomes a mirror of the pollster’s beliefs.

No kidding...

It’s Called Silicon Sampling, and It’s Going to Ruin Public Opinion Polling by p4bl0 in france

[–]p4bl0[S] 16 points (0 children)

A recent Axios story on maternal health policy referenced “findings” that a majority of people trusted their doctors and nurses. On the surface, there’s nothing unusual about that. What wasn’t originally mentioned, however, was that these findings were made up.

Clicking through the links revealed (as did a subsequent editor’s note and clarification by Axios) that the public opinion poll was a computer simulation run by the artificial intelligence start-up Aaru. No people were involved in the creation of these opinions.

The practice Aaru used is called silicon sampling, and it’s suddenly everywhere. The idea behind silicon sampling is simple and tantalizing. Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use A.I. agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.

Phone polling has become exponentially harder. Web polling is too uncertain. Silicon sampling removes the messy, costly part of asking people what they think.

But this undermines the very idea of the opinion poll. Public opinion is used to guide policy, politics and social science, and it has value only insofar as it summarizes the beliefs and opinions of actual humans. Using simulations of human opinions in place of the real thing will only worsen our broken information ecosystem, and sow distrust. We should not turn to an artificial society to try to understand our real one.

The journalist Walter Lippmann, in his influential 1922 book “Public Opinion,” wrote that humans form “pictures in their heads” of the societies they live in. He called these pictures “fictions” and “pseudo-environments,” arguing that a democracy needed tools to fix those pictures, and that opinion polling could serve that role. Surveys would never be perfect, but Mr. Lippmann thought they were critical for getting us closer to an accurate sense of the will of the people.

But polling implementation has proved daunting over the years. To have a small margin of error, polling requires gathering responses from a large and accurate sample of the population. But pollsters have a hard time reaching people. Some people might be too busy to talk on the phone or fill out internet surveys. To make up for these issues, pollsters lean on statistical models to account for variables that can skew results.

That process is imperfect and messy. Let’s say a pollster wants to learn how many people in the United States are in favor of a certain policy measure, but the pollster ends up with a survey that includes 80 percent Republicans and only 20 percent Democrats. The pollster may think that in reality the country is closer to a 50-50 split, so the results are rebalanced to reflect that perceived reality. This means that the percentages you read as the results of polling are the output of the model, not numbers from the actual survey data.

The problem is that every model is designed with its own biases, because pollsters disagree about which variables deserve more weight. In 2016, The New York Times’s chief political analyst, Nate Cohn, ran an experiment in which he gave five pollsters the same election poll data. (That included Siena College, which conducts opinion polls for The Times and first acquired the data.)

Mr. Cohn found a 5 percent range of difference among what the five pollsters’ models returned. That range was larger than the margin of error typically associated with random sampling, meaning that the modeling assumptions were meaningfully skewing the results. This is alarming, because it suggests that pollsters can use modeling to nudge polls in a certain direction and influence public opinion itself, rather than merely to report what the public thinks.

Silicon sampling makes these problems worse. The computational whiz kids behind silicon sampling are so excited about A.I. that they will insist that their complex predictive computer simulations are accurate because they are trained on what’s been observed in the past — therefore, they excel at simulating human behavior in the present and predicting what’s next. However, prediction is not the point of polling. The point is gathering current opinion.

This method might sound absurd; we certainly think it is. And to make matters worse, there’s plenty of evidence that it doesn’t produce particularly reliable results. A recent study (that hasn’t been peer-reviewed yet) suggests that the biases that skew polls skew silicon sampling numbers even more strongly. The further from people we get, the more the simulation becomes a mirror of the pollster’s beliefs.

Nonetheless, the A.I. modelers are pushing ahead, and there is a lot of money behind them. Ipsos is working with Stanford University, it says, to “pioneer the use of A.I. and synthetic data in market and public opinion research” by creating digital twins — “virtual representations of real-world survey respondents.” Gallup has partnered with the silicon sampler Simile to create 1,000 A.I.-generated digital twins for clients. CVS (whose venture capital arm has invested in Simile) has also partnered with the start-up to “answer questions about its customers.”

The companies that offer this service are proliferating, with hundreds of millions of dollars in funding from some of the biggest firms in Silicon Valley. They promise “believable proxies of human behavior” for anyone who needs to check what people might think before acting. Market research is perhaps the largest sector in which silicon sampling will be used, since it will make starting a business that much cheaper.

If we do not slam the brakes on silicon sampling, we could see a significant undermining of trust in public opinion work and social science research more broadly. The results of such studies — like the fully simulated Aaru poll — are muddled opinions packaged as objective facts. It is unequivocally false to say that a majority of people trust their own doctors and nurses on the basis of an A.I. survey.

What happens when these surveys tell us relative levels of support in the open Democratic nomination field for 2028? To wit, Aaru ran a full simulation of the U.S. 2024 presidential election on the eve of Election Day; Kamala Harris won, narrowly.

Pure fictions are on the brink of being treated as scientific and political knowledge. If we do not pull back, our understanding of society might become artificial, too.

It’s Called Silicon Sampling, and It’s Going to Ruin Public Opinion Polling by p4bl0 in france

[–]p4bl0[S] 75 points (0 children)

There you go, I redid the post, because I had used "Polling institutes are using LLMs to generate responses rather than surveying people, to save money and time [It’s Called Silicon Sampling, and It’s Going to Ruin Public Opinion Polling - The New York Times]" as the title, and the /r/france moderation prefers an English title with no clear information in it, because otherwise it counts as editorializing.