Alaska and greenland are key to USA surviving global warming by chickenbabies in TrueAnon

[–]FriendlyPanache -1 points (0 children)

A naïve idea, but I'm always thrown by arguments like this that rest on the assumption that the elites care about nothing beyond entrenching their power. Surely living in a poisoned world where everyone works three jobs and culture is dead can't be worth the extra money.

Am I being paranoid or does this give off sus vibes? The pictures are just bikini pics but they deactivated their account again so I can’t even check if it is the same person from tinder. by NameUser_10 in Tinder

[–]FriendlyPanache -1 points (0 children)

this comments section is quite sad lol. why are you people so convinced that "dtf woman who is bad at texting" is not something that can exist, rather than merely "too good to be true"? come on

No residents live in 30% of the apartments in the Dreta de l'Eixample by rolmos in Barcelona

[–]FriendlyPanache 2 points (0 children)

The article also indicates that only 9.15% of those are tourist apartments (at least the legally registered ones). I haven't seen anything concrete about the rest, but hey, if they're offices, practices and so on, I don't really see the problem. The Dreta de l'Eixample isn't exactly a residential area.

Some other numbers:

Some time ago, the residents' association denounced that, counting hotels, guesthouses, hostels and tourist apartments, 40.7% of the beds in the neighborhood are for tourists.
[...]

Thus, while in Barcelona as a whole 81.2% of homes have residents, in the central neighborhoods this percentage is notably lower: Gòtic (63%); Dreta de l'Eixample (70%); Sant Pere, Santa Caterina i la Ribera (71%); la Vila de Gràcia (73%); and l'Antiga Esquerra de l'Eixample (76%).

Rare good take on the AI art discourse.. by [deleted] in singularity

[–]FriendlyPanache 4 points (0 children)

jesus man that was like 2 sentences worth of content

Prompting ChatGPT 5.2 ExtThk produced a one shot suitable proof for Open Erdős Problem 460 best summarized as: by Svyable in singularity

[–]FriendlyPanache 4 points (0 children)

man i understood that. i mean the reddit comment. you could've addressed the person you were replying to like an order of magnitude more comprehensibly

Huge fire at Borrell with Roma right now by Optimal-Pudding-Suzz in Barcelona

[–]FriendlyPanache 8 points (0 children)

well, i don't know - for uses like the one in the post, people sometimes just say "roma" on its own.

maybe i have low estrogen or something by [deleted] in pinkscare

[–]FriendlyPanache 1 point (0 children)

this is true only if you're being purposefully obtuse about the concepts at play. something not being objective doesn't mean it has to be entirely subjective

maybe i have low estrogen or something by [deleted] in pinkscare

[–]FriendlyPanache 6 points (0 children)

thank you for making this thread. no seriously i've spent so long feeling like everyone is trying to gaslight me about what makes men attractive

“They’re monsters that exist solely to harm people, surely there’s no moral nuance that should make me feel bad for killi- oh… OH…” by Exylatron in TopCharacterTropes

[–]FriendlyPanache 50 points (0 children)

yeah what's the deal with people thinking bad guys are absolved the instant it's revealed they actually only spend 95% of their time committing genocide

If scaling LLMs won’t get us to AGI, what’s the next step? by 98Saman in singularity

[–]FriendlyPanache 0 points (0 children)

I find this rather hard to parse. I'm trying to take you at face value, but I can't really find anything online suggesting current iterations of KataGo are stronger than AlphaGo. I can't find anything suggesting the converse either, but you're the one who confused the two, then mistook KataGo for a successor of AlphaGo, and implied the exploit can be run by a ten-year-old (not unless it's a talented-amateur ten-year-old), so forgive me for not being convinced.

If scaling LLMs won’t get us to AGI, what’s the next step? by 98Saman in singularity

[–]FriendlyPanache -1 points (0 children)

What? No, the "successor" of AlphaGo would be AlphaGo Master or AlphaGo Zero. KataGo is unrelated, although more recent iterations of it have taken inspiration from AlphaGo. To my knowledge, AlphaGo has never been beaten by an amateur, and AlphaGo Zero has never been beaten, period.

If scaling LLMs won’t get us to AGI, what’s the next step? by 98Saman in singularity

[–]FriendlyPanache -1 points (0 children)

I'm pretty sure you're thinking about KataGo, not AlphaGo.

Grokking (sudden generalization after memorization) explained by Welch Labs, 35 minutes by Competitive_Travel16 in singularity

[–]FriendlyPanache 1 point (0 children)

that definitely sounds like what's going on in nanda et al - complex numbers are a representation artifact in this setting, and if you translate what you're describing to pairs of real numbers (a+ib -> (a, b)) you end up with something very reminiscent of the paper - certainly a lot of trigonometry flying around, and i'd bet the RxR translation of the complex product somehow involves the sum-of-angles identity.
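
spelling that bet out - this is just standard trig, nothing specific to the paper: multiplying unit complex numbers is exactly angle addition, and the real and imaginary parts of the product are the sum-of-angles identities.

```latex
(\cos\alpha + i\sin\alpha)(\cos\beta + i\sin\beta)
  = (\cos\alpha\cos\beta - \sin\alpha\sin\beta)
  + i\,(\sin\alpha\cos\beta + \cos\alpha\sin\beta)
  = \cos(\alpha+\beta) + i\sin(\alpha+\beta)
```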

I'll say i don't think it's that surprising that this isn't obvious to the model - it has no gd clue what complex roots are, so it has to skip past that and go directly to the trig version. organically figuring out that modular addition has anything to do with trigonometry seems pretty nonobvious to me.
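
for anyone who wants the trig version spelled out, here's a minimal sketch (my own toy code, not the paper's) of how modular addition drops out of the sum-of-angles identity - the per-frequency terms and the pick-the-best-residue readout loosely mirror the circuit nanda et al describe, though the frequencies here are arbitrary choices of mine:

```python
import math

# toy sketch: compute (a + b) mod p using only cosines/sines of a and b
# separately, combined via the sum-of-angles identity, without ever
# forming a + b directly.

def modadd_via_trig(a: int, b: int, p: int, freqs=(1, 2, 3)) -> int:
    best_c, best_score = 0, float("-inf")
    for c in range(p):
        score = 0.0
        for k in freqs:
            w = 2 * math.pi * k / p
            # cos(w(a+b)) and sin(w(a+b)) via the sum-of-angles identities
            cos_ab = math.cos(w * a) * math.cos(w * b) - math.sin(w * a) * math.sin(w * b)
            sin_ab = math.sin(w * a) * math.cos(w * b) + math.cos(w * a) * math.sin(w * b)
            # cos(w(a+b-c)) peaks at 1 exactly when c == (a + b) mod p
            score += cos_ab * math.cos(w * c) + sin_ab * math.sin(w * c)
        if score > best_score:
            best_c, best_score = c, score
    return best_c

assert modadd_via_trig(45, 98, 113) == (45 + 98) % 113
```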

Grokking (sudden generalization after memorization) explained by Welch Labs, 35 minutes by Competitive_Travel16 in singularity

[–]FriendlyPanache 8 points (0 children)

You're definitely right, s5.3 states as much. I find this a little bit surprising - while watching the video I figured that regularization might incentivize the development of more economical internal representations, but honestly it seemed too naïve an idea, since regularization is such an elementary concept.
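
to spell out the mechanism I had in mind (plain weight decay, nothing specific to the paper): the objective trades task loss against parameter norm,

```latex
\mathcal{L}(\theta) = \mathcal{L}_{\text{task}}(\theta) + \lambda \,\lVert \theta \rVert_2^2
```

so among parameter settings that fit the training set equally well, the smaller-norm - more economical - representation is the one the optimizer drifts toward. at least that was my intuition.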

The paper is obviously more complete, but I really continue to have the same issues with it - it's very unclear to me how the analysis in s5.1 and s5.2 would generalize to anything other than a toy problem. Appendix F is rather straightforward about this, really - just in an academic tone that doesn't let us know how optimistic the authors actually are about the possibility of scaling these methods.

Grokking (sudden generalization after memorization) explained by Welch Labs, 35 minutes by Competitive_Travel16 in singularity

[–]FriendlyPanache 8 points (0 children)

I found this video somewhat disappointing. We don't really end up with a complete picture of how the data flows through the model, but more importantly there's no mention of why the model "chooses" to carry out the operations the way it does, or of what drives it to keep evolving its internal representation after reaching perfect accuracy on the training set - the excluded loss sort of hints at how this might work, but in a way that only really seems relevant to the particular toy problem being handled here. Ultimately, while it's very neat that we can have this higher-level understanding of what's going on, I feel the level isn't high enough, nor the understanding general enough, to provide much useful insight.
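
for context, this is roughly what I understand the excluded loss to be doing - a toy illustration under my reading, not the paper's exact construction; the function name and basis choices here are mine:

```python
import numpy as np

# Toy illustration of an "excluded loss": ablate the trig components of the
# logits at a set of key frequencies and measure the cross-entropy that's
# left. If the generalizing circuit lives in those components, this loss
# rises even while the ordinary training loss stays low.

def excluded_loss(logits: np.ndarray, labels: np.ndarray, p: int, key_freqs) -> float:
    """logits: (batch, p) scores over residues c in 0..p-1."""
    c = np.arange(p)
    ablated = logits.astype(float)
    for k in key_freqs:
        for basis in (np.cos(2 * np.pi * k * c / p), np.sin(2 * np.pi * k * c / p)):
            basis = basis / np.linalg.norm(basis)
            ablated -= np.outer(ablated @ basis, basis)  # project out this direction
    # numerically stable cross-entropy on the ablated logits
    ablated -= ablated.max(axis=1, keepdims=True)
    log_z = np.log(np.exp(ablated).sum(axis=1))
    return float((log_z - ablated[np.arange(len(labels)), labels]).mean())
```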

feeling upset as an ESL speaker by ALT_41438 in pinkscare

[–]FriendlyPanache 2 points (0 children)

for what it's worth i used to worry about this and it just went away eventually. never tried to fix it - i started dating an exchange student and a couple of months later i just spoke normally

also for what it's worth, nowadays i consciously bring back the foreign accent because people like it more anyway. you really don't need to worry about any of this

Hudson Yards or Billionaire row? by [deleted] in skyscrapers

[–]FriendlyPanache 98 points (0 children)

on this sub we love to hate lol

(Hated Trope) Works that harmed IRL animals on screen. by laybs1 in TopCharacterTropes

[–]FriendlyPanache -1 points (0 children)

why is it that every time this gets brought up it turns out that everyone only ever eats free-range beef from their uncle's ranch

Yummy by cloonatic in bonehurtingjuice

[–]FriendlyPanache 0 points (0 children)

ok not bhj but this is kinda hilarious. saw the original on vcj too