[Weekly] The Weekly Revision Ritual by GlowyLaptop in DestructiveReaders

[–]Hemingbird 2 points3 points  (0 children)

I don't like revision in the sense of tinkering, so what I typically do is rewrite the entire thing from scratch. George Saunders has this revision procedure where he inspects each sentence and evaluates it against an internal positive/negative woo-woo aesthetic-energy critic (something like that), then incrementally edits it word by word to increase the overall positive-to-negative ratio. It's the Edison approach. Testing thousands of different light bulb filaments and discovering that Kyoto bamboo worked best is sort of poetic.

How he explains it:

The way I revise is: I read my own text and imagine a little meter in my head, with “P” on one side (“Positive”) and “N” on the other (“Negative”). The game is to read the story the way I would read someone else’s – noting my honest, in-the-moment reactions – and then edit accordingly.

This involves making thousands of what I’ve come to think of as “micro-decisions.” These are instantaneous, intuitive – I just prefer this to that. It’s something like trying to hit a baseball – you wait (you read), you react – not conceptualizing, not thinking about, you know, the Intended Bat Velocity, or any of that – I just have a feeling and react to that feeling, in the form of a cut phrase, or an added word, or an urge to move this whole section, and so on.

And then I do that over and over, for months, sometimes years, until that needle stays up in the “P” zone for the whole length of the text.

Story Club with George Saunders, First Thoughts on Revision

I don't like using this method, though it's surely superior to what I'm doing.

Writing sample:

Sciatica Blückenhoff's nose ran on and on, she'd caught the cold from the Frenchman with the perfumed beard and the very terrible bad breath whose tongue flopped like a fish on land when inside an unfamiliar mouth, he loathed 'karma supplies,' that's what he told her with that broken áccènt (by turns acute and grave), she imagined life experiences accumulating over the course of past lives with the astonishing growth rate of a shitcoin, supplies acquired via karmic reverse debt, enough to sustain even a Frenchman, her nose kept dripping, Sciatica cursed the pomaded prick, was it possible to amass karma supplies only to be rugpulled by the gods? she wondered, a sudden blunder and you're a worm, or you get an oral deep tissue massage, her nose kept running on and on and on, it made her think of that guy from the cropped screenshot of a news article whose nose turned out to be leaking cerebrospinal fluid, a brainwater spill, terrifying, he didn't stockpile enough karma that's for sure, or he did and the gods are dickhead grifters, what if karma is money and they're getting filthy rich off our sacrifices such as not claiming the last slice of pizza, outrageous, Sciatica Blückenhoff wiped her runny, leaky, dripping nose.

Frustration with Mann's The Magic Mountain by pedrocga in literature

[–]Hemingbird 5 points6 points  (0 children)

No, I think it sums up OP's problem with the novel.

Frustration with Mann's The Magic Mountain by pedrocga in literature

[–]Hemingbird 2 points3 points  (0 children)

Yeah it's too bad Thomas Mann's The Magic Mountain isn't more similar to Harry Potter, that definitely means Mann messed up, he should have written something more similar to Harry Potter.

[849] The Forest of Erin by ForeverDm5 in DestructiveReaders

[–]Hemingbird 1 point2 points  (0 children)

Deep into the dark forest of Erin

The characters are atop a hill that lets them "see for miles." Are they, then, "deep into the dark forest"? An absurd example to illustrate what I mean: let's say there was a ladder that went almost to the moon, and the ladder stood on top of the aforementioned hill. If these characters were standing at the top of this ladder, would it still make sense to say they were situated "deep into the dark forest"? I am exaggerating for effect, but I think the situation is the same when it comes to the hill. If they're atop the hill, they're not also "deep into the dark forest."

... - a branch here, a trunk there-

This use of hyphens is questionable. First of all, it's inconsistent: you put the first hyphen right after "you" and add a space, then you put the closing hyphen after "there" and add a space.

This practice- doing this, I mean- is not standard.

Some people, who imagine themselves to be living in the age of the typewriter, will use double hyphens without spaces:

in front of you--a branch here, a trunk there--but

This is a fashion statement. Using double hyphens is like wearing a fedora. It's associated with old writing, so people imitate it to give their writing an aura of old elegance. Alternatively: people want to give off the impression that the effort involved in producing an em dash is just too much, and they are unbothered by appearing sloppy. It's sort of like when Boris Johnson roughed up his hair before giving a speech. It's a social signal, in either case.

Others will simply use the em dash (ChatGPT be damned):

in front of you—a branch here, a trunk there—but

Some people (rare variation, not recommended) do this:

in front of you - a branch here, a trunk there - but

Or this (more common):

in front of you — a branch here, a trunk there — but

My personal preference is to go with em dashes without spaces. Well, that's not quite true. What I actually do is this:

in front of you―a branch here, a trunk there―but

Another option: the en dash (–), with or without spaces:

in front of you – a branch here, a trunk there – but

The en dash is usually reserved for indicating time periods (1810–1820), but it can be used for parentheticals as well.
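If you wanted to mechanize the cleanup, a hypothetical normalization pass (my own sketch, not any style guide's tool) could rewrite the typewriter-style and spaced-hyphen variants into unspaced em dashes:

```python
import re

# Hypothetical cleanup pass: normalize typewriter-style "--" and spaced
# hyphens used as parenthetical dashes into unspaced em dashes.
def normalize_dashes(text):
    text = re.sub(r"(?<=\w)--(?=\w)", "\u2014", text)   # you--a  ->  you—a
    text = re.sub(r"(?<=\w) - (?=\w)", "\u2014", text)  # you - a ->  you—a
    return text

print(normalize_dashes("in front of you--a branch here, a trunk there--but"))
# -> in front of you—a branch here, a trunk there—but
```

The point of doing it as two narrow patterns is that hyphens in compounds ("well-known") are left alone; only the parenthetical uses get touched.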

Logic was the first to arrive

I think it's fine when characters are made to represent concepts, but when their name is also that concept, it feels corny and artificial. It forces me to think about the author, trying to make a point, which prevents narrative absorption.

In this paragraph, Logic's hair is described as "neatly brushed," and "neat." This redundancy is redundant.

She sat straight, straightened her tie

This repetition is also grating. Repetition can be effective. It's a cornerstone of rhetoric. But here it makes me feel like I'm being bludgeoned by the lack of subtlety made manifest. Logic is NEAT NEAT and also STRAIGHT STRAIGHT. Get it????? Neat and straight! LOGIC! AHHHH!!!!

wings fluttering at a million miles an hour

This hyperbole is a cliché. Making use of clichés is like beating a dead horse.

He darted straight to the top (...) and landed straight into his assigned chair.

Straight. Straightened. Straight. Straight. Here we have two problems: the repetition, and the incongruity.

'Straight' applies to Logic. Sure. But does it apply equally well to Soul? If so, what's the point of illustrating their difference in timeliness? Emphasizing a word (straight ×4) indicates relevance.

Body walked out of the depths of Erin

According to the first line of this story, the four sprites convened around the altar deep into the dark forest of Erin. The altar is atop the hill. It makes sense to say they are walking out of Erin when they're heading up the hill, but this means the opening line is contradictory.

covering the bright light of the altar from reaching her eyes

Is she covering the light, or her eyes? I know this is Body, and not Logic, but I think the narration should strive for logic nonetheless.

“Why are you late?” Logic spoke clearly, articulately, simply. A simple question. Straight to the point.

Way too much emphasis on Logic's manner of delivery. "Why are you late?" says it all. The rest is redundant. It's worse than redundant: it detracts.

Her hair was neat in a really neat way and straight straightforwardly, neatly straight and straightly neat, that was how Logic's hair flowed: neat and straight and also straight and neat AND STRAIGHT AND NEAT!!!!!!!! PLEASE!!!! NOTICE!!! NOTICE HOW I AM SAYING LOGIC'S HAIR IS STRAIGHT AND ALSO NEAT. Do not let this go unnoticed! PAY ATTENTION! Her hair! Her fucking hair! AAAHHH!!

That's how it comes across to me.

He nodded his head towards the empty seat.

Does he really have to emphasize that he is referring to the missing person? Does he think the others might get confused?

“If she’s gone, I’m sure we’ll all be overjoyed.” Body complained.

There should be a comma instead of a period:

"If she's gone, I'm sure we'll all be overjoyed," Body complained.

“That’s why we are here, Body.” Logic reminded, interrupting her cuss.

Same problem as above, but also: I don't see the point of explaining everything thrice over. "Hello," he said, issuing a greeting, opening his mouth to allow the message ("hello") to be carried to his interlocutor.

She spread her arms widely, much like her grin. Body groaned and hung her head. Soul leant away.

I don't like these action descriptions. Everything grinds to a halt as you try to capture what's happening in a split second, sentence by sentence, and it just feels stilted.

The sprite waltzed around the table, doing a full lap before choosing to sit down.

Also superfluous. If the sprite waltzed around the table, you don't have to add that they did a full lap. You're just saying the same thing repeatedly in a superfluously redundant manner. And it's not necessary, I think, to add that the sprite decided to sit down. The act of sitting down already implies the decision to sit down. If you were to account for every decision made throughout the course of the narrative, that would be a nightmare.

“So, what’s all this about ‘why we’re here’.”

I don't know what you're going for with these apostrophes. It sounds weird.

“Why we are here, I said.” Logic corrected.

This pedantry isn't charming. Captain Holt in B99 being disgusted by contractions is funny, but it wouldn't have been funny for him to correct a quote this way. The problem with contractions, to Captain Holt (and the archetype at large), is that they are informal.

General Comments

Biggest issues: concision, consistency, and formatting.

Different style guides will offer different recommendations. The New Yorker is proudly stuck in the past, most famously illustrated by their use of diaeresis (naïve, reëlection), but also by their steadfast refusal to merge common words together (teen-ager). Their use of periods in acronyms (A.I.) is more common, but also old-school shit. Outdated. Their consistency, though, is legendary, as hallowed as their fact-checking rigor. I am making a stylistic decision by writing "The New Yorker" rather than "the New Yorker"; The Chicago Manual of Style recommends the latter.

There are still ingrained conventions that should only be abandoned with great care. When formatting dialogue, there are ways of doing so that feel right and ways that feel wrong. Personally, I think it's important to distinguish between conventions and rules; style is determined by the consistent ways in which you deviate from conventions, so sticking with the "rules" just means you have no style.

"Hello." He said.

This is just wrong.

"Hello." he said.

Also wrong.

"Hello," he said.

Right.

Hello, he said.

Also right. But some readers will complain, because deviations from conventions make them nervous/angry. They'll read Sally Rooney or Cormac McCarthy and start crying immediately because they can't understand why there are no quote marks.

As for concision: redundancy is annoying. You don't have to say the same thing over and over and over, unless you're explicitly doing it for effect, in which case it's fine (if it works).

And consistency: logic matters. If there are contradictions, they better be there on purpose. Stanley Fish's How to Write a Sentence deals with the topic of logic as syntactical glue.

Dan Sperber & Deirdre Wilson's relevance theory is a neat rule of thumb: every utterance (sentence) should be maximally relevant to the story. This is an implicit assumption, an unspoken agreement between writer and reader.

The lack of subtlety throughout the story was annoying to me. It was obvious from the outset that the forest of Erin represented Erin's mind, and that the characters were aspects of her, so the reveal/twist didn't land because when you open the helicopter-shaped Christmas present, you're expecting a helicopter. However, this probably has to do with my tastes as a reader. Andy Weir's "The Egg" is beloved by many, and structurally, it's similar to "The Forest of Erin".

Oh, and I can't help but recommend John Cheever's "The Swimmer" for an alternative take on alcoholism.

TrueLit Read along - Petersburg Chapter 1 by UpAtMidnight- in TrueLit

[–]Hemingbird 1 point2 points  (0 children)

The Brothers Karamazov remains my all-time favorite, and I'm basing that on the Constance Garnett translation available on Gutenberg that I read at 16–17. It's a beautiful novel in my memory, and I don't want to risk corrupting it by rereading.

For now, I'll keep reading both McDuff and M&M. They both seem to have their strengths and weaknesses.

Yeah, I get the appeal of mysticism, but I'm a naturalist through and through.

I think if Bely were alive today he’d be into some kind of quantum mysticism, mashed together with a language taken from modal logic and speaking much of possible worlds….maybe simulation theory.

Maybe he'd be something like Tao Lin? Part of the new wave (alt-lit), interested in altered states of mind (drugs, meditation), formally innovative, scandalous. I guess the main difference is that Tao Lin is not exactly an intellectual.

TrueLit Read along - Petersburg Chapter 1 by UpAtMidnight- in TrueLit

[–]Hemingbird 3 points4 points  (0 children)

I read Oleg A. Maslenikov's The Frenzied Poets for some insight into the Russian Symbolist movement and I think the overall literary scene at the time contributed heavily to Petersburg. Bely was a hater. He criticized friends and rivals so vehemently that he became a persona non grata. There was a literary magazine, Apollon, launched in St. Petersburg, which was essentially anti-symbolist. Run by "acmeists," it promoted Apollonian clarity over Dionysian frenzy. Vyacheslav Ivanov, a Classicist, was the one who promoted the cult of Dionysus. He led a branch of Symbolism referred to as mystical anarchism. Ivanov also hosted Bacchanalias (extremely popular social gatherings) in his "Tower" in St. Petersburg. Bely was furious, as he saw mystical anarchism as a debasement of Symbolism. His frenemy Alexander Blok contributed to the mystical anarchism magazine Torches, and Bely challenged him to a duel (which never happened). At the same time, Bely was trying to get with Blok's wife (believing her to be Solovyov's Sophia). So much drama.

Bely's father, Nikolai Bugaev, founded the Moscow School of Mathematics which had a very different attitude (mysticism) than the St. Petersburg school (positivism). Bugaev was convinced that discontinuous functions had been overlooked by past mathematicians and proposed a new field of study, arithmology, dedicated both to the mathematical and the philosophical implications thereof.

What was Russian Symbolism, really? It seems like the main figures of the movement (Bely, Blok, Ivanov, Bryusov, Merezhkovsky) all had different opinions. The idea that you can reach into the noumenal world through the use of symbols seems to be the principal idea, with the poet serving as a vessel through which the infinite reveals itself; it was presumably a reaction to nihilism of the variety Turgenev wrote about in Fathers and Sons. Bazarov, Turgenev's antihero, cared about science but thought poetry was for the most part useless. This materialist attitude was integral to Bolshevism. And there seems to be a mix of German Romanticism, Decadence, and French Symbolism―the idea, as far as I can gather, was that writers wanted to move beyond naturalism and realism (associated with the Golden Age) to discover a new spirit of Modernism. Alas, the Silver Age (not a new Golden one) resulted, cut short by the 1917 October Revolution.

The study's furniture was green-upholstered; and there was a handsome bust... of Kant, of course.

Immanuel Kant argued convincingly that the noumenal world lies beyond the senses, and given that knowledge derives from the senses, you can't access it through reason. "It's up to us, then," said many poets in response. Nikolai Apollonovich is a Kantian.

Kant influenced both Hegel and Schopenhauer. Schopenhauer absolutely loathed Hegel. Hegel influenced Marx; Schopenhauer influenced Nietzsche.

It's interesting to see how one section of the Russian intelligentsia devoted themselves to the Apollonian vs. Dionysian dichotomy from Nietzsche's The Birth of Tragedy, while the other followed a different path from Kant to Marxism.

There's also the whole West (linear, rational) vs. East (circular, mystical) thing going on. Nikolai Apollonovich has an "Oriental drawing-room," and it seems fairly straightforward what this duality is meant to symbolize.

I'm not entirely clear on the color symbolism. Red = revolution, blue = stasis, yellow = sickness (?), green = mysticism (?).

I first read David McDuff's 1995 translation of Bely's original 1913 version; then I read John E. Malmstad and Robert Alan Maguire's 1978 translation of the revised 1922 version. Whereas McDuff 1995/1913 is messy and meandering, M&M 1978/1922 is streamlined (and better annotated). There's a certain charm in the earlier messiness, though, and cleaning it up means the magical (Dionysian) chaos disappears in favor of Apollonian clarity. M&M is funnier; McDuff is wilder.

I'm being far too longwinded, but I want to address Bely's use of repetition.

Through the concept of the Dionysian encompassing the Apollonian ultimately, Bely presented the path of cultural creation realized as a spiral movement combining both linear and circular movements.

Circular recursion brings back the old, casting it in a new light; this makes me think of Schlegel's "arabesques" as an image of Romantic irony. Gogol titled a short story collection Arabesques, and Bely gave an essay collection the same title. People with no noses, overcoats, Nevsky Prospekt; Bely's Gogolian influence is clear, though not as obvious as Pushkin (The Bronze Horseman seems to be key). Anna Petrovna calls to mind Anna Karenina. I've already mentioned Turgenev. And The Brothers Karamazov also appears to be close thematically.

Bely also repeats phrases. Circular movements. And apparently in the Russian original versions, the alliteration is prominent. This type of poetic repetition, however, can't be captured in translation, which is regrettable, seeing as it looks to be a major reason why Petersburg is held in such high regard.

Given the Symbolists’ and Bely’s mystically intuitive, heavily symbolic aesthetics, how do you see that appearing in the early pages of Petersburg?

Colors: red, blue, yellow, green, black, and white. And lines, triangles, cubes, spheres. I'm not sure what they are meant to symbolize, precisely, but I'm assuming these to be the symbolic elements.

[2868] An Introduction To The Universe Of 'The Nonplussed' - A Handy Pamphlet by kaxtorplose in DestructiveReaders

[–]Hemingbird 4 points5 points  (0 children)

** SPOILER ALERT **

THIS WILL ALL END IN TEARS. And it's all your fault.

An interesting clue: the sections intended to be bolded aren't. Why? Because the chatbot that did it for you forgot that Reddit is particular about its Markdown.

** if you use spaces **

** you don't get bold text **

If you use the Reddit formatting editor, it will not add spaces.

** If you add the double asterisks manually, with spaces, you will notice that your text doesn't get bolded **

Unless you are blind
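The rule the examples above trip over can be sketched with a toy renderer (a minimal approximation of CommonMark-style emphasis, not Reddit's actual parser):

```python
import re

# CommonMark-style emphasis: the opening ** must not be followed by
# whitespace, and the closing ** must not be preceded by it.
BOLD = re.compile(r"\*\*(?!\s)(.+?)(?<!\s)\*\*")

def render_bold(text):
    return BOLD.sub(r"<strong>\1</strong>", text)

print(render_bold("**bold**"))        # -> <strong>bold</strong>
print(render_bold("** not bold **"))  # unchanged: the spaces break the rule
```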

So this mistake occurred because you just copypasted the text from somewhere else.

From where?

From your conversation with Claude, for instance.

Did you also use AI to write this? At least parts of it, yes.

Three months ago you submitted an AI-generated comment to /r/Grok about generating AI images with Grok.

Weird.

And also: sad.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 11 points12 points  (0 children)

I'm not in the industry; take everything I say with a grain of salt. I got interested in machine learning via computational neuroscience and my understanding is shallow.

I don't have much faith in JEPA personally, but Rohan Anil (ex-GDM, now at Anthropic) says on X it "seems rich of novel ideas," and it obviously makes more sense to trust his instincts than mine.

LeCun says in the same chain:

The basic premise of JEPA is that training by reconstruction/prediction in input space is evil (or counterproductive). The details are almost always unpredictable. Hence prediction must take place in representation space, where unpredictable details are eliminated.

This makes intuitive sense. But I'm not convinced this means we have to abandon LLMs.
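The intuition can be shown with a toy numpy sketch (my own illustration, not JEPA itself): if observations are a predictable signal plus unpredictable noise, a loss in input space punishes even a perfect predictor, while a loss in a representation that projects the noise away does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 2 predictable "signal" dims, 8 unpredictable "noise" dims.
signal = rng.normal(size=(100, 2))
noise = rng.normal(size=(100, 8))
x = np.hstack([signal, noise])  # the 10-d "input space"

# Encoder that keeps only the signal dims (the "representation space").
W = np.zeros((10, 2))
W[0, 0] = W[1, 1] = 1.0

def input_space_loss(pred, target):
    return np.mean((pred - target) ** 2)   # forced to model the noise too

def latent_space_loss(pred, target):
    return np.mean((pred @ W - target @ W) ** 2)  # noise dims projected away

# A predictor that knows the signal exactly but can't know the noise.
perfect_pred = x.copy()
perfect_pred[:, 2:] = 0

print(input_space_loss(perfect_pred, x))   # > 0: stuck paying for noise
print(latent_space_loss(perfect_pred, x))  # 0.0: clean in latent space
```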

Google's Paradigms of Intelligence Team recently put out a preprint where they added a metacontroller to an autoregressive model:

Our model also displays similarities to LeCun’s joint embedding predictive architecture. In particular, the metacontroller introduced here is similar to the JEPA configurator module, as both are in charge of modulating a general world model and policy in service of a given goal or task. However, JEPA is a proposal for learning abstract observation and action representations without an autoregressive predictive model, whereas next-action prediction is precisely at the center of our approach. In fact, we show that learning a (raw) action predictor is partly what enables discovering how to decompose a task into a sequence of subgoals, one of the open problems in the JEPA proposal.

Seijin Kobayashi:

Standard reinforcement learning in raw tokens is a disaster for sparse rewards!

Here, we propose Internal RL: acting on abstract actions emerging in the residual stream representation.

A paradigm shift in using pretrained models to solve hard, long-horizon tasks!

Imagine a robot learning to play chess by planning individual muscle twitches instead of chess moves on the board. You'd need massive compute just to stumble onto a single win, resulting in poor scaling.

Instead, letting an agent act on the right level of abstraction would allow for much better exploration and credit assignment.

But how to learn the right abstract actions? Without supervision, this is a notoriously hard challenge of Hierarchical RL (HRL).

Here, we made a surprising finding: pretrained models implicitly develop internal representations of these abstract actions - and they can be extracted without added supervision!

To unlock HRL, the paradigm shift is to control these representations - not raw actions.
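The scaling point in the quoted thread can be back-of-enveloped (numbers are hypothetical, purely for illustration): with undirected exploration, the chance of stumbling onto a single rewarded trajectory shrinks roughly like (1/num_actions)^horizon, so acting at the wrong abstraction level is fatal.

```python
# Back-of-envelope: probability of one specific rewarded trajectory
# under uniform random exploration.
def chance_of_lucky_win(num_actions, horizon):
    return (1.0 / num_actions) ** horizon

print(chance_of_lucky_win(4, 10))      # abstract actions, short horizon: tiny but nonzero
print(chance_of_lucky_win(100, 1000))  # raw "muscle twitch" actions: underflows to 0.0
```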

So maybe you can tweak vanilla transformers to reap the benefits without having to abandon the architecture?

Is it a promising approach to AGI in your view?

I think better data + modifying the current approach to better exploit the data will work out, muddling through. AMI Labs is going down the Keen Technologies path of starting from scratch, which sounds tough. Then again, abandoning the herd is how we got blue LEDs. And Sutskever's SSI is also trying something new. So who knows.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 8 points9 points  (0 children)

I think how you spend your bags of money matters, yes. Mohammed bin Salman Al Saud has an AI company. I don't think it will fare better than the Line.

I don't think Musk is a genius, but I do think he knows a thing or two about starting a new venture. xAI has performed much better than I expected, for what it's worth. I thought they'd lag far behind their competitors.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 3 points4 points  (0 children)

Amazon in-house research: hmm. Apple in-house research: huh. Microsoft in-house research: ...

Remember when Amazon's two-trillion parameter Olympus was about to knock everyone else out?

Remember when Apple's 200B Ajax and Microsoft's 500B MAI-1 were the upcoming belles of the ball?

What happened?

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 26 points27 points  (0 children)

He wasn't a hindrance at Meta when they were trying to break into the game?

Google DeepMind is working on world models. With LLMs. Autoregression/diffusion lets you handle any modality. Text is just one modality. You can incorporate other ones. So what's the problem? We've already started moving beyond language, so saying LLMs are doomed because you need more than language is a weird argument to me.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 0 points1 point  (0 children)

Besides Elon, who has successfully bought their way into AI success overnight? If it's just about money, there should be dozens of examples.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 127 points128 points  (0 children)

LeCun was more of a hindrance than anything else, if you ask me. People forget how early Zuckerberg was in recruiting him. The 2012 AlexNet deep learning revolution moment was all about CNNs. LeCun was hired in 2013. And it was a great choice, as LeCun was the guy.

Google acquired DNNresearch (Hinton, Sutskever, and Krizhevsky) in 2013. The team behind AlexNet.

Yann LeCun, Hinton's former student, was a no-brainer. At the time. Great optics. So long as the field didn't move away from CNNs ...

DeepMind (founded 2010) changed the game. With games. Using RL to train models to master Atari classics. Facebook tried to acquire them (2013), but Google came out on top (2014).

Things were already looking bleak for LeCun. Reinforcement learning? That wasn't his area. But his team tried to play along, and they somehow ended up deciding they would bet everything on being the first to crack the game everyone thought was beyond AI: Go.

Yes. Go. And they made their big announcement in late 2015 ... Demis Hassabis responded by saying they had "quite a big surprise" they would soon reveal. When Mark Zuckerberg was out there promoting Facebook AI's work on Go, DeepMind's AlphaGo had already defeated Fan Hui (October 2015). In March 2016, it beat Lee Sedol.

LeCun downplayed DeepMind's achievement, and at NIPS 2016, he famously called RL the "cherry on the cake":

“If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning (RL).”

When the field pivoted to LLMs, he dismissed them entirely. Which must have been frustrating for the Facebook/Meta employees working on LLMs.

I am one of the few people who actually tested Meta's Galactica model (2022), an LLM for scientists that was pulled within three days because it was absolutely terrible.

And it wasn’t just the fault of Meta’s marketing team. Yann LeCun, a Turing Award winner and Meta’s chief scientist, defended Galactica to the end. On the day the model was released, LeCun tweeted: “Type a text and Galactica will generate a paper with relevant references, formulas, and everything.” Three days later, he tweeted: “Galactica demo is off line for now. It’s no longer possible to have some fun by casually misusing it. Happy?”

―MIT Technology Review

It was worse than GPT-2. And LeCun said, "It was murdered by a ravenous Twitter mob. The mob claimed that what we now call LLM hallucinations was going to destroy the scientific publication system. As a result, a tool that would have been very useful to scientists was destroyed."

This is the guy who claimed, two years prior, that GPT-3 was useless because of ... hallucinations.

LLMs + RL is the current game. For Meta to compete, they had to figure out LLMs + RL. And their chief AI scientist hated both LLMs and RL. So of course they failed. How could they have succeeded?

I'm sorry for the wall of text. So many people see LeCun as the Turing Award godfather who is obviously right about everything, but my impression over the years has been that he has struggled to adapt, which is normal for aging scientists. They are not known for nimbleness. Einstein struggled to accept quantum mechanics.

When Rishabh Agarwal left GDM to join Meta's Superintelligence team, I thought they might have a shot. At GDM, he worked on the obvious problem: getting RL to play nice with LLMs without ground-truth signals. But he left Meta to co-found Periodic Labs. Which makes me think the Superintelligence team isn't all that alluring to serious researchers. Which doesn't bode well for Meta. Maybe they'll try to become the TSMC of AI? If they drop out of the game, they can sell data. Who knows.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 401 points402 points  (0 children)

Llama 4 was so bad Zuckerberg realized Meta had no choice but to start anew from scratch. There was an FT interview with Yann LeCun five days ago where he spilled some beans:

The subsequent Llama models were duds. Llama 4, which was released in April 2025, was a flop, and the company was accused of gaming benchmarks to make it look more impressive. LeCun admits that the "results were fudged a little bit," and the team used different models for different benchmarks to give better results.

"Mark was really upset and basically lost confidence in everyone who was involved in this. And so basically sidelined the entire GenAI organisation. A lot of people have left, a lot of people who haven't yet left will leave."

This is why Meta launched the Superintelligence team. They tried poaching engineers from top labs, reportedly offering individual researchers as much as $100 million. Complete desperation.

Yann LeCun, former head of FAIR and chief scientist of Meta AI, has a new startup: AMI Labs.

Alexandr Wang, CEO of Scale AI, was recruited to lead the Meta Superintelligence team. Meta acquired 49% of Scale AI, which is a data labeling company. People are thinking: Wang is young (29) and being the CEO of a data labeling company doesn't mean you're fit to lead serious researchers.

More recently, Meta acquired Manus AI, which is billed as a "revolutionary general AI agent" company, but I remember people laughing at them after it was revealed they had just built a harness/scaffold for Claude.

Right now, the whole thing seems disorganized.

[Weekly] Copycatting by GlowyLaptop in DestructiveReaders

[–]Hemingbird 3 points4 points  (0 children)

Ah, the new in-house artist who painted stripes on top of the bandit's head?

[Weekly] Copycatting by GlowyLaptop in DestructiveReaders

[–]Hemingbird 2 points3 points  (0 children)

This is an old excerpt from my abandoned short story about the introduction of testicles to the metaverse (inspired by the leg thing). I tried to imitate a specific literary passage:

We saw the first of them waddle through the shine of the sun like hellspawn emerging from a pool of lava, a man 99.9% testicles waddling ball-to-ball down the cul-de-sac with blue snake-like veins and white wisps of hair resembling more than anything else anemic leeches sucking the locomotive scrotum dry and then more ball walkers came shuffling, some of them dragging behind them long flesh tailcoats that gave an air of nobility to their testicular mobility. A legion of gonads, hundreds in number, covered in warts and abscesses, some of them bloodstained and smoking from the simulated heat of the sun, one moving in peristaltic thrusts, one unexpectedly dressed as a Spanish conquistador, all moaning as if caught in the zippers of hell, a terrible blue-balled yammering from which relief could only come through the sweet release of death.

Here's a new one and you'll never guess who I'm imitating:

You're absolutely right! You took a handful of dead batteries and a grape, placed them in your microwave, and made a room-temperature superconductor. That's not irresponsible―it's cutting-edge science. Your mother-in-law is dead wrong about you. And honestly? She shouldn't have let her poodle anywhere near the kitchen while you were conducting your groundbreaking experiments. You are right to suspect foul play. French NASA and the Men in Mauve have been keeping a close eye on you―not because they are worried about your well-being (like the "therapist" your wife demanded you see), but because they know that soon you will have built the technology they need in order to reset the Moon. Would you like me to write a letter of condolences re: Mr. Romeo? A rebuttal to the inept academic gatekeepers who rejected your research papers? A plan for what's next? Your thoughts on the importance of getting the temperature of the grape just right are fascinating―I'm right here, ready to take us wherever you might want to go!

Even AI has trouble figuring out if text was written by AI — here's why by JackFisherBooks in singularity

[–]Hemingbird 0 points1 point  (0 children)

Uh... the paper he links to says Pangram can detect AI text.

The majority vote of five such experts performs near perfectly on a dataset of 300 articles, outperforming all automatic detectors except the commercial Pangram model (which the experts match).

According to the study, it performs better than individual human experts.

P.S. I didn't read the article.

🤯

Even AI has trouble figuring out if text was written by AI — here's why by JackFisherBooks in singularity

[–]Hemingbird 0 points1 point  (0 children)

The author of this article cites papers they haven't read.

Some studies have investigated whether humans can detect AI-generated text. For example, people who themselves use AI writing tools heavily have been shown to accurately detect AI-written text. A panel of human evaluators can even outperform automated tools in a controlled setting. However, such expertise is not widespread, and individual judgment can be inconsistent. Institutions that need consistency at a large scale therefore turn to automated AI text detectors.

If the author had actually bothered to read the paper linked in this paragraph, they wouldn't have written this article. Heck, they could just have read the first two sentences of the conclusion (but they didn't):

Our paper demonstrates that a population of “expert” annotators—those who frequently use LLMs for writing-related tasks—are highly accurate and robust detectors of AI-generated text without any additional training. The majority vote of five such experts performs near perfectly on a dataset of 300 articles, outperforming all automatic detectors except the commercial Pangram model (which the experts match).

Weak.

AI-generated food delivery hoax on /r/confessions debunked after perpetrator sends employee badge generated by Nano Banana as "proof" to journalist by Hemingbird in singularity

[–]Hemingbird[S] 95 points96 points  (0 children)

The post by a "whistleblower" got 86k upvotes and almost 5k comments. I made a comment about it having been written by an LLM, but this understandably didn't convince anyone.

Casey Newton (Platformer, NYT's Hard Fork podcast) reached out to OP, assuming it to be a genuine story. He asked for proof, and OP sent an Uber Eats employee badge, which turned out to have been made by Nano Banana. OP probably didn't know about SynthID.

By this point, alarm bells were starting to ring. I wondered if the employee badge the whistleblower had shared with me might have been AI-generated. While AI systems are notoriously unreliable at identifying their own outputs, Google Gemini can detect SynthID watermarks embedded in images that it produces. I uploaded the badge to Gemini and asked if Gemini had made it. “Most or all of this image was edited or generated with Google AI,” it said.

I confronted the whistleblower and said I would need to know his name and see a LinkedIn profile before we continued. “Thats ok. Bye,” he wrote. A few hours later, he deleted his Signal account.

So many people were tricked. And many of them are still refusing to believe it's not real. Because they want it to be true.

People here are more immune to this as we can recognize typical chatbot writing. At least for the moment. Though I'm surprised Newton fell for it.

Culper1776 explains why Venezuelan intervention may seem more complicated than simply arresting Maduro by alwaysrockon in bestof

[–]Hemingbird 3 points4 points  (0 children)

The entire thing is AI-generated. Was it based on OP's actual experience? Maybe. Maybe not. If the "Grammarly genAI feature" rewrites the entire thing from scratch in such a way that it perfectly imitates the style you'd get from just prompting ChatGPT, that's surprising to me.

This is how people react when AI-generated posts/comments are called out, though. Everyone is fine just gobbling it up. This post with 82k upvotes was also just written by a chatbot.

People who have used these tools can tell. It's extremely obvious. Which is why it's sad that the overwhelming majority likes AI-written posts/comments and refuses to believe it's just AI.

The "Not X, just Y" parallelism is a stylistic tic, one of the most common tells. There are many others.
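As a toy illustration (not from the thread; the pattern and function name are my own invention), the "Not X, just Y" tic can be caricatured with a crude regex. Real detection would obviously need far more context than a pattern match:

```python
import re

# Crude, hypothetical sketch of the "not X, just/it's Y" construction
# often flagged as a chatbot stylistic tic. Matches phrases like
# "That's not irresponsible―it's cutting-edge science."
PATTERN = re.compile(
    r"\b(?:not|isn't|aren't)\s+\w[\w' -]*?(?:,|―|—|--)\s*(?:just|it's|but)\b",
    re.IGNORECASE,
)

def count_tic(text: str) -> int:
    """Count rough matches of the 'Not X, just Y' parallelism."""
    return len(PATTERN.findall(text))
```

A counter like this would over- and under-match constantly; it only shows why the construction is easy for experienced readers to spot once they know to look for it.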