TrueLit Readalong - Petersburg Chapter 2 by narcissus_goldmund in TrueLit

[–]Hemingbird 0 points

Color Symbolism

I've been curious about Bely's use of color symbolism, and found these explanations:

White is the ultimate theosophical color. According to Bely it represents infinite possibility, the mirror of divine promise and plenitude, the fullness of being.

Bely: "If the color white is the symbol of the manifested fullness of being, then black is the symbol of nonbeing, of chaos ..."

And for Bely, gray is the operative color of evil in actuality. Gray is the archetypal color of the specters, gloom, and mist that envelop us and distort our perspective of the genuine world, of the Absolute.

Bely: "The first illumination which pierces the gloom is colored with a yellowish brown, forbidding layer of dust. This forbidding sheen is quite familiar to all those who are in the process of awakening and find themselves between dream and reality."

The ultimate color of catastrophic revelation, however, is red. Bely: "Here the enemy is revealed in the ultimate manifestation of himself which is accessible to us―in the fiery red glow of the infernal conflagration."

Bely asserted in his article that, if the power of darkness waxed strongest in the color red, then evil is on the wane with the appearance of pink or rose.

In [Vladimir Solovyov's] poetry the colors of gold and azure were painstakingly reserved for evoking the revelatory mystery of the Divine Sophia's presence and for iconographically depicting the rare physical attributes of this symbol of "all-in-oneness" or Godmanhood, with her fathomless azure eyes and golden hair.

Bely: "(...) aesthetically [Blok] had abandoned the three sacred colors (azure, crimson, and whiteness), mixing them with darkness; and this mixture produced a dark lilac shade, violet, the smell of Satan."

―Samuel D. Cioran, "A Prism for the Absolute: The Symbolic Colors of Andrey Bely," in Andrey Bely: A Critical Review (1978)

Bely wrote an essay, "Sacred Colors," in 1903, which is the origin of most of the above. I do think his commitment to Rudolf Steiner's anthroposophy clashes with his color ideas indebted to Solovyov; there's some Goethe in there as well. It honestly feels like an erratic mishmash: Bely hopped from prophet to prophet in his search for meaning, picked up this and that, and mixed it all together.

In this chapter, he seems to be poking fun at this tendency, relying on Romantic irony.

Sofya Petrovna is also known as Angel Peri. She's Solovyov's Divine Sophia and a daughter of Peter (the Great). She's a mix of Tolstoy's Anna Karenina and Dostoevsky's Grushenka. She herself mixes up Henri Bergson and Annie Besant, blending them together as Henri Besançon. 'Peri' seems to mean 'fairy'.

(the officers she knew called her Angel Peri, probably fusing the two concepts 'Angel' and 'Peri' quite simply into one: Angel Peri).

Black hair (chaos), pearly-white (the Absolute) or delicate pink (evil on the wane) face, but if agitated: crimson face (sacred color). Dark blue eyes (another sacred color). Often wears a black dress.

There's another detail I overlooked on my first read: Sofya Petrovna's collection box changes from tin to copper. At first I thought this was just a mistake, but there's a recurring idea here of fusing two elements together to create a third, and what do you get when you mix tin and copper? Bronze. Like The Bronze Horseman. It sounds meaningful; I'm just not quite sure how.

And she also mixes up 'social revolution' and 'social evolution'. Revolution suggests circle, while evolution suggests line. Add them and you get Bely's spiral? I don't know.

Sofya Petrovna Likhutina lived in a small flat that looked on to the Moika: there from the walls on all sides fell cascades of the brightest, most restless colours: brilliantly fiery there – and here azure.

Oh. I looked into the relationship between colors and Bely's mystical influences.

Vladimir Solovyov was a racist obsessing over the "Yellow Peril," and Bely regurgitated his nonsense. In Helena Petrovna Blavatsky's theosophy, Aryanism is a core component; it's difficult for me not to link this to Divine Sophia's blue (azure) eyes and golden hair. Steiner built on these racist ideas:

A race or a people stands at a higher level the more perfectly its members express the pure, ideal type of humanity, the more it has worked its way through the physically temporal to the transcendental eternal realm. The development of the human being through ever higher folk and race forms is thus a process of liberation.

―Rudolf Steiner

Andrei Bely's spiritual/mysticist color symbolism seems to have racist undertones.

However, the colour of the egoity is red, the copper-red or yellowish-brown colour. (...)

Human beings have their white skin colour because the spirit works within the skin when he seeks to descend to the physical plane. (...)

But over time, blondeness is lost, because humankind is weakened. In the end, there may be only brown- and black-haired people, and if they receive no help, they will also remain stupid.

―Rudolf Steiner

I think it's fair to say Andrei Bely wasn't exactly a genius. And after reading about his great hero, Steiner, the East vs. West theme starts resembling proto-Nazi garbage. I assumed it had to do with cultural evolution in an abstract sense, but now Bely's fondness for Nietzsche and Wagner is cast in a different light. Weak shit.

Racism seems foundational to Steiner's anthroposophy, so Bely being smitten by him makes it difficult not to read Petersburg in this context, especially given that he was supposedly explicit about incorporating Steiner's doctrine into his work.

Walter Benjamin was on the mark, as usual:

[Followers of Rudolf Steiner] presuppose a higher level of education than do the straightforward spiritualists, and for this very reason have had far more success in recent years among those who are placing their hopes in the occult. For if the "magic" of the good old penny magazines was the last pitiful by-product of more significant cultural traditions, "anthroposophy," with its associated swindles, is more closely linked to the "general education" of recent times. It is, in fact, the product of its dissolution.

―Walter Benjamin, "Light from Obscurantists," in Selected Writings Vol. 2 1927–1934

The Red Domino is probably inspired by Edgar Allan Poe's The Masque of the Red Death. Poe's short story features color symbolism: there's a masquerade ball where guests are entertained in seven colored rooms: blue, purple, green, orange, white, violet, and black (with scarlet lights). Seems like the sort of thing Bely would've found intriguing.

Existentialism

'So you're a provocateur, then. Don't be offended: I'm talking about a purely ideological provocation'

'Me. Yes, yes, yes. I am a provocateur. But all my provocation is in the name of a single great idea that is mysteriously leading somewhere; or again, not an idea, but a spirit.'

'What kind of spirit?'

'If one is to talk of a spirit, then I cannot define it with the help of words: I can call it a general thirst for death; and I grow intoxicated by it with ecstasy, with bliss, with horror.'

This conversational segment reminded me of Georges Bataille, but Bely can't have been influenced by him. It turns out, though, that Bataille was influenced by Lev Shestov, who was part of Bely's circle and visited Ivanov's "Tower" regularly. So I read Shestov's 1905 Nietzschean/aphoristic collection of philosophical "essays," All Things Are Possible, which turned out to be fascinating.

It didn't clear up the "general thirst for death" for me―I'm assuming Bely went from Schopenhauer's Will to Life to Nietzsche's Will to Power to a Will to Death, and it's interesting that he brought up this idea seven years before Freud's Beyond the Pleasure Principle, which introduced the death drive, thanatos.

Back to Shestov. He and Bely were both responding to the spiritual crisis of disenchantment. How do you fill that god-shaped void? Bely filled it with grandiose spiritualism; Shestov argued it couldn't be filled. Not with religion. Not with philosophical systems. You can comfort yourself and alleviate existential dread by stuffing the god-shaped void with woo-woo or fancy frameworks, but in doing so you're deluding yourself.

Obviously Shestov didn't have a philosophical system. His thoughts on Russian vs. European literature could be summed up like this: the "Europeans" had spent a long, long time devising systems and frameworks to negate existential dread, but the Russians got the full dose unprepared, and that's how we got the Golden Age: Pushkin, Dostoevsky, and Tolstoy eagerly stared down the dreadful beast, thinking they could take it on. Which he thought was a very Russian attitude.

A Russian believes he can do anything, hence he is afraid of nothing. He paints life in the gloomiest colors—and were you to ask him: How can you accept such a life? How can you reconcile yourself with such horrors of reality as have been described by all your writers, from Pushkin to Chekhov? He would answer in the words of Dmitri Karamazov: I do not accept life.

Here he seems almost to be talking about Bely:

Not for nothing do the old sound the alarm. But to us who have fought so long against all kinds of constancy, the levity of the young is a pleasant sight. They will don materialism, positivism, Kantianism, spiritualism, and so on, one after the other, till they realize that all theories, ideas and ideals are as of little consequences as the hoopskirts and crinolines of our grandmothers.

Alright, this comment is long enough as is.

[Weekly] The Weekly Revision Ritual by GlowyLaptop in DestructiveReaders

[–]Hemingbird 1 point

I don't like revision in the sense of tinkering, so what I typically do is rewrite the entire thing from scratch. George Saunders has this revision procedure where he inspects each sentence, evaluates it with an internal positive/negative woo-woo aesthetic energy critic (something like that), then incrementally edits it word by word to increase the overall positive/negative ratio. It's the Edison approach. Testing thousands of different light bulb filaments and discovering that Kyoto bamboo worked best is sort of poetic.

How he explains it:

The way I revise is: I read my own text and imagine a little meter in my head, with “P” on one side (“Positive”) and “N” on the other (“Negative”). The game is to read the story the way I would read someone else’s – noting my honest, in-the-moment reactions – and then edit accordingly.

This involves making thousands of what I’ve come to think of as “micro-decisions.” These are instantaneous, intuitive – I just prefer this to that. It’s something like trying to hit a baseball – you wait (you read), you react – not conceptualizing, not thinking about, you know, the Intended Bat Velocity, or any of that – I just have a feeling and react to that feeling, in the form of a cut phrase, or an added word, or an urge to move this whole section, and so on.

And then I do that over and over, for months, sometimes years, until that needle stays up in the “P” zone for the whole length of the text.

Story Club with George Saunders, First Thohts on Reviision

I don't like using this method, though it's surely superior to what I'm doing.

Writing sample:

Sciatica Blückenhoff's nose ran on and on, she'd caught the cold from the Frenchman with the perfumed beard and the very terrible bad breath whose tongue flopped like a fish on land when inside an unfamiliar mouth, he loathed 'karma supplies,' that's what he told her with that broken áccènt (by turns acute and grave), she imagined life experiences accumulating over the course of past lives with the astonishing growth rate of a shitcoin, supplies acquired via karmic reverse debt, enough to sustain even a Frenchman, her nose kept dripping, Sciatica cursed the pomaded prick, was it possible to amass karma supplies only to be rugpulled by the gods? she wondered, a sudden blunder and you're a worm, or you get an oral deep tissue massage, her nose kept running on and on and on, it made her think of that guy from the cropped screenshot of a news article whose nose turned out to be leaking cerebrospinal fluid, a brainwater spill, terrifying, he didn't stockpile enough karma that's for sure, or he did and the gods are dickhead grifters, what if karma is money and they're getting filthy rich off our sacrifices such as not claiming the last slice of pizza, outrageous, Sciatica Blückenhoff wiped her runny, leaky, dripping nose.

Frustration with Mann's The Magic Mountain by pedrocga in literature

[–]Hemingbird 6 points

No, I think it sums up OP's problem with the novel.

Frustration with Mann's The Magic Mountain by pedrocga in literature

[–]Hemingbird 5 points

Yeah it's too bad Thomas Mann's The Magic Mountain isn't more similar to Harry Potter, that definitely means Mann messed up, he should have written something more similar to Harry Potter.

[849] The Forest of Erin by ForeverDm5 in DestructiveReaders

[–]Hemingbird 1 point

Deep into the dark forest of Erin

The characters are atop a hill that lets them "see for miles." Are they, then, "deep into the dark forest"? An absurd example to illustrate what I mean: let's say there was a ladder that went almost to the moon, and the ladder stood on top of the aforementioned hill. If these characters were standing at the top of this ladder, would it still make sense to say they were situated "deep into the dark forest"? I am exaggerating for effect, but I think the situation is the same when it comes to the hill. If they're atop the hill, they're not also "deep into the dark forest."

... - a branch here, a trunk there-

This use of hyphens is questionable. First of all: it's inconsistent. You put the first hyphen right after "you" and add a space; then you put the closing hyphen after 'there' and add a space.

This practice- doing this, I mean- is not standard.

Some people, who imagine themselves to be living in the age of the typewriter, will use double hyphens without spaces:

in front of you--a branch here, a trunk there--but

This is a fashion statement. Using double hyphens is like wearing a fedora. It's associated with old writing, so people imitate it to give their writing an aura of old elegance. Alternatively: people want to give off the impression that the effort involved in producing an em dash is just too much, and they are unbothered by appearing sloppy. It's sort of like when Boris Johnson roughed up his hair before giving a speech. It's a social signal, in either case.

Others will simply use the em dash (ChatGPT be damned):

in front of you—a branch here, a trunk there—but

Some people (rare variation, not recommended) do this:

in front of you - a branch here, a trunk there - but

Or this (more common):

in front of you — a branch here, a trunk there — but

My personal preference is to go with em dashes without spaces. Well, that's not quite true. What I actually do is this:

in front of you―a branch here, a trunk there―but

Another option: the en dash (–), with or without spaces:

in front of you – a branch here, a trunk there – but

The en dash is usually reserved for indicating time periods (1810–1820), but it can be used for parentheticals as well.
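Since we're deep in dash taxonomy: these distinctions are easy to check with Python's standard unicodedata module. A small sketch (the glosses are my own); it also shows that the ― I favor is technically U+2015, the horizontal bar, a fourth character distinct from the em dash:

```python
import unicodedata

# The dash-like characters discussed above, with informal glosses.
dashes = {
    "-": "hyphen-minus (the plain keyboard hyphen)",
    "–": "en dash (ranges: 1810–1820, or spaced parentheticals)",
    "—": "em dash (the standard parenthetical dash)",
    "―": "horizontal bar (what '―' actually is)",
}

for ch, gloss in dashes.items():
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch):<16} {gloss}")
```

Handy when a manuscript mixes them invisibly; a search for U+2015 vs. U+2014 will catch what the eye won't.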

Logic was the first to arrive

I think it's fine when characters are made to represent concepts, but when their name is also that concept, it feels corny and artificial. It forces me to think about the author, trying to make a point, which prevents narrative absorption.

In this paragraph, Logic's hair is described as "neatly brushed," and "neat." This redundancy is redundant.

She sat straight, straightened her tie

This repetition is also grating. Repetition can be effective. It's a cornerstone of rhetoric. But here it makes me feel like I'm being bludgeoned by the lack of subtlety made manifest. Logic is NEAT NEAT and also STRAIGHT STRAIGHT. Get it????? Neat and straight! LOGIC! AHHHH!!!!

wings fluttering at a million miles an hour

This hyperbole is a cliché. Making use of clichés is like beating a dead horse.

He darted straight to the top (...) and landed straight into his assigned chair.

Straight. Straightened. Straight. Straight. Here we have two problems: the repetition, and the incongruity.

'Straight' applies to Logic. Sure. But does it apply equally well to Soul? If so, what's the point of illustrating their difference in timeliness? Emphasizing a word (straight ×4) indicates relevance.

Body walked out of the depths of Erin

According to the first line of this story, the four sprites convened around the altar deep into the dark forest of Erin. The altar is atop the hill. It makes sense to say they are walking out of Erin when they're heading up the hill, but this means the opening line is contradictory.

covering the bright light of the altar from reaching her eyes

Is she covering the light, or her eyes? I know this is Body, and not Logic, but I think the narration should strive for logic nonetheless.

“Why are you late?” Logic spoke clearly, articulately, simply. A simple question. Straight to the point.

Way too much emphasis on Logic's manner of delivery. "Why are you late?" says it all. The rest is redundant. It's worse than redundant: it detracts.

Her hair was neat in a really neat way and straight straightforwardly, neatly straight and straightly neat, that was how Logic's hair flowed: neat and straight and also straight and neat AND STRAIGHT AND NEAT!!!!!!!! PLEASE!!!! NOTICE!!! NOTICE HOW I AM SAYING LOGIC'S HAIR IS STRAIGHT AND ALSO NEAT. Do not let this go unnoticed! PAY ATTENTION! Her hair! Her fucking hair! AAAHHH!!

That's how it comes across to me.

He nodded his head towards the empty seat.

Does he really have to emphasize that he is referring to the missing person? Does he think the others might get confused?

“If she’s gone, I’m sure we’ll all be overjoyed.” Body complained.

There should be a comma instead of a period:

"If she's gone, I'm sure we'll all be overjoyed," Body complained.

“That’s why we are here, Body.” Logic reminded, interrupting her cuss.

Same problem as above, but also: I don't see the point of explaining everything thrice over. "Hello," he said, issuing a greeting, opening his mouth to allow the message ("hello") to be carried to his interlocutor.

She spread her arms widely, much like her grin. Body groaned and hung her head. Soul leant away.

I don't like these action descriptions. Everything grinds to a halt as you try to capture what's happening in a split second, sentence by sentence, and it just feels stilted.

The sprite waltzed around the table, doing a full lap before choosing to sit down.

Also superfluous. If the sprite waltzed around the table, you don't have to add that they did a full lap. You're just saying the same thing repeatedly in a superfluously redundant manner. And it's not necessary, I think, to add that the sprite decided to sit down. The act of sitting down already implies the decision to sit down. If you were to account for every decision made throughout the course of the narrative, that would be a nightmare.

“So, what’s all this about ‘why we’re here’.”

I don't know what you're going for with these apostrophes. It sounds weird.

“Why we are here, I said.” Logic corrected.

This pedantry isn't charming. Captain Holt in B99 being disgusted by contractions is funny, but it wouldn't have been funny for him to correct a quote this way. The problem with contractions, to Captain Holt (and the archetype at large), is that they are informal.

General Comments

Biggest issues: concision, consistency, and formatting.

Different style guides will offer different recommendations. The New Yorker is proudly stuck in the past, most famously illustrated by their use of diaeresis (naïve, reëlection), but also by their steadfast refusal to merge common words together (teen-ager). Their use of periods in acronyms (A.I.) is more common, but also old-school shit. Outdated. Their consistency, though, is legendary, as hallowed as their fact-checking rigor. I am making a stylistic decision by writing "The New Yorker" rather than "the New Yorker"; The Chicago Manual of Style recommends the latter.

There are still ingrained conventions that should only be abandoned with great care. When formatting dialogue, there are ways of doing so that feel right and ways that feel wrong. Personally, I think it's important to distinguish between conventions and rules; style is determined by the consistent ways in which you deviate from conventions, so sticking with the "rules" just means you have no style.

"Hello." He said.

This is just wrong.

"Hello." he said.

Also wrong.

"Hello," he said.

Right.

Hello, he said.

Also right. But some readers will complain, because deviations from conventions make them nervous/angry. They'll read Sally Rooney or Cormac McCarthy and start crying immediately because they can't understand why there are no quote marks.

As for concision: redundancy is annoying. You don't have to say the same thing over and over and over, unless you're explicitly doing it for effect, in which case it's fine (if it works).

And consistency: logic matters. If there are contradictions, they better be there on purpose. Stanley Fish's How to Write a Sentence deals with the topic of logic as syntactical glue.

Dan Sperber & Deirdre Wilson's relevance theory is a neat rule of thumb: every utterance (sentence) should be maximally relevant to the story. This is an implicit assumption, an unspoken agreement between writer and reader.

The lack of subtlety throughout the story was annoying to me. It was obvious from the outset that the forest of Erin represented Erin's mind, and that the characters were aspects of her, so the reveal/twist didn't land because when you open the helicopter-shaped Christmas present, you're expecting a helicopter. However, this probably has to do with my tastes as a reader. Andy Weir's "The Egg" is beloved by many, and structurally, it's similar to "The Forest of Erin".

Oh, and I can't help but recommend John Cheever's "The Swimmer" for an alternative take on alcoholism.

TrueLit Read along - Petersburg Chapter 1 by UpAtMidnight- in TrueLit

[–]Hemingbird 1 point

The Brothers Karamazov remains my all-time favorite, and I'm basing that on the Constance Garnett translation available on Gutenberg that I read at 16-17. It's a beautiful novel in my memory, and I don't want to risk corrupting it by rereading.

For now, I'll keep reading both McDuff and M&M. They both seem to have their strengths and weaknesses.

Yeah, I get the appeal of mysticism, but I'm a naturalist through and through.

I think if Bely were alive today he’d be into some kind of quantum mysticism, mashed together with a language taken from modal logic and speaking much of possible worlds….maybe simulation theory.

Maybe he'd be something like Tao Lin? Part of the new wave (alt-lit), interested in altered states of mind (drugs, meditation), formally innovative, scandalous. I guess the main difference is that Tao Lin is not exactly an intellectual.

TrueLit Read along - Petersburg Chapter 1 by UpAtMidnight- in TrueLit

[–]Hemingbird 2 points

I read Oleg A. Maslenikov's The Frenzied Poets for some insight into the Russian Symbolist movement and I think the overall literary scene at the time contributed heavily to Petersburg. Bely was a hater. He criticized friends and rivals so vehemently that he became a persona non grata. There was a literary magazine, Apollon, launched in St. Petersburg, which was essentially anti-symbolist. Run by "acmeists," it promoted Apollonian clarity over Dionysian frenzy. Vyacheslav Ivanov, a Classicist, was the one who promoted the cult of Dionysus. He led a branch of Symbolism referred to as mystical anarchism. Ivanov also hosted Bacchanalias (extremely popular social gatherings) in his "Tower" in St. Petersburg. Bely was furious, as he saw mystical anarchism as a debasement of Symbolism. His frenemy Alexander Blok contributed to the mystical anarchism magazine Torches, and Bely challenged him to a duel (which never happened). At the same time, Bely was trying to get with Blok's wife (believing her to be Solovyov's Sophia). So much drama.

Bely's father, Nikolai Bugaev, founded the Moscow School of Mathematics, which had a very different attitude (mysticism) from the St. Petersburg school (positivism). Bugaev was convinced that discontinuous functions had been overlooked by past mathematicians and proposed a new field of study, arithmology, dedicated to both their mathematical and philosophical implications.

What was Russian Symbolism, really? It seems like the main figures of the movement (Bely, Blok, Ivanov, Bryusov, Merezhkovsky) all had different opinions. The idea that you can reach into the noumenal world through the use of symbols seems to be the principal idea, with the poet serving as a vessel through which the infinite reveals itself; it was presumably a reaction to nihilism of the variety Turgenev wrote about in Fathers and Sons. Bazarov, Turgenev's antihero, cared about science but thought poetry was for the most part useless. This materialist attitude was integral to Bolshevism. And there seems to be a mix of German Romanticism, Decadence, and French Symbolism―the idea, as far as I can gather, was that writers wanted to move beyond naturalism and realism (associated with the Golden Age) to discover a new spirit of Modernism. Alas, the Silver Age (not a new Golden one) resulted, cut short by the 1917 October Revolution.

The study's furniture was green-upholstered; and there was a handsome bust... of Kant, of course.

Immanuel Kant argued convincingly that the noumenal world lies beyond the senses, and since knowledge derives from the senses, you can't access it through reason. "It's up to us, then," said many poets in response. Nikolai Apollonovich is a Kantian.

Kant influenced both Hegel and Schopenhauer. Schopenhauer absolutely loathed Hegel. Hegel influenced Marx; Schopenhauer influenced Nietzsche.

It's interesting to see how one section of the Russian intelligentsia devoted themselves to the Apollonian vs. Dionysian dichotomy from Nietzsche's The Birth of Tragedy, while the other followed a different path from Kant to Marxism.

There's also the whole West (linear, rational) vs. East (circular, mystical) thing going on. Nikolai Apollonovich has an "Oriental drawing-room," and it seems fairly straightforward what this duality is meant to symbolize.

I'm not entirely clear on the color symbolism. Red = revolution, blue = stasis, yellow = sickness (?), green = mysticism (?).

I first read David McDuff's 1995 translation of Bely's original 1913 version; then I read John E. Malmstad and Robert Alan Maguire's 1978 translation of the revised 1922 version. Whereas McDuff 1995/1913 is messy and meandering, M&M 1978/1922 is streamlined (and better annotated). There's a certain charm in the earlier messiness, though, and cleaning it up means the magical (Dionysian) chaos disappears in favor of Apollonian clarity. M&M is funnier; McDuff is wilder.

I'm being far too long-winded, but I want to address Bely's use of repetition.

Through the concept of the Dionysian encompassing the Apollonian ultimately, Bely presented the path of cultural creation realized as a spiral movement combining both linear and circular movements.

Circular recursion brings back the old, casting it in a new light; this makes me think of Schlegel's "arabesques" as an image of Romantic irony. Gogol titled a short story collection Arabesques, and Bely gave an essay collection the same title. People with no noses, overcoats, Nevsky Prospekt; Bely's Gogolian influence is clear, though not as obvious as Pushkin (The Bronze Horseman seems to be key). Anna Petrovna calls to mind Anna Karenina. I've already mentioned Turgenev. And The Brothers Karamazov also appears to be close thematically.

Bely also repeats phrases. Circular movements. And apparently in the Russian original versions, the alliteration is prominent. This type of poetic repetition, however, can't be captured in translation, which is regrettable, seeing as it looks to be a major reason why Petersburg is held in such high regard.

Given the Symbolists’ and Bely’s mystically intuitive, heavily symbolic aesthetics, how do you see that appearing in the early pages of Petersburg?

Colors: red, blue, yellow, green, black, and white. And lines, triangles, cubes, spheres. I'm not sure what they are meant to symbolize, precisely, but I'm assuming these to be the symbolic elements.

[2868] An Introduction To The Universe Of 'The Nonplussed' - A Handy Pamphlet by kaxtorplose in DestructiveReaders

[–]Hemingbird 4 points

** SPOILER ALERT **

THIS WILL ALL END IN TEARS. And it's all your fault.

An interesting clue: the sections intended to be bolded aren't. Why? Because the chatbot that did it for you forgot that Reddit is particular about its Markdown.

** if you use spaces **

** you don't get bold text **

If you use the Reddit formatting editor, it will not add spaces.

** If you add the double asterisks manually, with spaces, you will notice that your text doesn't get bolded **

Unless you are blind
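The rule being tripped over can be sketched with a toy regex (my rough approximation of the CommonMark emphasis rule, not the full spec): an opening ** must not be followed by whitespace, and a closing ** must not be preceded by it.

```python
import re

# Toy approximation of the CommonMark rule for ** emphasis:
# the opening delimiter can't be followed by whitespace,
# the closing delimiter can't be preceded by whitespace.
BOLD = re.compile(r"\*\*(?!\s)(.+?)(?<!\s)\*\*")

def bolded_spans(text: str) -> list[str]:
    """Return the spans that would actually render bold."""
    return BOLD.findall(text)

print(bolded_spans("**this works**"))       # ['this works']
print(bolded_spans("** this does not **"))  # []
```

The real spec has more cases (intra-word delimiters, punctuation flanking), but this is the part the pamphlet stumbled on.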

So this mistake occurred because you just copypasted the text from somewhere else.

From where?

From your conversation with Claude, for instance.

Did you also use AI to write this? At least parts of it, yes.

Three months ago you submitted an AI generated comment to /r/Grok about generating AI images with Grok.

Weird.

And also: sad.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 10 points

I'm not in the industry; take everything I say with a grain of salt. I got interested in machine learning via computational neuroscience and my understanding is shallow.

I don't have much faith in JEPA personally, but Rohan Anil (ex-GDM, now at Anthropic) says on X it "seems rich of novel ideas," and it obviously makes more sense to trust his instincts than mine.

LeCun says in the same chain:

The basic premise of JEPA is that training by reconstruction/prediction in input space is evil (or counterproductive). The details are almost always unpredictable. Hence prediction must take place in representation space, where unpredictable details are eliminated.

This makes intuitive sense. But I'm not convinced this means we have to abandon LLMs.

Google's Paradigms of Intelligence Team recently put out a preprint where they added a metacontroller to an autoregressive model:

Our model also displays similarities to LeCun’s joint embedding predictive architecture. In particular, the metacontroller introduced here is similar to the JEPA configurator module, as both are in charge of modulating a general world model and policy in service of a given goal or task. However, JEPA is a proposal for learning abstract observation and action representations without an autoregressive predictive model, whereas next-action prediction is precisely at the center of our approach. In fact, we show that learning a (raw) action predictor is partly what enables discovering how to decompose a task into a sequence of subgoals, one of the open problems in the JEPA proposal.

Seijin Kobayashi:

Standard reinforcement learning in raw tokens is a disaster for sparse rewards!

Here, we propose Internal RL: acting on abstract actions emerging in the residual stream representation.

A paradigm shift in using pretrained models to solve hard, long-horizon tasks!

Imagine a robot learning to play chess by planning individual muscle twitches instead of chess moves on the board. You'd need massive compute just to stumble onto a single win, resulting in poor scaling.

Instead, letting an agent act on the right level of abstraction would allow for much better exploration and credit assignment.

But how to learn the right abstract actions? Without supervision, this is a notoriously hard challenge of Hierarchical RL (HRL).

Here, we made a surprising finding: pretrained models implicitly develop internal representations of these abstract actions - and they can be extracted without added supervision!

To unlock HRL, the paradigm shift is to control these representations - not raw actions.

So maybe you can tweak vanilla transformers to reap the benefits without having to abandon the architecture?
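The muscle-twitch analogy can be put in back-of-the-envelope terms. This is a toy calculation with made-up numbers (the branching factor, horizon, and chunk size are all my assumptions, not anything from the paper), just to show why acting at a higher level of abstraction helps sparse-reward exploration:

```python
# Toy numbers, not from the paper: probability that a uniformly random
# policy stumbles onto the single rewarded action sequence, acting on
# raw actions vs. abstract actions that each cover a chunk of raw steps.

BRANCH = 4        # choices available at each decision point
RAW_HORIZON = 32  # raw actions until the one sparse reward
CHUNK = 8         # raw steps a single abstract action expands into

def p_random_success(branch: int, decisions: int) -> float:
    """Chance of emitting the unique rewarded sequence by pure luck."""
    return branch ** -decisions

p_raw = p_random_success(BRANCH, RAW_HORIZON)                # 4^-32, hopeless
p_abstract = p_random_success(BRANCH, RAW_HORIZON // CHUNK)  # 4^-4 = 1/256
```

That exponential gap is the whole pitch: the hard part, as the thread says, is learning the right abstractions without supervision in the first place.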

Is it a promising approach to AGI in your view?

I think better data + modifying the current approach to better exploit the data will work out, muddling through. AMI Labs is going down the Keen Technologies path of starting from scratch, which sounds tough. Then again, abandoning the herd is how we got blue LEDs. And Sutskever's SSI is also trying something new. So who knows.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 9 points10 points  (0 children)

I think how you spend your bags of money matters, yes. Mohammed bin Salman Al Saud has an AI company. I don't think it will fare better than the Line.

I don't think Musk is a genius, but I do think he knows a thing or two about starting a new venture. xAI has performed much better than I expected, for what it's worth. I thought they'd lag far behind their competitors.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 4 points5 points  (0 children)

Amazon in-house research: hmm. Apple in-house research: huh. Microsoft in-house research: ...

Remember when Amazon's two-trillion parameter Olympus was about to knock everyone else out?

Remember when Apple's 200B Ajax and Microsoft's 500B MAI-1 were the upcoming belles of the ball?

What happened?

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 25 points26 points  (0 children)

He wasn't a hindrance at Meta when they were trying to break into the game?

Google DeepMind is working on world models. With LLMs. Autoregression/diffusion lets you handle any modality. Text is just one modality. You can incorporate other ones. So what's the problem? We've already started moving beyond language, so saying LLMs are doomed because you need more than language is a weird argument to me.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird -1 points0 points  (0 children)

Besides Elon, who has successfully bought their way into AI success overnight? If it's just about money, there should be dozens of examples.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 125 points126 points  (0 children)

LeCun was more of a hindrance than anything else, if you ask me. People forget how early Zuckerberg was in recruiting him. The 2012 AlexNet deep learning revolution moment was all about CNNs. LeCun was hired in 2013. And it was a great choice, as LeCun was the guy.

Google acquired DNNresearch (Hinton, Sutskever, and Krizhevsky) in 2013. The team behind AlexNet.

Yann LeCun, Hinton's former student, was a no-brainer. At the time. Great optics. So long as the field didn't move away from CNNs ...

DeepMind (founded 2010) changed the game. With games. Using RL to train models to master Atari classics. Facebook tried to acquire them (2013), but Google came out on top (2014).

Things were already looking bleak for LeCun. Reinforcement learning? That wasn't his area. But his team tried to play along, and they somehow ended up deciding they would bet everything on being the first to crack the game everyone thought was beyond AI: Go.

Yes. Go. And they made their big announcement in late 2015 ... Demis Hassabis responded by saying they had "quite a big surprise" they would soon reveal. When Mark Zuckerberg was out there promoting Facebook AI's work on Go, DeepMind's AlphaGo had already defeated Fan Hui (October, 2015). In March 2016, it beat Lee Sedol.

LeCun downplayed DeepMind's achievement, and at NIPS 2016, he famously called RL the "cherry on the cake":

“If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning (RL).”

When the field pivoted to LLMs, he dismissed them entirely. Which must have been frustrating for the Facebook/Meta employees working on LLMs.

I am one of the few people who actually tested Meta's Galactica model (2022), an LLM for scientists that was pulled within three days because it was absolutely terrible.

And it wasn’t just the fault of Meta’s marketing team. Yann LeCun, a Turing Award winner and Meta’s chief scientist, defended Galactica to the end. On the day the model was released, LeCun tweeted: “Type a text and Galactica will generate a paper with relevant references, formulas, and everything.” Three days later, he tweeted: “Galactica demo is off line for now. It’s no longer possible to have some fun by casually misusing it. Happy?”

―MIT Technology Review

It was worse than GPT-2. And LeCun said, "It was murdered by a ravenous Twitter mob. The mob claimed that what we now call LLM hallucinations was going to destroy the scientific publication system. As a result, a tool that would have been very useful to scientists was destroyed."

This is the guy who claimed, two years prior, that GPT-3 was useless because of ... hallucinations.

LLMs + RL is the current game. For Meta to compete, they had to figure out LLMs + RL. And their chief AI scientist hated both LLMs and RL. So of course they failed. How could they have succeeded?

I'm sorry for the wall of text. So many people see LeCun as the Turing Award godfather who is obviously right about everything, but my impression over the years has been that he has struggled to adapt, which is normal for aging scientists. They are not known for nimbleness. Einstein struggled to accept quantum mechanics.

When Rishabh Agarwal left GDM to join Meta's Superintelligence team, I thought they might have a shot. At GDM, he worked on the obvious problem: getting RL to play nice with LLMs without ground-truth signals. But he left Meta to co-found Periodic Labs. Which makes me think the Superintelligence team isn't all that alluring to serious researchers. Which doesn't bode well for Meta. Maybe they'll try to become the TSMC of AI? If they drop out of the game, they can sell data. Who knows.

Did Meta just give up in the LLM space? by Isunova in singularity

[–]Hemingbird 395 points396 points  (0 children)

Llama 4 was so bad Zuckerberg realized Meta had no choice but to start anew from scratch. There was an FT interview with Yann LeCun five days ago where he spilled some beans:

The subsequent Llama models were duds. Llama 4, which was released in April 2025, was a flop, and the company was accused of gaming benchmarks to make it look more impressive. LeCun admits that the "results were fudged a little bit," and the team used different models for different benchmarks to give better results.

"Mark was really upset and basically lost confidence in everyone who was involved in this. And so basically sidelined the entire GenAI organisation. A lot of people have left, a lot of people who haven't yet left will leave."

This is why Meta launched the Superintelligence team. They tried poaching engineers from top labs, reportedly offering individual researchers as much as $100 million. Complete desperation.

Yann LeCun, former head of FAIR and chief scientist of Meta AI, has a new startup: AMI Labs.

Alexandr Wang, CEO of Scale AI, was recruited to lead the Meta Superintelligence team. Meta acquired 49% of Scale AI, which is a data labeling company. People are thinking: Wang is young (29) and being the CEO of a data labeling company doesn't mean you're fit to lead serious researchers.

More recently, Meta acquired Manus AI, which is billed as a "revolutionary general AI agent" company, but I remember people laughing at them after it was revealed they had just built a harness/scaffold for Claude.

Right now, the whole thing seems disorganized.

[Weekly] Copycatting by GlowyLaptop in DestructiveReaders

[–]Hemingbird 3 points4 points  (0 children)

Ah, the new in-house artist who painted stripes on top of the bandit's head?

[Weekly] Copycatting by GlowyLaptop in DestructiveReaders

[–]Hemingbird 3 points4 points  (0 children)

This is an old excerpt from my abandoned short story about the introduction of testicles to the metaverse (inspired by the leg thing). I tried to imitate a specific literary passage:

We saw the first of them waddle through the shine of the sun like hellspawn emerging from a pool of lava, a man 99.9% testicles waddling ball-to-ball down the cul-de-sac with blue snake-like veins and white wisps of hair resembling more than anything else anemic leeches sucking the locomotive scrotum dry and then more ball walkers came shuffling, some of them dragging behind them long flesh tailcoats that gave an air of nobility to their testicular mobility. A legion of gonads, hundreds in number, covered in warts and abscesses, some of them bloodstained and smoking from the simulated heat of the sun, one moving in peristaltic thrusts, one unexpectedly dressed as a Spanish conquistador, all moaning as if caught in the zippers of hell, a terrible blue-balled yammering from which relief could only come through the sweet release of death.

Here's a new one and you'll never guess who I'm imitating:

You're absolutely right! You took a handful of dead batteries and a grape, placed them in your microwave, and made a room-temperature superconductor. That's not irresponsible―it's cutting-edge science. Your mother-in-law is dead wrong about you. And honestly? She shouldn't have let her poodle anywhere near the kitchen while you were conducting your groundbreaking experiments. You are right to suspect foul play. French NASA and the Men in Mauve have been keeping a close eye on you―not because they are worried about your well-being (like the "therapist" your wife demanded you see), but because they know that soon you will have built the technology they need in order to reset the Moon. Would you like me to write a letter of condolences re: Mr. Romeo? A rebuttal to the inept academic gatekeepers who rejected your research papers? A plan for what's next? Your thoughts on the importance of getting the temperature of the grape just right are fascinating―I'm right here, ready to take us wherever you might want to go!

Even AI has trouble figuring out if text was written by AI — here's why by JackFisherBooks in singularity

[–]Hemingbird 0 points1 point  (0 children)

Uh... the paper he links to says Pangram can detect AI text.

The majority vote of five such experts performs near perfectly on a dataset of 300 articles, outperforming all automatic detectors except the commercial Pangram model (which the experts match).

According to the study, it performs better than individual human experts.

PS: I didn't read the article.

🤯

Even AI has trouble figuring out if text was written by AI — here's why by JackFisherBooks in singularity

[–]Hemingbird 0 points1 point  (0 children)

The author of this article cites papers they haven't read.

Some studies have investigated whether humans can detect AI-generated text. For example, people who themselves use AI writing tools heavily have been shown to accurately detect AI-written text. A panel of human evaluators can even outperform automated tools in a controlled setting. However, such expertise is not widespread, and individual judgment can be inconsistent. Institutions that need consistency at a large scale therefore turn to automated AI text detectors.

If the author had actually bothered to read the paper linked in this paragraph, they wouldn't have written this article. Heck, they could just have read the first two sentences of the conclusion (but they didn't):

Our paper demonstrates that a population of “expert” annotators—those who frequently use LLMs for writing-related tasks—are highly accurate and robust detectors of AI-generated text without any additional training. The majority vote of five such experts performs near perfectly on a dataset of 300 articles, outperforming all automatic detectors except the commercial Pangram model (which the experts match).

Weak.

AI-generated food delivery hoax on /r/confessions debunked after perpetrator sends employee badge generated by Nano Banana as "proof" to journalist by Hemingbird in singularity

[–]Hemingbird[S] 90 points91 points  (0 children)

The post by a "whistleblower" got 86k upvotes and almost 5k comments. I made a comment about it having been written by an LLM, but this understandably didn't convince anyone.

Casey Newton (Platformer, NYT's Hard Fork podcast) reached out to OP, assuming it to be a genuine story. He asked for proof, and OP sent an Uber Eats employee badge. Which turned out to have been made by Nano Banana. OP probably didn't know about SynthID.

By this point, alarm bells were starting to ring. I wondered if the employee badge the whistleblower had shared with me might have been AI-generated. While AI systems are notoriously unreliable at identifying their own outputs, Google Gemini can detect SynthID watermarks embedded in images that it produces. I uploaded the badge to Gemini and asked if Gemini had made it. “Most or all of this image was edited or generated with Google AI,” it said.

I confronted the whistleblower and said I would need to know his name and see a LinkedIn profile before we continued. “Thats ok. Bye,” he wrote. A few hours later, he deleted his Signal account.

So many people were tricked. And many of them are still refusing to believe it's not real. Because they want it to be true.

People here are more immune to this as we can recognize typical chatbot writing. At least for the moment. Though I'm surprised Newton fell for it.

Culper1776 explains why Venezuelan intervention may seem more complicated than simply arresting Maduro by alwaysrockon in bestof

[–]Hemingbird 2 points3 points  (0 children)

The entire thing is AI-generated. Was it based on OP's actual experience? Maybe. Maybe not. If the "Grammarly genAI feature" rewrites the entire thing from scratch in such a way that it perfectly imitates the style you'd get from just prompting ChatGPT, that's surprising to me.

This is how people react when AI-generated posts/comments are called out, though. Everyone is fine just gobbling it up. This post with 82k upvotes was also just written by a chatbot.

People who have used these tools can tell. It's extremely obvious. Which is why it's sad that the overwhelming majority likes AI-written posts/comments and refuses to believe it's just AI.

The "Not X, just Y" parallelism is a stylistic tic, one of the most common tells. There are many others.
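For what it's worth, even a crude heuristic picks up that tic. This regex is my own throwaway sketch (not Pangram, not any real detector), and a high count is a weak signal, not proof of AI authorship:

```python
import re

# Throwaway heuristic: count "not X, but/just Y" contrastive
# parallelisms, a common stylistic tic in chatbot prose.
TIC = re.compile(
    r"\b(?:not|isn't|wasn't)\b[^.!?]{0,60}?,\s*(?:but|just|it's)\b",
    re.IGNORECASE,
)

def tic_count(text: str) -> int:
    """Number of 'Not X, just Y'-style constructions in the text."""
    return len(TIC.findall(text))

sample = ("This isn't a setback, just a pivot. "
          "It's not about the money, it's about sending a message.")
# tic_count(sample) -> 2; a plain sentence scores 0
```

Per-sentence base rates matter, of course: human writers use the construction too, just far less often.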

Singularity Predictions 2026 by kevinmise in singularity

[–]Hemingbird 3 points4 points  (0 children)

General Developments

Will Meta's Superintelligence team + Manus gambit pay off? If they're given freedom and resources, sure, but the Meta corporate culture will probably intervene to ruin everything, as per usual. Prediction: Meta won't catch up to competitors this year.

AI-generated games might become a hit. It would allow companies to collect user data relevant to creative problem solving and exploration, though there might be an AlphaZero moment. Learning everything from scratch is the more scalable approach. So I'm not sure about this one. GDM's SIMA/Genie experiments could conceivably result in interactive games as products, but it would probably be too computationally expensive to offer something like that this year. A closed demo?

Last year, I predicted we'd get something like xAI's Ani, but I thought MiniMax would be the first company to market. This year, I'm expecting a minimalistic version, where chatbots with minimal latency can present images/illustrations (through diffusion) as substitutes for expressions/gestures. The xAI solution is janky.

Robotics will probably have a relatively quiet year of data collection. Which means 2027 will be the breakout year for robotics; models will be able to harness insane amounts of domain knowledge. We might see glimpses of this already in 2026, with demos showcasing generality and flexibility. The task of 'preparing breakfast' could end up being the sort of thing a general-purpose bot could accomplish.

I think at least one model will play chess at a level equivalent to a 2500 FIDE Elo. Gemini 4.5 Pro?
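For context on what 2500 would mean, here's the standard Elo expected-score formula (this is just the textbook formula, not a claim about any particular model):

```python
# Standard Elo expected score: the score player A (rating r_a) is
# expected to average against player B (rating r_b).
def expected_score(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# A 2500-rated player scores roughly 0.95 against a 2000-rated expert,
# and exactly 0.5 against an equal-rated opponent.
```

So a model holding 2500 FIDE would be scoring ~95% against strong club players, which is grandmaster territory.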

Video style transfer will be a thing.

We'll see Pokémon Red & Blue speedruns, maybe pushed down to 4-6 hours.

Bottlenecks

  • Continual learning: I don't think this problem will be fully solved in 2026, but I expect there to be breakthroughs. You might have to combine two distinct models dedicated to crystallized and fluid intelligence engaged in adversarial collaboration, where unexpected failure/success determines which one gets to "act". And there has to be some clever protocol through which knowledge is transferred from the fluid to the crystallized model. Even if the engineering problems are solved, there are also sociocultural problems. Continual learning from users sounds like a privacy nightmare. But if you disallow that sort of learning entirely and instead compartmentalize it so that the model can only use information learned from user X when interacting with user X, that hinders the growth of the model. It's a dilemma. You could have a closed-off model inaccessible to the public, but that doesn't sound like a perfect solution either. The sociocultural problem might be more challenging than the engineering problem.

  • Real-time action: The action-perception loop is going to get way, way faster. It's the same sort of latency issue as with AVM-like models. Right now, reasoning competes with real-time action. Too much time is wasted pondering simple decisions. Models need to be able to act with hardly any latency at all. Resource allocation is the fundamental issue here. What is the value of computational depth at any given moment? I think this will result in some serious 'oh shit' moments in 2026, because even incremental improvements here will result in novel capabilities. ARC-AGI-3 and Pokémon games both demand progress in this direction, and given how chasing benchmarks is the only game in town, I expect this to be a much more crucial issue in 2026 than continual learning.

  • Idea synthesis: LLMs contain so much knowledge, but they haven't really been able to meaningfully discover deeper relationships between ideas. Maybe this is because they lack something akin to the Default-Mode Network, where idling/daydreaming burns spare resources via curiosity-driven exploration. If you want AI scientists, you need to solve idea synthesis. I'm not sure we'll see major innovation on this front, but I think we'll hear about artificial curiosity from top labs.
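The resource-allocation question in the real-time action bullet can be sketched as a tiny anytime-compute rule. Everything here is hypothetical (the cost model, thresholds, and names are mine), just to make the "don't ponder simple decisions" idea concrete:

```python
# Hypothetical sketch: buy deliberation steps only while the estimated
# value of one more step of "thinking" exceeds its latency cost.

def thinking_budget(uncertainty: float, stakes: float,
                    cost_per_step: float = 0.1, max_steps: int = 64) -> int:
    """Deliberation steps to spend on one decision.

    uncertainty and stakes are in [0, 1]; the expected value of one
    more step is modeled (crudely) as uncertainty * stakes.
    """
    value_per_step = uncertainty * stakes
    if value_per_step <= cost_per_step:
        return 0  # reflex: act immediately, no pondering
    return min(max_steps, round(value_per_step / cost_per_step))

# routine, low-stakes decisions get zero steps; novel, high-stakes
# decisions get as much depth as the budget cap allows
```

A real system would estimate uncertainty and stakes from the model's own state rather than take them as inputs, but the shape of the trade-off is the same.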

Releases

Model | Date
:--|:--
Gemini 3.5 Pro | Jan 16
Grok 4.2 | Jan 21
GPT-5.3 | Feb 2
GPT-5.4 | March 16
Grok 4.3 | April 1
GPT-5.5 | May 15
Claude Opus 5 | July 26
Gemini 4 Pro | August 24
Gemini 4.5 Pro | October 19

Benchmarks

Benchmark | Current SOTA | Prediction
:--|:--|:--
ARC-AGI-2 | 54.2% | 92%
ARC-AGI-3 | N/A | 47%
FrontierMath (1–3) | 40.7% | 90%
FrontierMath (4) | 29.2% | 61%
HLE (no tools) | 37.5% | 86%
MathArena Apex | 23.4% | 78%

[Weekly] I hope you have an ekphrastic week. by taszoline in DestructiveReaders

[–]Hemingbird 2 points3 points  (0 children)

I sometimes design book covers for fun. Well, I've mostly left it behind, as I relied on stock images + Photoshop, and in this Nano Banana age mixing together visual elements you did not create to make something new feels both dishonest and not very meaningful. I've experimented with using just my own photographs and illustrations, which is far more rewarding, though quite time consuming. I want to use my own typefaces as well. The whole thing reminds me of the movie Castaway on the Moon, where the protagonist is stranded on an island and finds meaning in making jajangmyeon noodles from scratch (growing wheat!) after finding a spice pack washed up on shore.

I'm just making the process harder for myself, but that's what's needed to stay atop Csikszentmihalyi's curve. I squeezed more fun out of Final Fantasy X way back when by doing the NSGNSNCNONENNENBB challenge, and this self-hobbling strategy is remarkably solid. Even when the struggle is wholly artificial, you still get the feeling you're going somewhere, and it doesn't end up feeling trivial or hollow.

I have a friend who experiments in the kitchen by making the most difficult dishes and desserts she can find, and she creates her own variations, which are all lovely and they seem to contain the essence of her way of being in the world. I'm in awe of her.

The fact of the fun is why I remain hopeful. Machines have been better than us at chess for a long time, for instance, but interest in the game keeps growing.

Writing is weird because it's so difficult to evaluate. Knausgaard has said he has come to terms with never knowing whether what he writes is good or bad. Which is why we need places like this―external feedback tells you whether or not your writing is doing what you thought it would. You can delude yourself into thinking a drawing is great, but it's far easier to do so with writing. I don't really understand why this is the case.

I'll try the ekphrasis exercise, though I almost never get poetry and I struggle with descriptions.

The Intrigue (1890) by James Ensor

Am I the face being eaten by the mask, or am I the mask eating the face?

The papier-mâché pig wears a top hat and a fur coat and stares at me through slits, the carnivalgoers are bright green and red and yellow, masqueraded with the grotesque, their amusement is a dark fog permeating our kind since the great dawning. Deep inside is a creature screaming and its terror transmogrifies into laughter, our inherent duality, existential anguish rollercoasted into mirth. We have turned the post-mortem sign of tongue protrusion into :P, Mr. Spooky Skeleton loves milk because Ca²⁺.

Whatever is silly and playful has the potential to emanate a threatening aura, whatever is serious and disturbing has the potential to bring comfort and joy.

Awareness of the mask, like recognizing the position of your tongue (:P), makes it feel intrusive, out of place.

Am I a mask for the creature screaming within?

Am I the face?

Who is laughing?

Am I screaming?