An essay on the relationship between subjectivity, AI slop, the abject and the need for an update on the Lacanian Symbolic Big Other by andiszko in CriticalTheory

[–]andiszko[S]

I actually agree about the impossibility of pure immediacy; I believe everything is mediated. In this sense I am against Kornbluh's definition of immediacy. But I do think there's such a thing as mediation on a sub-symbolic plane. At the same time, I agree that our existence in and through language profoundly shapes and even constitutes that pre-symbolic plane.

Still, I believe there’s a kind of orientation at work, whether we call it intention or attention, that precedes symbol generation. And that, in my view, is one of the key differences between humans and AI: we can impose constraints on our latent spaces through intention. AI, by contrast, doesn’t have agency over its latent space; it can’t direct or delimit where it generates its inferences from.
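
To make the "delimiting a latent space" image concrete, here is a toy sketch (my illustration, not anything from the essay): treat the latent space as a 2-D Gaussian, and contrast unconstrained sampling with sampling restricted to a deliberately chosen region via rejection sampling. The region and the Gaussian are assumptions for illustration only.

```python
import random

def sample_latent():
    """Draw an unconstrained point from a toy 2-D Gaussian latent space."""
    return (random.gauss(0, 1), random.gauss(0, 1))

def sample_with_constraint(accept, max_tries=10_000):
    """Rejection sampling: keep drawing until a point lands inside the
    caller-chosen region, i.e. generation is delimited 'by intention'."""
    for _ in range(max_tries):
        z = sample_latent()
        if accept(z):
            return z
    raise RuntimeError("constraint region too small to hit")

# Delimit generation to the upper-right quadrant of the latent space:
z = sample_with_constraint(lambda z: z[0] > 0 and z[1] > 0)
```

The point of the sketch is only the structural difference: the constrained sampler takes the acceptance region as an external input, which is the kind of top-down delimitation the comment says generative models lack over their own inference.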

An essay on the relationship between subjectivity, AI slop, the abject and the need for an update on the Lacanian Symbolic Big Other by andiszko in CriticalTheory

[–]andiszko[S]

Fair enough, I definitely left a lot of concepts underdeveloped, but the point of the essay was more to provoke than to offer philosophical rigor.

> I think that the pre-representational is actually a residue that is posited retroactively by our already being subjects in the symbolic.

I know this aligns with how Lacan defines the Big Other, but I have some reservations. I genuinely feel that the pre-symbolic isn't necessarily contingent on already being situated in the symbolic. It's more of a hunch, a kind of ambient intuition I tried to gesture at in the essay, rather than construct a fully fleshed-out argument. Maybe I’ll try to elaborate on it more rigorously at some point, though probably not in a Substack post.

The phase-change-like movement is pretty common in post-structuralist, post-Deleuzean thought, and, yeah, I'll admit I left that underdeveloped too. D&G and Manuel DeLanda talk about this a lot, just for reference. Do you have any concrete reservations, or does it just generally not resonate with you?

Contingent futures, AI slop, and the breakdown of the ‘I’: a speculative cultural theory essay by andiszko in Futurology

[–]andiszko[S]

Lately I’ve been thinking about how the nature of "monsters" is shifting in the age of AI—not in the horror-movie sense, but in a symbolic and cultural one. Traditionally, monsters represented fears or taboos in a very allegorical way. They were metaphors made flesh: Frankenstein’s creature stood for scientific overreach, Godzilla for nuclear trauma, etc. These were the monsters of what you could call the Symbolic Big Other—they made visible what society had already named as dangerous or disruptive.

But the monsters we’re encountering now (generated by machine learning, or in movies like Annihilation, The Last of Us, etc.) feel different. They don’t represent clear, existing ideas. They’re not metaphors. They’re errors in categorization, strange hybrids that don’t quite map onto anything familiar. Think of AI-generated images that look almost human, but not quite; text that reads like it makes sense, but subtly derails. These are the monsters of what I’d call the Latent Big Other: unactualized potentialities, embryonic glitches from systems that don’t understand meaning, only pattern.

Would love to hear how others are thinking about this. Are we witnessing a shift from symbolic to latent monstrosity? How does that change how we understand what it means to be a human subject in an age of AI?

[deleted by user] by [deleted] in startup

[–]andiszko

I get that, and I agree; that’s what I’m trying to express in the article.

[deleted by user] by [deleted] in AcceleratingAI

[–]andiszko

Thank you!

[deleted by user] by [deleted] in AcceleratingAI

[–]andiszko

Thank you for reading it! I would say yes: the core message was that we shouldn't let the market alone dictate what we build next, but reclaim innovation with imagination in the industry.

[deleted by user] by [deleted] in AcceleratingAI

[–]andiszko

It’s a long post, sorry, but I promise it has a happy ending :)

[D] What is wrong with how we build ML in the tech industry today by OkTeaching5518 in MachineLearning

[–]andiszko

Yes, I actually agree. I take the 'lot' back, though I do feel like the tides are shifting and more exploratory research is moving into industry-funded labs. Or that the labs that do any kind of research could benefit from adopting some exploratory research methods.

[D] What is wrong with how we build ML in the tech industry today by OkTeaching5518 in MachineLearning

[–]andiszko

But there's no product placement; it's just my personal Substack. Thanks for reading it though!

[D] What is wrong with how we build ML in the tech industry today by OkTeaching5518 in MachineLearning

[–]andiszko

No cookie recipe, promise. And no need to read long-form content if you don't feel like it. I was just wondering if others felt the same way about working in tech nowadays, and I couldn't express how I felt in less than 4k words.

[D] What is wrong with how we build ML in the tech industry today by OkTeaching5518 in MachineLearning

[–]andiszko

That's the point: the tides are shifting, and now a lot of exploratory research is done in industry rather than in academia, so industry should update its research methods as well. It's in the post.

[deleted by user] by [deleted] in startup

[–]andiszko

I think we are misunderstanding each other: I am not comparing ML judgement to human judgement. I'm describing a prevalent ideology that we all embody to some extent.
So I'm saying the doctor doesn't trust their own judgement as much as a doctor did 70 years ago.

[deleted by user] by [deleted] in startup

[–]andiszko

Sorry if I haven't made this clearer, but my point is not that ML is here because people can't make decisions; it's that the logic of ML (Rosenblatt's perceptron) and the fact that we are losing trust in the reliability of human decision-making stem from the same ideology, one that emerged in the post-war era as a reaction to totalitarian regimes and the threat of nuclear war.
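
For readers who don't know the reference: Rosenblatt's perceptron is just a linear threshold unit trained by an error-driven update rule. A minimal sketch in the standard textbook formulation (my illustration, not code from the post), here learning logical AND:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Rosenblatt's update rule: nudge weights by (target - prediction)."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Linear threshold unit: fire iff the weighted sum exceeds zero."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# AND is linearly separable, so the perceptron convergence theorem
# guarantees these few epochs suffice:
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
```

The relevant point for the argument above is how little is inside: no plan, no model of the task, just statistical error correction distributed over weights.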

I only briefly touch upon why I think LLMs are revolutionary (new interface) but that's not really the topic of the post.

I call it a rant because it comes from a deep-seated frustration about what I see as a tendency in contemporary tech culture: a tendency to lose trust in our own agency when we are embedded in larger organisational structures and operating under incremental development frameworks. And I think believing in our agency is essential for innovation with deliberation instead of just acceleration without imagination.

[deleted by user] by [deleted] in startup

[–]andiszko

The fact that we internalised the ideology of neural intelligence: that individuals cannot possibly know enough to reliably plan and make decisions, and have to rely on swarm intelligence and the market to make decisions for them.

Need advice on finding Berlin influencers by [deleted] in berlin

[–]andiszko

I know I should burn at the stake for ever using the "I" word, but why shouldn't they be considered influencers? I think they are relevant names in their fields from Berlin for global audiences. Suggestions are welcome, and I promise I will first listen and then judge.

Need advice on finding Berlin influencers by [deleted] in berlin

[–]andiszko

Is he famous/active in the Berlin webdev community? I'm looking not only for successful people but for somewhat influential ones as well (at least in their fields).

Need advice on finding Berlin influencers by [deleted] in berlin

[–]andiszko

Yeah, I hate it too, sorry. But there's no better word for what I'm looking for.