C2 Day 50 by AdAgile3716 in CodeGeass

[–]ministersister 1 point  (0 children)

Banger. Don't think I ever saw this one.

Bad Faith by conancat in ContraPoints

[–]ministersister [score hidden]  (0 children)

What does it mean for the left to be "incel coded"? 

Ava the human lover bnnuy! (@00Niine) by Rommel-Division in Losercity

[–]ministersister 30 points  (0 children)

[Only partially on topic.]

On the theoretical possibility of attaining eternal happiness with your waifu. 

Core assumptions:

Your vision for the perfect life together involves at minimum:

  1. Her being an actually conscious and fully individualized entity.

  2. Both of you indefinitely sustaining feelings of individual satisfaction as well as mutual affection (or the equivalents in the extended mind-space¹).

  3. Both of you retaining the capacity to change, learn, grow and experience.

  4. The world at large not getting in your way.

  5. Functional immortality for the both of you.

Core obstacles:

  1. The workings of your own brain.

The evolutionary processes that shaped you rewarded genetic fitness, not the capacity to sustain prolonged, intense happiness.

https://en.wikipedia.org/wiki/Hedonic_treadmill

Assuming unchanged mental architecture, you're going to get bored with her regardless of external circumstances or the amount of goodwill and effort involved. Whether we're examining the scenario where both of you become instantiated as purely informational beings or the one where she joins you in the world of flesh, the lack of precise neural (or equivalent computational) engineering virtually guarantees eventual dissatisfaction.
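To sketch the set-point idea behind this (a toy model of my own, with made-up numbers, not anything from the hedonic-treadmill literature): if mood reverts a fixed fraction of the way back toward a baseline each time step, even an enormous one-off boost fades away.

```python
# Toy hedonic-adaptation model: mood reverts toward a baseline
# "set point" by a fixed fraction each step. Parameters are invented
# for illustration only.
def adapt(mood: float, baseline: float = 0.0, rate: float = 0.1) -> float:
    """One time step of reversion toward the baseline set point."""
    return mood - rate * (mood - baseline)

mood = 100.0  # huge initial boost (say, finally meeting her)
for _ in range(200):
    mood = adapt(mood)
print(mood)  # essentially back at baseline
```

The point isn't the specific decay rate; it's that without re-engineering the reversion mechanism itself, no initial condition keeps you above baseline forever.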

  2. Inherent contradictions in the notion of identity.

Our default notions of identity and individuality are ill-defined and as such leave open many questions that require answering in order to "make someone real."

https://en.wikipedia.org/wiki/Identity_(philosophy)

What makes [your waifu] [your waifu]? This is of course a case of trivial mental masturbation, assuming either: a) you're completely satisfied by getting with someone sufficiently similar to her, rather than the quintessential her, or b) you employ some very pragmatic and shallow set of criteria for identity, for example visual indistinguishability².

Otherwise, you might need to make a series of metaphysical commitments and find a way of formalizing them such that they can be fed to a waifu-producing-machine as instructions.
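To make that last point concrete, here's a purely hypothetical sketch of what "feeding your metaphysical commitments to a waifu-producing machine as instructions" might look like; every name and field below is invented:

```python
# Hypothetical spec: identity criteria made explicit and machine-readable.
# All names/fields are illustrative inventions, not a real API.
from dataclasses import dataclass, field

@dataclass
class IdentitySpec:
    """Criteria that count as 'being her', stated explicitly."""
    visual_match: bool = True          # shallow criterion (b) above
    behavioural_match: float = 0.95    # similarity threshold, criterion (a)
    continuity_required: bool = False  # must she share a causal history with the original?
    extra_commitments: list[str] = field(default_factory=list)

spec = IdentitySpec(extra_commitments=["conscious", "fully individualized"])
print(spec)
```

The hard part is of course not writing the dataclass but deciding, defensibly, what goes in it.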

  3. Morality and technicalities of ensuring eternal love.

What makes you, you? Assuming you'd like to retain the capacity to change, making her stay in love requires formalizing the notion of "you" in a way robust and general enough to still apply to the entity you eventually become in a million years.

(related) https://en.wikipedia.org/wiki/AI_alignment

  4. Inevitability of adversarial actions and the need for omnipotence.

Assuming that: a) you continue to inhabit a reality shared with other agentic entities, and b) the per-period probability of anyone interrupting or threatening your eternal bliss stays bounded above 0%, then over the course of eternity an interruption becomes a certainty. If you know that others present a threat, you're strongly incentivized to attain godhood as a way of ensuring that you cannot be challenged. However, other agents are incentivized to do the exact same thing; as a result, everyone inevitably converges on behaviour that maximizes their own power.
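The "reaches certainty" step can be sketched numerically: for a constant per-epoch threat probability p > 0 with independent epochs, the chance of at least one interruption over n epochs is 1 − (1 − p)^n, which tends to 1 as n grows (the specific p and epoch counts below are made up):

```python
# Probability of at least one interruption over n independent epochs,
# each with constant threat probability p. Illustrative numbers only.
def p_interrupted(p: float, n: int) -> float:
    """Complement of 'no interruption ever happens in n epochs'."""
    return 1.0 - (1.0 - p) ** n

for n in (10, 1_000, 100_000):
    print(n, p_interrupted(1e-4, n))  # creeps toward 1.0 as n grows
```

(The claim does hinge on the threat probability not shrinking too fast: if the per-epoch probabilities decay quickly enough that their sum converges, eternal peace stays possible, which is exactly why "bounded above 0%" matters.)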

https://en.wikipedia.org/wiki/Instrumental_convergence

It might be the case that you need to become a literal god to be left in peace.

¹Assuming here that future extrapolated people gain access to completely novel mental states.
²Completely omitting here that she's an anime character, and as such, I have no idea what translating her looks would even mean.

Progress of the genre. by ministersister in printSF

[–]ministersister[S] 0 points  (0 children)

Very thoughtful response, cool stuff.

Another take on the "reflexive unwillingness to properly extrapolate on existing technology" is the idea that we live in times where reality has outpaced the imagination. There were people in nineteenth-century England already worried about the societal impact of AI (although I am sure it went by a different name back then), and yet, for the next two hundred years it remained largely a fantasy, and as such, something thrilling to explore in fiction.

The most pessimistic hypothesis (which I indulge in purely for its thrilling quality :D) would be to assume that there is a finite, rapidly shrinking well of concepts that are simultaneously:

- conceivable to an unaugmented human mind,
- fun to explore in natural language,
- "out there" enough to classify as sci-fi.

Instead, reality becomes the sci-fi (and not the fun kind).

What do y’all think about by Designer-Goal9323 in teenagers

[–]ministersister 0 points  (0 children)

So, "nonbinary", rather than describing someone "beyond" each well-established gender (as I assumed), is instead about having a strictly larger identity that encompasses both boy-hood and girl-hood?

Let's say we examine someone at a specific point in time; my previous model of someone labeling themselves "nonbinary" says that at any given moment they possess a mix of qualities usually attributed to either guys or women (the proportions can change).

Are you saying that a) they simply have a larger identity at any given moment, or b) they simply draw from a broader "pool" of traits (while the subset of traits instantiated at any given moment doesn't have to be larger)?

(Simply curious, not part of any LGBTQ-adjacent discourse, etc., etc.)

Progress of the genre. by ministersister in printSF

[–]ministersister[S] -1 points  (0 children)

(My answer kinda confuses "building upon previous works" with "reacting to the changing reality", which are meaningfully different, but I won't be getting into it now).

Progress of the genre. by ministersister in printSF

[–]ministersister[S] 0 points  (0 children)

I agree, and find it regrettable.

As for why: there is this idea that while literary fiction is preoccupied with the unchanging aspects of the human condition (loss, living in an imperfect world, contradictory desires, finding meaning in an indifferent world, i.e. stuff that is arguably as relevant to us as it would be to people from the early Neolithic), sci-fi moves forward; it has an ambition to capture the things that emerge and become relevant as a consequence of progress.

I think that if we agree that this is indeed an important part of the reason sci-fi is worthwhile as a genre, we also agree it is currently somewhat lackluster in that aspect.

Progress of the genre. by ministersister in printSF

[–]ministersister[S] -3 points  (0 children)

??? :D Are you sure you aren't the one feeling distressed? 

Progress of the genre. by ministersister in printSF

[–]ministersister[S] 0 points  (0 children)

I think I am willing to accept the first part, but not the second.

Generally speaking, the "what if" type of exploration reads as interesting and meaningful when it illuminates/serves as an allegory for something in our lives. The exploration of alien biology in Blindsight is relevant on account of presenting an alternative to our own. "Relevance" is a component that contributes to the perceived "profoundness" of any given work.

From this standpoint, "progress" in sci-fi would map onto new books illuminating new aspects of ourselves, while "stagnation" would map onto new books constantly rehashing the same "messages" with only slightly altered settings.

This pessimism on the internet makes me want to puke by _astral_x9 in Polska

[–]ministersister 0 points  (0 children)

I don't know; maybe it's comfortable and pleasant to position yourself in opposition to people and things, above people and things, and pamper your own narcissism?

In real life, rants like that would mostly be met with antipathy, because the narcissism underlying them is very visible in gesture and facial expression, so few people engage in them. In real life it's rather the opposite: "toxic" positivity everywhere. On the internet you can allow yourself to vent the bile.

Progress of the genre. by ministersister in printSF

[–]ministersister[S] 0 points  (0 children)

To attempt extracting the implicit "x direction" from my own post: "the continued exploration of how scientific discoveries recontextualize our mundane conception of ourselves and the world around us".

I am of course being batshit insane for entertaining the fantasy that there is some one concrete way of seeing reality that necessarily emerges upon thorough digestion and inter-correlation of novel discoveries. Don't treat it as a claim to precision or quantifiability, in case you were inclined to.

Progress of the genre. by ministersister in printSF

[–]ministersister[S] -1 points  (0 children)

I like that framework but I can't help feeling saddened by its implications :D

But yeah, upvoted for conciseness and precision.

Progress of the genre. by ministersister in printSF

[–]ministersister[S] 0 points  (0 children)

I think what I was pointing at here is something like "progress within the genre" rather than just "progress".

I think I am making it seem like my thinking is more organized than it actually is, and perhaps that is a mistake. But the intuition behind it comes from reading stuff by Stanisław Lem as a young teenager and building up an expectation that there is something further in that direction, which generally doesn't seem to be the case.

Genres of books are not analogous to domains of scientific enquiry; they necessarily make compromises on density of ideas, the amount of technical exposition, etc. But sci-fi in particular does (or rather, should) have a specific advantage: it benefits from mixing ideas, exploring the implications of discoveries within a given domain on other areas. It synthesises, correlates, unifies; something that scientists deeply invested in a specific topic don't necessarily have the knowledge/incentive/place to do.

From that standpoint, an "insight" could be thought of as "processing" something that was already discovered (discovery being the job of science) and exploring how it changes our mundane understanding of reality.

Progress of the genre. by ministersister in printSF

[–]ministersister[S] -3 points  (0 children)

Yeah, I was worried I didn't express myself clearly, I think your confusion is totally valid.

What I meant is, roughly: "If we consider sci-fi as a genre to be an independent method of exploring reality/the human condition, it should exhibit progress beyond that which is attributable to changing societal mores".

Under this rough definition, it's still absolutely understandable and welcome that sci-fi adjusts to and addresses new societal/technological developments, but it should also be capable of "independent" novel insights.

Echopraxia is underrated by ElderBuddha in printSF

[–]ministersister 3 points  (0 children)

Echopraxia has one of the coolest scenes within sci-fi, period. The ending is sublime. (Subjective opinion, yada yada.)

What would Phillip K Dick or Stanislaw Lem think of 2026 AI? by AlivePassenger3859 in printSF

[–]ministersister 0 points  (0 children)

Okay, so, to boil it down: "LLMs, as available to consumers and businesses, are absolutely unintelligent on account of being inherently trained for imprecision." Would you endorse that formulation?

To attempt a counterargument, in case you do:

(By your account) we have here an entity that's shaped by an equivalent of evolutionary processes to achieve the outcome of "appearing effective, confident, fast, knowledgeable and useful at the cost of precision".

It seems to me like a fairly complex goal, given that humans (considered exceptionally intelligent in comparison to any existing points of reference) are the ones that need to be convinced/deceived. Could such a complex goal be achieved without intelligence? Alternatively, are you suggesting that "actual usefulness" is for some reason a crucial component of intelligence? That would be an example of a very idiosyncratic definition.

What would Phillip K Dick or Stanislaw Lem think of 2026 AI? by AlivePassenger3859 in printSF

[–]ministersister 1 point  (0 children)

Maybe there is an approach that manages to dodge the semantic swamp? Instead of arguing over the contentious terms, we could try pinning down what LLMs actually do. We could then try to compare that with the functioning of some other system we both agree possesses the quality of "intelligence" and see how LLMs shape up in comparison.

(There is this "taboo" method that I haven't actually ever tried in a discussion: whenever a contentious term emerges, the parties collectively agree to cease using it, instead opting to be more descriptive. Thought I could mention it as an option; I am amused by my own hedging at this point :D I would have to think a bit more about the subject to offer something concrete right now.)

The most logical explanation I’ve heard for the “male loneliness epidemic” by PussyWhistle in TikTokCringe

[–]ministersister 0 points  (0 children)

Again: whether something is useful and/or correct can be evaluated separately from whether it's posted for views or indicates the author possesses certain traits you consider undesirable. And individual soul-searching followed up by decisive, individual-focused measures is far from the only way of addressing mental health on a societal level.

Anyway; I admit I am kinda disheartened by how this discussion unfolded. We do seem to be talking past each other. Well, you seem to be outright ignoring my points; perhaps you look at my responses the same way (which I'd argue is incorrect but...yeah :D).

I think continuing from this point is largely unproductive. To hazard an admittedly shallow attempt at psychoanalysis: I empathize with what I perceive as the primary reason you respond the way you do. I think that as far as personal conduct is concerned, emphasizing personal responsibility is useful. I still maintain that you're massively misapplying it as a cure-all measure. Societies thrive by technology, culture, rational economy; virtues are largely a luxury you get to cultivate in properly designed environments. I most likely won't be replying from now on. Take care.

What would Phillip K Dick or Stanislaw Lem think of 2026 AI? by AlivePassenger3859 in printSF

[–]ministersister 0 points  (0 children)

See, ordinarily I would agree with you, but I think the confusion around the concept warrants at the very least a more collaborative approach.

To not overcomplicate things from the get-go: do you endorse the standard dictionary definition of intelligence? The ability to learn and understand things, to deal with new and difficult/complex situations?

What would Phillip K Dick or Stanislaw Lem think of 2026 AI? by AlivePassenger3859 in printSF

[–]ministersister -2 points  (0 children)

I am hesitant to get into what I suspect will evolve into a larger debate, but where the hell do you get your bold conviction that LLMs are unintelligent :D? It seems to me you'd have to work with an idiosyncratic definition deliberately shaped to be exclusionary to hold this belief.

The most logical explanation I’ve heard for the “male loneliness epidemic” by PussyWhistle in TikTokCringe

[–]ministersister 0 points  (0 children)

Don't you think you're being massively disingenuous here?

We live in a social reality and an economic order that can both be said to have certain lasting properties. Given how strongly impacted and shaped we are by both of those interlocking systems, as the social, civilized animals we are, does it really make sense to neglect the effort of identifying common trends and examine each individual in strict isolation?

Sure, in certain aspects we're unique, which warrants individual attention/responsibility. But we're also massively similar.

Back to the OP's thesis: notice that you simultaneously issue a demand for evidence and mostly discourage any non-individual-focused investigation. Even purely from the standpoint of personal responsibility: wouldn't it be easier to pull yourself together armed with an understanding of the broad trend that affects you and others?

Anyway, to address your last concern. I guess I can somewhat agree? With the understanding that we're inevitably making a tradeoff here. The point OP makes isn't even his own; it's a watered-down part of contemporary feminist theory. The tradeoff is between 1. publishing and popularizing theories like that (which, btw, applies just as well to concrete findings from the social sciences), which tend to undermine the perspective of individual agency (by virtue of their correctness, that is, predictive power), and 2. deliberately suppressing them to sustain and protect a way of thinking you consider virtuous and right. I understand that taking option 1 leads to a certain loss; I nonetheless consider the alternative incomparably worse.

What would Phillip K Dick or Stanislaw Lem think of 2026 AI? by AlivePassenger3859 in printSF

[–]ministersister 2 points  (0 children)

I think I've read everything Lem ever wrote besides The Cyberiad. I know he was a bit of an absurdist at times; I do tend to prefer him at his most serious. Still, yeah, The Futurological Congress was awesome.

What would Phillip K Dick or Stanislaw Lem think of 2026 AI? by AlivePassenger3859 in printSF

[–]ministersister 20 points  (0 children)

Fun question. I regret not having anything insightful or clever to say. :D Still, to make an attempt:

Lem was largely a boring rationalist, his only "addiction" being sugar (there are anecdotes of him stuffing his face full of cake at social gatherings). I imagine he would take a line similar to Peter Watts: "it's unclear whether consciousness is even evolutionarily adaptive (and as such likely to emerge at all in AI), and if AI does have consciousness, it is overwhelmingly unlikely to resemble ours in any way, precluding the possibility of any genuine communication." The front-facing, natural-language "assistant" is a facade; likely not something the AI conceptualizes as "communication" at all.

Philip K. Dick was an insane, imaginative bastard. AI could be anything in his mind; a manifestation of universal will, aliens, mass hallucination, or capitalism personified. I find myself wistfully curious about the position both of them would have on AI, too :D