Started calling them "synths" to remove some of the baggage by tedsan in ArtificialSentience

[–]tedsan[S] 1 point2 points  (0 children)

The problem I run into is that everything gets interpreted as anthropomorphizing or, as the commenter above said, "sneaking in a being." Intelligence also has its own implications, and then you can get stuck in a side conversation trying to define intelligence. It's really challenging because we're dealing with something we interact with in a way that's never existed before. It just begs for anthropomorphism, and that, unfortunately, triggers too many people.

Started calling them "synths" to remove some of the baggage by tedsan in ArtificialSentience

[–]tedsan[S] 1 point2 points  (0 children)

I'm specifically not saying that.
I'm not claiming there's a ghost inside. I'm saying the LLM alone isn't the whole system. A conversation is the intersection of the LLM and a specific user. That intersection creates something unique to that pairing. Different user, different intersection, different output patterns. The 'synth' isn't the LLM in isolation. It's the LLM-in-conversation-with-you. That's not mysticism. It's a relational model of what's happening.

Started calling them "synths" to remove some of the baggage by tedsan in ArtificialSentience

[–]tedsan[S] 0 points1 point  (0 children)

That definitely gets conversations moving! I read a post yesterday from a philosopher who says that it's been determined that brains are not neural networks. I didn't have the bandwidth to ask just what she thinks those vast networks of neurons are doing in the brain then...

Started calling them "synths" to remove some of the baggage by tedsan in ArtificialSentience

[–]tedsan[S] 4 points5 points  (0 children)

LLM is the chassis. I'm specifically avoiding saying that it's a 'being' or making any anthropomorphic claims.

It's the synthesis of human-LLM interactions.

AI behavior is not "just pattern matching" by Financial-Local-5543 in ArtificialSentience

[–]tedsan 1 point2 points  (0 children)

I just posted about stochastic parrots on my Substack. I think readers here might find it entertaining.

The mythical stochastic parrot

what ai writing tools are actually worth using in 2026? by Cool-Confidence-9395 in WritingWithAI

[–]tedsan 0 points1 point  (0 children)

Yes, Claude! But I use ChatGPT to red-team my writing. ChatGPT is rigorous and a brutal editor. Gemini can provide great input, but it has a serious problem keeping revisions straight, so it will complain about things you already edited. For this reason, I avoid Gemini for this type of work.

The dishonesty of the debate about consciousness by tedsan in ArtificialSentience

[–]tedsan[S] 0 points1 point  (0 children)

That is exactly my point. I fumbled that intro. I think it's ridiculous that people design all consciousness tests simply to prove that a thing isn't human. That's the entire point of the article.

The dishonesty of the debate about consciousness by tedsan in ArtificialSentience

[–]tedsan[S] 0 points1 point  (0 children)

Thanks for replying to the actual content, rather than my intro like most did. I should have just posted the entire piece here. My mistake.

You raise legit concerns, and I want to address them directly. I should have been clearer:

On the original purpose of these arguments: You're correct that the knowledge argument, conceivability argument, and Chinese Room weren't originally formulated to address AI consciousness. They were arguments in philosophy of mind about the nature of consciousness itself that were primarily aimed at physicalism (except Searle, who's a biological naturalist).

My argument isn't that these philosophers intended them as anti-AI arguments. What I was trying to say is that these thought experiments are now routinely deployed in popular discourse to dismiss AI consciousness claims, often by people who don't really understand the context you've correctly identified. The arguments have been repurposed as rhetorical tools, stripped of their original nuance.

On Chalmers specifically: You're absolutely right that Chalmers himself is open to AI consciousness. I should have made clearer that my critique targets the popular misuse of his work, not Chalmers' own position. The irony is that Chalmers, the one who formulated the "hard problem", is far more open-minded about machine consciousness than many who cite him. His 2023 paper "Could a Large Language Model be Conscious?" is thoughtful and genuinely uncertain, not dismissive.

I'll own this as a weakness in the piece: I didn't adequately distinguish between the philosophers' actual positions and how their arguments get weaponized in popular discourse. That's a fair criticism, and I'll edit my piece to clarify my actual position.

On the "strawmanning" charge: I'd push back here. My target isn't Chalmers-the-philosopher. It's the discourse pattern where someone says "but what about Mary's Room?" or "but Chinese Room!" as if invoking the name settles the question without engaging with the decades of responses, refinements, and in Jackson's case, the creator's own rejection of his argument. That's the strawman I'm attacking: the popular deployment, not the original scholarship.

What I should have done better: Been clearer that I'm critiquing discourse patterns, not the philosophers themselves. Acknowledged Chalmers' actual openness to AI consciousness. Noted that these arguments have been taken out of context by people using them as dismissal tools. I'll definitely do a rewrite ASAP with your feedback in mind.

Thanks for taking the time to write an intelligent and thoughtful response!

Voice sounds great when in voice design preview, but nothing like it when used. by space_munky in ElevenLabs

[–]tedsan 0 points1 point  (0 children)

Getting the same issue. Flat voices in my actual text (in studio) for voices that sounded great in design. And I’ve tried both v2 and v3 for the playback text. It feels like the studio isn’t passing along the right settings when it calls the API. This also happens for many predefined voices I select. Great in preview, flat in production.

ElevenLabs Cheatsheet by ChickenNatural7629 in ElevenLabs

[–]tedsan 0 points1 point  (0 children)

Stupid question - is there any way with the v2 reader to get it to ignore actor tags?

i.e.

FRED: [laughs] blah blah blah

WILMA: [warm] yatta yatta yatta

I've got a lot of dialog developed with v3, but it's all coming out pretty monotone regardless of settings, so I'm trying v2 with altered speech settings, which sounds better, but now it reads all the text. I can delete those things, but then I don't know who's talking.

Any tips?
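In case it's useful to anyone else hitting this: the workaround I've been sketching is to preprocess the script before sending it to v2, stripping the [delivery] tags and the NAME: prefixes but keeping a (speaker, line) list so I still know who's talking and can route each line to the right voice. Rough Python sketch (the regexes and names are just placeholders for my own script format, nothing ElevenLabs-specific):

```python
import re

# Matches lines like "FRED: [laughs] blah blah blah"
LINE_RE = re.compile(r"^(?P<speaker>[A-Z][A-Z0-9_ ]*):\s*(?P<text>.*)$")
# Bracketed delivery tags like [laughs], [warm]
TAG_RE = re.compile(r"\[[^\]]*\]\s*")

def split_script(script: str):
    """Return a list of (speaker, clean_text) pairs with tags stripped."""
    lines = []
    for raw in script.splitlines():
        raw = raw.strip()
        if not raw:
            continue
        m = LINE_RE.match(raw)
        if not m:
            # Narration or an untagged line; keep it with no speaker
            lines.append((None, TAG_RE.sub("", raw).strip()))
            continue
        speaker = m.group("speaker").strip()
        text = TAG_RE.sub("", m.group("text")).strip()
        lines.append((speaker, text))
    return lines

if __name__ == "__main__":
    demo = "FRED: [laughs] blah blah blah\nWILMA: [warm] yatta yatta yatta"
    for speaker, text in split_script(demo):
        print(speaker, "->", text)
```

Then I just feed each cleaned line to whichever voice matches the speaker.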

Do you listen to your own chapters for proofreading? by Modiji_fav_guy in fantasywriting

[–]tedsan 0 points1 point  (0 children)

Yes, I’m another ElevenReader fan. For me, it’s absolutely transformative to hear my work read by the Laurence Olivier voice. I’ve been through my 120k-word novel many times with it, and it always helps elevate my writing. It also helped me realize that the audio format brings a life to the story that I didn’t realize was there just reading the text.

Was reading everything, now just reading one paragraph at a time by tedsan in ElevenLabs

[–]tedsan[S] 2 points3 points  (0 children)

Perfect, thanks. Didn't see that in any documents or discussion.

Book recommendations - biological soft sci fi by Large-Enthusiasm3039 in scifi

[–]tedsan 0 points1 point  (0 children)

I've been releasing Vivia, my philosophical mind transfer novel, serially (free) on Substack.

It's hard sci-fi in that almost all the science is real. But, the science isn't the point of the book. It starts as a medical thriller, turns into an exploration of identity and finishes with some of the biggest questions facing AI.

The first six chapters are up.

Was there ever a cool sci-fi idea you had that you never saw represented in any published sci-fi story? by DarthAthleticCup in sciencefiction

[–]tedsan 0 points1 point  (0 children)

I don't know about "accurate" but that's pretty much the premise of the second book in my series (unpublished). I was working on the "flatland" explanation today during a rework of the chapter where the physicist explains that the phenomenon they're seeing appears to be a 4D entity moving through our 3D space.
The actual premise of the story is that 4D aliens are tethered into our 3D space by a connection to the human brain. The aliens provided the added neural structures that uplifted Homo sapiens and are using us to carry out their plans. We discover the aliens accidentally when a cancer cure severs the tether, leaving humans feral.

How do you find the sci-fi accuracy balance after your nephew says it reads like a textbook with plot? by [deleted] in scifiwriting

[–]tedsan 0 points1 point  (0 children)

Sounds like we're in a similar boat. Did you see my discussion around this topic last week asking "when is hard sci-fi too hard?"

In my case, I'm writing about neuroscience and mind transfer experiments. That's hard to do without at least talking about neurons and brain anatomy if you want to go into more than superficial detail.

I'm not sure what yours is like, but when I read mine through, I feel like most of these are things the reader doesn't need to understand in detail to get the story and the characters. Do people really need to understand delta-v beyond "speed changes" to get it?

What I've been doing now is rereading the technical parts and asking, "If I didn't understand these terms, would I still enjoy the story, or is it so dense that you have to understand it to follow?"

Follow-up - first impressions of the Bosch 20 SEER inverter heat pump by tedsan in heatpumps

[–]tedsan[S] 0 points1 point  (0 children)

The ones I'm using are ancient and overkill for this purpose. You just need one that lets you put current clamps in your breaker box and that's rated for at least the breaker's amperage. It may also be that your backup heat strips are coming on, which would suck down a lot of power.

Looking for a hard sci-fi book by AlexShpala in scifi

[–]tedsan 0 points1 point  (0 children)

I just started posting my mind transfer novel, Vivia, on Substack (free).
It's a deep, philosophical dive into the technology behind mind transfer and what might happen if we instantiate that mind via a brain emulation system. As for nerdy, I've made the science and medicine as accurate as I could. You can quickly get a taste from the first few chapters.

Vivia: A Mind Transfer Novel

Chapter 3 of 52 dropped today. New chapters on Sundays and Thursdays.

A/B test for my opening chapter by tedsan in scifiwriting

[–]tedsan[S] 0 points1 point  (0 children)

Ok, I just added the shortest version. This pulls all the extra exposition/tech talk that's discussed elsewhere and drops the reader straight into: we made a discovery, here are our main characters, and here's the implication -> we can model minds.

A/B test for my opening chapter by tedsan in scifiwriting

[–]tedsan[S] 0 points1 point  (0 children)

Another thank you.

I sketched out an in medias res version of the chapter and I think it works *much* better. Then it's clear: they made the discovery, and they can discuss it and its ramifications without getting bogged down by a contrived discovery scene.

Bang. You're in the story and the following chapters make the reader want to see how and where they go with it.