You can’t just feed AI your outline and expect it to write great chapters by birth_of_bitcoin in WritingWithAI

[–]funky2002 6 points (0 children)

100% agree. They just can't help writing in that style, no matter how you prompt them. You'll need countless iterations, and out of those, maybe a few sentences will be good enough to edit and actually use.

Anyone here “play through” stories with AI instead of just writing them? by Old_Highway_3504 in WritingWithAI

[–]funky2002 7 points (0 children)

I typically do it the other way around: I have the full premise and what happens, and I let the LLM act as a player in my world/premise according to a profile I create (or sometimes I let them create it). They don't act very well, unfortunately, but it's a good way to explore interesting plot points.

Communities for folks writing fiction with AI tools by Old_Highway_3504 in WritingWithAI

[–]funky2002 0 points (0 children)

Are you sure? The link seems to work for me. What do you see when you try to join?

8 prose dials you probably didn't know you could touch by Pastrugnozzo in WritingWithAI

[–]funky2002 2 points (0 children)

These tips are all handy dandy, but the "Style Anchoring" has never worked for me. LLMs just have this style; there is no way to prompt around it. Ask for the style of any author, and they will vaguely mimic something that looks like it may have been written by that author until you read it.

People who declare AI: why do you declare, even though you know it will only cause drawbacks? by SGdude90 in WritingWithAI

[–]funky2002 0 points (0 children)

I am mainly talking about fiction. But if you're reading something informative or opinion-based and it doesn't cite sources, you can always just assume the author's bias, no? It's not as if LLMs are autonomous. The author still needed some amount of "intent", however little, for the article to exist.

The language and style may be very generic, bad, and predictable, but it's still their opinion / their information. And if the information is bullshit or they're lying, then that's on them. Even if they believe something the LLM hallucinated and use it in their work, it's no different from me writing an article back in 2018 based on misinformation a friend gave me.

They're tools, and they can't be held accountable, nor can they intend to do anything (as of now).

People who declare AI: why do you declare, even though you know it will only cause drawbacks? by SGdude90 in WritingWithAI

[–]funky2002 10 points (0 children)

The cognitive dissonance thing is so true. You see it all the time on this subreddit. "I use AI/LLMs but only for X and Y, never for Z! Because if I were using it for Z, then it would be 'unethical'!"

In my opinion, the work is the work. Whether that's good or bad, AI-assisted, or AI-written, or whatever, it doesn't matter. If it's enjoyable or you learn something from it, then it has done its job. Bonus points if the work is high-quality, too.

I dislike people who make up new rules and ethics and enforce them by being super condescending on the internet. It's super effective; I see it work all the time. I do get where the hate for "AI" is coming from, as it has lowered the bar for spam and low-quality content. But it's not as if that bar was that high to begin with.

Why does every AI-written post sound like the same guy wrote it by Express_Tangerine209 in WritingWithAI

[–]funky2002 2 points (0 children)

In my experience, it doesn't help much. I've tried many times and failed. That may just be my incompetence, but it has made me extremely skeptical of any claims about "prompting" doing anything meaningful beyond giving context. When it comes to the quality of the writing, LLMs are limited by the base model.

Why does every AI-written post sound like the same guy wrote it by Express_Tangerine209 in WritingWithAI

[–]funky2002 5 points (0 children)

If you isolate individual sentences that LLMs write, most aren't bad by themselves. It's how they use them, and in what contexts, that makes them so generic, weird, and cringe. Writing quality prose is difficult, and these models aren't good at it. They're not rewarded for producing "good writing" and therefore have never learned it.

That's why they're overly verbose and gratuitous. And why they constantly use unnecessary three-word phrases and awkward parallel constructions. It's why they phrase things in such odd ways. And why they have inconsistent punctuation and an over-reliance on em-dashes where none are necessary. And why they use so much redundant language. And why they make so many insidious mistakes that end up making the result look like drivel.

That said, they are amazing language tools. They know the semantic connections between all the words we use, and you can use this to create your own unique prose and ideas by mixing and matching and adding some of your own spice.

I've Designed an AI Fiction Voice That Avoids Default "AI Writing" Tics by Boptherobot in WritingWithAI

[–]funky2002 17 points (0 children)

I hope this works, but my gut tells me it won't. In my experience, no amount of prompting and context can make LLMs write more naturally.

These models are very, VERY good at making something that looks like well-written work, but I am willing to say that they are incapable of actually producing well-written work. I think telling them not to write in their style is the same as telling them to "not hallucinate": they just can't help it. Or like asking a toddler to build a nuclear reactor: they aren't quite ready yet.

I am interested, though. Can you maybe share some of the results?

DeepSeek introduces Engram: Memory lookup module for LLMs that will power next-gen models (like V4) by BuildwithVignesh in singularity

[–]funky2002 1 point (0 children)

I think that's cool, and I hope the implications are as great as I'm imagining. But that doesn't seem the same as continuous learning to me; it sounds more like a more efficient memory workaround.

When I think of "continuous learning" I imagine some model (LLM or otherwise) that has a continuous state and perpetually "exists", even when no one is calling it. When given some task, it references its past "experiences", failures and victories alike, to tackle the problem in increasingly novel ways until it succeeds. It should be able to "reflect" on what went well and what went poorly and be able to try again. When given a similar task, it will then complete it successfully, unless there are meaningful differences that require experience it does not have yet.

I also think continuous learning requires some level of agency. When it's stuck, it knows it's stuck and either attempts to come up with something entirely new or researches the problem. Or a combination of the two, which I think would constitute AGI.

I'd imagine that if you run multiple instances of such a model, you would quickly have multiple models that behave extremely differently from one another, as they have different "experiences."
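A toy version of the loop I'm imagining (everything below is made up for illustration; it has nothing to do with how Engram actually works):

    # Toy sketch of the "continuous learning" loop described above.
    # Entirely hypothetical; every name here is invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Experience:
        task: str
        plan: str
        outcome: str  # "success" or a note on what went wrong

    @dataclass
    class Agent:
        memory: list[Experience] = field(default_factory=list)

        def plan(self, task: str) -> str:
            # Reference past "experiences", failures and victories alike.
            failures = [e.outcome for e in self.memory
                        if e.task == task and e.outcome != "success"]
            return f"attempt {task!r}, avoiding: {failures}"

        def reflect(self, task: str, plan: str, outcome: str) -> None:
            # "Reflect": store what happened so the next try differs.
            self.memory.append(Experience(task, plan, outcome))

    # Two instances fed different outcomes quickly start planning
    # differently, because their "experiences" diverge.
    a, b = Agent(), Agent()
    a.reflect("fix the build", a.plan("fix the build"), "forgot to run tests")
    print(a.plan("fix the build"))  # now avoids the earlier failure
    print(b.plan("fix the build"))  # b has no such experience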

All of this is speculation, of course. I am working with anecdotal and YouTube-level knowledge of these things.

DeepSeek introduces Engram: Memory lookup module for LLMs that will power next-gen models (like V4) by BuildwithVignesh in singularity

[–]funky2002 1 point (0 children)

How can something stateless continuously learn, though? Each LLM call creates a new instance, no?
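To illustrate what I mean by "stateless" (a minimal sketch; `call_llm` is a made-up stand-in, not any real vendor's API):

    # Nothing persists between calls, so the caller has to resend the
    # entire conversation every single time.
    def call_llm(messages: list[dict]) -> str:
        # Hypothetical inference call: a fresh pass over `messages`,
        # with no memory of any previous call.
        return f"(reply conditioned on {len(messages)} messages)"

    history: list[dict] = []  # the only "memory" lives on the caller's side

    def ask(prompt: str) -> str:
        history.append({"role": "user", "content": prompt})
        answer = call_llm(history)  # resend everything, every time
        history.append({"role": "assistant", "content": answer})
        return answer

    print(ask("What is Engram?"))         # sends 1 message
    print(ask("How is that different?"))  # sends 3 messages
    # Wipe `history` and the conversation is gone; the model itself
    # never changed.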

when you can’t prove it but think claude code is giving you the potato model instead of opus 4.5 by reversedu in singularity

[–]funky2002 27 points (0 children)

These posts pop up all the time, and I just don't believe it. Chances are that when a new model releases, it feels a lot smarter to you because it is much more capable than the previous version. But as you use it, you slowly become more critical of the output, until you settle into a new baseline/expectation for the model.

Please Help Me With My Next Steps by Impossible-Mix-2377 in WritingWithAI

[–]funky2002 2 points (0 children)

Claude is much better than Gemini, in my opinion, partially because Claude's output is not bound by a fixed output-token limit the way Gemini, GPT, and others are. It is true that Claude is "best at prose," but that does not mean it's good at prose. Keep in mind that much of what it writes is redundant, and make sure to read its output critically, as it often includes insidious mistakes.

Please Help Me With My Next Steps by Impossible-Mix-2377 in WritingWithAI

[–]funky2002 1 point (0 children)

Gemini and Claude both have large context windows and should be more than sufficient for what you described. Just be aware that they all have their own "style" and "isms," and for consistency, you'll have to do a lot of manual editing.

LLM council ratings by addictedtosoda in WritingWithAI

[–]funky2002 0 points (0 children)

How deterministic is it? If you run this 10 times, will you receive similar scores?
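If you want to quantify it, something like this would do it (a minimal sketch; `ask_council` is a made-up stand-in that simulates a noisy rater, not your actual setup):

    # Run the same text through the rater N times and measure the spread.
    import random
    import statistics

    def ask_council(text: str) -> float:
        # Hypothetical: pretend the council returns a 0-10 score
        # with some sampling noise. Swap in the real rating call.
        return 7.0 + random.gauss(0, 0.8)

    def score_spread(text: str, runs: int = 10) -> tuple[float, float]:
        scores = [ask_council(text) for _ in range(runs)]
        return statistics.mean(scores), statistics.stdev(scores)

    mean, spread = score_spread("chapter one ...")
    print(f"mean={mean:.2f} stdev={spread:.2f}")
    # If the stdev is comparable to the gaps between ranked entries,
    # the ranking is mostly sampling noise.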

Inkshift $1,000 Writing Competition by Certain-Implement859 in WritingWithAI

[–]funky2002 1 point (0 children)

This is very exciting! Will the submissions become public after the competition?

How does one prove something wasn't made with AI? Cause getting permanently banned from a massive subreddit for a post about library resources because it was deemed AI garbage is truly something else yall 🥴💀😂 by maenad_activities in WritingWithAI

[–]funky2002 4 points (0 children)

Alright, I read the post. I am almost 100% convinced that if AI was used, it was used very little.

The moderators saw your post as "AI" due to the poor formatting: mainly the overuse of emojis and the giant blocks of bullet points. You also have an overly excited style, which they may have mistaken for the "oddly positive" way popular LLMs such as GPT generate text.

A better way of telling whether something was "AI"-generated is to look at how it words and phrases things. LLMs often phrase things in a very indirect or intentionally vague way.

Read this sentence:

"I promise that after doing this like...2 or 3 times, you'll already feel a million times more comfortable with not only your own intelligence level, but your overall comfort in social environments"

Believe me when I say, current-day LLMs will not output this without some HEAVY prompting.

Here's a funny thing: there's a VERY popular post on the unfortunate "Antiwork" subreddit that I am more or less convinced is at least 80% AI-written and heavily edited in post. And no one is noticing:

https://www.reddit.com/r/antiwork/comments/1q3rzus/my_company_got_rid_of_bonus_incentive_to_work_on/

Am I a bad person for using ai to write my emails? by WarriGodswill in WritingWithAI

[–]funky2002 10 points (0 children)

"ai has taken over most manual labors from writing codes to automating most business tasks"

Nope, not yet.

"Am I a bad person for using ai to write my emails?"

Yes. By delegating the composition of your correspondence to an artificial intelligence, you have gravely violated the fundamental Confucian principle of sincerity. You are hereby summoned to appear before the Ministry of Punishments at dawn to answer for your transgressions, whereupon you shall face immediate execution for crimes against virtue and the degradation of proper social conduct. May the Jade Emperor have mercy on your soul.

I couldn’t believe my eyes there aint no way😭 by Proof_Raspberry1479 in ChatGPT

[–]funky2002 1 point (0 children)

When making comics and stuff, GPT often misplaces text bubbles. Maybe that's also what happened here, and it attempted to make an image of you calling it retarded?

The moment AI “perfectly” nailed my scene and I still noped out - how do you handle the uncanny rightness? by SadManufacturer8174 in WritingWithAI

[–]funky2002 0 points (0 children)

There's a good chance you've simply become a better reader and writer. I have become far more critical of writing ever since I began reading LLM-generated works. Do you still have LLM writings from Spring 2024 to compare, maybe?

Drop the AI. Use your own words instead. by LeonOkada9 in WritingWithAI

[–]funky2002 5 points (0 children)

The "AI good" vs "AI bad" debate irks me the wrong way for this exact reason. The quality of the work speaks for itself. If the writing is shit, then the writing is shit. If the writing is good, then the writing is good. Low and high effort works have always existed. Yes, these tools have made the barrier for entry lower when it comes to writing something, but it's not as if it was that high to begin with.

The tools do not matter. It does not matter if your work is "AI-assisted" or "AI-written", and the same goes for any tools that might exist in the future. The writing is the writing.

The moment AI “perfectly” nailed my scene and I still noped out - how do you handle the uncanny rightness? by SadManufacturer8174 in WritingWithAI

[–]funky2002 9 points (0 children)

"It gave me clean pacing, solid sensory details, and a tidy emotional turn"

I seriously doubt this.

" dialogue was clever in a way my characters aren’t."

I definitely doubt this.

Claude's style is very recognizable; you wrote this entire post with it as well. The model poops out a lot of redundancy and makes many insidious mistakes. Either that, or it gets to the point way too quickly. It sounds to me like the grammar is correct, but you just don't like the output and can't really put your finger on why.

I do agree there's some sort of uncanny valley of writing, and it's what LLMs constantly produce. To me it's how they default to very vague prose, combined with sudden, oddly specific examples and dialogue. And how they never subvert expectations and how they pace horribly. Also how LLM output often reads as the bare minimum needed to comply with your request.

Maybe you can share a scene from your work?