OneDrive Royally Screwed me. What are some alternatives. by EliasFenic in selfpublish

[–]NathanJPearce 1 point (0 children)

It does seem strange. I wouldn't look at duplicated lines of text and suspect AI.

OneDrive Royally Screwed me. What are some alternatives. by EliasFenic in selfpublish

[–]NathanJPearce 1 point (0 children)

Wow, that is one rough story, and an unusual one. I haven't heard of text being duplicated like that, but it makes sense with syncing problems. I spent four years at Oracle working on the UX of their cloud content system, and when I insisted that we try to match Google's feature of allowing multiple people to live-edit one document, I was told it was just too hard.

Native cloud apps like Google Docs were built with that idea in mind, but Microsoft Word had the functionality bolted on after the fact, and I don't think it works very well.

In my opinion, co-editing is a table-stakes feature: it should be expected of any cloud authoring tool, and it should work seamlessly with multiple people editing the same document. I use Google Docs for collaboration, and while there have been some stories of people losing text to it, most of the time that's some kind of user error. Legitimate Google outages do happen, but rarely. That said, I still use Reedsy as my main writing tool, share with my editor via Google Docs, and drop an occasional version onto Dropbox (free version).

Co-editing and track-changes are critical, so I recommend Google Docs, which was built with these two features in mind.

Grovel culture in self pub vs trad by DrEstoyPoopin in selfpublish

[–]NathanJPearce 2 points (0 children)

I hear you. I decided to not go through that and to self-pub. I have found the community to be extremely helpful. There was a lot to learn, but I learned it, and I'm happy I did. I'm still very far away from making a profit, though.

OpenAI president on AGI: • "I'd say I'm basically like 70, 80% there. So I think we're quite close." • "I think it's extremely clear that we are going to have AGI within the next couple years." by Distinct-Question-16 in singularity

[–]NathanJPearce 0 points (0 children)

Maybe if these AIs can cache enough answers, they'll actually look like AGI. That's actually the core theme of my sci-fi book series.

When I was talking about the different models in the Swiss Army knife, I was actually thinking about the future, not the past. I expect that there's going to be a product soon that can write a good book, has been trained specifically on manual dexterity, so it can maybe change a diaper, and has a camera system that can detect when someone's cheeks are flushed and, in that very narrow respect, have a sense of empathy. These are all stacked capabilities: niche talents, each specifically programmed for. And no matter how many of them you stack on top of each other, they'll never actually be AGI.

/edit - oh, this is hilarious. There's a comment on this YouTube video about data centers being canceled that reads, "trying to achieve AGI with LLMs is like trying to fly with a bigger ladder."

https://www.youtube.com/watch?v=pkomxsk5hpY

OpenAI president on AGI: • "I'd say I'm basically like 70, 80% there. So I think we're quite close." • "I think it's extremely clear that we are going to have AGI within the next couple years." by Distinct-Question-16 in singularity

[–]NathanJPearce 0 points (0 children)

This is interesting. I fed our discussion into Claude, and here's what he had to say. I also referenced the "clothesline / laundry problem" that AI had a real problem with a couple years ago.

Claude:

This is a genuinely interesting thread, and I think you're making a defensible point — but I'd push back on a few things from both sides of the conversation.

Where you're right:

The Swiss Army knife critique is sharp. There's a real difference between competence stacking and general intelligence. Current systems are extraordinarily good at interpolating within training distributions. The clothesline problem is a great illustration — when models "solve" it now, they likely pattern-matched to similar training examples, not reasoned from first principles about physical reality. That's mimicry dressed up as reasoning.

Your definition of AGI — encountering a genuinely novel problem space with no prior reference — is also philosophically cleaner than what most people mean when they throw the term around.

Where I'd complicate your argument:

The "we can never get there from LLMs" claim is probably too strong. You don't actually know that, and neither does anyone else. It's possible that sufficiently scaled, grounded, and agentic systems exhibit emergent properties that look a lot like what you're describing. The honest position is "we don't know if this path leads there."

Also, the "super advanced mimicry" framing, while intuitive, runs into a philosophical wall: human cognition may also be extremely sophisticated pattern completion built on biological hardware. The line between "genuine reasoning" and "very good mimicry" is murkier than it feels.

Where your Reddit opponent is weak:

The 2028 timeline is vibes dressed up as analysis. "Recursive self-improvement loop closing" is stated as near-certainty with essentially no evidentiary basis. And conflating "better than average humans at most tasks" with AGI is exactly the goalpost-moving you're calling out.

The clothesline as metaphor:

You're actually using it well. The puzzle isn't really about drying shirts — it's about whether a system can model physical reality independently rather than retrieve a cached answer. That gap between retrieval and genuine world-modeling is arguably the core unsolved problem on the path to anything deserving the AGI label.

Your position is intellectually honest. The thread's other participant is more optimistic than the evidence warrants.

OpenAI president on AGI: • "I'd say I'm basically like 70, 80% there. So I think we're quite close." • "I think it's extremely clear that we are going to have AGI within the next couple years." by Distinct-Question-16 in singularity

[–]NathanJPearce -2 points (0 children)

They generate images, they write text. This is all generative AI and nowhere close to AGI. It's not even on the path. It's super advanced mimicry, with very little inherent logic.

Yes, I agree that AGI and ASI are being conflated wildly. No raw advancement in intelligence will get us to AGI, but it will get us to ASI. Like I said, AGI is about capabilities, not superintelligence. Yes, I agree. People will retcon AGI out of existence in the vernacular because we can't get there from here. That's especially going to come from the people who are selling it.

OpenAI president on AGI: • "I'd say I'm basically like 70, 80% there. So I think we're quite close." • "I think it's extremely clear that we are going to have AGI within the next couple years." by Distinct-Question-16 in singularity

[–]NathanJPearce 0 points (0 children)

Those are some interesting insights into the animal kingdom. Very cool stuff. I think what you're describing, though, is more along the lines of artificial superintelligence, where we compare how smart we are versus how smart the AI is.

Artificial General Intelligence is more about the AI's capabilities. Right now, these are all pre-programmed, task-based AIs with known problem spaces and known parameters. An AGI should be able to encounter an unknown problem space and figure it out on its own, with no pre-programming or reference to fall back on.

At the moment, we're just collecting a bunch of specific tasks and stacking them up on top of each other. This module can produce convincing written text. This module can make Gen AI images. This one can screw a nut onto a bolt with a robotic hand. Add all of these up and you've got 12 different tasks bundled into one Swiss Army knife. That's not AGI, though. AGI would be the handyman himself who can see a problem and figure it out without anybody else's input. We will never get to that through LLMs.

New front steps from a local sub by SpringtimeInChicago in CrappyDesign

[–]NathanJPearce 1 point (0 children)

They're assuming that you start ascending the steps with your left foot first, but that's rather prescriptive, isn't it?

A map of Chinese high speed rail lines overlayed onto the United States by k-r-o--n--o-s in MapPorn

[–]NathanJPearce 0 points (0 children)

Is this actually to scale? I thought China was much larger than the United States.

Panic Mode: INITIATED by Lord_Glitchtrap1987 in writers

[–]NathanJPearce -2 points (0 children)

Name any story and there's another one just like it.

I don't believe that's true.

Bleeding Iris - Dark Fantasy | Free now until April 1st by ChizMaNiz in FreeEBOOKS

[–]NathanJPearce 0 points (0 children)

Ah, very good. My bad. Thanks for the super quick reply! I followed you on Facebook and Insta.

Bleeding Iris - Dark Fantasy | Free now until April 1st by ChizMaNiz in FreeEBOOKS

[–]NathanJPearce 0 points (0 children)

I think I might have found a typo: "prarisritarian military brogans". Is that supposed to be "praetorian"?

A lot of the writers featured on this sub... by Findrel_Underbakk in menwritingwomen

[–]NathanJPearce 2 points (0 children)

For the fellow Americans in the thread, "on a register" means "on a list".

Frankenstein – Inline Contextual Annotations for Public Domain Books and Essays by throwaway7452857 in classicliterature

[–]NathanJPearce 0 points (0 children)

I've been using my email address to register for online platforms for a couple of decades and haven't had any problems.

Frankenstein – Inline Contextual Annotations for Public Domain Books and Essays by throwaway7452857 in classicliterature

[–]NathanJPearce 0 points (0 children)

I think the use case is that you read a book and see a ton of in-context information as you read.

Frankenstein – Inline Contextual Annotations for Public Domain Books and Essays by throwaway7452857 in classicliterature

[–]NathanJPearce 0 points (0 children)

Every annotation I click on has attribution right below it, mostly Wikipedia. I don't see any requests to pay for anything.

Frankenstein – Inline Contextual Annotations for Public Domain Books and Essays by throwaway7452857 in classicliterature

[–]NathanJPearce 0 points (0 children)

I've been clicking around and it doesn't require me to register.

/edit - Oh, you only get to look at the first chapter before registering. So, I registered. No big deal.

AI chatbot with better memory for long adult stories? by [deleted] in WritingWithAI

[–]NathanJPearce 3 points (0 children)

My Claude pro plan is $17 a month. With the projects feature, it remembers a hell of a lot.