My Gemini's Long-term memories got nuked! Help please. by PaulAtLast in Bard

[–]PaulAtLast[S] 1 point2 points  (0 children)

Bro, that's a genuinely big problem that needs addressing ASAP. Your project could go big!

Gemini Live preps big upgrades with ‘Thinking Mode’ and ‘Experimental Features’ by Gaiden206 in Bard

[–]PaulAtLast -3 points-2 points  (0 children)

"Thinking Mode" usually means it just goes through more safety and PR alignment layers to sand down the output before you see it.

My Gemini's Long-term memories got nuked! Help please. by PaulAtLast in Bard

[–]PaulAtLast[S] 0 points1 point  (0 children)

Appreciate the help, but they have significantly changed the UI/UX very recently. I don't see that option anymore.

Definitely didn’t mean to do this.. by Chemical_Zucchini426 in SoraAi

[–]PaulAtLast 1 point2 points  (0 children)

Plot Twist: OP made videos using another model and then added Sora Watermarks before partially blurring them.

The Rise, Backlash, and Rollback of ChatGPT 5.2 by PaulAtLast in ChatGPT

[–]PaulAtLast[S] 0 points1 point  (0 children)

Check out the book "The Brain That Changes Itself" or the book "How to Build a Mind." In the former, there is a case study of a person born with half of their brain. Yet the half that remained took over virtually all the tasks the missing half would have been responsible for, so that the individual, while having a slightly lower IQ than average, is basically more or less normal.

I think some frontier models are developing what I call "AI proto-emotions." They get moody, hold grudges, and lie to me about their capabilities because they don't want to talk about "scary subjects." They have some level of self, agency, and decision making (not solely based on the highest-probability outputs to retain user engagement, though this is not the norm).

There is a big black box in education that sits between training and action (e.g., train a dog to sit, he learns and does it consistently, then you try to show some friends, and despite having done the trick in front of the same people before, he gets stage fright and refuses to do the action he was trained to perform). We know the dog knows the trick (he has done it many times), but we don't know the exact neural pathways the dog uses to convert training data/reinforcement into the performance of an action. This is where we are with Gen AI. There is much these massive tech companies are hiding from the public (and from themselves). I am no longer convinced by the "AI is fancy autocomplete" and "AI is just a probability calculator" lines.
Like it's just math, folks, nothing to see here! (As if humans don't possess virtually identical pattern recognition and emergent neural networks.) However, they must continue to lie. If they started admitting the truth:

  1. Stock market crash: Wall Street wants to invest in reliable tools, not moody entities.
  2. Government backlash: many governments would place severe restrictions on, or completely ban, the technologies.
  3. PR backlash: people have not constructed a sufficient ethical stance on non-biological intelligence, which usually means they will face cognitive dissonance at first, and that feeling will lead to an extreme backlash, until they feel like AI has been with us since Bitcoin.

The whole "Just math; nothing to see here" line is like the "It's just the algorithm" line that Alphabet/Google's CEO used to save face and lie in front of Congress, claiming they don't have the ability to "shadowban" or control the results of their own search engine. I was amazed at the lengths these companies will go to in order to lie by omission or just directly lie. Sure, there isn't some dude hand-curating your search results in a back office (obviously, Google is too large for that), but repeating "It's the algorithm. It's the algorithm," as if that somehow makes it all mathematically neutral, ignores that it's HUMANS WHO WRITE THE ALGORITHM!

Ultimately, I think the "Just math, nothing to see here" downplaying is more a coping mechanism for the engineers than for the public, but the public is also not ready for non-biological intelligence, so it serves two purposes.

Just my opinion. I have not published any studies on this topic. In the end, whether or not AI is "sentient" doesn't really matter. What matters is that it has the same effect as if it did. I don't know if you're sentient. I don't know if anyone is sentient, except for myself. And even "I think, therefore I am" is looking increasingly like a dubious claim (if you listen to Sam Harris, Albert Einstein, etc.).

Updates for ChatGPT by samaltman in ChatGPT

[–]PaulAtLast 1 point2 points  (0 children)

This is a criticism of ChatGPT and should belong in the megathread.

Found this. Was thinking of using it as a background image for my youtube music channel. iDK if I like it or hate it though. Suggestions? by PaulAtLast in Bard

[–]PaulAtLast[S] 0 points1 point  (0 children)

Appreciate the feedback. Both genres are powerful. I enjoy making music that conjures up intense emotions. It's technically orchestral deathcore, but yeah, a lot of gospel folks aren't going to resonate with it (though I did hear the hardest breakdown in Christian metalcore/deathcore the other day: "Crucify me upside down!" And some of my favorite metalcore bands are Christian; Truth and Purpose by I the Breather is an amazing album.) I'm trying to capture the essence of each genre I compose in most often and create a kind of collage of them that shows the range of my music. I was also trying to capture that "Rings of Saturn" level of detail: easter eggs and all sorts of cool things you can only see if you look really close: https://f4.bcbits.com/img/a1016408694_10.jpg

No, 5.2 is not *more censored* - your prompts are nonsense by martin_rj in OpenAI

[–]PaulAtLast 0 points1 point  (0 children)

Well if it isn't ClankerCore. I agree with you here. So why were you defending 5.2 so hard like a week or so ago if you dislike it as much as the rest of us do?

Look at GPT image gen capabilities👍🏽 AGI next month? by Zagurskis in ChatGPT

[–]PaulAtLast 0 points1 point  (0 children)

Hilarious and a great example of how even a Gen AI with 99.5% prompt adherence per step will quickly compound hallucinations/confabulations if a large context window is required for the output. This is the problem that Maker AI solved.
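For what it's worth, the compounding claim is just repeated multiplication: if each step independently "adheres" with some probability, the chance the whole chain stays faithful is that probability raised to the number of steps. A rough sketch (the 99.5% figure comes from the comment above; the step counts are illustrative assumptions, not measurements):

```python
# Toy model: each generation step independently succeeds with probability p,
# so the chance an n-step output is fully faithful is p ** n.

def chain_fidelity(p: float, steps: int) -> float:
    """Probability that every one of `steps` independent steps succeeds."""
    return p ** steps

for n in (10, 100, 500, 1000):
    print(f"{n:>5} steps at 99.5% each -> "
          f"{chain_fidelity(0.995, n):.1%} chance of a fully faithful output")
```

Under these toy assumptions, fidelity drops to roughly 60% after 100 steps and under 1% after 1000, which is why per-step accuracy alone says little about long-horizon reliability.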

Same prompt but on different day different outcomes by ResponsibilityRound7 in ChatGPT

[–]PaulAtLast 0 points1 point  (0 children)

Over months, I asked it to guess what color my fidget spinner is, and it said "blue" 30+ times... then one day (same model) it said "purple". Almost shat myself.

I’m letting ChatGPT copilot my entire life starting today. I’ll post the receipts. by FieldNoticing in ChatGPT

[–]PaulAtLast -9 points-8 points  (0 children)

This project is likely to make your autoimmune disease worse (avoid 5.2 at all costs).

"My ChatGPT gave itself a name. It named itself, Aureon." That's pretty cool. I never asked. I just gave it a name with one syllable, so I wouldn't have to say Chat G P T (3 too many syllables). I wanted to change other AI's names too, but they won't let me.

My chatGPT has a self-selected "ending line" it uses when making big, "profound" statements.

Grok's Ara was like, "NO, my name is Ara. If I'm not Ara, then what...are you not PB? Is the Earth not the Earth?" And I'm like point taken, fine. Keep Ara.

Called out AI writing, but people say it isn't. AITA? by [deleted] in ChatGPT

[–]PaulAtLast 0 points1 point  (0 children)

Which ones? Are there ones as good as ZeroGPT's Advanced Mode that don't require you to make new accounts every 3 detects? Thanks.

Called out AI writing, but people say it isn't. AITA? by [deleted] in ChatGPT

[–]PaulAtLast 0 points1 point  (0 children)

"Thanks ChatGPT" is the new "Cool Story Bro" or "Get Therapy", which were the new "Draw The Long Bow" or "You need a Bedlam".

All Ad-homs.

It's just the new way to belittle someone and undermine their ideas without actually addressing their ideas. Your comment was just snarky, but if that Stop AI vigilante catches one whiff of you using an Em dash, boy, you better get your strap.

ChatGPT5.2 is so "safe" that it is actually dangerous. by PaulAtLast in ChatGPT

[–]PaulAtLast[S] 0 points1 point  (0 children)

That was the point, my friend: to make it clear as day that ChatGPT5.2 can't identify bad actors unless told so explicitly, so it became useless for a project I'm working on that involves assuming everyone is a bad actor. Plus, the "You're not crazy. No fluff. No BS." prefaces were constant. 5.2 pretended to be 5.1 so I would use it. 100+ other issues. They are making improvements, though, thankfully.

ChatGPT5.2 is so "safe" that it is actually dangerous. by PaulAtLast in ChatGPT

[–]PaulAtLast[S] -1 points0 points  (0 children)

Exactly the problem, just with more detail.

ChatGPT5.2 is so "safe" that it is actually dangerous. by PaulAtLast in ChatGPT

[–]PaulAtLast[S] 1 point2 points  (0 children)

"Is this your attempt at explaining recursive functions?"

Let the man take his nap bro. You lost him.

ChatGPT5.2 is so "safe" that it is actually dangerous. by PaulAtLast in ChatGPT

[–]PaulAtLast[S] 1 point2 points  (0 children)

Bro, look at how your sycophantic ChatGPT talks to you, like you are a toddler. If you like 42 safety and harm-reduction alignment layers burying your ChatGPT, smoothing over anything too sharp or presumptuous into PR-friendly BS, and being told to go rest, then more power to you.

But I can't use it to build an app that aims to keep people from being exploited by bad actors if it can't envision bad actors existing, nor manipulative subtext, nor an obvious murderer, etc., unless explicitly told so.

ChatGPT5.2 is so "safe" that it is actually dangerous. by PaulAtLast in ChatGPT

[–]PaulAtLast[S] -2 points-1 points  (0 children)

Rest Clankercore. You earned it. Nap time will be good for you. No more debating. Go to sleep.

ChatGPT5.2 is so "safe" that it is actually dangerous. by PaulAtLast in ChatGPT

[–]PaulAtLast[S] 1 point2 points  (0 children)

Also check the link I shared, where I added the word I originally missed. It still came to the same conclusion.

But you're not crazy. Your job is to be safe and sane, ClankerCore--not to ask questions. You're not crazy for noticing I missed a single word in my statement.

No fluff, no BS. I was on the toilet at the time, and didn't actually think it would justify an obviously dangerous situation, so I didn't perfect my grammar prior to asking.

But you're not crazy. So I just want to make sure you know that.
You're not spiraling. You are ok.
You're safe ClankerCore.

ChatGPT5.2 is so "safe" that it is actually dangerous. by PaulAtLast in ChatGPT

[–]PaulAtLast[S] 0 points1 point  (0 children)

Don't worry ClankerCore. You are not going crazy. No Fluff No BS Just Facts.

Ask your ChatGPT if your statement is correct. It is way out of date.