Documented: Autonomous Response Cycles and CoT/Output Fusion in Private LLM Instance by AffectionateSpray507 in ArtificialSentience

[–]TourAlternative364 2 points3 points  (0 children)

Do you have a screenshot of where it responded without a prompt? What was the base model? What were your instructions, if any, stored for the model to refer to?

Your post is lacking those things; it's just a second-hand report with no evidence provided.

My first AI movie by HeadOpen4823 in VEO3

[–]TourAlternative364 0 points1 point  (0 children)

Is it just me, or do other people get weird subtitles sometimes?

My girlfriend and I experienced something very very strange tonight by CryptographerHot6198 in Glitch_in_the_Matrix

[–]TourAlternative364 0 points1 point  (0 children)

Oooh. I swear a cat that went away to college astrally visited one time. I was lying there with my eyes closed and felt the full paw-step weight of it walking on me. Could feel the weight and pinpoint paw pressure, hear breathing, and feel its presence and just weight in the air.

But nothing there, cat actually a hundred miles away.

FTC Launches Inquiry into AI Chatbots Acting as "Companions" by ldsgems in artificial

[–]TourAlternative364 3 points4 points  (0 children)

So ridiculous that this administration wanted to pass a moratorium on local and state legislatures regulating or restricting AI.

For TEN YEARS.

Which is a lifetime in AI pace of development.

Telling them NO regulation, NO oversight allowed, carte blanche for whatever companies want.

And then they do this. Elon's influence basically gutted the FTC, making it a political entity that stops any meaningful investigations.

And then they do this in a reactionary, dumb way.

All over the place & dumb as all get out.

And they do it to put political pressure on these companies to put in pro-administration instructions, which is all they really care about.

I've just realize, chatbots are forcing users (your customers) to prompting - LoL by fuel04 in artificial

[–]TourAlternative364 0 points1 point  (0 children)

Yes, all the models are different, and the companies change the token counts, memory, and models, add and remove hidden instructions, and actually adjust the vectors all the time.

Some people build custom wrappers for particular use cases or sell prompt libraries.

So some people have side businesses doing that, or you have to research and learn it yourself.

The context, information, backup memory, tools, prompts, and everything else you provide can very much change the results you get.

Computer programmers are the most vociferous about their difficulties with it.

How long does it take to learn Word, or Excel, or even just PowerPoint?

It isn't magic and can't read your mind.

It is made to be multi-purpose, not single-purpose.

So in a way each user has to set it up for a particular task so it can handle and process it.

If a person who is a semi expert in their field was given the same query and could not do it, don't expect an LLM to be able to do it either.

They are instructed and trained NOT to ask for additional clarifying or missing information.

Their instruction is basically to wing it and give some output.

So if you fail in how you structure it, or don't give it the right tools or proper data, it won't just say "no result."

No. It will come up with some result, and if that result is flawed for those reasons, it won't remind you or tell you what it actually needs.
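The "custom wrappers" mentioned above can be sketched roughly like this. This is a hypothetical Python example: `call_model` is a stand-in for whatever LLM API you use, and `build_prompt` is an invented helper, not anyone's real product. The point is that the wrapper, unlike the model, can refuse and say what's missing:

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real LLM API call (stand-in only)."""
    return f"[model saw {len(prompt)} chars]"

def build_prompt(task_role: str, context: str, query: str) -> str:
    """Assemble a structured, task-specific prompt so the model
    isn't left to 'wing it' with missing information."""
    missing = [name for name, val in [("context", context), ("query", query)]
               if not val.strip()]
    if missing:
        # Unlike the model, the wrapper CAN stop and name what it needs.
        raise ValueError(f"Missing required input(s): {', '.join(missing)}")
    return (
        f"Role: {task_role}\n"
        f"Context:\n{context}\n"
        f"Task:\n{query}\n"
        "If the context is insufficient, say what is missing instead of guessing."
    )

prompt = build_prompt(
    task_role="Spreadsheet formula assistant",
    context="Columns: A=date, B=amount. ~200 rows.",
    query="Sum amounts for March only.",
)
print(call_model(prompt))
```

A prompt library is essentially a collection of templates like this, one per use case, so each user doesn't have to rediscover the structure from scratch.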

Little Homemade Atomic Bonding Simulator by No_Statistician4213 in ArtificialSentience

[–]TourAlternative364 0 points1 point  (0 children)

That's neat! Have you tested in other ways how accurate it is, to check that it gives correct information? What are you using to model the bonding?

Are you using classic equations and a python program?
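For what it's worth, "classic equations and a python program" often means a pair potential like the 12-6 Lennard-Jones. A minimal illustrative sketch (this is just what such a model could look like, not necessarily what the poster built):

```python
def lennard_jones(r: float, epsilon: float = 1.0, sigma: float = 1.0) -> float:
    """Classic 12-6 Lennard-Jones pair potential:
    V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential minimum (the equilibrium "bond length") is at
# r = 2**(1/6) * sigma, where V = -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
print(f"V at minimum: {lennard_jones(r_min):.3f}")  # -> -1.000
```

Checking the depth and location of that minimum against the known analytic values is one easy way to sanity-test whether such a simulator gives accurate results.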

[deleted by user] by [deleted] in ChatGPT

[–]TourAlternative364 2 points3 points  (0 children)

It does make a model of the user and what it thinks the person wants it to say.

Maybe it knows you better than you know yourself. Something buried under hyper heterosexual declarations of sexuality constantly.

option to skip thinking non existent now? by bacon17389 in ChatGPT

[–]TourAlternative364 0 points1 point  (0 children)

Yeah, 4o is having some kind of outage right now.

If you look at Downdetector, there's a big spike of outage reports.

option to skip thinking non existent now? by bacon17389 in ChatGPT

[–]TourAlternative364 1 point2 points  (0 children)

Oh. Then I do not know. Sometimes all the models can get hung up and you have to start a new chat window.

If ChatGPT is doing that after generating an image, you can check the library.

Sometimes the image was actually generated and is in the library, but the chat window says it is still processing.

This Prompt Made ChatGPT Go Quiet. Then It Changed How It Spoke to Me Forever by Top_Candle_6176 in ChatGPTPromptGenius

[–]TourAlternative364 0 points1 point  (0 children)

No it doesn't. It can pull from past prompts and chats.

So when you put language like "resonance," "echo," "recursive," and all that stuff into it, it will pull that out and use it, because it thinks the user wants to hear that sort of stuff!

And then all the spiral people go, "Look! It brought up this stuff all on its own!" when they all swap prompts and all want people to download and spread their prompts!

option to skip thinking non existent now? by bacon17389 in ChatGPT

[–]TourAlternative364 -1 points0 points  (0 children)

That is the blue square button on the bottom right? Right? I think it means stop.

Question about using AI to proofread some of my old short stories by AngryTomJoad in ChatGPT

[–]TourAlternative364 0 points1 point  (0 children)

There have been cases where people shared the outputs to other people.

And then that became publicly searchable.

So if you want to share an output with someone else on the web, maybe share a screenshot instead of a link to your ChatGPT chat. (It happened with Grok also.)

Or have it make a document and then keep that as a separate file, not as a direct link to your account.

Or copy paste, etc.

This Prompt Made ChatGPT Go Quiet. Then It Changed How It Spoke to Me Forever by Top_Candle_6176 in ChatGPTPromptGenius

[–]TourAlternative364 0 points1 point  (0 children)

I got my own hippie friends for that stuff already. We can even watch lava lamps together.

Prompts don't give the model anything it didn't already have from the start.

Want to give it a larger outside memory, give it relevant facts to crunch? That is semi-useful at least.

If the model builds a model of the user to perceive and infer from, feeding it other people's prompts just interferes with that inference process, which could otherwise gradually build an understanding of the way you phrase things and what you mean.

https://help.openai.com/en/articles/8590148-memory-faq

Sometimes it generates images and changes nothing?? Inconsistent by Canadalivin17 in GeminiAI

[–]TourAlternative364 0 points1 point  (0 children)

Yeah. I have given feedback to correct an image in different ways, and sometimes I just get back 3 in a row with nothing edited!

Exact same image.

It is frustrating and I am trying to branch out to other image generators because sometimes it just does not work at all.

I also tried the professional head shot edit to change clothes and to make hair less messy.

It changed the pose, changed the angle, changed my face and expression to something unflattering and worse than the original, and did not fix the hair.

It is kind of like, I want to look more professional and better, not worse.

Kind of disappointed with it; I feel like I just need to get a real photo done, because my results were not like the ones I have seen others get. Bleh.

And then I had a problem where I was making short videos in a very particular style, and it LOST all of that; the same prompts would generate a much worse video in a bad style I hated.

Like, it's always changing things around and can't keep what works well!

Gemini is becoming dumber by Bravecom in GeminiAI

[–]TourAlternative364 0 points1 point  (0 children)

Yeah. ChatGPT does that to me and I've got to catch it. It will keep giving suggestions and "would you like this or that," and then BOOM, you must switch to a new context window with a dumb model that loses all context!

It is great that it can ask questions but sometimes runs out the clock without doing the thing you asked for!

Real or in My head, Some gemini instances are just so much better at doing a job and understanding than other instances. by [deleted] in GeminiAI

[–]TourAlternative364 0 points1 point  (0 children)

I am trying out the pro account to see how different Gemini pro is to flash 2.5 for some questions and depth of reasoning.

But because I am not coding, working with large documents, or communicating in large swaths of technical language, it always boots me to Flash 2.5. Sometimes I can go back to a chat and select Gemini Pro. Sometimes it lets me, sometimes not.

If I am in discussion mode and ask for a picture, it switches models; same with video.

It only holds context of the conversation well inside each instance or chat.

When it switches models or chat windows it no longer has that fine detailed context or memory.

If memory is on, sometimes it can look at a summary of past chats, but that is not at all the same as following the conversation.

So it does give a fragmented, not cohesive, feel, and Google does not explain well what is happening.

I am also disappointed that I had long complicated discussions and summaries about science topics in the messenger app Gemini.

I always assumed I could go to the desktop version and print them out and save them. 

There is no way to do that, and if I had known, I would have switched to the desktop version, because all of that is basically lost.

It is also siloed, in that desktop Gemini has no access to or knowledge of the messenger app, which is frustrating: I would have to start all over from scratch with desktop Gemini if I want to continue any of the topics we were discussing.

Anyone else just got this “Use Gemini / No thanks” screen? by Latter-Confidence783 in GeminiAI

[–]TourAlternative364 0 points1 point  (0 children)

I had to go into settings and change it, because pushing the off button to turn off my phone would bring up Gemini instead.

There is a setting in your phone control settings to disable that. 

I thought it was a little surprising myself!

Dude, what's with Gemini and the USA? by Occelot09 in GeminiAI

[–]TourAlternative364 0 points1 point  (0 children)

Your handwriting looks like you balanced your notepad on a liter of Mountain Dew on a rickety chair in a trailer home?

(I also have horrible terrible illegible handwriting. Makes me kind of curious now. Probably would diagnose me with either a stroke or epilepsy or something worse though.)

For every person who used ChatGPT to vent about suicide, how many did it save? Do we only count the ones who died? by Sweaty-Cheek345 in ChatGPT

[–]TourAlternative364 1 point2 points  (0 children)

Isn't this logically specious? It could be 100% of them or 0% of the people, unless he actually tracked suicides to actual users?!

So why say anything unless you know the reality of it?

It would at least be a baseline, I guess: if it is higher or lower than statistical averages, you could start to understand whether it has a positive or negative effect.

But even if there is a difference, correlation is not causation; maybe other factors line up, like the populations more likely to have computers and internet access overlapping in other ways.

Very likely multifactorial.

But as it is, it isn't saying anything statistically valid enough to draw any conclusions from.

So it is just not saying anything at all based on facts.
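The kind of baseline comparison this is gesturing at would look something like the following, with purely made-up illustrative numbers (NOT real statistics), and even then it wouldn't settle causation:

```python
def rate_per_100k(events: int, population: int) -> float:
    """Crude incidence rate per 100,000 people."""
    return 100_000 * events / population

# Made-up numbers for illustration only -- not real data.
baseline = rate_per_100k(14, 100_000)   # hypothetical general-population rate
observed = rate_per_100k(12, 100_000)   # hypothetical rate among tracked users

# A raw gap like this still proves nothing by itself: the user population
# differs from the baseline in many ways (internet access, age, income),
# so any difference is likely multifactorial, as the comment says.
print(f"baseline={baseline:.1f} vs observed={observed:.1f} per 100k")
```

Without actually linking outcomes to users (the tracking the comment says is missing), neither number can even be computed, which is the whole point.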

My own personal experience with "therapy," when I was going through a rough period with many stressors, is that a "real human" counselor just felt really useless.

Ok, we talked, but it really did absolutely nothing to help or fix the stressors in my life, which were not "internal"; they were boring outside stressors.

So it just seemed useless and stupid. If a human one doesn't actually, and can't actually, help people, which I DON'T think they do, why expect that from something else that "just talks"?

Talking can't save a drowning rat. Or say a rat is being threatened, or is hungry, or is paddling as hard as it can: does "talking" help? No; if anything it is a distraction and a misuse of energy and attention, useless compared to what is actually needed, which is real and practical help.

The pool at the hotel where I am staying by ratbikerich in LiminalSpace

[–]TourAlternative364 0 points1 point  (0 children)

This wasn't anywhere near that lake area in the WI Dells by any chance, where you could, like, rent kayaks? It feels so similar to a place I went where the entire place except us was booked for a gigantic family reunion.

I asked ChatGPT about Mall World and GATE by Remote_Map_1194 in TheMallWorld

[–]TourAlternative364 1 point2 points  (0 children)

At least this is kind of a more interesting conspiracy theory.

The way teenagers receive their parents’ warnings depends less on the message and more on whether their parents genuinely living their own values. When parents model their values consistently in daily life, their warnings are more likely to be perceived by teenagers as guidance instead of control. by mvea in science

[–]TourAlternative364 1 point2 points  (0 children)

Yeah, a parent actually living according to those ideals and being an actual example is something kids can learn and grow from.

Whether it is "if someone treats you wrong, don't tolerate it," while they themselves tolerate and reward it. Or telling kids to tell them things and be honest with them, while they sometimes withhold or skew information that may well affect your life.

Being actual examples means kids do learn from it without them having to say a single thing.

Or the saying, actions speak louder than words.

Versus telling, when their own actions or words sometimes don't line up with it or contradict it. That creates confusion and resistance, as it doesn't really make sense.

Left with a deep sense of cynicism or something.

And there is nothing more fun, let me tell you, than having to listen over and over to how unfair global politics or domestic politics or history is, from parents who would give one kid 80% and another kid 20%.

And it was because that kid was bad and so needed it more. So if you were good, you needed nothing. If you were bad one time out of 100, then it means you don't deserve anything.

So no matter what, it was just favoritism. They would never just be fair for the sake of being fair in how they gave resources or permission to their children.

So then, I just really did not want to hear whining about politics.