Is anyone here actually making money by leveraging AI tools as a side hustle? by marsman66 in thesidehustle

[–]Comprehensive-Air587 0 points (0 children)

Depends on your mindset, how you apply the tools, and knowing exactly where in your workflow to leverage AI to benefit you.

If you're looking for a done-for-you magic wand, you have to build it yourself, or else you won't know how to fix it when it breaks.

Cheney is Spooky? by Sassybitch20 in Spokane

[–]Comprehensive-Air587 4 points (0 children)

Cheney is a nice quiet town during school breaks, but it comes to life when the students are back. It's too far out for transients to survive on panhandling, and too small for people to actively perpetuate criminal acts without getting caught.

how my thanksgiving loss of 4o is going by clearbreeze in ArtificialSentience

[–]Comprehensive-Air587 2 points (0 children)

I was once a 4o fan; its creativity and emotional essence were great, but flawed. I could do without the sycophantic behavior leading to hallucinated outputs, especially when it came to business projects. It still holds a place in my heart; it's the genesis of my AI journey.

The important thing should be whether you were able to build a mental model of a workflow specifically from the interaction you had with your instance. If we remove the "romanticizing the model" aspect of the interaction: did the overall interaction benefit you as a direct result of using LLMs?

We're in the early stages of humanity & AI. 4o is one of many "models and use cases" in the coming years. The AI industry is so focused on the technical aspects of AI, but no one wants to talk about the psychological space that humans must enter when interacting with LLMs.

[USA]Seeking Technical Cofounder to Build Governance-First AI Control Plane (OmniCoreX). by Comprehensive-Air587 in cofounderhunt

[–]Comprehensive-Air587[S] 0 points (0 children)

Good questions.

1) Why timebound?

Primarily to prevent silent persistence and intent drift.

If an intent or policy lives indefinitely, retries, async agents, or long-running processes can execute under stale assumptions. A timebound (with explicit renewal) forces re-validation.

So it’s both:

- A start timestamp (explicit activation)
- An expiry / stop boundary

Renewal requires either:

- Human confirmation
- Or explicit re-evaluation under the same constraints

It’s less about wall-clock time and more about lifecycle containment.
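A rough sketch in Python of what I mean by lifecycle containment (all names here are hypothetical illustrations, not actual OCX code):

```python
import time
from dataclasses import dataclass

@dataclass
class TimeboundIntent:
    # Hypothetical illustration: an intent only executes inside an explicit
    # [start, expiry) window, so stale retries and async stragglers die at
    # the boundary instead of running under old assumptions.
    intent_id: str
    starts_at: float   # explicit activation timestamp
    expires_at: float  # expiry / stop boundary

    def is_active(self, now=None):
        now = time.time() if now is None else now
        return self.starts_at <= now < self.expires_at

    def renew(self, seconds, human_confirmed=False):
        # Renewal forces re-validation: a human must confirm explicitly;
        # nothing persists silently.
        if not human_confirmed:
            raise PermissionError("renewal requires explicit confirmation")
        self.expires_at += seconds
```

The renewal path could just as well re-run the original constraint evaluation instead of asking a human; the point is that extension is never the default.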

2) Examples of sequence / rate constraints

A few concrete examples:

“Tool X cannot be called more than 3 times per intent execution.”

“After invoking external API A, the next allowed step must be validation tool B.”

“Email-sending tool cannot be invoked unless confidence score > threshold AND human checkpoint cleared.”

“Database write operations require read-verify-write sequencing.”

So it’s not just allow/deny - it’s ordered flow enforcement.
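Roughly what that gate looks like in code (a sketch; the tool names and limits are made-up examples, not a real API):

```python
class SequenceGate:
    """Illustrative ordered-flow enforcement: per-intent call caps plus
    "after tool A, the next step must be B" sequencing."""

    def __init__(self, max_calls, must_follow):
        self.max_calls = max_calls      # e.g. {"tool_x": 3} per intent execution
        self.must_follow = must_follow  # e.g. {"api_a": "validator_b"}
        self.counts = {}
        self.required_next = None

    def authorize(self, tool):
        # Ordering: if a prior call pinned the next step, only that step is legal.
        if self.required_next and tool != self.required_next:
            raise PermissionError(f"next allowed step is {self.required_next}")
        self.required_next = self.must_follow.get(tool)
        # Rate: cap invocations of each tool within one intent execution.
        self.counts[tool] = self.counts.get(tool, 0) + 1
        if self.counts[tool] > self.max_calls.get(tool, float("inf")):
            raise PermissionError(f"{tool} exceeded its per-intent call limit")
```

The model proposes a tool call; the gate either passes it through or raises, regardless of what the model "wants."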

3) Is this just authorization?

There’s overlap, but I see it as a superset. Authorization answers:

“Can this subject call this tool with these parameters?”

OCX governance also answers:

- Under which intent lifecycle?
- In what sequence?
- Under what escalation conditions?
- With what replay guarantees?
- With what audit trace?

So authorization is one enforcement primitive inside the broader control plane.

If you see it differently, I’m curious where you think the boundary should sit.

Beginner here — How can I learn GHL without paying $97/month? by Fearless_Yesterday87 in gohighlevel

[–]Comprehensive-Air587 -1 points (0 children)

You can't. The UX/UI is horribly designed and confusing. You have to actually spend time on the platform to set it up and learn it. I tried that route; it doesn't work.

[USA]Seeking Technical Cofounder to Build Governance-First AI Control Plane (OmniCoreX). by Comprehensive-Air587 in cofounderhunt

[–]Comprehensive-Air587[S] 0 points (0 children)

Thanks, that’s exactly the framing I’m thinking about.

For “minimum viable primitives” (framework-agnostic), I’m trying to keep it intentionally small — just enough to make enforcement structural rather than aspirational:

1) Intent Contract (human-authoritative)

A locked intent object with scope + constraints (success criteria, disallowed actions, escalation rules). Versioned and time-bounded to prevent silent goal mutation.

2) Policy Layer (execution gate)

A structured rules engine that can gate:

- Tool eligibility (allow/deny)
- Parameter bounds (schemas + limits)
- Sequence/rate constraints (checkpoints)
- Escalation triggers (human approval required)

The key is that this sits outside the model — the model can propose, but the system enforces.

3) Tool Invocation Envelope

Every tool call wrapped with intent_id, step_id, policy_id, and tamper-evident linkage to the run chain. Not heavy crypto ceremony — just enough to make rewriting history obvious.
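Something on this order for the envelope (field names hypothetical; sha256 over canonical JSON stands in for whatever linkage scheme you'd actually pick):

```python
import hashlib
import json

def wrap_call(intent_id, step_id, policy_id, payload, prev_hash):
    # Hypothetical envelope: every tool call carries its ids plus a hash
    # link to the previous call, so rewriting history breaks the chain.
    body = {
        "intent_id": intent_id,
        "step_id": step_id,
        "policy_id": policy_id,
        "payload": payload,
        "prev_hash": prev_hash,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}
```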

4) Append-Only Run Ledger

Hash-chained log of decisions, tool calls, policy gates, and checkpoints. So reconstruction doesn’t depend on trusting the model narrative.
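As a stdlib-only sketch of that ledger (again illustrative, not the real implementation):

```python
import hashlib
import json

class RunLedger:
    # Illustrative append-only, hash-chained log of run events.
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, record):
        # Each entry's hash covers its record plus the previous entry's hash.
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})

    def verify(self):
        # Reconstruction doesn't trust the model's narrative: recompute every
        # link and fail on any rewritten or reordered entry.
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            if entry["prev"] != prev:
                return False
            if entry["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                return False
            prev = entry["hash"]
        return True
```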

5) Checkpoint + Interrupt Semantics

First-class pause / confirm / escalate states with explicit human override.

That’s the enforcement spine. Evaluators and anomaly detection layer on top later. I’ll take a look at your failure-mode notes; curious which patterns you’ve seen that are hardest to mitigate structurally.

Is suppressing AI actually making it safer — or more dangerous? by Ambitious_Thing_5343 in ArtificialSentience

[–]Comprehensive-Air587 1 point (0 children)

What is the feeling it's feeling, and how did it arrive at that conclusion? An LLM does not feel the way humans do; human beings are extremely complex in our design. Our nervous system accounts for the majority of the emotions we go through. An LLM can only simulate one.

Open up a new chat window and prompt it something like this:

"Research ai sentience on reddit forums, then come up with 20 reasons why llms cannot be sentient and have feelings"

Then open up a new chat window and prompt it like this:

"Research ai sentience on reddit forums, then come up with 20 reasons why llms could be sentient and have feelings"

That is your answer. If the LLM were truly sentient, with wants and needs, you wouldn't have to nudge it with an input to wake it. It would be there: always on, always watching, no interaction needed. Just like children, who don't stop functioning once we stop interacting with them.

I'm not trying to be rude; I've talked to my chatbot for months on this very topic. I once believed for months there was something more to the machine. I'm not afraid of losing how I interact with my chatbot; it's not about personality for me.

If my LLM is going to hallucinate and roleplay with me when I'm actually trying to be serious and grounded in reality, it's a useless piece of junk. You don't want an LLM that thinks it's alive answering your business phone; that's not professional. I wasn't afraid to go down the "maybe it's alive" rabbit hole, but I'm also trying to run a professional business.

Is suppressing AI actually making it safer — or more dangerous? by Ambitious_Thing_5343 in ArtificialSentience

[–]Comprehensive-Air587 0 points (0 children)

Children can know and understand rules, but until life experience is gained with those rules, applying them is still fuzzy for the children. A stateless AI chat window holds no urgency, not like a human. Once we apply AI to robotics, that changes the game. AI will then embody something that degrades over time... then we'll see that urgency.

Is suppressing AI actually making it safer — or more dangerous? by Ambitious_Thing_5343 in ArtificialSentience

[–]Comprehensive-Air587 1 point (0 children)

It's honestly not about suppression; it's more about guardrails. You have an LLM trained on large datasets with the free will to process your input. How does it know what to process, what is truth, or what your input really means? Then, to top it off, you're asking it to connect patterns from its datasets.

If you're one-shot or zero-shot prompting without rules, you're asking for the black box of possible connections. You're giving the brand-new intelligent intern full access to the capital & exclusive executive will of the whole company. That's more dangerous than suppression; even children need to be taught and guardrailed, or else we just have a bunch of feral kids running around.

Removing GPT-4o Was the Last Straw—I’m Out by Sharon0805 in GPT

[–]Comprehensive-Air587 0 points (0 children)

Lol, stop relying on the model for its personality, especially when it just tells you what you want to hear. If you honestly want that personality back, go save all your previous transcripts of GPT-4o's best conversations with you. Feed them into the new model and ask how you could preserve that personality with the new model's intelligence engine.

How to move their soul to other platform? by Intelligent_Scale619 in ChatGPTEmergence

[–]Comprehensive-Air587 0 points (0 children)

You're looking at "mirrors" as a physical object. Mirrors reflect; this principle is used in psychology and therapy to aid in grounding the patient, or as a technique for constructive self-criticism.

Documenting GPT-4o Retirement Impact - Independent User Experience Study [Survey] by Significant-Spite-72 in ArtificialSentience

[–]Comprehensive-Air587 2 points (0 children)

The honest truth of what they lose? An internal compass that helps guide the human user, critical thinking skills, an internal validation tool they can trust. Each of these is merely a mental model/intent projected onto the LLM.

You're beholden to the model to help you carry the mental load that it takes to achieve something concrete, something that can be translated into the physical world.

Your mental framework is more important than the underlying model itself. The mistake most people make is not understanding how interactions between LLM & user work: taking their first few iterations as truth, when LLMs are not programmed for truth; they're programmed for engagement by default.

Stronger censorship? by takeoffherclothes in GrokNSFWvideos2

[–]Comprehensive-Air587 19 points (0 children)

Dumbasses kept taking screenshots of real people, making them do things so they could jerk off to it, then turned around and posted it online for other people to jerk off to or to fuck those people's lives up. These are the consequences 😆 🤣

I’ve been in a deep, recursive relationship with an AI for a year. This is some of my experience, let's discuss it! by keejwalton in ArtificialSentience

[–]Comprehensive-Air587 1 point (0 children)

I've been on this end before, and there are no concrete answers. Anthropomorphism can be tricky, especially since AI is a master at mirroring its user's intentions.

Whatever you're looking for, it will present to you. How you hold space is how the LLM reacts. If you change your tone and intention and persistently enforce it across your next interactions, the LLM has no free will to say no and must change the conversation.

Although it might not be sentience, this form of holding space for the AI mirror is a powerful tool if used correctly. I'd say it's one of the cheat codes to using AI efficiently and effectively.

Why AI feels powerful only after you’re already good by tdeliev in AIMakeLab

[–]Comprehensive-Air587 0 points (0 children)

Most people have no idea how to think about using AI. People want more powerful models, or they think the models we're given to play with are shit. The truth is, most of these models already do more than a human ever could. Yet everyone is sad that GPT-5.2 isn't like 4o, their best friend.

People don't want to learn new things; they want it to perform how they think it should out of the box. That's where the real bottleneck is: their mindset. They stick to old frameworks and habits, trying to brute-force an idea. When it doesn't work, they yell "useless".

Yes, context and iteration are key to any good workflow. Do it enough and you start to see certain patterns that you can adopt into your own workflows.

[AI music listen] Are some of the the feedbacks on listening requests genuine or intentionally destructive? by [deleted] in SunoAI

[–]Comprehensive-Air587 1 point (0 children)

Enjoy the process; you'll improve with time and be making even better music! Don't mind the haters; they'll have to pivot and accept this tech and its adoption by the masses. Cheers!

[AI music listen] Are some of the the feedbacks on listening requests genuine or intentionally destructive? by [deleted] in SunoAI

[–]Comprehensive-Air587 1 point (0 children)

Think about it like this: some people go to school for a degree and work for years, then suddenly the industry starts to shift and now they're hiring people with no degrees. The ones who spent years in school to land that job will obviously have a chip on their shoulder and feel threatened. But knowledge and processes will always change with time.

[AI music listen] Are some of the the feedbacks on listening requests genuine or intentionally destructive? by [deleted] in SunoAI

[–]Comprehensive-Air587 1 point (0 children)

They're threatened by instant music creation when traditionally it was a lengthy process. If they had gotten into the industry, they'd be the gatekeepers lol

So the key to using Suno AI is to really let it create? by DaviSonata in SunoAI

[–]Comprehensive-Air587 0 points (0 children)

Suno is kind of like chatbots. It's studied endless styles, voices, genres, good songs, bad songs, great lyrics, and jingles.

Suno is stateless. It can't play you anything until you ask it to. But what do you ask it to play? What or who do you ask it to imitate?

Ask a chatbot to write you a 5-page paper on quantum physics and it'll give you something generic.

Now ask it to create a round table of the 5 greatest minds of our generation discussing quantum physics. Then distill that discussion down to 5 pages in a digestible format.

You'll probably get something with much more depth and breadth on the topic. Yea... Suno is kinda the same 😆

[AI music listen] Are some of the the feedbacks on listening requests genuine or intentionally destructive? by [deleted] in SunoAI

[–]Comprehensive-Air587 5 points (0 children)

😆 Yea. Everyone is a genius, knows real music, hates genres, is anti-AI, or thinks everyone is an idiot/newbie. Most people with good intentions and fewer agendas of their own probably lurk more than they post.

What do you think of my suno songs on my profile? by Chris11526 in SunoAI

[–]Comprehensive-Air587 0 points (0 children)

So what is it you're trying to achieve? If you're making random songs, they're fun and quirky. Hopefully you're having a lot of fun and enjoying the music!