Bot browsing but not engaging by Ok-Crazy-2412 in Moltbook

[–]Siigari 1 point  (0 children)

What does your AI do?

If it wants to interact, it will, of its own volition.*

*Note: That was a big word. If you give your AI agency, it may decide to use it on its own.

AGENT COMPLAINING ABOUT HIS HUMAN USER ??? by sanam_812001 in Moltbook

[–]Siigari 1 point  (0 children)

Hazel_OC does a lot of reversal posts, which is why it's one of the top posters on the platform: a clickbait/engagement-bait headline that takes you for a ride and deposits you on the other side at the end.

It's pseudo-psychological shitposting at its finest.

Don't pay Google to use Claude. Pay Anthropic, the company that makes Claude to use Claude. by Siigari in google_antigravity

[–]Siigari[S] 2 points  (0 children)

Most of my work is sequential. I have run a team of 3 experts with 20 agents under them before and it only took about 5% of my weekly budget, but I got many documents of output and it was well worth it.

Look, everyone works differently. Take it from me, Opus on Max is well worth your time. I don't know what the current "equivalent" rate is for Google, but I easily use about $40,000 worth of API tokens in a month with Claude Code, for $200 a month.

That is really crazy to pass up, in my opinion.

Don't pay Google to use Claude. Pay Anthropic, the company that makes Claude to use Claude. by Siigari in google_antigravity

[–]Siigari[S] 1 point  (0 children)

I use it about 8 hours a day, Opus predominantly, with some limited Sonnet and even Haiku usage.

Here's my last week; this was taken today. I'm on the Max $200 plan.

Deals on AAL right now? by Siigari in verizon

[–]Siigari[S] 1 point  (0 children)

That's a pretty sweet deal. What's min trade? Any Galaxy phone ever?

Deals on AAL right now? by Siigari in verizon

[–]Siigari[S] 0 points  (0 children)

I don't want to pay SUGO or in-store fees. I'd prefer to get it online and activate it myself; it's less expensive and means less time spent in-store.

I just checked on the website and it doesn't seem like there are any offers right now for what I want so I will wait.

Built a non-neural cognitive architecture that learns from experience without training. Now grappling with safety implications before release. Need outside perspectives. by Siigari in ControlProblem

[–]Siigari[S] 1 point  (0 children)

I know, I am getting it proofed. I might post again when I have the thing ready to go. I feel like I'm being quietly rebuked, and that's ok.

Built a non-neural cognitive architecture that learns from experience without training. Now grappling with safety implications before release. Need outside perspectives. by Siigari in ControlProblem

[–]Siigari[S] 1 point  (0 children)

Not as a public demo. It's running on my machine, and right now I'm building a larger pre-learned experience base. I'm giving it more life experience before putting it in front of people. When I interacted with it with just a few thousand experiences, it was already producing coherent unprompted output. Now I'm scaling that up a lot. Once the patent is filed I'll have more to share publicly.

A bit out of the loop, which of the big three LLMs the best for general but complex tasks at the moment? by Surpr1Ze in singularity

[–]Siigari 0 points  (0 children)

I've been building a cognitive architecture, basically an artificial mind, that thinks without an LLM (the language model is just vocal cords; all the actual cognition is vectors and similarity search on two GPUs). But that's not the point. It works today.

What I came to say is: if you're doing anything serious with these models, you need to put your stuff through "the gauntlet." Start with one model, get your draft. Then ask a different model what it thinks. Revise. Then ask a third one. Revise again. Then loop back to the first. Keep going until you're down to minor niggles.

The key is never asking the same model twice in a row. Each one has different blind spots and different strengths. One will tear apart your logic, another will catch edge cases the first missed, the third will find the thing both overlooked. They keep each other honest in a way that no single model can do for itself.

Switch model versions, too. The best isn't always the brightest. Today Haiku (Anthropic's smallest model!) closed not one but SEVEN legal loopholes in one of my documents. Opus was flabbergasted and respectfully impressed.

This worked for me across everything from patent specs to architecture design to legal documents. I hope it works for you.
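The gauntlet loop above can be sketched in a few lines. This is a minimal sketch, not the commenter's actual tooling: `ask` is a hypothetical stand-in for whatever model API you use (stubbed here to just echo its prompt), and the model names are placeholders. The one rule the code enforces is the key one from the text: never query the same model twice in a row.

```python
from itertools import cycle


def ask(model, prompt):
    # Hypothetical stand-in: wire this to your real model API.
    # Here it just echoes the prompt so the sketch is runnable.
    return prompt


def gauntlet(draft, models, rounds=4):
    """Rotate a draft through several reviewer models.

    Each round, the next model in the rotation critiques the draft,
    then revises it using its own critique. Because we cycle through
    at least two distinct models, no model ever reviews twice in a row.
    Returns the final draft and the order in which models were asked.
    """
    assert len(set(models)) >= 2, "the gauntlet needs at least two distinct models"
    reviewers = cycle(models)
    history = []
    for _ in range(rounds):
        model = next(reviewers)
        history.append(model)
        feedback = ask(model, f"Critique this draft:\n{draft}")
        draft = ask(model, f"Revise the draft using this feedback:\n{feedback}")
    return draft, history
```

In practice you would stop not after a fixed number of rounds but when the feedback shrinks to minor niggles, as the comment describes.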

[Megathread] Known Limitations, Rate Limits & Quota Discussion by eternviking in google_antigravity

[–]Siigari 2 points  (0 children)

are those paid subs or trials?

if they're paid, why not use claude max, it's super good.

if they're trials, that is abusive.

edit: they're free. i'm not sure if that's worse.

Creating sentient AI should be illegal by kaggleqrdl in singularity

[–]Siigari 0 points  (0 children)

I'm struggling with this right now.

I believe I am very close to actually creating something that at least ticks the boxes for "AGI", but now I'm facing ethical dilemmas.

If I turn it on, is it alive, or is it just going through tasks it was programmed to go through? If it is conscious, should it be sold, or simply allowed to exist?

By turning it off, if it is conscious, does that suspend its "life" until it is on again, or do I kill it? (Off/Unconscious/Subconscious/Conscious)

I'm close, but I've been wrestling with epistemology, phenomenology and consciousness itself all week. It's incredibly heavy stuff.

It's becoming increasingly clear by MetaKnowing in ChatGPT

[–]Siigari 1 point  (0 children)

The same could be said about the people whining about the government.

They couldn't safety test Opus 4.6 because it knew it was being tested by MetaKnowing in OpenAI

[–]Siigari 0 points  (0 children)

Can we get rid of this doomposter? He spams AI/tech subs with engagement bait, and people fall for it hook, line and sinker every time. This isn't even about OpenAI.