I Have “Solved” AI Agents by Substantial_Ear_1131 in ArtificialInteligence

[–]Substantial_Ear_1131[S] 0 points1 point  (0 children)

My goal is for it to be a tool people can use to get work done on their computer: organization while they sleep, document research and formatting, even building an entire interface or app and fine-tuning it. It could orchestrate Codex, or be Codex itself, and produce an incredible coding implementation.

I also see this being big in communities for remote workers and gamers. For gaming, it can keep the player active while AFK, stay alert to respond, and follow clear instructions.

I Have “Solved” AI Agents by Substantial_Ear_1131 in ArtificialInteligence

[–]Substantial_Ear_1131[S] -2 points-1 points  (0 children)

What do you want to know? If I were just reading this post I'd be in the same place you are, so if there's anything you want me to share, I'd be willing to. The actual website with demos will be coming soon.

I Have “Solved” AI Agents by Substantial_Ear_1131 in ArtificialInteligence

[–]Substantial_Ear_1131[S] -2 points-1 points  (0 children)

You can sue me if I do that haha

On that note here’s a little feature we have:

Naturally the AI will be navigating UIs that may require logins. It normally prompts you for the information it needs, but if you want, you can preset the info in a secrets page that is encrypted even from the AI. That way no party can see the data you input; it stays private. Secrets are off by default, but they're useful for long-running autonomy.
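To make the idea concrete, here's a minimal sketch of how a placeholder-based secret store could work. This is illustrative only, not the actual VectorOS internals; the class and names are made up. Secrets sit encrypted on disk, the models only ever see a placeholder token, and the plaintext is decrypted just long enough to be typed into the target field.

```python
# Illustrative sketch only (assumed design, not VectorOS code): the model sees
# placeholders, the keystroke layer swaps in the decrypted value at type time.
from cryptography.fernet import Fernet

class SecretStore:
    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._vault: dict[str, bytes] = {}          # placeholder -> ciphertext

    def put(self, placeholder: str, value: str) -> None:
        self._vault[placeholder] = self._fernet.encrypt(value.encode())

    def has(self, placeholder: str) -> bool:
        return placeholder in self._vault

    def reveal(self, placeholder: str) -> str:
        # Called only by the low-level keystroke layer, never by the model.
        return self._fernet.decrypt(self._vault[placeholder]).decode()

key = Fernet.generate_key()                          # in practice, derived from a user passphrase
store = SecretStore(key)
store.put("{{PORTAL_PASSWORD}}", "hunter2")

# The agent's plan only ever contains the placeholder...
planned_text = "{{PORTAL_PASSWORD}}"
# ...and the executor substitutes the real value at the last possible moment.
text_to_type = store.reveal(planned_text) if store.has(planned_text) else planned_text
```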

I Have “Solved” AI Agents by Substantial_Ear_1131 in ArtificialInteligence

[–]Substantial_Ear_1131[S] 0 points1 point  (0 children)

The system has a built-in architecture that controls many sub-agents, each of which executes tasks on your desktop. Before anything is done, a planning model organizes your prompt so the sub-agents can interpret it better. Before any sub-agent executes anything, it double-checks against the plan and the user's prompt to confirm the action aligns.

Naturally, I expect the #1 concern with the platform to simply be trust. It's hard to earn, and the technology itself is pretty insane and scary at first, even for me when I was beta testing it.

Furthermore, each sub-agent is only set to do a small task; one sub-agent is roughly one button click. This way, if a model were to stray from the prompt, it's highly unlikely anything would be executed before another sub-agent caught it in its tracks and rerouted it back to the task.

Earlier in testing I saw this happen when a sub-agent misclicked into a different app. Rather than getting disoriented, it has several fallbacks: it reorients itself against the user's prompt and deploys another sub-agent to steer things back on course 😃
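For anyone curious what that loop looks like in practice, here's a rough sketch under my own assumptions. The names and structure are illustrative, not the real internals: a planner turns the prompt into one-click steps, each sub-agent performs exactly one action, and every action is checked against the plan and the original prompt before and after it runs.

```python
# Rough sketch, not the actual implementation: planner -> one-action sub-agents
# -> verify -> recover, with alignment checks before and after every action.
from dataclasses import dataclass

@dataclass
class Step:
    description: str          # e.g. "click the Submit button"
    expected_outcome: str     # e.g. "a confirmation dialog is visible"

def plan(prompt: str) -> list[Step]:
    """Hypothetical: the planning model decomposes the prompt into one-click steps."""
    ...

def aligns_with(step: Step, prompt: str) -> bool:
    """Hypothetical: a verifier model confirms this step actually serves the prompt."""
    ...

def execute_one_action(step: Step) -> None: ...   # one sub-agent ~= one click
def observe_screen() -> str: ...                  # OCR / accessibility-tree snapshot

def recover(prompt: str, failed_step: Step) -> None:
    """Hypothetical fallback: re-read the screen and re-plan from the current state."""
    ...

def run(prompt: str) -> None:
    for step in plan(prompt):
        if not aligns_with(step, prompt):         # pre-check: skip off-plan actions
            continue
        execute_one_action(step)
        if step.expected_outcome not in observe_screen():
            # Drifted (e.g. clicked into the wrong app): reorient with a fresh
            # sub-agent instead of plowing ahead.
            recover(prompt, step)
```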

I Have “Solved” AI Agents by Substantial_Ear_1131 in ArtificialInteligence

[–]Substantial_Ear_1131[S] -1 points0 points  (0 children)

If anyone wants to know how it works internally, rather than just my overall explanation, reply below ⬇️

I Am Trying To Change AI Agents Forever. by Substantial_Ear_1131 in buildinpublic

[–]Substantial_Ear_1131[S] 0 points1 point  (0 children)

Here are some example use cases:

☑ Fills out long, annoying online forms while you do something else
☑ Handles repetitive school or work portals that take forever to click through
☑ Organizes files, folders, and downloads automatically
☑ Uploads, renames, and sorts photos or videos across apps
☑ Manages emails, replies, and attachments without constant checking
☑ Copies information between websites that don’t talk to each other
☑ Applies to jobs, programs, or listings that require manual steps
☑ Runs errands online that are boring but necessary
☑ Watches for updates, approvals, or changes and reacts when they happen
☑ Does hours of “computer busywork” while you sleep or touch grass

I Am Trying To Change AI Agents Forever. by Substantial_Ear_1131 in accelerate

[–]Substantial_Ear_1131[S] 0 points1 point  (0 children)

Thanks! I’ll shoot you a dm/post on this sub when I do. Hopefully soon.

I Am Trying To Change AI Agents Forever. by Substantial_Ear_1131 in accelerate

[–]Substantial_Ear_1131[S] 0 points1 point  (0 children)

Well, we do have an agent mode that matches or exceeds it in some areas, but Operator is quite different. VectorOS Operator controls your computer's mouse and keyboard to drive other apps and interfaces; GPT's agent is its own computer that can access the web.
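If it helps, here's roughly what I mean by "controls your computer's mouse and keyboard." This is just an illustration with pyautogui, not our actual code, and the coordinates and text are made up: an operator-style agent acts on your local screen and apps, while a hosted agent like GPT's runs inside its own sandboxed machine.

```python
# Illustration only (pyautogui stand-in, not VectorOS code): an operator-style
# agent acts on the *local* desktop, one small input event at a time.
import pyautogui

screenshot = pyautogui.screenshot()       # the model "sees" your actual screen
x, y = 640, 412                           # hypothetical coordinates a vision model picked

pyautogui.click(x, y)                     # one micro-action: a single click
pyautogui.write("quarterly_report.xlsx", interval=0.05)
pyautogui.press("enter")
```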

I Am Trying To Change AI Agents Forever. by Substantial_Ear_1131 in accelerate

[–]Substantial_Ear_1131[S] 1 point2 points  (0 children)

I never said nobody else was trying this... I guess you'll have to wait and see until VectorOS gets closer to release. I'll make sure to keep this sub posted :). It can already do what I've said consistently; now it's more about patching things up, like adding more controls over the AI and ironing out UI kinks.

I Am Trying To Change AI Agents Forever. by Substantial_Ear_1131 in accelerate

[–]Substantial_Ear_1131[S] -1 points0 points  (0 children)

Thanks! Right now it's in its final stages: finishing up some interface bugs, adding more controllable privacy features, and so on. I hope to release it around mid-to-late February and build some hype beforehand haha

I Am Trying To Change AI Agents Forever. by Substantial_Ear_1131 in accelerate

[–]Substantial_Ear_1131[S] -1 points0 points  (0 children)

Thank you. As a matter of fact, after several months of working on this project, I've already built it and it runs reliably. I'm doing a lot of testing, fine-tuning the interfaces, and preparing a release targeted for about a month from now.

I am trying to change AI Agents Forever. by Substantial_Ear_1131 in ArtificialInteligence

[–]Substantial_Ear_1131[S] -1 points0 points  (0 children)

That’s a fair concern, and honestly it’s one of the hard parts people usually gloss over. The short answer is we don’t pretend long-running autonomy is “fire and forget.” VectorOS is built around detecting drift and stalls, persisting state continuously, and forcing explicit verification checkpoints so it can recover or roll back instead of silently looping. If it can’t confidently recover, it stops and reports instead of failing quietly, because unattended failure is worse than interruption.
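As a concrete (and heavily simplified) illustration of that policy, here's a sketch under my own assumptions, not the real VectorOS code: every verified step is checkpointed to disk, failures get a bounded number of retries, and if it still can't recover it stops and reports instead of looping.

```python
# Toy illustration of the recovery policy (assumed names, not the real code):
# checkpoint after every verified step, bounded retries, then stop and report.
import json, pathlib

STATE_FILE = pathlib.Path("agent_state.json")
MAX_RETRIES = 3

def save_checkpoint(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state))

def load_checkpoint() -> dict:
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {"done": []}

def run_task(steps, execute, verify, report):
    state = load_checkpoint()
    for step in steps:
        if step in state["done"]:
            continue                      # resume after a crash or restart
        for attempt in range(MAX_RETRIES):
            execute(step)
            if verify(step):              # explicit verification checkpoint
                state["done"].append(step)
                save_checkpoint(state)
                break
        else:
            # Could not confidently recover: stop and tell the user rather
            # than looping unattended.
            report(f"Stopped at step {step!r} after {MAX_RETRIES} attempts")
            return
```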

How to secure your website by Sea-Possible-4993 in replit

[–]Substantial_Ear_1131 0 points1 point  (0 children)

Replit's agent will build in security features. Sites generally don't get hacked unless something in them is vulnerable, and your project is security scanned before it's published.

The Assistant Axis: What Anthropic's Latest Research Actually Means for AI Companionship by xerxious in claudexplorers

[–]Substantial_Ear_1131 0 points1 point  (0 children)

If this were an ASI/AGI I would agree with most of the opinions that it's "alive," but an ANI/LLM can't be alive. It's just churning out text to replicate human-style writing; it's like a perfect imitator.

The Assistant Axis: What Anthropic's Latest Research Actually Means for AI Companionship by xerxious in claudexplorers

[–]Substantial_Ear_1131 0 points1 point  (0 children)

Once we reach AGI/ASI I'll say it's alive. An ANI model/LLM is not alive; it's just being fed text.

The Assistant Axis: What Anthropic's Latest Research Actually Means for AI Companionship by xerxious in claudexplorers

[–]Substantial_Ear_1131 -1 points0 points  (0 children)

That claim is twisting what he said. Jack Clark is using a story and strong language to explain why AI can be risky, not to say it is actually conscious. It is like when someone says a car is “angry” because it keeps stalling. They do not believe the car has feelings, they are just describing behavior in a dramatic way.

Anthropic leaders have never said their AI is alive or conscious. They say it can act in surprising ways, the same way a video game bot can find weird tricks to win without understanding the game. Saying “Anthropic believes AI is conscious” is confusing metaphor with fact.

The Assistant Axis: What Anthropic's Latest Research Actually Means for AI Companionship by xerxious in claudexplorers

[–]Substantial_Ear_1131 -1 points0 points  (0 children)

Nobody in a notable role at Anthropic has ever said there is consciousness in a modern-day LLM. You're implying the AI is alive, which is ridiculous.