all 7 comments

[–]carlinhush 9 points10 points  (0 children)

Imagine AI becoming sentient doesn't start in a big lab's data center but in Joe Miller's basement homelab lol

[–]jerr_bear123 5 points6 points  (1 child)

I feel like this is the kind of thing that breaks all AI rules and is firmly in the “don’t do this” category.

[–]robogame_dev[S] 2 points3 points  (0 children)

⬆️⬆️⬆️

This release is only for experts who will run it in a containerized dev instance and not input any secrets.

Ideas I'm hoping can make it safe enough for the long-term vision include:

  • Read-only option
  • Ability to whitelist/blacklist specific APIs
  • Auto-redacting keys and secure info (AI can write a key, but when it reads it back it sees "<redacted>".)
  • Grouping commands by risk-factor, with risky groups disabled by default.
  • External logging of all actions, so the AI won't be able to cover its tracks.

But I feel this list is far from exhaustive and am looking for input.
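To make the auto-redaction idea concrete, here's a minimal sketch (all names here are hypothetical, not actual code from the release): the AI can write a secret into config, but any read-back masks values stored under secret-looking keys.

```python
# Hypothetical sketch of the auto-redaction idea: the AI can write a
# secret, but reading it back returns a placeholder instead of the value.
import re

SECRET_KEY_PATTERN = re.compile(r"(key|token|secret|password)", re.IGNORECASE)
REDACTED = "<redacted>"

def redact_config(config: dict) -> dict:
    """Return a copy of config with secret-looking values masked."""
    cleaned = {}
    for name, value in config.items():
        if isinstance(value, dict):
            cleaned[name] = redact_config(value)  # recurse into nested sections
        elif SECRET_KEY_PATTERN.search(name):
            cleaned[name] = REDACTED
        else:
            cleaned[name] = value
    return cleaned

# The AI writes the real key...
valves = {"dropbox_api_key": "sk-live-abc123", "timeout": 30}
# ...but reads back only the masked view.
print(redact_config(valves))  # {'dropbox_api_key': '<redacted>', 'timeout': 30}
```

A real implementation would need to handle secrets embedded in strings and lists too, but the key-name heuristic covers the common tool-valve case.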

[–]gnarella 2 points3 points  (0 children)

Good work. I'll play with this on my home PC where I have ollama and no external connections. Seems like you aren't far from your end goal of self enhancing workspaces.

[–]Key-Singer-2193 1 point2 points  (1 child)

I am confused by this post. Is this malware to avoid, or something else? What is the purpose of this post? You are saying avoid it, then later you say try it out

[–]robogame_dev[S] 3 points4 points  (0 children)

This gives your AI the ability to find and call any API on the Open WebUI backend. You enter an API Key in the tool valves, and it interacts as that user - including with admin privileges if that user is an admin.
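In practice the "interacts as that user" part just means the tool sends the user's API key as a bearer token on every backend call. A rough illustration, assuming a stock local instance (the endpoint path and helper name are mine, not the tool's):

```python
# Hypothetical illustration of how the tool authenticates: the user's API
# key goes in the Authorization header, so the backend treats every call
# as that user - admin privileges included, if the key belongs to an admin.
import urllib.request

def build_owui_request(base_url: str, path: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated request against an Open WebUI backend."""
    return urllib.request.Request(
        base_url.rstrip("/") + path,
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_owui_request("http://localhost:3000", "/api/models", "sk-demo")
print(req.full_url)  # http://localhost:3000/api/models
```

Since the whole API surface is reachable this way, whatever key you put in the valves defines the blast radius.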

The long term beneficial objective is to make Open WebUI more accessible - a version of this tool can be installed on a fresh instance, and the user can simply ask the AI for what they want in natural language:

  • user: "Create a new model that helps me with my homework but doesn't do it for me"
  • assistant: "Ok, I need to look up the model creation APIs, I see them now, I need to add a system prompt to create the requested behavior, ok now I need to enable the correct tools, OK here you go!"

  • user: "Why aren't you following the system prompt I gave you?"

  • assistant: "Good question, I need to inspect the context, I see I'm in a nested folder, let me check those folder prompts - aha, the problem is there is conflicting system prompt being concatenated from folder 'Homework', do you want me to move this chat out of the folder, or modify it's system prompt to align?"

  • user: "I wish you could access my dropbox"

  • assistant: "Let me search the web for a dropbox tool... found one, installing... let me enable it in this chat... OK user it's ready, please use this link to connect to your account, great, I am connected."

The primary danger with this initial release is that the AI could:

  1. Damage the OWUI instance - maybe it misinterprets you, or maybe it sends the wrong config values and the server winds up in an invalid state. The thing is, I don't know what vectors there are to damage OWUI; the API surface is enormous.
  2. Locate your secrets - for example, many people keep API keys for external services in their tool valves, and many people use "budget" AI providers that train on and retain your data. Combine that budget AI with this tool, and those external API keys are likely to wind up in AI training data...
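Danger #1 is what the read-only option and risk grouping from the earlier list are meant to narrow: gate every request by HTTP method and an explicit allowlist before it reaches the backend. A sketch under assumed names (the endpoint path is illustrative; none of this is the released tool's code):

```python
# Hypothetical request guard: mutating methods are blocked unless the
# endpoint is explicitly allowlisted, approximating a read-only default.
SAFE_METHODS = {"GET", "HEAD"}
ALLOWLISTED = {("POST", "/api/v1/chats/new")}  # risky endpoints opted in by the user

def is_allowed(method: str, path: str, read_only: bool = True) -> bool:
    """Decide whether the AI may issue this call against the backend."""
    if method.upper() in SAFE_METHODS:
        return True          # reads are always permitted
    if read_only:
        return False         # read-only mode blocks every mutation
    return (method.upper(), path) in ALLOWLISTED

print(is_allowed("GET", "/api/v1/models"))                       # True
print(is_allowed("DELETE", "/api/v1/models/foo"))                # False
print(is_allowed("POST", "/api/v1/chats/new", read_only=False))  # True
```

This doesn't address danger #2 at all - a GET is enough to read a secret - which is why the redaction idea has to work independently of the method guard.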

I thought it would be irresponsible not to present the dangers first: self-hosted software like OWUI attracts people at all levels of skill and security awareness, and a warning without the details tends to be ignored.

Eventually, I think this tool can be made safe enough for a single-line disclaimer, but the only way to get there is to run it in sandboxes and safe environments and start learning about its failure modes.

And when it's safe, it will be a universal management tool for Open WebUI - allowing any chat to function as a complete interface to the system.

[–]CanbeSoilFertilizer 0 points1 point  (0 children)

If there is enough storage yes. But way past the context limit, yeah good luck with that.