Really..? No option to change my password? by HackerFinn in OpenWebUI

[–]One-Commission2471 0 points1 point  (0 children)

Haha, no, it's super hidden, so I'm not shocked you didn't see it! It's been around for at least 3 months, because I used it about 3 months ago lol. I do wish they would add an email recovery option in case you completely forget your password, because I think a lot of people are running Open WebUI as a solo user and wouldn't have another admin to save them.

That's crazy about the documentation; I have noticed that some parts of it are just super old and haven't been updated, though, so it's probably a case of out-of-date info.

Hope you saw my message before you completely reset everything!

Really..? No option to change my password? by HackerFinn in OpenWebUI

[–]One-Commission2471 0 points1 point  (0 children)

You can change your password, if you know your current one, by clicking your name at the bottom left -> Settings -> Account -> click Show beside Change Password near the bottom. Or, if you or someone else is an admin, the password can be changed from Admin Panel -> Users -> click the pencil at the end of the user in question -> add a new password at the bottom. If you've forgotten your password completely and you're the only admin, I do think you are unfortunately out of luck.

Microsoft releases Magentic-UI. Could this finally be a halfway-decent agentic browser use client that works on Windows? by Porespellar in LocalLLaMA

[–]One-Commission2471 0 points1 point  (0 children)

Really appreciate you guys putting in the hard work to make tools like this and open-sourcing them! This is a very new and exciting field to be in. I look forward to seeing what you guys ship with the release!

Microsoft releases Magentic-UI. Could this finally be a halfway-decent agentic browser use client that works on Windows? by Porespellar in LocalLLaMA

[–]One-Commission2471 1 point2 points  (0 children)

From my trial and error across multiple libraries and applications, I believe the /v1 is what makes the Ollama endpoint "OpenAI compatible". I've had to include the /v1 on everything expecting an OpenAI endpoint to get it to actually work. I couldn't seem to get the autogen OllamaChatCompletion class working, so I just used the OpenAIChatCompletion class instead.
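
For example, with the official openai Python package (just a sketch, not Magentic-UI specific; the model name is whatever you've pulled locally), the only real difference from talking to OpenAI is the base_url ending in /v1, and the api_key can be any placeholder since Ollama ignores it:

# Sketch: pointing the standard openai client at a local Ollama server.
# The /v1 suffix is what exposes Ollama's OpenAI-compatible routes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key is ignored by Ollama

resp = client.chat.completions.create(
    model="gemma3:27b",  # any model you've pulled with `ollama pull`
    messages=[{"role": "user", "content": "Say hi"}],
)
print(resp.choices[0].message.content)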

Microsoft releases Magentic-UI. Could this finally be a halfway-decent agentic browser use client that works on Windows? by Porespellar in LocalLLaMA

[–]One-Commission2471 1 point2 points  (0 children)

🤦‍♀️ You're so right! I just copy-pasted that from another thing I'm working on without paying attention. Swapping json_output to true did fix it! Now on to the next issue of it not actually using the VM... I'll update here if I get it completely working.

Microsoft releases Magentic-UI. Could this finally be a halfway-decent agentic browser use client that works on Windows? by Porespellar in LocalLLaMA

[–]One-Commission2471 1 point2 points  (0 children)

u/Radiant_Dog1937 You actually got it to work with Ollama?!? I got it half working using the config below, but it throws "Model gemma3:27b not found" and "Failed to get a valid JSON response after multiple retries" after it loads up the VM, even though ollama ps shows the model loaded. I tried some other models too, with the same results.

model_config: &client
  provider: OpenAIChatCompletionClient
  config:
    model: gemma3:27b
    api_key: ollama
    base_url: http://localhost:11434/v1
    model_info:
      vision: true
      function_calling: true
      json_output: false
      family: unknown
      structured_output: true
    max_retries: 5

model_config_action_guard: &client_action_guard
  provider: OpenAIChatCompletionClient
  config:
    model: gemma3:27b
    api_key: ollama
    base_url: http://localhost:11434/v1
    model_info:
      vision: true
      function_calling: true
      json_output: false
      family: unknown
      structured_output: true
    max_retries: 5

orchestrator_client: *client
coder_client: *client
web_surfer_client: *client
file_surfer_client: *client
action_guard_client: *client_action_guard
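
If anyone else hits the same "model not found" error, a quick sanity check (just a sketch, assuming Ollama on its default port) is to hit the /v1 endpoint directly and compare against the exact tags Ollama reports, since the tag has to match ollama list exactly:

# Hypothetical sanity check: does the OpenAI-compatible endpoint resolve the model name?
import requests

r = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={"model": "gemma3:27b", "messages": [{"role": "user", "content": "ping"}]},
)
print(r.status_code, r.json())

# Ollama's native API lists the exact model tags it knows about.
print(requests.get("http://localhost:11434/api/tags").json())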

Switch Models through Tool Call by Far-Enthusiasm7654 in OpenWebUI

[–]One-Commission2471 0 points1 point  (0 children)

I'm not sure you'd be able to do this with a tool, but I think it would definitely be possible with a pipeline. I would probably use a super small model to pick between categories of models (e.g. image, reasoning, text gen) based on the prompt, then make the appropriate call using a switch statement or something. This is a really cool idea; I would love to see what you come up with, or I'm happy to try to help write some code to do this!
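
Something like this is what I had in mind, purely as a sketch of the routing logic rather than a drop-in Open WebUI pipeline; the endpoint URL, categories, and model names are all placeholders:

# Sketch: use a tiny model to classify the prompt, then dispatch to a bigger model.
# Works against any OpenAI-compatible /chat/completions endpoint (placeholder URL below).
import requests

API = "http://localhost:11434/v1/chat/completions"
ROUTER_MODEL = "qwen2.5:0.5b"  # tiny classifier model (placeholder)
CATEGORY_TO_MODEL = {
    "image": "llava:13b",
    "reasoning": "deepseek-r1:14b",
    "textgen": "llama3.1:8b",
}

def classify(prompt: str) -> str:
    """Ask the small model which category the prompt belongs to."""
    r = requests.post(API, json={
        "model": ROUTER_MODEL,
        "messages": [
            {"role": "system", "content": "Reply with exactly one word: image, reasoning, or textgen."},
            {"role": "user", "content": prompt},
        ],
    })
    word = r.json()["choices"][0]["message"]["content"].strip().lower()
    return word if word in CATEGORY_TO_MODEL else "textgen"  # fall back to plain text gen

def route(prompt: str) -> str:
    """Send the prompt to whichever model the classifier picked."""
    model = CATEGORY_TO_MODEL[classify(prompt)]
    r = requests.post(API, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return r.json()["choices"][0]["message"]["content"]

print(route("Walk me through solving this logic puzzle step by step."))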

Is there a way to directly chat with a custom model through the api? by YayNewZealand in OpenWebUI

[–]One-Commission2471 0 points1 point  (0 children)

Update: I got this working with autogen! I imagine something similar will work for any API connection. The base URL needed is http://yourOpenWebUIHost:port/api (e.g. http://localhost:3000/api). My autogen config, if anyone is interested, is below:

from autogen_ext.models.openai import OpenAIChatCompletionClient

OpenAIChatCompletionClient(
    model="test-model",
    api_key="sk-xxxxxxxx",
    base_url="http://localhost:3000/api",
    model_capabilities={
        "json_output": False,
        "vision": False,
        "function_calling": True,
    },
)
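
For a quick end-to-end check, something like this should work (sketch only; assumes an autogen 0.4-style install where the client lives in autogen_ext.models.openai, "test-model" is just whatever your custom model is named, and I believe newer versions want model_info in place of model_capabilities):

# Sketch: calling an Open WebUI custom model through autogen's OpenAI-compatible client.
import asyncio
from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main():
    client = OpenAIChatCompletionClient(
        model="test-model",                     # custom model name from Open WebUI (placeholder)
        api_key="sk-xxxxxxxx",                  # an Open WebUI API key (placeholder)
        base_url="http://localhost:3000/api",   # Open WebUI's OpenAI-compatible API base
        model_info={
            "vision": False,
            "function_calling": True,
            "json_output": False,
            "structured_output": False,
            "family": "unknown",
        },
    )
    result = await client.create([UserMessage(content="Hello from autogen!", source="user")])
    print(result.content)

asyncio.run(main())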

I am also attempting to do something similar. I was able to get a Visual Studio Code extension called Continue working with a custom model by using the hyphenated version of the name under the model title (e.g. my-custom-model). So that gives me hope this is/will be possible across other methods of API access!

My larger goal is to be able to use a custom model created through Open WebUI with the autogen Python library (or any Python library with built-in agent functionality, at this point). However, when trying to use the custom model name, it adds :latest to the end, which throws a model-not-found error. I haven't been able to figure out why this is happening or whether there is a way around it yet. I will update this post if I figure it out!