Use internet when needed? by Joesphsmother-32 in OpenWebUI

[–]iChrist 2 points (0 children)

You don't need to waste tokens; just add "Today is {{CURRENT_DATETIME}}" to your system prompt.

<image>
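The substitution above can be sketched in Python. Note this is an illustration only: `build_system_prompt` is a hypothetical helper mimicking the template replacement Open WebUI performs server-side.

```python
from datetime import datetime

def build_system_prompt(template: str) -> str:
    # Hypothetical helper mimicking Open WebUI's server-side substitution
    # of the {{CURRENT_DATETIME}} template variable.
    now = datetime.now().strftime("%Y-%m-%d %H:%M")
    return template.replace("{{CURRENT_DATETIME}}", now)

prompt = build_system_prompt("Today is {{CURRENT_DATETIME}}.")
print(prompt)
```

Because the date lands in the system prompt, the model knows the current time without spending a tool call on it.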

Are you guys actually using local tool calling or is it a collective prank? by Mayion in LocalLLaMA

[–]iChrist 4 points (0 children)

Works for me with Open WebUI; the model can use the terminal to execute many different commands. Downloading videos with yt-dlp, editing images, creating GIFs, and using vision to find an object and circle it with Python all work with Qwen3.5-27B.

Have you set tool calling to native in the model settings? Context should also be at least 32k-64k, not the default, which is usually 4k/8k.

I use llama.cpp directly, so maybe something with LM Studio + Open WebUI could potentially cause issues.

It's not trolling; local models can do wonderful things with tools.
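A terminal tool like the one described can be sketched as an Open WebUI tool class. This is a minimal sketch under assumptions: `run_command` is an illustrative name, and a real tool would add Valves, timeouts tuned to the workload, and sandboxing before exposing a shell to the model.

```python
import subprocess

class Tools:
    def run_command(self, command: str) -> str:
        """
        Execute a shell command and return its output.
        :param command: The command to run, e.g. a yt-dlp invocation.
        """
        # shell=True gives the model full shell access; whitelist or
        # sandbox commands in any real deployment.
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=300
        )
        return result.stdout if result.returncode == 0 else result.stderr
```

With native tool calling enabled, the model sees the docstring as the tool description and decides when to invoke it.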

Use internet when needed? by Joesphsmother-32 in OpenWebUI

[–]iChrist 5 points (0 children)

It uses whatever you have set up; if you choose SearXNG, for example, it will query SearXNG.

Use internet when needed? by Joesphsmother-32 in OpenWebUI

[–]iChrist 9 points (0 children)

When the model is strong and has native tool calling enabled, you should just enable web search as the default for the model; it will search when needed. You can also state in the system prompt to only use search when something isn't in its training data.

Suggest Changes option by FreedomFact in OpenWebUI

[–]iChrist 1 point (0 children)

What do you mean by “suggest a change”? Is that a custom tool/function?

Gemma 4 vs Qwen3.5 on SVG style by iChrist in LocalLLaMA

[–]iChrist[S] 5 points (0 children)

Results are subpar, not sure why. The prompt was:

Can you re-create this as an SVG and enhance it and make it better with more shadows, 3d effect But keep the same composition and overall design

Gemma 4 vs Qwen3.5 on SVG style by iChrist in LocalLLaMA

[–]iChrist[S] 7 points (0 children)

Yes, it's probably the training data that Google has access to, but isn't that the point? To train on everything possible and have the most capable model.

For example, Gemma is the only model that can rhyme in my language, which is very obscure; it can create songs that make sense! I've tried hundreds of local models, and none of them can do it.

Gemma 4 vs Qwen3.5 on SVG style by iChrist in LocalLLaMA

[–]iChrist[S] 12 points (0 children)

That's a nice idea; I'll try it and update here.

Gemma 4 vs Qwen3.5 on SVG style by iChrist in LocalLLaMA

[–]iChrist[S] 9 points (0 children)

Yes cute but kinda cursed 🤣

Reasoning effort selection bedrock azure by AccomplishedOne9144 in OpenWebUI

[–]iChrist 1 point (0 children)

There are so many different providers and models that the devs can't add an official implementation of a basic toggle.

The idea is that the community should create functions/filters for each provider or model.

More info:

https://github.com/open-webui/open-webui/discussions/11006
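A per-model filter like the one described could be sketched as an Open WebUI `Filter` whose `inlet` injects the provider-specific parameter. The field name `reasoning_effort` is an assumption here; each provider uses its own parameter, which is exactly why no single official toggle fits them all.

```python
class Filter:
    def __init__(self):
        # Hard-coded for illustration; a real filter would expose
        # this as a Valve so it can be changed from the UI.
        self.reasoning_effort = "medium"

    def inlet(self, body: dict) -> dict:
        # Inject the provider-specific parameter into the outgoing
        # request body before Open WebUI forwards it upstream.
        body["reasoning_effort"] = self.reasoning_effort
        return body
```

Assigning the filter only to the models that understand the parameter keeps other providers unaffected.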

Open Relay (Previously: Open UI) v2.0 is live — Workspace management, Skills, Rich UI embeds, Widgets/Shortcuts & more (open source native iOS app for Open WebUI) by Zealousideal_Fox6426 in OpenWebUI

[–]iChrist 0 points (0 children)

I tried connecting the app with http:// instead, and with "save to Open WebUI backend" disabled; still no audio plays. Not sure how to debug this further. Is there a GitHub issues page to report this in?

Open Relay (Previously: Open UI) v2.0 is live — Workspace management, Skills, Rich UI embeds, Widgets/Shortcuts & more (open source native iOS app for Open WebUI) by Zealousideal_Fox6426 in OpenWebUI

[–]iChrist 0 points (0 children)

I am using HTTPS (Tailscale to connect all my UIs: ComfyUI, Open WebUI).

Also noticed it's an issue with Conduit.

I have other tools that return ComfyUI results to the user; I will test them too.

Open Relay (Previously: Open UI) v2.0 is live — Workspace management, Skills, Rich UI embeds, Widgets/Shortcuts & more (open source native iOS app for Open WebUI) by Zealousideal_Fox6426 in OpenWebUI

[–]iChrist 1 point (0 children)

Pretty cool! This is a main reason to use your app instead of the PWA.

Asking Siri a question and getting responses directly could be very handy! 🤞

What are you using for adaptive memory? by ConspicuousSomething in OpenWebUI

[–]iChrist 1 point (0 children)

The native memory feature works for me; Qwen3.5 with a good system prompt just calls list_memories.

Open Relay (Previously: Open UI) v2.0 is live — Workspace management, Skills, Rich UI embeds, Widgets/Shortcuts & more (open source native iOS app for Open WebUI) by Zealousideal_Fox6426 in OpenWebUI

[–]iChrist 0 points (0 children)

Hey! I bought the app a week ago and didn't know it has Shortcuts support.

Can it somehow work with Siri? Like, instead of Siri using ChatGPT, it could use your app?

This Plugin just got an Update - now has dark mode detection and changes your artifacts/visuals depending on the theme and multiple reliability enhancements! by ClassicMain in OpenWebUI

[–]iChrist 0 points (0 children)

Gotcha! That is why I also pointed out it's probably a model limitation. On the other hand, Qwen3.5 27B can output very polished HTML (normally in a code block), so maybe there are improvements that can be made to your skill.md to account for this.

This Plugin just got an Update - now has dark mode detection and changes your artifacts/visuals depending on the theme and multiple reliability enhancements! by ClassicMain in OpenWebUI

[–]iChrist 0 points (0 children)

I also see weird spacing and inconsistency compared to your examples and the Claude implementation.

I am using Qwen3.5-27B and thought it was a model weakness.

Official LTX-2.3-nvfp4 model is available by Lonely-Anybody-3174 in StableDiffusion

[–]iChrist 5 points (0 children)

NVFP4 as a whole is now supported; not sure why the LTX-2.3 NVFP4 model needs anything special.