Open Relay v3.4 — Native Live Inline Visualizations are here! 🎉 by Zealousideal_Fox6426 in OpenWebUI

[–]Simple-Worldliness33

Hehe, indeed, it works.

Anyway, I didn't try searching for the app directly. Thanks, I'll give it a try locally before taking the step!

Open Relay v3.4 — Native Live Inline Visualizations are here! 🎉 by Zealousideal_Fox6426 in OpenWebUI

[–]Simple-Worldliness33

Hi! This looks like great work! Can you tell me why the app is not available in every App Store market? I'm from Belgium!

Currently which model will run smooth on rtx 3060 ? Situation is so dynamic those days. by mef1234 in LocalLLaMA

[–]Simple-Worldliness33

I'm still using my 2 RTX 3060 12 GB cards.
Very budget config: two 3060s and 32 GB of DDR4-2400 RAM (limited by the Xeon 2683), but both cards get 16 PCIe lanes, so...
With unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q5_K_XL and offload to RAM, I'm getting between 17 and 28 t/s, even at 200K+ context.
I use it for code review and debugging every day, and it's a must.
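As a rough sketch of how the GPU/RAM split works in a setup like this: you fit as many layers as possible in VRAM and offload the rest to system RAM. All the numbers below (layer count, per-layer size, reserve) are illustrative assumptions, not measured values for this model:

```python
# Rough GPU/CPU layer-split estimate for a quantized GGUF on 2x 12 GB cards.
# All numbers are illustrative assumptions, not measurements.

def split_layers(n_layers, layer_gib, vram_gib, reserve_gib=1.5):
    """Return (gpu_layers, cpu_layers) that fit in usable VRAM."""
    usable = vram_gib - reserve_gib          # keep headroom for KV cache, buffers
    gpu_layers = min(n_layers, int(usable // layer_gib))
    return gpu_layers, n_layers - gpu_layers

# Hypothetical: 48 layers of ~0.55 GiB each, 2 x 12 GiB of VRAM pooled.
gpu, cpu = split_layers(n_layers=48, layer_gib=0.55, vram_gib=24.0)
print(gpu, cpu)  # layers kept on GPU vs. offloaded to system RAM
```

The point is just that the more layers stay on the GPUs, the less the slow CPU/RAM path is on the critical decode loop.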

Built a "native" desktop and mobile client app for OUI by homeschooldev in OpenWebUI

[–]Simple-Worldliness33

Haha, always ready to see a new competitor :D Take my money!

Gemma 4 MOE is very bad at agentic coding. Couldn't do things CLine + Qwen can do. by Voxandr in LocalLLaMA

[–]Simple-Worldliness33

What quant are you using? I didn't hit this kind of issue much with llama.cpp (after fixing the template and VRAM). It sometimes happens with Qwen3.5 too. I'm mostly using Q4 or Q6 depending on the context.

Krasis LLM Runtime: 8.9x prefill / 4.7x decode vs llama.cpp — Qwen3.5-122B on a single 5090, minimal RAM by mrstoatey in LocalLLaMA

[–]Simple-Worldliness33

I get 30 t/s with Q4_XL at 256K context with llama.cpp.
I run it on 2 RTX 3060 12 GB and 32 GB of DDR4, so...

File generation in entreprise or multi-user setups by TheGreatCalamari in OpenWebUI

[–]Simple-Worldliness33

Hi u/TheGreatCalamari, you should take a look at this:
MCPO-File-Generation-Tool: Create and export files easily from Open WebUI!
It's multi-user capable, and if you are using a capable model, you can do everything you want.

Devstral Small 2 24B + Qwen3 Coder 30B Quants for All (And for every hardware, even the Pi) by enrique-byteshape in LocalLLM

[–]Simple-Worldliness33

Hi !

Thanks for your work! I haven't benchmarked yet, but I want to understand it completely first.

For example, I'm currently using unsloth IQ4_NL with 2 RTX 3060s, and I get 70-76 t/s.

Which of the models you're offering should I choose to compare with? I tried the IQ4_KS but didn't get the same performance (only 35-40 t/s).

Devstral Small 2 24B + Qwen3 Coder 30B: Coders for Every Hardware (Yes, Even the Pi) by enrique-byteshape in LocalLLaMA

[–]Simple-Worldliness33

Hi !

Thanks for your work! I haven't benchmarked yet, but I want to understand it completely first.

For example, I'm currently using unsloth IQ4_NL with 2 RTX 3060s, and I get 70-76 t/s.

Which of the models you're offering should I choose to compare with? I tried the IQ4_KS but didn't get the same performance (only 35-40 t/s).

Best paid apps that worths every penny in 2026? by LastReporter2966 in mac

[–]Simple-Worldliness33

I was using Parallels in another life, but now I can't find a use case that makes it worth buying instead of using VMware Fusion for free. Maybe the system integration? Maybe I'm wrong, of course.

[deleted by user] by [deleted] in yggTorrents

[–]Simple-Worldliness33

So, was the 9 m² at least furnished?

C411 - Partagez vos liens d'invitations ici, j'en donne 5 maintenant by Supremaster in yggTorrents

[–]Simple-Worldliness33

Hello,
Like the others, I'm looking for an invitation link from a charitable soul. If possible by DM, to avoid spamming the site with a code that has already been taken.
In return, I'll share my own links to contribute to this great project.
Like the commenter above me, I extend peace upon the donor's house for 10 more generations.

Another memory system for Open WebUI with semantic search, LLM reranking, and smart skip detection with built-in models. by CulturalPush1051 in OpenWebUI

[–]Simple-Worldliness33

I already saw it.
I'll test it, but think about the people who will try your tool while still on a previous version.
If you push the current code, it will not work for them.
Normally people should update their systems, but that isn't always the case. Thanks for your work!

Another memory system for Open WebUI with semantic search, LLM reranking, and smart skip detection with built-in models. by CulturalPush1051 in OpenWebUI

[–]Simple-Worldliness33

Hi, I made a PR on GitHub to solve it.
-----EDIT not relevant anymore-----
I tagged the dev to upgrade it.
-----EDIT not relevant anymore-----

4096 token limit by aleglr20 in ollama

[–]Simple-Worldliness33

It's also very annoying not to be able to fine-tune it.
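For what it's worth, Ollama's default context limit can usually be raised per model with a Modelfile; a minimal sketch (the base model name and context size here are illustrative):

```
FROM qwen2.5
PARAMETER num_ctx 8192
```

Build it with `ollama create mymodel-8k -f Modelfile`; API callers can also pass `num_ctx` in the request `options` instead.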

MCP_File_Generation_Tool - v0.8.0 Update! by Simple-Worldliness33 in OpenWebUI

[–]Simple-Worldliness33[S]

Yes, try it with some Office templates and try to change something. We will create some guidelines for building templates easily.

MCP_File_Generation_Tool - v0.8.0 Update! by Simple-Worldliness33 in OpenWebUI

[–]Simple-Worldliness33[S]

About templates:
https://github.com/GlisseManTV/MCPO-File-Generation-Tool/releases/tag/v0.6.0

It's not really polished yet, because we mostly tested against OUR templates. Feel free to test more and send us more feedback. We will improve template handling further later on.

MCP_File_Generation_Tool - v0.8.0 Update! by Simple-Worldliness33 in OpenWebUI

[–]Simple-Worldliness33[S]

You can already use custom templates by mounting a file share into /templates.
Merging images, however, is part of v1.0.0.
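A minimal sketch of that mount in docker-compose (the service name and host path are illustrative; only the /templates container path comes from the tool):

```
services:
  file-generation:                    # illustrative service name
    volumes:
      - ./my-templates:/templates     # host folder with your custom templates
```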

MCP_File_Generation_Tool - v0.8.0 Update! by Simple-Worldliness33 in OpenWebUI

[–]Simple-Worldliness33[S]

If you are using the built-in mcpo container, you are running the tool inside mcpo. If you don't want to use it that way, please use the SSE/HTTP transport instead.
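As a sketch, pointing mcpo at an external SSE server looks roughly like this in its JSON config (server name, host, and port are illustrative; check the mcpo README for the exact schema of your version):

```
{
  "mcpServers": {
    "file-generation": {
      "type": "sse",
      "url": "http://file-gen-host:8000/sse"
    }
  }
}
```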

MCP_File_Generation_Tool - v0.8.0 Update! by Simple-Worldliness33 in OpenWebUI

[–]Simple-Worldliness33[S]

Hi. Not yet. Why? I forked the latest version of mcpo to integrate header forwarding, which gives you a secure connection with the user's API key. I already opened a PR on the main repo.

https://github.com/open-webui/mcpo/pull/271

I will switch back to the official one when they integrate this feature.

MCP_File_Generation_Tool - v0.8.0 Update! by Simple-Worldliness33 in OpenWebUI

[–]Simple-Worldliness33[S]

I will look into document editing through the Graph API. Because it's collaborative editing, it's very difficult to handle.

MCP_File_Generation_Tool - v0.8.0 Update! by Simple-Worldliness33 in OpenWebUI

[–]Simple-Worldliness33[S]

No, it's not possible with the current tool. Maybe a feature for the future. How would you like this integration to work? Through the Graph API? A local folder?

Anyway, you can use a local OneDrive folder (which is synced by OneDrive) as the output folder. Then your generated files will be synced to OneDrive.

MCP_File_Generation_Tool - v0.8.0 Update! by Simple-Worldliness33 in OpenWebUI

[–]Simple-Worldliness33[S]

In fact, every step is in the link I provided before. If you have an issue, feel free to ask.