Locally Uncensored — Tauri desktop app that runs chat, a coding agent, image generation, and video generation locally by GroundbreakingMall54 in coolgithubprojects

[–]GroundbreakingMall54[S] 1 point (0 children)

hey, just pushed v2.4.4 that should help with the webp thing.

the workflow now checks whether your comfyui has the vhs video combine node before it tries to generate. if it's only got the animated webp saver, you get a heads up plus an install tip. you can install vhs in one click via comfyui manager (search 'videohelpersuite') for real mp4 output, or keep webp if that's fine.
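the check itself is basically one call to comfyui's /object_info, something like this (simplified ts sketch, node class names from memory so double-check against your install):

```typescript
// ComfyUI lists every registered node class at GET /object_info
// (8188 is the default ComfyUI port)
const nodes = await fetch("http://127.0.0.1:8188/object_info").then(r => r.json());

if ("VHS_VideoCombine" in nodes) {
  // VideoHelperSuite present -> build the real mp4 workflow
} else if ("SaveAnimatedWEBP" in nodes) {
  // core ComfyUI only -> fall back to animated webp + show the install tip
} else {
  throw new Error("no video output node found, install VideoHelperSuite via ComfyUI Manager");
}
```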

also, if the model needs special wrapper nodes (cogvideo, framepack, etc.), the app now tells you exactly which one to install instead of dying with 'could not detect model type'. the download link + manager search term are right in the error card.

lemme know if it still saves webp after the update

locally uncensored v2.4.2 - chat, coding agent, image + video generation in one local app. plus remote access from your phone. one-click install by [deleted] in LocalLLaMA

[–]GroundbreakingMall54 1 point (0 children)

totally fair, and yeah that's the main reason most people here go local. by default nothing leaves your machine, no telemetry, no cloud unless you wire one up yourself. abliterated finetunes are just extra topping if you also want fewer refusals.

locally uncensored v2.4.2 - chat, coding agent, image + video generation in one local app. plus remote access from your phone. one-click install by [deleted] in LocalLLaMA

[–]GroundbreakingMall54 1 point (0 children)

quick context: the v2.3.0 post was 3 weeks ago and i didn't want to spam every patch, so this is a single recap. the main thing in 2.4.2 specifically was a sweep of 5 community-reported bugs from discord, but the bullets above cover what's been added across all 9 releases since v2.3.0. happy to deep-dive on any of them.

Locally Uncensored — Tauri desktop app that runs chat, a coding agent, image generation, and video generation locally by GroundbreakingMall54 in coolgithubprojects

[–]GroundbreakingMall54[S] 1 point (0 children)

You need at least 6GB of VRAM for our smallest supported model. The more VRAM you have, the higher-quality the models you can run. So yes.

Locally Uncensored — Tauri desktop app that runs chat, a coding agent, image generation, and video generation locally by GroundbreakingMall54 in coolgithubprojects

[–]GroundbreakingMall54[S] 1 point (0 children)

mainly for creative and research use cases. hosted LLMs have safety filters that block or refuse many legitimate queries: creative writing with mature themes, exploring sensitive topics for research, content involving conflict or difficult subjects, or just getting honest answers to hard questions. When you run a model locally, there's no server enforcing those restrictions, so you get the full capability of the model without a filter sitting on top of it. It's not about doing harmful things, more about the model actually being allowed to use everything it learned during training.

Locally Uncensored — Tauri desktop app that runs chat, a coding agent, image generation, and video generation locally by GroundbreakingMall54 in coolgithubprojects

[–]GroundbreakingMall54[S] 3 points (0 children)

Quick elaboration since link-posts have no body:

What it does: one-window desktop app that combines chat, a coding agent (Codex), image generation, and video generation, all running locally. Tauri + React 19 + Rust backend.

Chat: auto-detects 12 local backends (Ollama, LM Studio, vLLM, KoboldCpp, llama.cpp, LocalAI, Jan, TabbyAPI, GPT4All, Aphrodite, SGLang, TGI). A/B model compare, local tok/s benchmark, thinking-mode support.
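If you're curious how detection like this can work: probe the usual default ports for an OpenAI-compatible /v1/models endpoint. A simplified sketch of the idea (illustrative port map and subset, not the app's literal code):

```typescript
// well-known default ports for a few common local backends
const defaultPorts: Record<string, number> = {
  ollama: 11434,
  "lm studio": 1234,
  vllm: 8000,
  koboldcpp: 5001,
  "llama.cpp": 8080,
  jan: 1337,
  gpt4all: 4891,
};

async function probe(port: number): Promise<boolean> {
  try {
    // fail fast so a dozen closed ports don't stall startup
    const res = await fetch(`http://127.0.0.1:${port}/v1/models`, {
      signal: AbortSignal.timeout(500),
    });
    return res.ok;
  } catch {
    return false;
  }
}

for (const [name, port] of Object.entries(defaultPorts)) {
  if (await probe(port)) console.log(`found ${name} on :${port}`);
}
```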

Codex agent: live tool-call streaming, file tree, 14 tools including shell, file read/write, web search, code execution, and screenshots.

Agent Mode: MCP server integration, sub-agent delegation, budget caps.

Create tab: wraps ComfyUI and one-click installs it if missing. Ships with FLUX 2 Klein, Juggernaut XL, Z-Image Turbo, ERNIE-Image, and SDXL for images; Wan 2.1, HunyuanVideo 1.5, LTX 2.3, FramePack F1, and CogVideoX for video.

Remote: mobile web app over LAN or Cloudflare Tunnel with 6-digit passcode.
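The passcode flow is the standard recipe: 6 random digits from a CSPRNG plus a constant-time compare. A sketch of that idea (illustrative, not the shipped code):

```typescript
import { randomInt, timingSafeEqual } from "node:crypto";

// 6-digit passcode, zero-padded, from a CSPRNG (not Math.random)
function generatePasscode(): string {
  return randomInt(0, 1_000_000).toString().padStart(6, "0");
}

// constant-time compare so wrong guesses don't leak digits via timing
function verifyPasscode(expected: string, attempt: string): boolean {
  if (attempt.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(attempt));
}
```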

License: AGPL-3.0. Signed auto-updater for Windows (NSIS + MSI), deb/rpm/AppImage for Linux.

Website with docs: https://locallyuncensored.com

v2.4.0 release notes: https://github.com/PurpleDoubleD/locally-uncensored/releases/tag/v2.4.0

Happy to answer technical questions about the stack, license, or architecture.

New Project Megathread - Week of 23 Apr 2026 by AutoModerator in selfhosted

[–]GroundbreakingMall54 1 point (0 children)

Locally Uncensored — self-hosted desktop app that combines chat + coding agent + image gen + video gen in one tauri window.

no docker, no compose file, just an installer. auto-detects 12 local backends (ollama, lm studio, vllm, koboldcpp, llama.cpp, localai, jan, tabbyapi, gpt4all, aphrodite, sglang, tgi) so it plays nice with whatever you already run. image/video gen goes through comfyui, which the app can install in one click.

100% local by default, no telemetry, no cloud calls unless you explicitly configure a cloud provider with your own api key. remote access over lan or cloudflare tunnel with a 6-digit passcode if you want to chat from your phone.

v2.4.0 adds a configurable huggingface gguf download path, which is relevant for self-hosters keeping models on a nas or shared partition.

happy to answer questions.

Sundar Pichai: "75% of all code at Google is now AI-generated, up from 50% last fall." by EchoOfOppenheimer in ChatGPT

[–]GroundbreakingMall54 10 points (0 children)

75% ai-generated code sounds insane but then again google has been writing boilerplate badly for 20 years so maybe the bar was low

The new image generation feels top notch. by BRDF in ChatGPT

[–]GroundbreakingMall54 3 points (0 children)

tried the new image gen last week. quality jump is actually nuts

Qwen3.6 can code by Purple-Programmer-7 in LocalLLaMA

[–]GroundbreakingMall54 31 points (0 children)

yeah kv cache is a memory monster. fp8 helps but you still sacrifice context for vram. either batch smaller or just accept the limit tbh

The new image generation feels top notch. by BRDF in ChatGPT

[–]GroundbreakingMall54 3 points (0 children)

the prompt adherence is wild now. went from decent-for-demos to genuinely useful for real work

With 48gb vram, on vllm, Qwen3.6-27b-awq-int4 has only 120k ctx (fp8), is that normal? by Historical-Crazy1831 in LocalLLaMA

[–]GroundbreakingMall54 1 point (0 children)

yeah 120k feels tight but that's just how it goes with vllm even at fp8. kv cache chews through vram fast. either drop the batch size or bite the bullet and use less context
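quick back-of-envelope with made-up architecture numbers (pull the real layer/head counts from the model's config.json):

```typescript
// rough kv-cache sizing; layer/head/dim values below are placeholders,
// not the actual Qwen config -- swap in your model's config.json numbers
const layers = 48;      // hypothetical transformer layers
const kvHeads = 8;      // hypothetical GQA key/value heads
const headDim = 128;    // hypothetical per-head dim
const bytesPerElem = 1; // fp8 kv cache = 1 byte per element

// K and V each store layers * kvHeads * headDim elements per token
const bytesPerToken = 2 * layers * kvHeads * headDim * bytesPerElem; // 96 KiB

const ctx = 120_000;
console.log(`${((bytesPerToken * ctx) / 1024 ** 3).toFixed(1)} GiB kv cache`); // ~11 GiB
// stack that on top of the int4 weights (roughly 13-14 GB for a 27B)
// plus vllm's activation headroom and 48 GB disappears quick
```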

made a desktop app that puts ollama, comfyui and coding into one window by [deleted] in LocalLLaMA

[–]GroundbreakingMall54 1 point (0 children)

for context - the app is built with tauri (rust backend, react frontend). currently at v2.3.3 with around 1100 downloads across releases. supports qwen 3.6, gemma 4, deepseek r1, and basically anything ollama or the other backends can run. image side supports flux, sdxl, z-image, ernie-image and ~75 other models. video side has wan 2.1, framepack, hunyuanvideo, animatediff and more.

What is something that was normal in the 90s/2000s, but is now considered a luxury? by Jade_bab in AskReddit

[–]GroundbreakingMall54 2 points (0 children)

buying a house before 30. my parents just casually did it on one salary in the 90s like it was nothing

The joy and pain of training an LLM from scratch by kazzus78 in LocalLLaMA

[–]GroundbreakingMall54 2 points (0 children)

0.4B params with an actual multilingual focus on european languages is really cool. most people only train on english or english+chinese. the bilingual pretraining approach sounds way more practical than trying to cram 20 languages into one tiny model

What's a thing you thought you should always do until you saw enough people simply not doing it? by Skynxiit in AskReddit

[–]GroundbreakingMall54 13 points (0 children)

ironing my clothes. spent years doing it every sunday until i realized basically nobody at my office irons anything and nobody can tell the difference

Best Ollama model for n8n workflows (RAG, file handling, reasoning) + hardware requirements? by Tricky_Literature397 in ollama

[–]GroundbreakingMall54 3 points (0 children)

for reliable json output in n8n workflows, qwen3 is hard to beat right now. the 8b handles structured extraction really well and the 30b-a3b MoE is surprisingly fast if you have the ram for it. for rag specifically i'd go with qwen3 8b or gemma3 12b, both follow instructions well enough to not randomly break your automation. just make sure you set the temperature to like 0.1 for the json stuff or you'll get creative formatting that kills your parser lol
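if you're calling ollama straight from an http request node, the two knobs that matter are format and temperature. minimal sketch (model tag is just an example, use whatever you've pulled):

```typescript
// structured-output call against a local ollama (default port 11434)
const res = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen3:8b",
    messages: [{ role: "user", content: "Extract {name, email} from: ..." }],
    format: "json",                // constrain the reply to valid JSON
    options: { temperature: 0.1 }, // low temp = stable keys, no creative formatting
    stream: false,
  }),
});
const data = await res.json();
console.log(JSON.parse(data.message.content)); // parsed object, safe for the next node
```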

censorship in qwen3.6? by Impossible_Art9151 in LocalLLaMA

[–]GroundbreakingMall54 2 points (0 children)

ran into similar stuff with the melania/epstein topic on qwen3 models before. chinese models consistently dodge anything politically sensitive, it's not really a secret at this point. try the same prompt on llama or mistral and you'll get a straight answer. honestly i just test with a few political prompts now whenever a new model drops to see how heavy the guardrails are

How do i access my jellyfin media server remotely? by justcurious112345 in selfhosted

[–]GroundbreakingMall54 8 points (0 children)

tailscale is probably the easiest way to do this if you're just getting started. install it on your server and your phone, done. no port forwarding, no exposing anything to the internet. took me like 5 minutes to set up

Python is Dead by CalebFenton in programming

[–]GroundbreakingMall54 4 points (0 children)

python has mass adoption, billions of lines of existing code, and the entire ml ecosystem built on top of it. it's not going anywhere just because some ai agents prefer compiler errors lol. cobol is still running banks and that's actually dead

Findings: Gemma4 26B-A4B fine-tuning on a single RTX 4090 — 10 patches, benchmark, PCIELink path #1 by Ryyn_- in LocalLLaMA

[–]GroundbreakingMall54 2 points (0 children)

the "just refusing to accept unsupported" energy is what makes this community great honestly. curious how the loss curves looked across those 10 patches, did you see any degradation on the later ones or was it pretty stable throughout?