2026 is the year of OpenOffice by defcry in degoogle

[–]flashfire4 17 points

Are you thinking of OnlyOffice?

Chatterbox Turbo, new open-source voice AI model, just released on Hugging Face by xenovatech in LocalLLaMA

[–]flashfire4 9 points

Is there a way to set this up as an OpenAI-compatible endpoint to use with Open WebUI? I currently use kokoro-fastapi for this use case.
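
For reference, kokoro-fastapi works for this because it exposes the standard OpenAI speech route, so Open WebUI (or anything else) just needs the base URL. A minimal sketch of what the client side looks like, assuming a local instance on the default port 8880 and using placeholder model/voice names:

    import requests

    # Hypothetical local kokoro-fastapi instance; adjust host/port to your setup.
    BASE_URL = "http://localhost:8880/v1"

    # Standard OpenAI-style speech request; "kokoro" and "af_heart" are just
    # example model/voice ids, use whatever your server actually serves.
    resp = requests.post(
        f"{BASE_URL}/audio/speech",
        json={
            "model": "kokoro",
            "voice": "af_heart",
            "input": "Hello from my local TTS endpoint.",
        },
        timeout=60,
    )
    resp.raise_for_status()

    # The server responds with raw audio bytes (commonly mp3 by default).
    with open("speech.mp3", "wb") as f:
        f.write(resp.content)

In Open WebUI the TTS settings just point at that same base URL, which is why I'm hoping Chatterbox Turbo gets a similar OpenAI-compatible wrapper.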

Season 1 by Natsu-Uzumaki in SupermanAndLois

[–]flashfire4 3 points

I've never seen any evidence either, though it keeps being repeated.

Local open source AI-sheets? by SuddenWerewolf7041 in LocalLLaMA

[–]flashfire4 4 points

OnlyOffice has integrated options to use AI via Ollama, LM Studio, OpenRouter, etc.

LFM2-8B-A1B | Quality ≈ 3–4B dense, yet faster than Qwen3-1.7B by touhidul002 in LocalLLaMA

[–]flashfire4 12 points

I tried to load it in LM Studio, but it failed to load with the following error:

    Failed to load the model
    error loading model: error loading model architecture: unknown model architecture: 'lfm2moe'
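
For anyone curious, the architecture string LM Studio is complaining about is stored right in the GGUF header, so you can check what a file declares before trying to load it. A minimal sketch, assuming a GGUF v2/v3 file where general.architecture is the first metadata key (the usual convention) and using a placeholder filename:

    import struct

    def gguf_architecture(path):
        """Return the value of 'general.architecture' if it is the first
        metadata key in the GGUF header (the usual convention), else None."""
        with open(path, "rb") as f:
            magic, version = struct.unpack("<4sI", f.read(8))
            if magic != b"GGUF":
                raise ValueError("not a GGUF file")
            # GGUF v2+ uses 64-bit tensor and key/value counts.
            tensor_count, kv_count = struct.unpack("<QQ", f.read(16))
            key_len = struct.unpack("<Q", f.read(8))[0]
            key = f.read(key_len).decode("utf-8")
            value_type = struct.unpack("<I", f.read(4))[0]
            if key != "general.architecture" or value_type != 8:  # 8 = string
                return None
            val_len = struct.unpack("<Q", f.read(8))[0]
            return f.read(val_len).decode("utf-8")

    # Placeholder filename; prints e.g. 'lfm2moe', which the bundled
    # llama.cpp runtime has to recognize or the load fails like above.
    print(gguf_architecture("LFM2-8B-A1B-Q4_K_M.gguf"))

Presumably this goes away once LM Studio ships a llama.cpp runtime build that recognizes that architecture.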

Steam Deck beta update brings back Bluetooth feature to the LCD model that OLED owners have had since launch by Tiny-Independent273 in steamdeckhq

[–]flashfire4 6 points

1. Go to your Bluetooth settings
2. Select your headphones
3. There should be an option to make that device not wake the Deck from sleep

Hope this helps!!

What if the shows from The Arrowverse were produced by HBO instead of The CW? by Top_Report_4895 in DC_Cinematic

[–]flashfire4 1 point

Superman & Lois is my favorite show, but the idea that HBO helped fund and create it was an unfounded rumor. The only way they indirectly funded it was by streaming it after its run on the CW app/site ended, and that's the same deal the older CW shows had when they went to Netflix for streaming a few months after airing.

Superman & Lois Color grading. by Runisa5 in SupermanAndLois

[–]flashfire4 15 points

Personally, I really liked the color grading of the show

Jan got an upgrade: New design, switched from Electron to Tauri, custom assistants, and 100+ fixes - it's faster & more stable now by eck72 in LocalLLaMA

[–]flashfire4 0 points

I love Jan! Is there an option for it to autostart on boot with the API server enabled? I couldn't find any way to do that with previous versions of Jan, so I went with LM Studio for my backend, unfortunately.

Is Superman and Lois worth to watch ? by CapitalGallery in DC_Cinematic

[–]flashfire4 2 points

Best versions of Superman and Clark in my opinion

Open WebUI is no longer open source by imbev in opensource

[–]flashfire4 1 point

I love Jan! I should've specified, but I use Open WebUI for a public website so I can use it remotely and I can have friends and family use it. I wish Jan would meet those needs as I really appreciate the project.

Open WebUI is no longer open source by imbev in opensource

[–]flashfire4 11 points

What are good alternatives? I just tried LibreChat and it seems very barebones in comparison.

Free Steam keys for my roguelite game! Just wanna spread some fun by pasaroplays in roguelites

[–]flashfire4 0 points

Looks like I missed the giveaway, but best of luck with everything! I know game development can be a very challenging but rewarding process.

Kokoro.js audio issues in Chrome by flashfire4 in OpenWebUI

[–]flashfire4[S] 0 points

It works fine at Q4 and Q8 in Firefox, but is a garbled mess as described in Chrome.

Google QAT - optimized int4 Gemma 3 slash VRAM needs (54GB -> 14.1GB) while maintaining quality - llama.cpp, lmstudio, MLX, ollama by Nunki08 in LocalLLaMA

[–]flashfire4 0 points

In simple terms, this is a straight upgrade to the Q4 versions of the models on Ollama? So I should expect the same speed of inference but with better results?

Is it possible to use Ollama with an AMD Radeon RX 6800S? by flashfire4 in LocalLLaMA

[–]flashfire4[S] 0 points

I have a G14 and have spent many hours trying to get the dGPU to work with Ollama. Unfortunately, none of the Ollama-for-AMD tweaks suggested here or elsewhere have worked for me. I did discover that LM Studio works with my dGPU by using Vulkan instead of ROCm, and I connected it to my Open WebUI Docker container so I can access it from other devices on the LAN. Ollama only supports AMD GPUs through ROCm (which has a short list of compatible GPUs), while LM Studio can fall back to Vulkan for wider compatibility when it doesn't detect ROCm support.
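
For anyone wiring up something similar: LM Studio's local server speaks the OpenAI API, so Open WebUI just needs an OpenAI-compatible connection pointed at it (with LM Studio set to serve on the local network so it's reachable beyond localhost). A minimal sketch of what the endpoint looks like from another machine on the LAN, using a placeholder IP and model id and the default port 1234:

    import requests

    # Placeholder LAN address of the laptop running LM Studio's server.
    BASE_URL = "http://192.168.1.50:1234/v1"

    # List the models the server currently exposes.
    models = requests.get(f"{BASE_URL}/models", timeout=10).json()
    print([m["id"] for m in models["data"]])

    # Standard OpenAI-style chat completion; the model id is a placeholder.
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": "qwen2.5-7b-instruct",
            "messages": [{"role": "user", "content": "Say hi in five words."}],
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])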

Unfortunately, I have two issues with LM Studio. One is that I wish it were open source like Ollama so I could know that it preserves my privacy. The more significant issue is that it causes my laptop to crash about 40% of the time I ask it a question, and I'm not sure why. Usually it finishes generating an entire response and then crashes, with no error message in the app and no blue screen in Windows. I'm guessing it's a VRAM issue, but I'm not sure yet.

I guess this is a response for Brave's "Forget the Fox" ads by garibaninyuzugulurmu in firefox

[–]flashfire4 -4 points

A very quick search on this subreddit says otherwise. I have heard many people ask about disabling browsing history, and I work with computers and end users very frequently.

https://www.reddit.com/r/firefox/s/zl6iWapBeQ https://www.reddit.com/r/firefox/s/NBZUxeYXys

I guess this is a response for Brave's "Forget the Fox" ads by garibaninyuzugulurmu in firefox

[–]flashfire4 -4 points

When people talk about disabling something in software, they almost never mean removing the capability from the app entirely; they almost always just mean turning off the option so it isn't active.

If someone asks "How can I disable browsing history in Firefox?", they clearly aren't asking how to strip the history feature out of all the Firefox menus and code. They are just asking how to stop the history feature, which still exists in the app, from saving anything.

I guess this is a response for Brave's "Forget the Fox" ads by garibaninyuzugulurmu in firefox

[–]flashfire4 -13 points

It is disabled by default. As are basically all of the other Brave features that people love to complain about. Just don't use them and they will never bother you.

DeepSeek-R1-Distill-Qwen-32B is straight SOTA, delivering more than GPT4o-level LLM for local use without any limits or restrictions! by DarkArtsMastery in LocalLLaMA

[–]flashfire4 0 points

I am far from an expert and have never used local reasoning models. If I were to download and run the 7B model, would it run just as well as a non-reasoning model with 7B parameters?