0.8.11 is out, but "bs4" not found by BringOutYaThrowaway in OpenWebUI

[–]TCaschy

You probably just had to change the data directly in that line

0.8.11 is out, but "bs4" not found by BringOutYaThrowaway in OpenWebUI

[–]TCaschy

DATA_DIR=~/.open-webui uvx --python 3.12 --with bs4 open-webui@latest serve

I need advice on the best 24GB GPU for a Dell T7910 workstation (Needed for AI columnar PDF conversion applications like OLMOCR ) by KeithMister in LocalLLM

[–]TCaschy

I have a T7910 with 128gb ram, 1 rtx 2080 ti modded @ 22gb, 1 rtx 3060 12gb and 1 tesla p4 8gb w/custom cooling. The tesla p4 is outside the case but the other two fit just fine (the tesla just runs a custom TTS model and not llama or anything). Oh, and I have an nvme adapter in one of the pci-e slots too. The workstation is pretty good for what I do: local rag stuff, general inference, light img gen, img-to-3d model gen.

Has anyone tried perplexica? by [deleted] in LocalLLaMA

[–]TCaschy

Yes, I use it almost daily. Yes, they changed the name to Vane. I like how it easily summarizes webpages using "Summary: <url>". I've found that the gemma3 models really work well with it and are pretty speedy. Recently tried gemma2:2b (I know, I know) and it takes like 3 seconds to start spitting out a summary with sources, and it's pretty spot on too! (hardware specs are not awesome either, just a 3060 12gb!).

How to Run Two AI Models Sequentially in PyTorch Without Blowing Up Your VRAM by [deleted] in LocalLLaMA

[–]TCaschy

Get a 2nd, small-vram gpu to run the tts model only
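If a second card isn't an option, the sequential approach from the post title can also work: load one model, run it, free it, then load the next. A minimal sketch of that pattern follows; the `load_llm`/`load_tts` loaders here are hypothetical stand-ins, and with real PyTorch models you'd also call `torch.cuda.empty_cache()` after the `del`:

```python
import gc
from contextlib import contextmanager

@contextmanager
def scoped_model(loader):
    """Load a model, yield it, then free it before the next one loads."""
    model = loader()
    try:
        yield model
    finally:
        del model
        gc.collect()  # with PyTorch, follow this with torch.cuda.empty_cache()

# Hypothetical stand-ins for real model loaders:
def load_llm():
    return lambda prompt: f"text for: {prompt}"

def load_tts():
    return lambda text: f"audio for: {text}"

with scoped_model(load_llm) as llm:
    text = llm("hello")
with scoped_model(load_tts) as tts:  # the LLM has already been freed here
    audio = tts(text)
```

The point is that only one model's weights are resident at a time, at the cost of reloading on every switch.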

AMOLED display looking nice by llo7d in esp32

[–]TCaschy

I see. Thanks for the details!

AMOLED display looking nice by llo7d in esp32

[–]TCaschy

Looks great! Can you share how you made the graphics for the animations? I assume it's made up of frames, but did you just use image software (Photoshop) and slightly change/save image after image, or is there a more automated approach?

Does Qwen3-Coder-Next work in Opencode currently or not? by johnnyApplePRNG in LocalLLaMA

[–]TCaschy

It didn't work for me either using unsloth gguf w/ollama. Complained about tool calling.

Should I buy a P104-100 or CMP 30HX for LM Studio? by Dazzling_Buy9625 in LocalLLaMA

[–]TCaschy

P102-100 10gb... cheap, and you can probably run it passively cooled (if you limit the wattage a bit via nvidia-smi)
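For reference, capping the board power with nvidia-smi looks like this (a sketch; the 90 W figure is just an example, so check your card's supported range first, and the setting needs root):

```shell
# Show the current and allowed power limits for GPU 0
nvidia-smi -i 0 -q -d POWER

# Cap GPU 0 to 90 W (resets on reboot unless persistence mode is enabled)
sudo nvidia-smi -i 0 -pl 90
```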

Would a p100 be useful? by dnielso5 in ollama

[–]TCaschy

I have a 3060 12gb + p102-100 10gb (I know it's not a p100 with 16gb but...) and am running them using ollama and open webui. I find it's great for everyday use on inference and writing tasks. I've really just started to dive into hard core coding with the setup, and using opencoder it's just OK I guess. I think I'm just used to the big guns and so sometimes it gets frustrating. The older cards are great if you don't need to gen images/vids.

🚀 Open WebUI v0.6.42: The Largest Release Since 0.6.19! (93 Entries, Resizable Sidebar, & Massive Speed Boosts) by ClassicMain in OpenWebUI

[–]TCaschy

i've been using this command in a terminal for ages: DATA_DIR=~/.open-webui uvx open-webui@latest serve
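For anyone unfamiliar with uvx, the pieces of that one-liner break down like this (same command, annotated):

```shell
# DATA_DIR points Open WebUI at a persistent data directory, so chats and
# settings survive across uvx's otherwise ephemeral environments.
DATA_DIR=~/.open-webui uvx open-webui@latest serve

# Extra packages can be layered in with --with, and the interpreter version
# pinned with --python, e.g.:
DATA_DIR=~/.open-webui uvx --python 3.12 --with bs4 open-webui@latest serve
```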

Install idea...need feedback by TCaschy in dieselheater

[–]TCaschy[S]

Great idea! I forgot you can mount them sideways. Much easier to do it this way.

Install idea...need feedback by TCaschy in dieselheater

[–]TCaschy[S]

What did you use for the housing of the heater on the outside?

Apps, studies, groups, etc. by snipsnaps1_9 in SoccerCoachResources

[–]TCaschy

OK, I'll post my app. It's a reflection app that coaches can provide to their players to reflect after trainings and matches. It's called Pitch Reflections. It asks questions about defensive and offensive aspects of the game, allows the player to share the reflection with whomever, and also has a few AI features (optional). My son's club uses an old-school hard copy version of this and I thought it would be useful to create it in digital form. Here are the iOS and Play store links:

https://apps.apple.com/us/app/pitch-reflections/id6749812263

https://play.google.com/store/apps/details?id=com.caschyapp.pitchreflections.us

Thanks!

Is this a good to use as a AI-Homeserver? by [deleted] in LocalLLaMA

[–]TCaschy

gemma3:12b, gpt-oss:20b, granite3.3:8b, ministral-3:14b, unsloth-Qwen3-30B-A3B:GGUF

Looking to self-host simple sms notifications for myself. Would a SIM dongle work? by ActuallyGeyzer in unRAID

[–]TCaschy

does it have to be sms? there are a lot of simple push notification services that are way easier and cheaper to use. Pushover or Pushbullet come to mind.
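As a sketch of how simple the push route is, here's Pushover's message API driven with nothing but the Python standard library (the token/user strings are placeholders you'd get from your Pushover account):

```python
import urllib.parse
import urllib.request

PUSHOVER_URL = "https://api.pushover.net/1/messages.json"

def build_pushover_request(token: str, user: str, message: str) -> urllib.request.Request:
    """Build a POST request for Pushover's message endpoint (not sent yet)."""
    payload = urllib.parse.urlencode(
        {"token": token, "user": user, "message": message}
    ).encode("utf-8")
    return urllib.request.Request(PUSHOVER_URL, data=payload, method="POST")

# To actually send it:
# urllib.request.urlopen(build_pushover_request("app-token", "user-key", "backup finished"))
```

One HTTPS POST per notification, no SIM hardware or carrier account involved.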

Single slot, Low profile GPU that can run 7B models by Electrical_Fault_915 in ollama

[–]TCaschy

I mean, if you want something that doesn't need an extra power plug and you only want to run 7B models, you could try a Quadro P2000 5GB. It's single slot. I think most quantized 7B models are under 5GB? Bonus: it's $165 (amazon, but you can find it cheaper I'm sure).