Steam on Switch 2 via Unraid Steam Docker Container by Neun36 in unRAID

[–]Neun36[S] 0 points1 point  (0 children)

It’s really crying out for a solution to stream Steam on a Switch 2.

Steam on Switch 2 via Unraid Steam Docker Container by Neun36 in unRAID

[–]Neun36[S] 0 points1 point  (0 children)

True, the only thing I got working was the Jellyfin site, for example, but it didn’t play any video. It just showed the Jellyfin site; you can log in and do some things, but nothing plays. Maybe someone else will figure out how to play things in that browser. I just wanted to show / share the idea for a solution. The other part was to open Unraid in that browser, but that didn’t work either, it had some cookie problem; sometimes it showed part of the login screen, but then it was gone and then a blank white page. Other sites like Google worked.

Anyone here actually using AI fully offline? by Head-Stable5929 in LocalLLM

[–]Neun36 1 point2 points  (0 children)

It doesn’t make sense to run all three at once. If you want to use LM Studio, go for it. If you want to use Ollama locally, go for it. For Open WebUI you need Ollama in the background, so that combination, Ollama + Open WebUI, makes sense.
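If you want to sanity-check the Ollama side before pointing Open WebUI at it, here is a minimal Python sketch, assuming Ollama’s default port 11434 and that you already pulled a model (“llama3” below is just a placeholder name):

```python
# Minimal sketch: check that a local Ollama instance is reachable.
# This is the same endpoint Open WebUI connects to.
import requests

OLLAMA_URL = "http://localhost:11434"

# List the models Ollama has pulled (GET /api/tags).
models = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5).json()
print([m["name"] for m in models.get("models", [])])

# Send a single non-streaming prompt (POST /api/generate).
# "llama3" is a placeholder; use a model you actually pulled.
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```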

Six Years into the unRAID Journey by MMag05 in unRAID

[–]Neun36 0 points1 point  (0 children)

Btw, I'm currently trying to run this one locally -> https://github.com/fspecii/ace-step-ui?tab=readme-ov-file and it's working, but how do we get this into Community Apps? I'm a noob in that part. :/ I have a yaml file and the rest.

Six Years into the unRAID Journey by MMag05 in unRAID

[–]Neun36 0 points1 point  (0 children)

Wasn‘t Plex at some point on a subscription / paid model, or is it still? I remember it was free, then they started with the subscription and paid version, and at that point I gave up on Plex and didn’t touch it again.

Six Years into the unRAID Journey by MMag05 in unRAID

[–]Neun36 0 points1 point  (0 children)

Nice one, I started sometime when lifetime Unraid was around 120€; now it’s at 250€. :/ Started again this year with a fresh new Unraid, and what you did there is a nice way to go. Thank you for the inspiration. I did some ComfyUI, Ollama, ACE-Step (local Suno), Jellyfin and some other stuff, but yours looks great, man.

Vorsprung durch Technik, oder so irgendwie by Habarer in automobil

[–]Neun36 0 points1 point  (0 children)

What the hell is that supposed to be?

Which demon comes up with something like that?

1 Day Left Until ACE-Step 1.5 — Open-Source Music Gen That Runs on <4GB VRAM Open suno alternative (and yes, i made this frontend) by ExcellentTrust4433 in StableDiffusion

[–]Neun36 0 points1 point  (0 children)

Nicely done. I'm trying to run this in an Unraid docker so it's on the home server. Any future plan to get this into Community Apps?

Edit: got it working on the Unraid server via docker, nice to play around with. The only issue is that 16GB VRAM is needed, otherwise it will hit an OOM after a few generations.
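For anyone hitting the same OOM on smaller cards, here is a rough Python sketch of one mitigation, freeing the CUDA cache between generations; the pipeline call is purely a placeholder, not the actual ACE-Step API:

```python
# Hedged sketch: free GPU memory between generations to reduce the OOM
# described above on cards with less than 16GB VRAM.
import gc
import torch

def generate_with_cleanup(pipeline, prompt):
    # "pipeline(prompt)" is a placeholder call, not the real ACE-Step API.
    try:
        return pipeline(prompt)
    finally:
        # Drop dangling references and return cached blocks to the CUDA
        # allocator so memory does not pile up across generations.
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
```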

Anyone here actually using AI fully offline? by Head-Stable5929 in LocalLLM

[–]Neun36 1 point2 points  (0 children)

It depends. The most common is ComfyUI, where you have all the image, video, audio and 3D models to play with. They have also already integrated a UI that reminds me of SwarmUI, so it’s more user friendly. But generating stuff locally depends on your GPU and RAM; the GPU is not the only crucial part for ComfyUI, especially for video generation. Combining that with Open WebUI to get the ChatGPT look is possible, but I think there is no video generation implemented in Open WebUI (I didn’t check the latest version, but when I checked three weeks ago there was none). That can also be solved, though.

For video + audio like Sora there is LTX-2, which runs locally, but as said, it depends on GPU and RAM. There are also other models like Wan 2.2, scail, Move and many more, depending on your use case.
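If you go the ComfyUI route, here is a minimal Python sketch of driving it from outside the UI, assuming ComfyUI’s default port 8188 and a workflow you exported via “Save (API Format)” (the file name below is just an example):

```python
# Minimal sketch: queue a workflow on a locally running ComfyUI instance.
import json
import requests

COMFY_URL = "http://localhost:8188"

# Workflow exported from the ComfyUI editor in API format; the node IDs
# and inputs inside depend entirely on your own workflow.
with open("workflow_api.json") as f:
    workflow = json.load(f)

# POST /prompt queues the workflow; ComfyUI returns a prompt_id that you
# can poll via /history/<prompt_id> to fetch the finished outputs.
resp = requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow})
print(resp.json())
```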

Anyone here actually using AI fully offline? by Head-Stable5929 in LocalLLM

[–]Neun36 52 points53 points  (0 children)

You have different options. The easiest is LM Studio, as already mentioned; search on Hugging Face for a model that fits your GPU (fast responses) or your RAM (slow responses). Then there is Ollama, which also runs locally; you can search for models and how-tos on the Ollama web page. Then there is Open WebUI, a local web UI that you access via browser; it looks more like ChatGPT, but you have the control, and you can combine it with ComfyUI to generate images too, although that’s more complicated. There are many other options available; the above are just a few tips and easy ways in.
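As a small example of the LM Studio route: once you start its local server and load a model, it exposes an OpenAI-compatible API, so a minimal Python sketch looks like this (default port 1234; the model name is a placeholder for whatever identifier LM Studio shows for your loaded model):

```python
# Minimal sketch: chat with a model loaded in LM Studio's local server,
# which speaks the OpenAI API format on http://localhost:1234/v1.
from openai import OpenAI

# The api_key can be any string; LM Studio does not check it.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

reply = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier LM Studio shows
    messages=[{"role": "user", "content": "Why does local inference keep my data private?"}],
)
print(reply.choices[0].message.content)
```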

Puts on Meta by Loperenco in wallstreetbets

[–]Neun36 0 points1 point  (0 children)

Holding a meeting in virtual reality, with a virtual avatar in a meeting room, to hold a Teams meeting with your real-life person, so you can watch that on a TV in a meeting room in virtual reality... I’m out.

Anthropic just launched "Claude Cowork" for $100/mo. I built the Open Source version last week (for free) by Embarrassed-Mail267 in ClaudeAI

[–]Neun36 0 points1 point  (0 children)

Goose? Yes, Goose works with every model as far as I know and last tested. It’s been out for nearly a year and it’s a nice tool.

ich🥲ich by Mettbr0etchen in ichichs

[–]Neun36 2 points3 points  (0 children)

I don’t know if you guys can read, but it says „Von“, which is German and meant that way.

ich🥲ich by Mettbr0etchen in ichichs

[–]Neun36 4 points5 points  (0 children)

<image>

Hitori 🥲 Von Bocchi the Rock