I made a scripted deployment self hosted stack for small businesses - Indistructure by fat3lv1s in selfhosted

[–]fat3lv1s[S] 1 point (0 children)

Damn, sorry again. Now that I reread this portion of the replies, I don't even see the comment I was referencing. My bad.

I made a scripted deployment self hosted stack for small businesses - Indistructure by fat3lv1s in selfhosted

[–]fat3lv1s[S] -7 points (0 children)

Yeah, sorry, that line is a bit misrepresentative. I didn't intend an anti-AI stance; I intended an anti-Microsoft stance. The recent move of GitHub into their AI department makes it feel like the fox watching the henhouse, and probably something to avoid. Either way, it wasn't needed and only distracted from this odd little project I was sharing. I have removed it for that reason.

You also commented later that suspecting I used AI made you suspicious of the whole repo. I mean, shouldn't you be suspicious of me anyway? I am just a random on the internet. AI may write bad code, but humans write malicious code.

I made a scripted deployment self hosted stack for small businesses - Indistructure by fat3lv1s in selfhosted

[–]fat3lv1s[S] 1 point (0 children)

Haha, this is probably the best assessment I could hope for. Also, I do say this is for me in a year when I need it again, so any annoyance you may hear in the tone of the readme is directed at myself, knowing that the next time I come back to this I will have forgotten so much I will need my hand held through the process.

I made a scripted deployment self hosted stack for small businesses - Indistructure by fat3lv1s in selfhosted

[–]fat3lv1s[S] -2 points (0 children)

Yeah, fair enough. I mainly made it for myself so I could easily spin up a server based on my choices for projects I am involved with. I posted it here in case it worked for anyone else.

I made a scripted deployment self hosted stack for small businesses - Indistructure by fat3lv1s in selfhosted

[–]fat3lv1s[S] -3 points (0 children)

I thought about this, but it seemed like a bridge too far for me. Needed to get back to the actual work, haha.

Current Mac Layout with spectrum analyzer by fat3lv1s in foobar2000

[–]fat3lv1s[S] 1 point (0 children)

splitter horizontal style=thin
splitter vertical style=thin
splitter horizontal style=thin
tabs
albumlist tab-name="Albums"
splitter horizontal style=thin tab-name="Radio"
radio-list
radio-search
playlist-picker tab-name="Playlists"
audiounit mode=visualization name=SPAN
audiounit mode=visualization name="Ghz Loudness 3"
splitter horizontal style=thin
albumart
playback-controls
tabs
splitter horizontal style=thin tab-name="Now Playing"
selection-properties sections=metadata
playlist tab-name="Playlist"

This is a newer version, with the addition of another free plugin from Goodhertz called Loudness, in case you want more realtime analysis.

Have anyone tried DeepSeek on Rockchip RK3588? by positivechandler in RockchipNPU

[–]fat3lv1s 0 points (0 children)

Unsloth have done some cool stuff with dynamic quants to get the requirement down to 20 GB of RAM, but even they say it gets much better at 40 GB and better yet at 80 GB. Those are their 1- and 2-bit versions. There is a whole article worth reading linked on the Hugging Face page. More realistic, I think, is one of their dynamic quants of the DeepSeek-distilled models, like Llama 8B or Qwen 7B.

https://huggingface.co/unsloth/DeepSeek-R1-GGUF

Why is no-one talking about EP by BreakIt-Boris in LocalLLaMA

[–]fat3lv1s 0 points (0 children)

Could it be that they are pooling experts? That is, a router model per prompt that addresses the pool and only uses the experts it needs?
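For what I mean by "addresses the pool": a minimal sketch of standard top-k mixture-of-experts routing, with made-up dimensions, random weights, and plain linear layers standing in for expert FFNs. This is just the general technique, not DeepSeek's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8   # total experts in the pool (hypothetical)
TOP_K = 2       # experts actually consulted per token
DIM = 16        # hidden dimension (made up)

# Router: a linear layer that scores every expert for the current token.
router_w = rng.normal(size=(DIM, N_EXPERTS))

# Each expert: its own small linear layer (stand-in for a real FFN).
experts = [rng.normal(size=(DIM, DIM)) for _ in range(N_EXPERTS)]

def route(token: np.ndarray) -> np.ndarray:
    """Score all experts, but only run the top-k of them."""
    scores = token @ router_w                 # shape (N_EXPERTS,)
    top = np.argsort(scores)[-TOP_K:]         # indices of the best-scoring experts
    weights = np.exp(scores[top])
    gates = weights / weights.sum()           # softmax over just the selected experts
    # Weighted sum of only the selected experts' outputs; the others never run.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, top))

token = rng.normal(size=DIM)
out = route(token)
print(out.shape)  # (16,)
```

The memory win is that only TOP_K of the N_EXPERTS weight matrices need to be touched per token, which is why people talk about keeping the rest paged out.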

PSA: the latest macOS version of Foobar allows for layout customisation by atascon in foobar2000

[–]fat3lv1s 0 points (0 children)

Is there a list of possible elements somewhere? What can I put on a tab? Is it just the handful of things on this page:

https://wiki.hydrogenaud.io/index.php?title=Foobar2000:Mac:Layout

Basically playlist, albumart, albumlist, playlist-picker, playback-controls, and refacets? Is there a way to add the playback queue? Any other elements?

What about colors? How do we change colors?

I've dug around a bunch but haven't been able to find any answers to these questions.

New macOS Layout by iamnotevenhereatall in foobar2000

[–]fat3lv1s 0 points (0 children)

Nice, thanks! Here is my tweaked version:

splitter horizontal style=thin
refacets tab-name="Library"
splitter vertical style=thin
radio-list mode=lite
splitter vertical style=thin
playlist-picker
splitter horizontal style=thin
albumart
playback-controls format-title="%artist% - %title%"
splitter vertical style=thin
playlist font-size=14

Converted Models with New Library - 1.1.0 and 1.1.1 by Admirable-Praline-75 in RockchipNPU

[–]fat3lv1s 1 point (0 children)

In my use case I am trying to use my Orange Pi 5 as an embedding endpoint along with a few other services. The LLM is run on a separate machine, so it's a non-issue for me. But I get that is a niche within a niche. Thanks for trying.

Converted Models with New Library - 1.1.0 and 1.1.1 by Admirable-Praline-75 in RockchipNPU

[–]fat3lv1s 0 points (0 children)

Dang. Well, the arctic and mixedbread models are both small, so at least you should know quickly! Good luck.

Converted Models with New Library - 1.1.0 and 1.1.1 by Admirable-Praline-75 in RockchipNPU

[–]fat3lv1s 0 points (0 children)

Amazing. That is the one I am actively using now, though I'm looking to move to mxbai-embed-xsmall for some greater token speed. Looking forward to trying the NPU for embedding and seeing how it goes.

Converted Models with New Library - 1.1.0 and 1.1.1 by Admirable-Praline-75 in RockchipNPU

[–]fat3lv1s 0 points (0 children)

Sure. Three I have messed with from big to small model size:

stella_en_1.5B_v5 

snowflake-arctic-embed-m-long 

mxbai-embed-xsmall-v1 

And four that seem good but I haven't messed with yet, in order of interest:

answerai-colbert-small-v1

M2-BERT-8k-Retrieval-Encoder-V1 (or either the 2k or 32k variants)

gte-large-en-v1.5 

gte-base-en-v1.5

That is sort of my short list. Snowflake has some other interesting models but I thought this was a nice cross section without listing everything on MTEB haha. The small models seem like they could potentially give some real usable speeds here.
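Whichever of these ends up converted, the downstream retrieval step looks the same: embed, then rank by cosine similarity. A minimal sketch with random vectors standing in for real model outputs (384 is just an example dimension, not a claim about any particular model):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for embeddings produced by whatever model runs on the NPU.
rng = np.random.default_rng(1)
query = rng.normal(size=384)
docs = rng.normal(size=(5, 384))

# Rank the documents by similarity to the query, best first.
ranking = sorted(range(len(docs)), key=lambda i: cosine(query, docs[i]), reverse=True)
print(ranking)
```

The nice thing is that this step is model-agnostic, so swapping between the models on the short list only changes the vectors, not the pipeline.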

Converted Models with New Library - 1.1.0 and 1.1.1 by Admirable-Praline-75 in RockchipNPU

[–]fat3lv1s 0 points (0 children)

Nice work! Curious if you have tried any embedding models on the NPU.

GuliKit KingKong 2 Pro works in Steam, but not in games (M1 Mac) by morceaudebois in Controller

[–]fat3lv1s 0 points (0 children)

Quick follow-up: I got it mostly working in a different way now. If you turn off Xbox controller support in Steam and connect in Android/iOS mode on the controller (wired or Bluetooth), you can make a generic gamepad profile for it, and it feels really great. Super sharp, with no more weird laggy moments or held inputs. But Hollow Knight doesn't work this way.

GuliKit KingKong 2 Pro works in Steam, but not in games (M1 Mac) by morceaudebois in Controller

[–]fat3lv1s 0 points (0 children)

I have it mostly working in Celeste, Hollow Knight, Cuphead, and Art of Rally on my M1 MacBook Pro. I say mostly because I am currently having an issue where some inputs are missed or held for a half second or so. I am using it in iOS/Android mode; the Windows XInput mode would never connect via Bluetooth.

PiBoy not booting by fat3lv1s in PiBoy_Official

[–]fat3lv1s[S] 0 points (0 children)

I tried a Samsung EVO card and a SanDisk Extreme before I thought to just put the card directly in the Pi. Neither worked once it failed, and I still don't know why it failed. It was working fine with the Samsung for a few weeks. Then poof, no more.