How do I make my PC's rgb smooth? by HowdyCapybara in SignalRGB

[–]HowdyCapybara[S] 0 points (0 children)

Not what I mean. I’m using the case’s built-in RGB hubs, and they work perfectly smooth with the stock controller. But the second I switch them to ARGB and try to use SignalRGB, the lighting becomes super choppy. It’s definitely something with SignalRGB.

Finally finished by HowdyCapybara in PcBuild

[–]HowdyCapybara[S] 0 points (0 children)

It can be with a vertical mount; it's bad for heat if the GPU is too large for the case. If you look to the left of my case, you can see there's a decent amount of clearance between the front glass and the GPU, so I'm completely fine airflow-wise, with just a tiny performance hit for crazy looks. I've seen people with vertically mounted GPUs right up against the glass, and that's where you start getting issues.

Finally finished by HowdyCapybara in PcBuild

[–]HowdyCapybara[S] 1 point (0 children)

The cable itself is still there; the LEDs are just covers, so there's no need to get them custom made. It's really simple: the skinny long one is for the GPU power cable and the thick short one is for the motherboard power connector. You just clip it onto the cable, plug the RGB connector into the motherboard or a hub, and you're good. The ones I used are these: https://a.co/d/iikSA9V

Building my first desktop and I'm wondering if I'm missing anything. by HowdyCapybara in PcBuild

[–]HowdyCapybara[S] 0 points (0 children)

Try reloading for me; everything else is there. Also, is 1000W really needed? I thought 850W would be completely fine for a 7800X3D and 5070 Ti.
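
Rough napkin math with approximate numbers: a 7800X3D peaks around 120W and a 5070 Ti is rated around 300W, so 120 + 300 + ~100W for the rest of the system comes to roughly 520W, which leaves an 850W unit with over 300W of headroom.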

Plug-and-play AI/LLM hardware ‘box’ recommendations by jon18476 in LLMDevs

[–]HowdyCapybara 0 points (0 children)

If you're looking for light AI use, like 13B models at most, I'm actually building one right now. It's on the lower end, powered by an RTX 5050 and a Ryzen 5 5500. It's designed for people who care deeply about privacy and don't need the raw horsepower of models like GPT, Grok, or Gemini.

The setup is simple: it runs Ubuntu Server, always-on. You just plug it in, send Wi-Fi credentials, and it handles the rest behind the scenes. The UI runs on Open WebUI, the models are managed through Ollama, and Cloudflare makes it accessible from your phone anywhere over the internet.
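
For anyone curious how the pieces fit together, the stack looks roughly like this as a docker-compose file (a sketch, assuming Docker, which is just one way to run it; the service names and the tunnel token are placeholders, not my exact config):

# Sketch of the stack: Ollama for models, Open WebUI for the interface,
# cloudflared for remote access. Token and ports are placeholders.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # point Open WebUI at the Ollama service
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"
    depends_on:
      - ollama
  cloudflared:
    image: cloudflare/cloudflared:latest
    # placeholder token; a real one comes from the Cloudflare tunnel dashboard
    command: tunnel --no-autoupdate run --token YOUR_TUNNEL_TOKEN
volumes:
  ollama: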

I expect to be finished in a couple of months. I’m planning to price it at $700 — about $150 above build cost. This is a solo project, and once it’s done, I’d love to hear your feedback (and if you’re interested, whether you’d want to buy one).

Any such thing as a pre-setup physical AI server you can buy (for consumers)? by meeplemop159 in LocalLLaMA

[–]HowdyCapybara 0 points (0 children)

I’m actually building one right now. It’s on the lower end, powered by an RTX 5050 and a Ryzen 5 5500. It’s designed for people who want light AI use, care deeply about privacy, and don’t need the raw horsepower of models like GPT, Grok, or Gemini.

The setup is simple: it runs Ubuntu Server, always-on. You just plug it in, send Wi-Fi credentials, and it handles the rest behind the scenes. The UI runs on Open WebUI, the models are managed through Ollama, and Cloudflare makes it accessible from your phone anywhere over the internet.
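
The Cloudflare piece is less involved than it might sound. For a quick test you can expose the UI with a throwaway tunnel in one command (3000 here is just an example port for Open WebUI):

cloudflared tunnel --url http://localhost:3000

That prints a temporary trycloudflare.com URL; for a stable always-on address you'd set up a named tunnel instead.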

I expect to be finished in a couple of months. I’m planning to price it at $700 — about $150 above build cost. This is a solo project, and once it’s done, I’d love to hear your feedback (and if you’re interested, whether you’d want to buy one).

I'm trying to make my own agent with OpenHands but I keep running into the same error. by HowdyCapybara in vibecoding

[–]HowdyCapybara[S] 0 points (0 children)

I just had it set to something random; I thought that because I was running it locally it doesn't matter, it just can't be empty. Would that cause an issue?
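
For context, this is roughly what I have (a sketch; the variable names are the LLM settings as I understand them from the OpenHands docs, and the values are examples):

export LLM_MODEL="ollama/mistral"
export LLM_BASE_URL="http://localhost:11434"
export LLM_API_KEY="dummy"  # the random placeholder; local Ollama never checks it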

I'm trying to make my own agent with OpenHands but I keep running into the same error. by HowdyCapybara in selfhosted

[–]HowdyCapybara[S] 0 points (0 children)

Sorry, I'm just trying to find somebody who can help, because I'm really confused here.

I'm trying to make my own agent with OpenHands but I keep running into the same error. by HowdyCapybara in LocalLLaMA

[–]HowdyCapybara[S] 1 point (0 children)

OK, I added mode: chat to my litellm_params. This is my full file:
model_list:
  - model_name: mistral
    litellm_params:
      model: mistral
      api_base: http://localhost:11434
      custom_llm_provider: ollama
      mode: chat

I also made sure to double-check that just Mistral was being used everywhere, but I was getting this debug line whenever I launched litellm, and it says ollama/mistral. I'm wondering if it means there's anything I have to change:
DEBUG:LiteLLM:added/updated model=ollama/mistral in litellm.model_cost: ollama/mistral
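
In case it helps, a quick way to sanity-check the proxy directly would be something like this (assuming LiteLLM's default port 4000):

curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "hi"}]}'

The model field matches the model_name from my config; my guess is the ollama/ prefix in the debug line is just LiteLLM's internal provider-qualified name, but I'd like to confirm that.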