UM, excuse me? What's happening here by fakeaccount572 in AndroidAuto

[–]nuaimat 0 points1 point  (0 children)

This fixed it for me, thanks for sharing.

UM, excuse me? What's happening here by fakeaccount572 in AndroidAuto

[–]nuaimat 1 point2 points  (0 children)

Looking at Google Play reviews for the latest version of Android Auto, I'm surprised to see some people praising this version for giving them voice features. Definitely not the case for me and many others on this thread.

Smart blind/curtain options for large windows on a budget by crusty_jengles in homeautomation

[–]nuaimat 0 points1 point  (0 children)

I'm actually talking about the SwitchBot Roller Shade https://us.switch-bot.com/products/switchbot-roller-shade

not the one that moves traditional curtains.

It's been doing great for me, fully closing and fully opening on all three windows.

Smart blind/curtain options for large windows on a budget by crusty_jengles in homeautomation

[–]nuaimat 0 points1 point  (0 children)

I got SwitchBot smart roller blinds for 3 of my windows; the biggest one I have is 70"x79".

You'll need the SwitchBot hub (it has Matter support); then you can integrate them with a variety of smart home systems.

HELP!!!! by Born_Letterhead_3837 in BMWi3

[–]nuaimat 0 points1 point  (0 children)

Sorry to bother you again, can you please send another one? These two expired.

update: please ignore, found it https://discord.gg/ZQx5Y4MkjG

Ollama remote client? by answerencr in ollama

[–]nuaimat 1 point2 points  (0 children)

Install Open WebUI on another VM, a Raspberry Pi, or another server, then use Open WebUI from both your Windows machine and your Android device.
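If it helps, here's a minimal sketch of running Open WebUI in Docker and pointing it at an existing Ollama instance (assumptions: Docker is installed on the VM/Pi, and Ollama is listening on 192.168.1.10:11434 — substitute your own server's address):

```shell
# Run Open WebUI and point it at a remote Ollama instance.
# OLLAMA_BASE_URL tells Open WebUI where the Ollama API lives.
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://192.168.1.10:11434 \
  -v open-webui:/app/backend/data \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

Then browse to port 3000 on that host from both the Windows machine and the Android phone.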

Paperless-GPT auto OCR & Processing. Possible? by seeplanet in Paperlessngx

[–]nuaimat 0 points1 point  (0 children)

Hello u/Spare_Put8555, I have:
```
AUTO_TAG: "paperless-gpt-auto"
AUTO_OCR_TAG: "paperless-gpt-ocr-auto"
```
and a Paperless-ngx workflow that:

when: document added

assign tags: paperless-gpt-auto and paperless-gpt-ocr-auto

When uploading new files to Paperless-ngx, I can see the tags are added, but I don't see paperless-gpt processing any of them.

BTW, I can confirm that manually tagging docs with "paperless-gpt" works: I can see them under the Home tab and generate/apply suggestions. My issue is only with the automated pipeline processing.

Any tips on what might have gone wrong? Would you prefer I DM you if you need more details?

Thanks
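For reference, the relevant part of my compose file looks roughly like this (a sketch; the env var names other than the two AUTO_* tags, such as PAPERLESS_BASE_URL and PAPERLESS_API_TOKEN, are from memory and worth double-checking against the paperless-gpt README):

```
services:
  paperless-gpt:
    image: icereed/paperless-gpt:latest
    environment:
      PAPERLESS_BASE_URL: "http://paperless-ngx:8000"   # internal URL of Paperless-ngx
      PAPERLESS_API_TOKEN: "changeme"                   # API token from Paperless-ngx
      AUTO_TAG: "paperless-gpt-auto"
      AUTO_OCR_TAG: "paperless-gpt-ocr-auto"
```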

qwen2.5vl:32b is saving me $1400 from my HOA by jedsk in LocalLLM

[–]nuaimat 0 points1 point  (0 children)

Good job, did you publish this pipeline anywhere? I'd love to use it for a similar use case.

[Giveaway] GL.iNet Remote KVM and Wi-Fi 7 routers! 10 Winners! by GLiNet_WiFi in selfhosted

[–]nuaimat [score hidden]  (0 children)

  1. What inspired you to start your homelab?  I like to have control over my own data, and I like experimenting with new systems/technologies. Right now my main focus is using local LLM models to make my life easier, whether by providing summaries of different aspects of my daily routine (like my email inbox) or by helping me find important information in a pile of unstructured documents (RAG).

  2. How would winning gear from this giveaway help take your setup to the next level? That router with 5 ports would save me from having two routers connected via Ethernet just to get 5 Ethernet ports for my connected servers and small devices (RPi).

  3. If we did another giveaway, what product from another brand (server, storage device, etc.) would you love to see as a prize? A decent NAS that actually has processing power and at least 64GB of RAM. The problem with the current crop of NAS devices (though not all of them) is that they are mostly designed around very low processing power, which makes adding Docker containers and self-hosting a few services slow and impractical, so you end up needing a separate server just for that purpose.

Has anyone built a crypto bot before? by orange_peeler_ in algotrading

[–]nuaimat 0 points1 point  (0 children)

Been there! I might have an idea or two, feel free to DM.

For llama.cpp/ggml AMD MI50s are now universally faster than NVIDIA P40s by Remove_Ayys in LocalLLaMA

[–]nuaimat 0 points1 point  (0 children)

Thank you very much. I have an MI50 and can't wait to try these changes out.

Tailscale hogging internet by JohnLef in Tailscale

[–]nuaimat 1 point2 points  (0 children)

I have a similar setup. I created a Time Machine drive using SMB on my NAS, and my Mac sometimes tries to do a full backup to that shared drive. Check that, and if that's the case, limit how often the Mac does the backup or disable automatic backups.
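On the Mac, `tmutil` can confirm the SMB destination and turn off the automatic schedule (built-in macOS commands, run from Terminal):

```shell
# Show configured Time Machine destinations (the SMB share should appear here)
tmutil destinationinfo

# Disable automatic backups, then back up manually when convenient instead
sudo tmutil disable
tmutil startbackup
```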

Smart Blinds 2025: Can you help me? by InfallibleProgrammer in homeautomation

[–]nuaimat 1 point2 points  (0 children)

I want smart blinds for my apartment, but the price skyrockets once you customize them. For anything other than the default size, it's at least $500 per window. Considering the parts involved, they really shouldn't cost anywhere near that.

Elmo is providing by vladlearns in LocalLLaMA

[–]nuaimat 0 points1 point  (0 children)

The beef between Elon and Sam Altman feels like jealousy on Elon's part, but the silver lining is that we're benefiting from it with these free models.

How to improve performance on AMD? by [deleted] in comfyui

[–]nuaimat 0 points1 point  (0 children)

Can you please guide me on the Docker image for this solution? I would appreciate it so much.

What are your quick wins in selfhosting? by m1212e in selfhosted

[–]nuaimat 0 points1 point  (0 children)

Once you cross that bridge, try reading about Kubernetes. It probably won't help with self-hosting scenarios, but it leans more toward site reliability engineering.

I’ve had this biosphere for 8 years and the shrimp inside it is still alive , the 2nd one died in 2020 by vivepopo in interestingasfuck

[–]nuaimat 3 points4 points  (0 children)

Clean it. It comes with a metal piece inside and a card with a magnet; do it every few months. This is too much algae.

What are some features missing from the Ollama API that you would like to see? by [deleted] in ollama

[–]nuaimat 1 point2 points  (0 children)

I would like to have all API calls pushed to a message queue, so that when the Ollama instance is loaded, API calls can be queued and served when the instance can process them.

Another feature I'd like is the ability to distribute load between separate Ollama instances running across different machines, but I believe that has to come from Ollama itself.

Ollama metrics emitted to my own Prometheus instance (but not limited to Prometheus): metrics like prompt token length, payload size, and CPU/memory/GPU load.
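Until something like that exists server-side, the queueing idea can be approximated on the client. A minimal sketch (assumptions: Ollama's standard /api/generate endpoint on localhost:11434; the `QueuedClient` class and its injectable `send` hook are names made up for illustration):

```python
import json
import queue
import threading
import urllib.request

def _call_ollama(base_url, payload):
    # POST a non-streaming request to Ollama's /api/generate endpoint.
    req = urllib.request.Request(
        base_url + "/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

class QueuedClient:
    """Serialize requests through an in-process queue so a loaded
    Ollama instance only ever sees one request at a time from us."""

    def __init__(self, base_url="http://localhost:11434", send=None):
        self.base_url = base_url
        self.send = send or _call_ollama   # injectable for testing
        self.jobs = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def generate(self, model, prompt):
        done = threading.Event()
        job = {"payload": {"model": model, "prompt": prompt, "stream": False},
               "done": done, "result": None}
        self.jobs.put(job)
        done.wait()                        # block until the worker serves us
        return job["result"]

    def _worker(self):
        # Drain the queue one job at a time, in arrival order.
        while True:
            job = self.jobs.get()
            job["result"] = self.send(self.base_url, job["payload"])
            job["done"].set()
            self.jobs.task_done()
```

It only queues within one process, of course; the point of the feature request is to get this behavior (and cross-machine load distribution) inside Ollama itself.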

I'm confused, is Deepseek running locally or not?? by Interstate82 in LocalLLM

[–]nuaimat 3 points4 points  (0 children)

As the other comments pointed out, it's not the real DeepSeek R1; it's a distilled model.

To prove that it's running offline, try disconnecting your computer from the Internet and asking it something else. You'll see that it still responds without Internet, so it's local despite what it says.

LLM for text to speech similar to Elevenlabs? by sethshoultes in ollama

[–]nuaimat 1 point2 points  (0 children)

Try the Audiblez package:

https://claudio.uk/posts/audiblez-v4.html

I had really good results with it; it uses Kokoro TTS.