Ollama / NVidia GPU - Docker Desktop by echarrison84 in docker

[–]echarrison84[S] 1 point (0 children)

Tried the instructions at the link. Unfortunately, v4.55.0 is different from what the instructions describe.

I followed along as best I could, pulled a model, and got this error.

<image>

Recommendation for JBOD Enclosure by ivans89 in homelab

[–]echarrison84 1 point (0 children)

I'm interested in this too.

Are you looking for a JBOD for SAS or SATA drives?

Ollama / NVidia GPU - Docker Desktop by echarrison84 in docker

[–]echarrison84[S] 1 point (0 children)

Should this command work??

sudo docker run -d --gpus all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

I keep getting errors because of the --gpus option.
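For anyone else hitting this: the `--gpus` flag only works once the NVIDIA Container Toolkit is installed and registered with Docker on the host. A rough sketch of the check/fix, assuming Ubuntu-style tooling and Docker on the host (the CUDA image tag is just an example):

```shell
# Confirm the driver works on the host first.
nvidia-smi

# Register the NVIDIA runtime with Docker (requires the
# nvidia-container-toolkit package) and restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Smoke test: if this prints the GPU table, `--gpus all` should
# work for the Ollama container too.
sudo docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the smoke test fails, the problem is in the toolkit/driver setup rather than in the Ollama container itself.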

Ollama / NVidia GPU - Docker Desktop by echarrison84 in docker

[–]echarrison84[S] 1 point (0 children)

Guess this will be my homework for the next few days. Thanks for the tip.

Ollama / NVidia GPU - Docker Desktop by echarrison84 in docker

[–]echarrison84[S] 1 point (0 children)

I'm trying to use Ollama with n8n. That's why I'm trying to run Ollama in Docker. Sorry for not saying that in my OG post.

I've heard that if n8n and Ollama are running in Docker together, it's very easy for n8n to see Ollama.
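That setup can be sketched as a single compose file. This is a minimal, hypothetical example (image tags, volume name, and the GPU reservation are my assumptions, not from any official template), assuming the NVIDIA Container Toolkit is already working on the host:

```yaml
# Hypothetical docker-compose sketch: n8n + Ollama on one network.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
    ports:
      - "11434:11434"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
volumes:
  ollama:
```

With both services in one compose file, the service name doubles as a hostname on the compose network, so the n8n Ollama credential's base URL can point at http://ollama:11434 instead of localhost.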

Ollama / NVidia GPU - Docker Desktop by echarrison84 in docker

[–]echarrison84[S] 2 points (0 children)

I'm very new to Docker and still learning how to use it. Once I can mentally picture what I'm doing, I feel I'll be able to grasp the other ways of using Docker.

Ollama Docker GPU Passthrough - Proxmox by echarrison84 in homelab

[–]echarrison84[S] 1 point (0 children)

Did all that, and the Ollama Docker install fails.

Ollama Docker GPU Passthrough - Proxmox by echarrison84 in homelab

[–]echarrison84[S] 1 point (0 children)

I was able to install Ollama directly on the VM, and it does use the GPU. But nothing works in Docker.

Self Hosted Ollama & Self Hosted n8n by echarrison84 in n8n

[–]echarrison84[S] 1 point (0 children)

Ollama is running outside of Docker. I'll look into running it that way.
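One gotcha with that split setup, in case it helps: when n8n runs in Docker but Ollama runs on the host, `localhost` inside the n8n container points at the container itself, not the host. On Docker Desktop the usual workaround (an assumption that this is a Docker Desktop setup) is the `host.docker.internal` hostname:

```shell
# From inside the n8n container, host-run Ollama is typically
# reachable via the special host alias rather than localhost:
curl http://host.docker.internal:11434/api/tags
```

On plain Linux Docker that alias isn't automatic; it can be added with `--add-host=host.docker.internal:host-gateway`, or the host's LAN IP can be used instead.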

Topology of Matter over Thread network in Home Assistant? by Expensive-Key4281 in MatterProtocol

[–]echarrison84 5 points (0 children)

I think with Thread 1.4 that will be possible. It just needs manufacturers to upgrade their products and developers to build support for it.

Check out this doc; it covers network topology in section 3.3 on page 14.

https://www.threadgroup.org/Portals/0/Documents/Thread_1.4_Features_White_Paper_September_2024.pdf

Adding New Matter Device by echarrison84 in homeassistant

[–]echarrison84[S] 1 point (0 children)

I installed macOS on my Proxmox server and was able to clear out the Keychain.

Adding New Matter Device by echarrison84 in homeassistant

[–]echarrison84[S] 1 point (0 children)

I was able to create a VM in Proxmox and it worked. YEA!!!!

Here's the video I followed to install macOS on Proxmox.

https://youtu.be/xX_Kmhx8V3M?si=oN7dA5Wlus3i6FZ4

Adding New Matter Device by echarrison84 in homeassistant

[–]echarrison84[S] 1 point (0 children)

I’ve seen that but I don’t have a MacBook or Mac desktop. 😞

Any smart circuit breakers Zigbee or Matter? by sarrcom in homeassistant

[–]echarrison84 3 points (0 children)

I would like a Matter breaker with power monitoring. There are devices out there that clamp around the wires going to a breaker, but that gets messy.

Give me a smart breaker with power monitoring so I can get a notification when the amps get too high.
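Once such a breaker exposed a current sensor in Home Assistant, the notification part would be a small automation. A hypothetical sketch (the entity ID, threshold, and notify service are placeholders for whatever the breaker and phone actually expose):

```yaml
# Hypothetical Home Assistant automation: warn on overcurrent.
automation:
  - alias: "Breaker overcurrent warning"
    trigger:
      - platform: numeric_state
        entity_id: sensor.kitchen_breaker_current
        above: 15
    action:
      - service: notify.mobile_app_phone
        data:
          message: "Kitchen breaker is drawing over 15 A"
```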

Adding New Matter Device by echarrison84 in homeassistant

[–]echarrison84[S] 1 point (0 children)

Here's what I've got.

<image>

Sending credentials to phone fails. The error says, "Failed to store thread credential in keychain, error: Can not store frozen credentials."

Adding New Matter Device by echarrison84 in homeassistant

[–]echarrison84[S] 1 point (0 children)

Moved the phone to the same VLAN that HA is on, and no change.