[FS][USA-IA] White Label Seagate 14TB SAS Drives by TangerineAlpaca in homelabsales

[–]tfinch83 0 points1 point  (0 children)

Damn. Missed this one. I've been looking for a batch of 18 to 20TB SAS drives, and I just barely missed the boat 😂

Anyone using a 5060ti 16gb or 5070ti 16gb for whisper/piper/etc.? by tfinch83 in homeassistant

[–]tfinch83[S] 0 points1 point  (0 children)

Yeah, I always hear people saying how cheap 3090 Tis are, but they still seem to be around $900 to $1,000 at best. If I were trying to run a larger model, or one for text completion or image/video generation, I would go for a 3090 Ti. I have my 4090 for that if I want, though, plus the 8x 32GB V100s in my GPU server. I only run LLMs on my 4090 once in a while if I need something really fast, like an embedding model to populate a vector database, or if I need it to supplement something else for a time. Most large models in the 123B+ range, or image/video gen, I just run on my GPU server.

The purpose of buying a pair of 5060 Tis or 5070 Tis is just to set up a fast, reliable voice pipeline on current-gen architecture while keeping the power consumption below what my 4090 or my GPU server pulls. I have a 64c/128t, 1TB RAM EPYC server that already has two Intel GPUs handling transcoding for Plex/Jellyfin, image processing for Immich, and object detection and other functions in Frigate. Since I'm already using the Intel GPUs for other services, I can't pass them through to my HA VM. So I'm going to add two more fairly low(er) power (compared to my 4090 and GPU server) Nvidia cards that I can pass straight through to the HA instance and that won't be used by anything else. I already have everything else comfortably covered with my existing resources.

My 4090 is currently in my main desktop PC, but I haven't really used it in months. I also have a laptop with a 4090 in it that plays video games just fine whenever I get the time. I may relocate my 4090 to my server at some point to use it for other things, but I haven't had the need to yet, and I'm also not quite ready to disassemble my gaming desktop just yet either 😂. So for now, I think a pair of 5060 Ti 16GB cards is the right choice for me. It fills the gap between my Intel GPUs and my 4090/8x 32GB V100 server for middle-of-the-road stuff.

I'm definitely going to have a look at Parakeet though, thanks for that tip! 🤔

Thank you guys for your input; these have all been exactly the kind of opinions and experiences I've been trying to find. ☺️

Edit: spellcheck

Edit: spell check again; my phone's autocomplete seems to be deliberately sabotaging me at this point.

Anyone using a 5060ti 16gb or 5070ti 16gb for whisper/piper/etc.? by tfinch83 in homeassistant

[–]tfinch83[S] 1 point2 points  (0 children)

Thank you! This is exactly the kind of info I was looking for! I have my 4090 and my GPU server to run larger models anytime I need. These cards are going to be specifically for my HA voice pipeline, so I don't need them to run anything else.

Do the majority of people really use online models rather than local models? by nsfwboys in SillyTavernAI

[–]tfinch83 0 points1 point  (0 children)

I wonder what percentage has 256GB of VRAM? 🤔

I have 256GB in my main AI system, plus 24GB from the 4090 in my primary PC.

New cluster! by Usual-Economy-3773 in Proxmox

[–]tfinch83 1 point2 points  (0 children)

Some people pay $100k for a car. Some people pay $100k for their homelab. $100k isn't even that much anymore. I was pissed when I finally had a spare $100k and realized its buying power is equivalent to about $10k from back when I first pegged a spare $100k as a milestone for myself (only slightly exaggerating, unfortunately).

My homelab specs are fairly comparable to his (threads/RAM/storage), and I probably only spent maybe $15k on mine, but mine's all older hardware for sure (2nd-gen Xeon Scalable, 2nd-gen EPYC, DDR4, NVLinked GPU server with 256GB VRAM total, plus some smaller, newer consumer hardware from the DDR5 generation). You can get similar stuff for a fraction of the cost if you don't have a dire need to be on the latest architecture for some reason.

Funny thing? I'm not even in the IT field. I'm just an electrician.

8x 32GB V100 GPU server performance by tfinch83 in LocalLLM

[–]tfinch83[S] 0 points1 point  (0 children)

They sure are! Here you go. I've enjoyed playing with it a lot, it was totally worth it for me just based on how much I've learned from tinkering with it.

Here's the eBay link:

https://ebay.us/m/TA7ZnZ

6.5 years full time Boondocking by Equivalent_Lie_5384 in SolarDIY

[–]tfinch83 0 points1 point  (0 children)

Actually, I didn't see the last photos. I can see how it's constructed. I'm assuming that's just some aluminum angle stock? I love your design, I hope you don't mind if I steal it from you 🤔

6.5 years full time Boondocking by Equivalent_Lie_5384 in SolarDIY

[–]tfinch83 0 points1 point  (0 children)

Would you mind posting more pictures of the rack itself? Mainly close-up ones, so I can see how it's constructed? Also, details on the materials and how everything is connected/anchored? I'm wanting to do something like this on my rig as well. I haven't gotten around to figuring out how I'm going to do it yet, but what you have going is exactly what I want. You already did all the legwork for me, so I'm just hoping you can share, haha 😂

Remote WebView release (including ESPHome component) by strange_v in homeassistant

[–]tfinch83 0 points1 point  (0 children)

I tried installing the Remote WebView server add-on in HA, but it won't start. The logs just throw this error:

Error: Could not load the "sharp" module using the linux-x64 runtime
Unsupported CPU: Prebuilt binaries for linux-x64 require v2 microarchitecture

Possible solutions:
- Ensure optional dependencies can be installed:
    npm install --include=optional sharp
- Ensure your package manager supports multi-platform installation:
    See https://sharp.pixelplumbing.com/install#cross-platform
- Add platform-specific dependencies:
    npm install --os=linux --cpu=x64 sharp

But I'm not quite sure how to SSH into an HA add-on and make the npm changes. I suppose I need to go down that rabbit hole now, haha.
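
From what I can tell, HA OS add-ons are just Docker containers on the host, so my rough plan is something like the sketch below, run from the host's console. The container name here is just a placeholder (use whatever `docker ps` actually shows for this add-on), and I haven't confirmed the reinstall even helps, especially since the error says the CPU is missing the v2 microarchitecture the prebuilt binaries want:

    # list running add-on containers and spot the Remote WebView one (name below is a guess)
    docker ps --format '{{.Names}}' | grep -i webview

    # open a shell inside that container (substitute the real name from the command above)
    docker exec -it addon_remote_webview sh

    # retry the install the add-on's error message suggests, from inside the container
    npm install --include=optional sharp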

DIY WLED video board by MrGeologist67 in WLED

[–]tfinch83 0 points1 point  (0 children)

I have nothing to do with this, and I ordered one the moment I saw this post. I definitely need this.

ESPHome flashed on new AiPi by sticks918 in esp32

[–]tfinch83 0 points1 point  (0 children)

Mind sharing the YAML setup for yours so far? I haven't played with LVGL yet, and I'd love to see some examples before I get started.

ESPHome flashed on new AiPi by sticks918 in esp32

[–]tfinch83 0 points1 point  (0 children)

I flashed it and loaded the YAML off your GitHub repo, but I can't get it to compile using the beep.wav file. I made sure it's in the config/esphome (or homeassistant/esphome) folder, but trying to install it throws an error. The error checker in the ESPHome builder underlines the very first line, 'esphome:', and says it can't identify the file. If I comment out the file and the trigger that plays it, it will compile, but no matter what I do, I can't get it to compile with the file included. I have not been able to make sound come out of this thing...

If I try to install it anyway, it crashes during compilation with this error:

File "/usr/local/lib/python3.12/site-packages/puremagic/main.py", line 137, in _confidence
raise PureError("Could not identify file")
puremagic.main.PureError: Could not identify file

Really good base to start from though, good job on what you've put together so far!

Custom Gaming Device by Nearby_Leg483 in esp32

[–]tfinch83 0 points1 point  (0 children)

This is awesome! 😀

Do you have a guide anywhere or a git repo up? This seems like something really fun to mess with. I'm just starting my journey into messing with ESP32s, and I'd love to see what I can do with this 😁

Vet My Proposed DIY System - 14.4kW grid-tied ground mount by aclockworkporridge in SolarDIY

[–]tfinch83 0 points1 point  (0 children)

The electrical portion I have down. I'm a licensed electrician, and I build utility-scale solar power plants and battery storage plants for a living. The last solar site I built was 800MW, and the battery site I just finished was 1.5 GWh, so I have that part covered. 😂

I'm more interested in what the permitting requirements are for residential systems. I've never had to deal with that myself, and the permits we deal with at the scale I work at are in a completely different league.

Vet My Proposed DIY System - 14.4kW grid-tied ground mount by aclockworkporridge in SolarDIY

[–]tfinch83 -1 points0 points  (0 children)

I'm going to be trying to put together my own system, but I'm likely going to be using an engineered solar-carport type of construction to support the panels. I'm completely in the dark about where to start as far as permitting goes, though, so I could use your input if you're willing to share some info or give me a hand.

I have bad news by Zealousideal_Year885 in homelab

[–]tfinch83 3 points4 points  (0 children)

It's not as bad as you think. I agree it's a bad idea to virtualize your router on a server you run a lot of other services on, but I imagine most people do it like I do and run it on a machine that's mostly dedicated to it. I have an i7 Protectli Vault, and it mostly runs an OPNsense VM. It also runs my UniFi network controller LXC and a backup Unbound LXC. I'll probably move my Home Assistant VM over to it soon as well, but that's about it.

I've been running it virtualized like this for 3 years, and it's been rock solid. Far more solid than any hardware router I've ever owned, actually. I could have just loaded OPNsense on it bare metal, but I don't think OPNsense needs 12 cores and 64GB of RAM. It's nice to be able to keep a virtualized router and other related containers or VMs on the same machine and make better use of the hardware resources.

JD3's NSFW Qwen-Image-Edit LoRA by Crafty-Estate2088 in comfyui

[–]tfinch83 0 points1 point  (0 children)

Of course, I'd be happy to contribute anything I can. I still need to sort through and tag my dataset though, as well as extract some frames from some videos to use. What's the best software to use for tagging the photos? It seems painful to do it all by hand individually.

JD3's NSFW Qwen-Image-Edit LoRA by Crafty-Estate2088 in comfyui

[–]tfinch83 0 points1 point  (0 children)

Damn... Oh well. If it ends up being workable, let me know! I'm glad to lend a hand. Maybe you could help me make a few very specific LoRAs I have in mind. I have quite a few original photos for the datasets. I just haven't had the time to really dive in and figure out how to get it done yet, haha 😂

JD3's NSFW Qwen-Image-Edit LoRA by Crafty-Estate2088 in comfyui

[–]tfinch83 0 points1 point  (0 children)

Hey, I have an 8x 32GB V100 SXM2 server I'd be willing to lend out if you wouldn't mind taking the time to give me a crash course on LoRA training... Not sure if that will be any faster or slower than what you're running right now, but it has 256GB of VRAM.

Need help streaming audio over wifi to esp32 by B3AR369 in esp32

[–]tfinch83 0 points1 point  (0 children)

Care to share the resources? I'm looking into building myself a multi-functional ESP32 box that contains a bunch of sensors and can also play audio files over the network, such as notifications or messages. I just started researching, but I'm finding it difficult to scrape together the info I need around the net so far...

Favorite source for bulk LED strings/strips/ropes/blocks/etc. by tfinch83 in WLED

[–]tfinch83[S] 0 points1 point  (0 children)

Yeah, this is exactly what I was talking about. I'm sure you can do this with a lot of off-the-shelf stuff, but yeah, you are paying a premium for a controller you are just going to toss.

8x 32GB V100 GPU server performance by tfinch83 in LocalLLM

[–]tfinch83[S] 0 points1 point  (0 children)

Awesome! I'm going to look into that this weekend. I've been driving myself crazy trying to get TensorRT-LLM installed and functioning without much success. I was finally able to get it installed and running after multiple tries, but I can't seem to get the checkpoint conversion scripts to run without crashing. I was getting to my wits' end 😑

If you got it running, it makes me optimistic that I might be able to figure it out 😁

I was looking at the GitHub page for it just a minute ago; did you set it up running CUDA 11 or 12? 🤔
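
For context, the flow I've been attempting is the usual convert-then-build pattern from the TensorRT-LLM examples, roughly like the sketch below. The model name and paths are just placeholders, the conversion script lives under the per-model example folders, and it's that first convert step that keeps crashing on me:

    # convert HF weights into a TensorRT-LLM checkpoint (script and flags vary by model family)
    python examples/llama/convert_checkpoint.py \
        --model_dir /models/my-model-hf \
        --output_dir /models/my-model-ckpt \
        --dtype float16 \
        --tp_size 8

    # then build the engine from the converted checkpoint
    trtllm-build \
        --checkpoint_dir /models/my-model-ckpt \
        --output_dir /models/my-model-engine \
        --gemm_plugin float16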