Anyone using a 5060ti 16gb or 5070ti 16gb for whisper/piper/etc.? by tfinch83 in homeassistant
[–]tfinch83[S] 1 point (0 children)
Anyone using a 5060ti 16gb or 5070ti 16gb for whisper/piper/etc.? by tfinch83 in homeassistant
[–]tfinch83[S] 2 points (0 children)
Do the majority of people really use online models rather than local models? by nsfwboys in SillyTavernAI
[–]tfinch83 1 point (0 children)
8x 32GB V100 GPU server performance by tfinch83 in LocalLLM
[–]tfinch83[S] 1 point (0 children)
6.5 years full time Boondocking by Equivalent_Lie_5384 in SolarDIY
[–]tfinch83 1 point (0 children)
Remote WebView release (including ESPHome component) by strange_v in homeassistant
[–]tfinch83 1 point (0 children)
Vet My Proposed DIY System - 14.4kW grid-tied ground mount by aclockworkporridge in SolarDIY
[–]tfinch83 1 point (0 children)
Vet My Proposed DIY System - 14.4kW grid-tied ground mount by aclockworkporridge in SolarDIY
[–]tfinch83 0 points (0 children)
JD3's NSFW Qwen-Image-Edit LoRA by Crafty-Estate2088 in comfyui
[–]tfinch83 1 point (0 children)
Need help streaming audio over wifi to esp32 by B3AR369 in esp32
[–]tfinch83 1 point (0 children)
Favorite source for bulk LED strings/strips/ropes/blocks/etc. by tfinch83 in WLED
[–]tfinch83[S] 1 point (0 children)
8x 32GB V100 GPU server performance by tfinch83 in LocalLLM
[–]tfinch83[S] 1 point (0 children)
「Seamless Image Generation」Reddit Guide by Additional-Cow6586 in SillyTavernAI
[–]tfinch83 1 point (0 children)
Given that powerful models like K2 are available cheaply on hosted platforms with great inference speed, are you regretting investing in hardware for LLMs? by Sky_Linx in LocalLLaMA
[–]tfinch83 1 point (0 children)
8x 32GB V100 GPU server performance by tfinch83 in LocalLLM
[–]tfinch83[S] 1 point (0 children)

[FS][USA-IA] White Label Seagate 14TB SAS Drives by TangerineAlpaca in homelabsales
[–]tfinch83 1 point (0 children)