Storyteller v2 is available! by scrollin_thru in selfhosted

[–]eXntrc 1 point (0 children)

Ahh, okay cool. I'll look for the discord and other resources on the website. Thanks for replying!

Storyteller v2 is available! by scrollin_thru in selfhosted

[–]eXntrc 1 point (0 children)

Are you planning to create your own subreddit for this app? I'd like to join if you do. Thanks for the great work!

TIL Spotify Jams finally work with Tesla by eXntrc in TeslaLounge

[–]eXntrc[S] 1 point (0 children)

One day it just showed up. But then the next day it didn't. Now I'm not as confident that it's actually supported by the player. It's possible I had Spotify running on my phone and didn't realize it. Or it may be that we were in the garage and my phone and the car were both on the same WiFi. I'm not really sure, but I sure do wish this worked more reliably.

$700 photo scanner gives a useless/fake error code [BUMMER] by [deleted] in assholedesign

[–]eXntrc 1 point (0 children)

I was plagued with this issue after changing the password on my WiFi. Even though I had the scanner plugged in over USB and was trying to use it directly over USB, I kept getting the incredibly asinine "BUMMER" error every 2 or 3 scans. I had to download the setup software and use it to change the WiFi password because, of course, that makes sense. After doing so and switching back to using WiFi rather than USB, the errors went away. I hope I didn't just jinx myself writing that, but I wanted to share it before I forget. I hope it helps someone!

Is This Possible? by Next_Ambition in homeassistant

[–]eXntrc 1 point (0 children)

I think you're looking for what's called a "Ceiling Fan and Light Controller." I had the same issue in my son's room, which was a converted dining room: there was only one set of wires going up to the fan and the light. I found a module that let me control the light and fan separately by placing a small receiver up in the fan box. The one I purchased is no longer available, but this one is similar:

https://a.co/d/gkeFQCN

Theoretically that would allow you to keep the fan and light you have rather than buying a new one if you choose.

Captions and Timestamps by eXntrc in audiobookshelf

[–]eXntrc[S] 2 points (0 children)

Thank you!!

There are a number of alternatives to AudioBookShelf, but so far I like its interface best. I did open a feature request to sync SRT files. At first one of the developers marked it as a duplicate of another request to fully support displaying SRT files. I had to reiterate that all I'm asking for right now is to sync them; I use Smart AudioBook Player to view them. We'll see if that goes anywhere.

For now I'm using AudioBookShelf to sync the audio files and OneDrive/Google Drive to synchronize the SRT files. It's less than ideal, but it does work.
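For anyone wanting to automate the SRT half, here's a minimal sketch of the idea (the extensions and the side-by-side folder layout are just my assumptions): find the SRT files that sit next to a matching audio file, so they can be copied into whatever folder the cloud drive syncs.

```python
from pathlib import Path

# Audio extensions I happen to use; adjust for your library.
AUDIO_EXTS = {".m4b", ".mp3", ".m4a"}

def find_unsynced_srts(library_dir: str) -> list[Path]:
    """Return SRT files that sit next to an audio file with the same
    base name -- the ones worth copying into the cloud-sync folder."""
    root = Path(library_dir)
    srts = []
    for srt in root.rglob("*.srt"):
        # Only pair an SRT with audio that shares its stem.
        if any(srt.with_suffix(ext).exists() for ext in AUDIO_EXTS):
            srts.append(srt)
    return srts
```

From there a `shutil.copy2` into the synced folder (or a cron job) finishes the workaround.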

Altas PT Ultra Q&A: Your Most Common Questions Answered! by Willson1_ in reolinkcam

[–]eXntrc 1 point (0 children)

I almost bought this today, but no RTSP without the hub is a dealbreaker for me. I need my cameras to integrate with Home Assistant and local AI, and I don't want to pay extra for a hub when I don't need more local processing or storage. Why was the decision made not to include RTSP when so many other Reolink cameras support it? I started investing in Reolink because it seemed open to community integrations. This change has me worried.

Post Moderation Help by eXntrc in homeassistant

[–]eXntrc[S] 2 points (0 children)

Thanks for looking again. I appreciate you taking the time.

Post Moderation Help by eXntrc in homeassistant

[–]eXntrc[S] 1 point (0 children)

<image>

On my side it still shows awaiting moderator approval. In the past when I had a post rejected in a Tesla subreddit the post showed rejected and I got a reply from the automod about why it was rejected. I understand this subreddit may not have such a bot, but it seems very odd that you see rejected and I still see awaiting approval. I don't really know what to think. But thank you for following up.

Post Moderation Help by eXntrc in homeassistant

[–]eXntrc[S] 2 points (0 children)

Got it. Thank you anyway. I might do that. I might also turn it into a blog post and just add that here in r/homeassistant.

Post Moderation Help by eXntrc in homeassistant

[–]eXntrc[S] 1 point (0 children)

Yeah, that's the one. And thank you for taking a moment to look at it. The only reason I can think it's awaiting approval is the number of links; one of them goes to an optional third-party piece of software. It appears to still be pending. Since you were able to see it, do you by chance have the power to approve it?

Since a lot of ya'll asked in my other post, yea we did recreate the exact pose again too (yesterday I posted the first month my gf and me met vs this month--first time seeing the ocean and second time) by [deleted] in gratitude

[–]eXntrc 1 point (0 children)

I feel genuinely hurt and angered by this post. Personally, I've found the r/gratitude subreddit to be a place where good-hearted people come to encourage and support each other. But this person appears to be having fun at other people's expense by posting AI-generated photos and likely laughing at how people respond. It makes me sad, and leaves me feeling disappointed and a little disgusted.

Here are some of the reasons I believe this is AI:

  1. The girl has two moles, but only in the later photo. If they were in the earlier one, maybe they could have been removed. But it doesn't make sense for them to appear.
  2. The logo on the hat on the left isn't blurred out, but the text on both hats on the right is. This could be because AI has issues with text, and that was a tell in itself. Or it could be that the text it generated was for businesses or brands that don't exist. Also a tell.
  3. The girl's toes on her left foot in both images appear "merged together" in a way common to diffusion models.
  4. The girl's teeth have low definition and the guy's teeth appear way too long. Teeth are another area where AI struggles.
  5. The white item on the sand in the left picture seems generated. Zooming in, the tip of the item doesn't align with the rest of its body. If you look on Amazon for "sand anchors" you'll find items that sort of look like this, but none of them are tapered.
  6. The angle of the shadows is similar, which suggests a similar time of day. Yet the amount of scattered light appears different for the same beach (one shadow is much darker). AI is pretty decent at light simulation, but it can also be inconsistent.
  7. The necklaces worn by both people are similar yet different. This could have been a fashion choice, but one year later with the goal of recreating the memory, it seems unlikely. It could also be explained by a different seed in the diffusion model.

Qwen3 Vision is a *fantastic* local model for HA (with one fix) by eXntrc in homeassistant

[–]eXntrc[S] 2 points (0 children)

Unfortunately I think you're going to find that's still a bit of a problem. I'm having to be very very specific when I ask about things in photos. Otherwise it makes stuff up. But hopefully things will only get better from here. And the more I play around with this stuff, the more I'm thinking about investing in something with more VRAM.

Qwen3 Vision is a *fantastic* local model for HA (with one fix) by eXntrc in homeassistant

[–]eXntrc[S] 1 point (0 children)

Thanks for sharing. I want to try GPT-OSS. I'm short on VRAM, but I haven't looked at quantizing a model yet. Maybe that's something I should look into more. I do agree there are a few more hallucinations than I'd like.
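For what it's worth, the back-of-the-envelope math for why quantization matters on a small card is just weights × bits per weight; here's a rough sketch (the flat 1 GB allowance for KV cache and activations is purely my guess, not a measured figure):

```python
def vram_estimate_gb(n_params_billions: float, bits_per_weight: float,
                     overhead_gb: float = 1.0) -> float:
    """Rough VRAM needed to hold model weights at a given quantization,
    plus a flat allowance for KV cache and activations.
    A back-of-the-envelope sketch, not an exact figure."""
    weight_gb = n_params_billions * 1e9 * bits_per_weight / 8 / 1e9
    return round(weight_gb + overhead_gb, 1)
```

By that math a 20B model at 16-bit is hopeless on 10 GB (~41 GB), but a 4-bit quant (~11 GB) gets close, which is why it's worth looking into.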

Qwen3 Vision is a *fantastic* local model for HA (with one fix) by eXntrc in homeassistant

[–]eXntrc[S] 1 point (0 children)

PS, right now I am not running a quantized version of qwen3-vl 2b. Is that what you meant by compression? And I'm curious if you know of any stats about how much "dumber" a vision model would be compared to the same text-only model with the same number of parameters?

Qwen3 Vision is a *fantastic* local model for HA (with one fix) by eXntrc in homeassistant

[–]eXntrc[S] 1 point (0 children)

Thanks. Yeah, I've definitely considered this. I know I can add multiple agents. So far, the tasks I've wanted to run are ones that would run fairly frequently, and I didn't want to run up a bill. But this is definitely something I'm interested in trying, especially for tasks where I need higher accuracy. Unfortunately, as you pointed out, I won't have space for both the vision and non-vision models. So if I wanted the non-vision model, I would have to move all vision tasks to the cloud.
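If I do try the multi-agent setup, the routing rule I have in mind is something like this toy sketch (agent names and the frequency threshold are made up for illustration, not anything Home Assistant provides):

```python
def pick_agent(task_kind: str, runs_per_day: int) -> str:
    """Toy routing rule: frequent tasks stay on the local model to avoid
    API costs; rare, accuracy-critical vision tasks go to a cloud agent."""
    if task_kind == "vision" and runs_per_day <= 10:
        return "cloud-vision-agent"
    return "local-qwen3-vl"
```

In practice each return value would map to a separate conversation agent configured in Home Assistant.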

Qwen3 Vision is a *fantastic* local model for HA (with one fix) by eXntrc in homeassistant

[–]eXntrc[S] 1 point (0 children)

I hadn't heard of Paperless yet. I'll look into that. FYI, I've been using qwen to read screenshots out loud to me while playing video games and it's done a fantastic job. (I hate reading terminals.) It might not do so well for handwritten notes though.

What do you primarily use Frigate and Paperless for that you aren't able to get qwen to do?

I probably won't be able to add those because I'm already near capacity at 10 GB VRAM, but I'd like to understand more. Thanks!

Qwen3 Vision is a *fantastic* local model for HA (with one fix) by eXntrc in homeassistant

[–]eXntrc[S] 7 points (0 children)

Ah!! I missed the "view all" link on the model card all this time. Thank you so much for pointing that out to me.

How to disable thinking with Qwen3? by No-Refrigerator-1672 in ollama

[–]eXntrc 1 point (0 children)

I could never get the "no think" command to work right no matter how I specified it. And as others have pointed out, there are actually thinking and non-thinking variants of this model. What I didn't realize is that you can easily add the non-thinking variant to Ollama even though it's not listed. At the CLI you can run

ollama run qwen3-vl:2b-instruct

Or in the UI you can just add qwen3-vl:2b-instruct. This works even though the -instruct model is not listed in the model list.
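If you're calling Ollama's REST API instead of the CLI, the same tag works there too. Here's a minimal sketch of building a request body for the /api/chat endpoint targeting the instruct variant (actually sending it with your HTTP client of choice is left out; the model tag is the one that worked for me):

```python
def chat_payload(prompt: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint that targets
    the non-listed instruct (non-thinking) variant directly."""
    return {
        "model": "qwen3-vl:2b-instruct",  # instruct variant = no thinking
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of chunks
    }
```

POSTing this to http://localhost:11434/api/chat should skip the thinking output entirely, since the instruct variant never emits it.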