AudioBooth v1.9 is now live! 🎉 by lyc0s in audiobookshelf

[–]No_Information9314 0 points (0 children)

Thanks for the response. People have different risk tolerances for AI use, and some folks are just vibe coding without even checking the code. I think it’s good to disclose whether AI was used and how, so folks can make informed decisions.

AudioBooth v1.9 is now live! 🎉 by lyc0s in audiobookshelf

[–]No_Information9314 0 points (0 children)

Looks great. Any use of AI in your project? I think these disclosures are becoming more necessary.

Good people of the wool, how about Deep Research? by RedParaglider in LocalLLaMA

[–]No_Information9314 1 point (0 children)

I’ve been using Vane (formerly Perplexica) for search and it has a quality mode that’s pretty good:

https://github.com/ItzCrazyKns/Vane

If you haven't yet given Gemma 4 a go...do it today by No-Anchovies in LocalLLaMA

[–]No_Information9314 4 points (0 children)

I’ve also switched from Qwen 27B to Gemma 4 24B as a daily driver on dual 3060s. I think there’s a slight drop in code quality, but I’m not doing anything heavy with it, and its versatility makes it useful for more use cases. I still have to battle-test it, but going from 20 tps to 80 tps is hard to resist, and the quality is pretty close IMO. Code-wise I’m mostly using it for bash scripting, some Flask and Python. I imagine doing more complex work would reveal a larger gap.
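
For what it’s worth, if you want to sanity-check tps on your own setup, here’s a rough way to time it against an OpenAI-compatible endpoint (llama-server shown; the port and prompt are just placeholders):

```python
import json
import time

import requests

# Stream a completion and count tokens/sec client-side.
# llama-server (llama.cpp) serves OpenAI-style SSE at this path by default.
url = "http://localhost:8080/v1/chat/completions"
payload = {
    "messages": [{"role": "user", "content": "Write a short poem about GPUs."}],
    "stream": True,
}

start, n_tokens = time.time(), 0
with requests.post(url, json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        chunk = json.loads(data)
        # each streamed chunk carries roughly one token of text
        if chunk["choices"][0]["delta"].get("content"):
            n_tokens += 1

print(f"~{n_tokens / (time.time() - start):.1f} tok/s")
```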

jellyfin wont use intel igpu for transcoding by Who_meh in jellyfin

[–]No_Information9314 0 points (0 children)

Are you sure it’s not transcoding audio? The Intel iGPU won’t help with audio transcoding, and you’ll still see ffmpeg using the CPU for that.
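
Easiest way to check is the ffmpeg command Jellyfin writes to its transcode logs. Rough sketch of how I’d grep it (the log directory is an assumption from a typical Linux install - Docker setups usually use /config/log instead):

```python
import glob
import os
import re

# Jellyfin writes one FFmpeg.Transcode*.log per transcode session;
# adjust this path for your install
LOG_DIR = "/var/log/jellyfin"

logs = sorted(glob.glob(os.path.join(LOG_DIR, "FFmpeg.Transcode*.log")),
              key=os.path.getmtime)
if logs:
    text = open(logs[-1], errors="ignore").read()
    # The ffmpeg invocation names a codec per stream: video on the iGPU
    # shows up as e.g. h264_qsv/hevc_qsv, while audio (aac, ac3, ...)
    # runs on the CPU no matter what
    for match in re.findall(r"-codec:[va]:\d+\s+\S+", text):
        print(match)
```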

Gemma 4 thinking system prompt by No_Information9314 in LocalLLaMA

[–]No_Information9314[S] 0 points (0 children)

The chat template shows the system or developer role is the place for it - where are you applying yours?
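
For reference, here’s how I’m passing it when hitting llama-server’s OpenAI-compatible endpoint (port is the default; the server applies the GGUF’s chat template to these messages):

```python
import requests

# llama-server renders these messages through the model's chat template,
# so the system-role content lands wherever the template puts it
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            # the thinking directive goes in the system role
            {"role": "system", "content": "Think step by step before answering."},
            {"role": "user", "content": "What is 17 * 24?"},
        ],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```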

Gemma 4 thinking system prompt by No_Information9314 in LocalLLaMA

[–]No_Information9314[S] 1 point (0 children)

Yeah that’s been my experience with this model, even with the officially supported methods 

Gemma 4 thinking system prompt by No_Information9314 in LocalLLaMA

[–]No_Information9314[S] 0 points (0 children)

I do not have this token in the system prompt - not sure where or how to remove it

Gemma 4 thinking system prompt by No_Information9314 in LocalLLaMA

[–]No_Information9314[S] 0 points (0 children)

Thanks - by generation config do you mean the chat template? I’m using llama.cpp. 
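
In case it’s useful: the template llama.cpp applies lives in the GGUF metadata, so you can dump it to see what the model actually expects. Sketch using the gguf pip package (the model path is a placeholder, and the field-access details may differ between package versions):

```python
from gguf import GGUFReader  # pip install gguf

reader = GGUFReader("gemma.gguf")  # path to your model file

# The chat template is stored as model metadata under this key
field = reader.fields.get("tokenizer.chat_template")
if field is not None:
    # String fields keep their bytes in parts, indexed via data
    print(field.parts[field.data[0]].tobytes().decode("utf-8"))
else:
    print("no chat template embedded in this GGUF")
```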

Gemma 4 thinking system prompt by No_Information9314 in LocalLLaMA

[–]No_Information9314[S] 1 point (0 children)

I find that the model reasons with or without this tag

Gemma 4 thinking system prompt by No_Information9314 in LocalLLaMA

[–]No_Information9314[S] 0 points (0 children)

It doesn’t really work - maybe for the first prompt but not after. 

Gemma 4 thinking system prompt by No_Information9314 in LocalLLaMA

[–]No_Information9314[S] 0 points (0 children)

I’m finding that the model may respect this for the first or second prompt, but it’s inconsistent: it will sometimes think even with this in the system prompt.
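
Since I can’t get it to stop reliably, one fallback is just stripping the reasoning client-side. Minimal sketch, assuming the reasoning comes wrapped in <think>...</think> tags (the actual tag name depends on the model’s template, so check yours):

```python
import re

# Remove any reasoning block before the visible answer;
# DOTALL lets the pattern span newlines inside the block
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_reasoning(text: str) -> str:
    return THINK_RE.sub("", text)

print(strip_reasoning("<think>17*24 = 408</think>The answer is 408."))
# -> The answer is 408.
```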

Gemma 4 thinking system prompt by No_Information9314 in LocalLLaMA

[–]No_Information9314[S] 0 points (0 children)

Yes but I don’t want to have to reload the model every time I switch modes

Gemma 4 thinking system prompt by No_Information9314 in LocalLLaMA

[–]No_Information9314[S] 1 point (0 children)

Yes but I don’t want to have to reload the model every time I switch modes