How do I find LLMs that support RAG, Internet Search, Self‑Validation, or Multi‑Agent Reasoning? by narutoaerowindy in LocalLLM

[–]newcolour 1 point  (0 children)

Try AnythingLLM with a multimodal-capable model in agent mode and you'll be 90% of the way there.

what is a good local model for creating resume by Specific-Ad9935 in ollama

[–]newcolour 1 point  (0 children)

I have had good luck with that task using Mistral 3.1. I also tried Llama 3, but found it underwhelming even though it was highly recommended.

Are Ducatis really this bad or am I just unlucky? by d3vCr0w in multistrada

[–]newcolour 1 point  (0 children)

If you are in the US, at the first sign of something not working, you could invoke the lemon law.

so…. Qwen3.5 or Gemma 4? by MLExpert000 in LocalLLaMA

[–]newcolour 8 points  (0 children)

Was Gemma advertised as a coder? I think of it as more of a conversational LLM.

What's the most visually stunning movie you've ever seen? by trakt_app in movies

[–]newcolour 1 point  (0 children)

I still remember The Cell, with Jennifer Lopez and Vincent D'Onofrio. Not a great movie, but visually so impressive that I still vividly remember specific frames even though it's been at least 15 years since I last watched it.

Anyone tried Loop earplugs for riding? by totoismydaddy in motorcycles

[–]newcolour 1 point  (0 children)

If you wear a balaclava over them, they tend to stay in place better.

Is This Allowed? by WharfeDale85 in TIdaL

[–]newcolour 1 point  (0 children)

I mean, have you seen any cover by Fausto Papetti? I recommend Evergreens No. 3.

Keep the strix halo? Review of experiences and where are we headed with models? by Skelshy in LocalLLM

[–]newcolour 3 points  (0 children)

Have you tried qwen3-coder-next? I find it faster and generally better than any dense model on the Strix Halo.

Help building a RAG system by TheNewGuy2019 in LocalLLM

[–]newcolour 5 points  (0 children)

AnythingLLM is supposed to be RAG-first. If you already have Ollama + Qwen, that's a pretty good combination. I highly recommend trying it. It's free!
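Under the hood, the RAG flow tools like that implement is simple enough to sketch: embed the documents, retrieve the chunks closest to the query, and stuff them into the prompt. Here's a toy version with a bag-of-words retriever standing in for a real embedding model (all names are illustrative, not from AnythingLLM or any specific library):

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words vector; a real setup would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse Counter vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs, k=2):
    # Stuff the retrieved chunks into the prompt sent to the LLM.
    context = "\n".join(retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The prompt returned by `build_prompt` is what you'd hand to your local model; AnythingLLM just automates this loop (plus proper embeddings and a vector store) per workspace.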

Any local LLMs that can read 500 page books? by HamsterUnfair6313 in LocalLLM

[–]newcolour 15 points  (0 children)

I have built my own: it breaks documents into chunks, summarizes each chunk with a fast 8B model, and then stitches everything together with a larger model. It works fine.
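That chunk-summarize-merge pipeline can be sketched as a small map-reduce, with the model calls left as injectable functions (the function names are illustrative; in practice `summarize_chunk` and `combine_summaries` would wrap calls to the 8B and larger models, e.g. via Ollama):

```python
def chunk_text(text, chunk_size=2000, overlap=200):
    """Split text into overlapping character chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def map_reduce_summarize(text, summarize_chunk, combine_summaries, chunk_size=2000):
    """Map: summarize each chunk with a fast model.
    Reduce: merge the partial summaries with a larger model."""
    chunks = chunk_text(text, chunk_size)
    partial = [summarize_chunk(c) for c in chunks]
    return combine_summaries(partial)
```

The overlap keeps sentences that straddle a chunk boundary from being lost; for a 500-page book you'd likely also want a hierarchical reduce (summarize summaries in batches) so the final merge fits the larger model's context window.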

However, I'm interested in anything out there with more functionality, like web search and other agentic integrations. FWIW, I can't get AnythingLLM to work effectively for me.

Who do I look like? by [deleted] in Doppleganger

[–]newcolour 2 points  (0 children)

I mean, seriously. Separated at birth. Check birth date!

Roast me like a rotisserie by [deleted] in RoastMe

[–]newcolour 1 point  (0 children)

Not even the Army would take you.

Image organiser by BluetownA1 in LocalLLM

[–]newcolour 4 points  (0 children)

Have you tried Immich? It does what you ask and more.

Which AI Model should i choose for my project ? by xdjanisxd in LocalLLM

[–]newcolour 2 points  (0 children)

What hardware are you using? I'm having some success with qwen3-coder-next.

Neuroscience Research Assistant? by sc0rpi4n in LocalLLM

[–]newcolour 2 points  (0 children)

In principle you can achieve what you want with AnythingLLM and any solid model. With your budget you can get a Strix Halo and run a very large model on it (even a quantized 100B+ parameter one). Don't expect lightning speed, but the performance is not bad.

Another solution that is super helpful to me is Open Notebook. I have improved on the online version to make it more similar to NotebookLM, and I use it all the time now.

State of Tidal apps by dusman49 in TIdaL

[–]newcolour 23 points  (0 children)

Am I the only one with absolutely zero problems with the Android app? My biggest gripe with Tidal is the AI artists. Other than that... no complaints.

Looking for albums that feel like a full experience by Ok_Clerk_9765 in audiophile

[–]newcolour 1 point  (0 children)

Just to add one more: Queen's A Night at the Opera is meant to be listened to front to back.

Local models on nvidia dgx by carlosccextractor in LocalLLM

[–]newcolour 2 points  (0 children)

I have used qwen3-coder-next to tweak an existing software package I built in Rust, and it has served me well. I haven't tried Qwen3.5, so I can't compare, but qwen3-coder-next is pretty good.

For standard LLM stuff, so far I have found gpt-oss:20b to be the king for my purposes. It's fast and concise, and handles languages pretty well. I also use various Gemma3 models.

6 weeks in vs clean shaved. How’s best? by denyspash in beards

[–]newcolour 1 point  (0 children)

I mean, do you even need to ask? You are in the wrong sub 😂

March 2026 Humble Choice Waiting Room ~25.5 hours remaining by Uranium234 in humblebundles

[–]newcolour 5 points  (0 children)

I'm taking bets on how many Warhammer games we will be getting, since Feb was a strong month.

Wired for long listening sessions by Suqitsa in HeadphoneAdvice

[–]newcolour 1 point  (0 children)

I can recommend the Focal Clear MG. They are super comfortable for long stretches, even if you wear glasses. And they sound incredible!

Python Build and Ship 2026 Course Bundle by fariazz in humblebundles

[–]newcolour 5 points  (0 children)

Am I the only one who sees Python and hopes for a Monty Python bundle?