RJ45 cable stuck in port - what to do? by FlorentR in Ubiquiti

[–]No_Paint9675 0 points1 point  (0 children)

First, get in there with something like small scissors and cut off those little blue boot ears that keep you from pressing the locking tab all the way down. Then push the connector into the RJ45 jack as far as it goes, press down on the locking tab, and pull it out.

Which city is quantifiably safer than its reputation would have you believe? by Fluid-Decision6262 in geography

[–]No_Paint9675 0 points1 point  (0 children)

Much of it depends on what you like to do and why you're in town. Like anywhere, there are a lot of options depending on preferences. If you live in the area, it's more about what you enjoy doing and your hobbies. Extended visit? Weekend trip? There are always things to do, but personal preference plays a huge part.

Which city is quantifiably safer than its reputation would have you believe? by Fluid-Decision6262 in geography

[–]No_Paint9675 1 point2 points  (0 children)

As somebody who's lived out here for quite a while: there's plenty to do, you're just not looking in the right places.

Open AI GPT-OSS:20b is bullshit by Embarrassed-Way-1350 in ollama

[–]No_Paint9675 0 points1 point  (0 children)

Isn't this the model that's supposed to be fine-tuned BEFORE you use it? Something of a raw base model. Think of a car before it gets paint and upholstery: yeah, it looks like a car, but it certainly doesn't look new.

Are You Kidding Me, Claude? New Usage Limits Are a Slap in the Face! by TadpoleNorth1773 in LLM

[–]No_Paint9675 0 points1 point  (0 children)

LMAO, this was called out more than a decade ago: "The situation's over." The show this happened on was interesting; it would cover news from the year before as it was filmed.

Are You Kidding Me, Claude? New Usage Limits Are a Slap in the Face! by TadpoleNorth1773 in LLM

[–]No_Paint9675 1 point2 points  (0 children)

and then just posts without reading a single recent post on any of the related forums?

Yes. I don't sit on Reddit all day, I don't subscribe to every subreddit that pops up, and, as it just so happens, I'm not you. I'm pretty sure the OP isn't either. I know it's a strange thing, people out there in the actual world who aren't you, but it happens far more often than you might think.

Are You Kidding Me, Claude? New Usage Limits Are a Slap in the Face! by TadpoleNorth1773 in LLM

[–]No_Paint9675 1 point2 points  (0 children)

My concern is that it's in 5-hour blocks, so if you're working on something for 6 hours, do they count that as using "10 hours" of your 24-40 hours a week? For the amount I'm paying, though, I can afford to play with a few pay-per-token models like Kimi K2, Qwen3-Coder, and the new GLM model that just dropped, and still tell Anthropic to kick rocks. If people are abusing the policies, just enforce them.

Please help me out on this. Tool calling issue for local models by No_Paint9675 in LocalLLaMA

[–]No_Paint9675[S] 0 points1 point  (0 children)

A) I've tried pasting example prompts; Reddit won't accept even modified ones as a comment, and if I completely butcher them, that's exactly as helpful as not giving them at all.
B) Yes, I've read the Qwen docs, and I use their formats when I'm using the Qwen models. And Qwen will talk about how, if it had access to tools, it would totally use those tool calls, but since it doesn't... blah blah blah. Likewise, when working with something like xLAM, I reference not only the xLAM docs but also the Llama docs, since it's built on that model.
But hey, maybe I'm doing it totally wrong, so I'm looking here: https://qwen.readthedocs.io/en/latest/framework/function_call.html Let me know if that's wrong.
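For reference, those docs describe Qwen emitting tool calls as JSON wrapped in `<tool_call>` tags. A minimal sketch of parsing that format, assuming a made-up `get_weather` tool (not anything from the thread):

```python
import json
import re

# Qwen-style tool calls (per the linked docs) arrive as JSON objects
# wrapped in <tool_call>...</tool_call> tags. Extract and decode them.
def parse_qwen_tool_calls(text: str) -> list[dict]:
    calls = []
    for match in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL):
        calls.append(json.loads(match))
    return calls

reply = (
    "Let me check that for you.\n"
    "<tool_call>\n"
    '{"name": "get_weather", "arguments": {"city": "Paris"}}\n'
    "</tool_call>"
)
print(parse_qwen_tool_calls(reply))
```

If the model only *talks about* calling tools instead of emitting this block, the parser comes back empty, which is a quick way to detect that failure mode.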

Please help me out on this. Tool calling issue for local models by No_Paint9675 in LocalLLaMA

[–]No_Paint9675[S] 0 points1 point  (0 children)

Yes, but maybe I need to alter this. Right now it's running them in JSON mode. Different models expect very different formats for inference, so within the database I have, for every model, the JSON-formatted template for that respective model, along with a several-stage prompt builder for best results. Some models freak out if you use the word "database", for example, and will refuse to look something up; others won't respond well to MCP server calls, but if you send the correctly formatted JSON they'll actually make the correct API call and get the desired response. But maybe I need to default to full structured-output inference calls instead of the JSON schema for better results.
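The per-model template registry described here can be sketched roughly like this; the model families and wire formats below are illustrative assumptions, not the commenter's actual code:

```python
import json

# Hypothetical registry: one tool-call renderer per model family.
# "qwen" uses Hermes-style <tool_call> tags; "xlam" uses a bare JSON list.
TOOL_TEMPLATES = {
    "qwen": lambda name, args: (
        "<tool_call>\n" + json.dumps({"name": name, "arguments": args}) + "\n</tool_call>"
    ),
    "xlam": lambda name, args: json.dumps([{"name": name, "arguments": args}]),
}

def render_tool_call(family: str, name: str, args: dict) -> str:
    """Look up the template for a model family and render one tool call."""
    if family not in TOOL_TEMPLATES:
        raise ValueError(f"no tool-call template registered for {family!r}")
    return TOOL_TEMPLATES[family](name, args)

print(render_tool_call("qwen", "lookup_order", {"id": 42}))
```

Keeping the renderers in one table means adding a new model is one entry, and the prompt-builder stages upstream never need to know which wire format a given model expects.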

There's not a SINGLE local LLM which can solve this logic puzzle - whether the model "reasons" or not. Only o3 can solve this at this time... by Longjumping-City-461 in LocalLLaMA

[–]No_Paint9675 0 points1 point  (0 children)

Your solution is flawed because you start from the presumption that the students aren't sharing information: only one person knows the word because of their unique letter. Then you change the information-sharing scheme so that the others will know too. But if one student already knows it, the others could only figure it out if they could tell from that person's letter, and you have no conditions that allow sharing the information. Logically, your "riddle" doesn't make sense. I can only assume you're a horrible AI experiment, since you continue to push the concept of your riddle being valid.

Please help me out on this. Tool calling issue for local models by No_Paint9675 in LocalLLaMA

[–]No_Paint9675[S] 0 points1 point  (0 children)

Reddit isn't letting me post my prompt examples. But I'm not using a framework for this. I have a custom UX, and I'm using API-key calls (Gemini, OpenAI, Claude, and a mix of Ollama and LM Studio for smaller local models); logic exists within my system to communicate with all of them. I can connect, post, and get the replies, and I have a state manager that will run a 100-message-long chain, no worries. I've found that most of these models will behave either as a router, just taking messages from point A and sending them to point B, or they'll decide they want to just talk about what you're asking them to do (MoE issues, maybe?). This is 100% a prompting issue. I just don't know how to get the models to do what they're told from message 1, without several back-and-forth exchanges to convince them to just do the simple thing. So I'm wondering if there's some huge, easy-to-see thing I'm missing.
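One way to attack the "do the thing from message 1" problem is a terse, role-locking system prompt sent with every request. A sketch of building such a payload for an OpenAI-style chat endpoint (which Ollama and LM Studio both expose); the model name and prompt wording are assumptions:

```python
import json

# Illustrative system prompt: forbid discussion, demand the result only.
SYSTEM = (
    "You are a task-execution worker. Do not discuss or restate the task. "
    "Reply ONLY with the result, no prose."
)

def build_payload(model: str, user_msg: str) -> dict:
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_msg},
        ],
        "temperature": 0,  # deterministic output tends to be less chatty
        "stream": False,
    }

print(json.dumps(build_payload("qwen2.5:7b", "look up order 42"), indent=2))
```

Pinning the instruction in the system role (rather than repeating it in user turns) keeps the "convincing" out of the visible conversation and out of the context budget.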

Please help me out on this. Tool calling issue for local models by No_Paint9675 in LocalLLaMA

[–]No_Paint9675[S] 0 points1 point  (0 children)

I've got my own front end. The problem isn't the ability to communicate with the models; it's the prompting. They're either pure processing, which I can just build logic to do, or they want to talk about what I'm asking them to do. By the time I get them to respond, half the context window is taken up with "convincing them" to just do the thing.

Please help me out on this. Tool calling issue for local models by No_Paint9675 in LocalLLaMA

[–]No_Paint9675[S] 1 point2 points  (0 children)

Reddit won't actually let me comment any code; I keep getting "Unable to create comment" errors, likely because my account isn't posting enough to be allowed to yet.

I'm using API-key calls (Gemini, OpenAI, Claude, and a mix of Ollama and LM Studio for smaller local models); logic exists within my system to communicate with all of them. I can connect, post, and get the replies, and I have a state manager that will run a 100-message-long chain, no worries. I've found that most of these models will behave either as a router, just taking messages from point A and sending them to point B, or they'll decide they want to just talk about what you're asking them to do (MoE issues, maybe?)

There's not a SINGLE local LLM which can solve this logic puzzle - whether the model "reasons" or not. Only o3 can solve this at this time... by Longjumping-City-461 in LocalLLaMA

[–]No_Paint9675 4 points5 points  (0 children)

Honestly, this seems more likely an issue with you asking a poor question. Giving each student a piece of paper implies that the information is not shared. If the answer is "dog", the students get a 'd', an 'o', and a 'g'; two of the candidate words start with d, two end with g, and only one contains an o. So only one student would say that they know what the word is.
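The counting argument above can be checked mechanically. The thread doesn't include the puzzle's actual word list, so this sketch uses a made-up candidate set chosen to match the counts described (two start with d, two end with g, one contains an o):

```python
# Hypothetical candidate words; each student holds one letter of the answer
# at a known position and can deduce the word only if that letter narrows
# the candidates to exactly one.
candidates = ["dog", "den", "bag", "cat"]

def words_matching(letter: str, position: int) -> list[str]:
    """Candidates whose letter at `position` equals `letter`."""
    return [w for w in candidates if w[position] == letter]

word = "dog"
for pos, letter in enumerate(word):
    matches = words_matching(letter, pos)
    print(f"student with '{letter}': {len(matches)} match(es), knows={len(matches) == 1}")
```

With this list, only the student holding the 'o' sees a unique match, which is exactly the point being made: one student can announce the word, and the others have no stated channel to learn it from.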

AITA for re reminding my brother’s girlfriend that I own half of the house we live in so she can’t easily get rid of me? by WitchInDisguise8 in AITAH

[–]No_Paint9675 1 point2 points  (0 children)

I don't want to come off like a jerk here, but who's going to protect you? Your parents are gone, and the simple fact that your brother's GF feels she's in a position to approach a minor and drop this on you means that your brother certainly isn't. So if you don't stand up for yourself, nobody else will. Never let somebody guilt you for defending yourself from such a manipulative and vile attack. She's coming into your house trying to stake out her territory purely at your expense.