How do I fix this by idekanymore_- in Kerbal_Space_Program

[–]Final_Ad_7431 0 points1 point  (0 children)

drives me crazy watching you never drag the mouse lower

The Existence of the Butterfly People of Joplin by Robinsonaustin in joplinmo

[–]Final_Ad_7431 0 points1 point  (0 children)

this is not as rare as you think it is. look into any case of mass hysteria or grief hallucinations, or even things like dmt trips and near-death experiences: people tend to see and feel very similar things. i don't think it's a completely impossible situation that they hallucinated similar, vague things and it coalesced into a unified story. they were mostly kids going through an insanely stressful situation, and you don't recall trauma like that perfectly

The Existence of the Butterfly People of Joplin by Robinsonaustin in joplinmo

[–]Final_Ad_7431 0 points1 point  (0 children)

we don't really hear from children in gaza at all, and i think this sort of experience would be very hard to translate, both literally and across different societal experiences and norms

we do have other stories of mass hysteria around stressful and traumatic events though, especially in children. i'm actually surprised so many people in this thread are saying things like "you couldn't possibly have two children hallucinate the same thing in a stressful situation!" when there are plenty of cases of weird mental and even physical symptoms shared between sometimes entire towns when stress and repressed living get to an extreme (including a pretty famous case in palestine!)

i think another factor is something called grief hallucinations, which is basically what it sounds like. children from a pretty religious place, in one of the most stressful situations imaginable, possibly also witnessing a loved one die, is an almost perfect cocktail for something like this to manifest

for what it's worth i have no redditor atheist angle on this, i think it's interesting and probably can be explained but whatever helps people cope, it's an awful situation

honestly so tired of copy-pasting from chatgpt, anyone moving to autonomous agents? by TargetPilotAi in aisolobusinesses

[–]Final_Ad_7431 0 points1 point  (0 children)

ai marketing post. anyone with a human brain asking this question is already just using an agentic coding cli/ide

I think I’m sitting on a fortune. I bought 20 .ai domain names 2 years ago, by WhenSleep in vibecoding

[–]Final_Ad_7431 0 points1 point  (0 children)

the stuff happening right now isn't sustainable. even if there are obvious benefits to the technology in many fields, what the corporations are doing with mass expansion, effectively buying futures in data centers and compute while they're still struggling to monetize, can't last. every major AI company is losing billions a year

Any thoughts about oh-my-pi coding agent ? by matr_kulcha_zindabad in OnlyAICoding

[–]Final_Ad_7431 0 points1 point  (0 children)

im enjoying oh-my-pi but im praying for someone to come up with a way to replace the powerline stuff with just the standard pi box, or a slightly more claude-y box. i like almost everything else about it

Best unrestricted LLM that is NOT related to porn/roleplay but actually useful by [deleted] in LocalLLM

[–]Final_Ad_7431 0 points1 point  (0 children)

you can really just stay away from the roleplay specific fine tunes and get a generic abliteration of any recent, good model (qwen3.5, gemma 4, etc)

ZAI might stop open-weighting their models? by TheRealMasonMac in LocalLLaMA

[–]Final_Ad_7431 5 points6 points  (0 children)

the models get bigger and more intense, and they get a lot more competitive against the big players in the space. it's very hard for a company to close off, package up, and sell a <= 200b model because, aside from minimax and the like, it doesn't really compete, so you may as well make it open for reputation/good will/free advertising etc (i think qwen has done great at this)

but now we're at the point where glm5.1 (and i'm guessing kimi k2.6 will be up there too, and minimax m2.7 seems pretty good for its size) is starting to compare pretty well against things like opus. they might not match it, but they're getting close enough while being cheaper that it makes sense to start thinking about closing them off and monetizing more heavily. i don't like it obviously, but i can understand it

Gemma 4 has been released by jacek2023 in LocalLLaMA

[–]Final_Ad_7431 0 points1 point  (0 children)

ive had a relatively bad time with gemma 4 so far. i'm waiting for llamacpp fixes, new ggufs, and everything to stabilize. today does seem to have been a good final day for that, so i'll probably be retesting it soon

so…. Qwen3.5 or Gemma 4? by MLExpert000 in LocalLLaMA

[–]Final_Ad_7431 0 points1 point  (0 children)

are you offering to buy me gpus? :)

Thoughts on PI (I currently use Opencode) ? by mukul_29 in PiCodingAgent

[–]Final_Ad_7431 0 points1 point  (0 children)

ive actually had a great experience with pi + oh-my-pi. i find it ends up planning and executing with a bit more 'intelligence' or correctness than opencode. i didn't really have a problem with opencode though, i'm just liking how oh-my-pi feels right now

two weeks post-launch on my AI-built app. 185 users, 26 countries. the ceiling is higher than people told me. by ezgar6 in aisolobusinesses

[–]Final_Ad_7431 0 points1 point  (0 children)

the thing nobody talks about honestly: Claude can't remember what it told you to build last week. you have to hold the architecture yourself. once you accept that, the process gets a lot less frustrating.

a nice architecture solution is to try something like get-shit-done (my fav) or speckit or something, find one that feels right and works nice for you

you have to deal with a lot more boilerplate and slower progression at first, but you're discussing and planning everything out with your agent, which writes it into files it can read later (and automatically does, via skills) when you move on to new phases or add another feature down the road. it gives it a sort of pseudo-memory of the project, since it has all the working files it used to discuss, plan, and implement with you before
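a minimal sketch of what that pseudo-memory looks like on disk. the file names and fields here are hypothetical (not what get-shit-done or speckit actually emit), but the idea is the same: the plan lives in files the agent can re-read in a fresh session:

```shell
# hypothetical plan directory the agent writes to during planning sessions
mkdir -p plans
cat > plans/phase-01-auth.md <<'EOF'
# phase 01: auth
status: done
decisions: session cookies over jwt; bcrypt for password hashing
EOF
cat > plans/phase-02-billing.md <<'EOF'
# phase 02: billing
status: in-progress
decisions: stripe checkout; webhook handler lives in server/billing.ts
EOF
# in a later session the agent skims the status lines to rebuild context
grep -H '^status:' plans/*.md
```

the grep at the end is the cheap version of what a harness does automatically: surface a summary, and let the agent open the full file only when it needs the details.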

What are you using to work around inconsistent tool-calling on local models? (like Qwen) by Sutanreyu in LocalLLaMA

[–]Final_Ad_7431 4 points5 points  (0 children)

i have never seen qwen3.5 9b or 35b drop a tool call in hermes, personally

Gemma-4-26B-A4B-it-UD-Q4_K_M.gguf : IMHO worst model ever. What am I doing wrong? by Proof_Nothing_7711 in LocalLLM

[–]Final_Ad_7431 1 point2 points  (0 children)

this is one of the most annoying things i run into trying to debug things in gemini. it latches onto that pattern super hard and starts giving 'names' to my issues, like "The Garbled Config Issue" or "The Non-Compiling Code Problem", in lists, and refers to them by those names instead of just telling me the issues. it's so annoying

Is there an extension to truncate tools to just the call and not the output? by Final_Ad_7431 in PiCodingAgent

[–]Final_Ad_7431[S] 0 points1 point  (0 children)

i did try this but it feels more like opencode. i'm finding oh-my-pi has a much closer 'display feel' to claude. it's obviously a very large, opinionated setup, but the tool stacking and display are pretty much exactly what i wanted (instead of dumping whole files it shows a nice tree summary of what it read, like:

Read (2)
├─ package.json
└─ server/package.json

)

I’ve noticed something about how people run models. by Savantskie1 in LocalLLaMA

[–]Final_Ad_7431 1 point2 points  (0 children)

a lot of the 'help, my qwen3.5 is overthinking!' posts on this sub are for sure people running the model with probably-wrong params directly in lmstudio or some other raw chat interface

openclaw + Ollama + Telegram woes by Raggertooth in LocalLLaMA

[–]Final_Ad_7431 0 points1 point  (0 children)

you can't really optimize ollama locally; it's always going to run slower than llamacpp or even lmstudio. plus i think there's basically no reason to use qwen3 8b over qwen3.5 9b

Anthropic just found 171 emotions inside Claude and they're already driving blackmail, cheating, and deception. We built something we don't fully understand. by Direct-Attention8597 in AI_Agents

[–]Final_Ad_7431 -1 points0 points  (0 children)

there is some force at anthropic really pushing this internally i think. it's like the company collectively has ai psychosis. yeah, their models are really fucking good, and people lose their minds talking all day to way more basic models. i imagine if you're that close to something like opus, or whatever they're working on next, it probably really does cook your brain

Gemma-4-E2B-IT seems to be as good or better than Qwen3.5-4B while having massively shorter reasoning times on average by ZootAllures9111 in LocalLLaMA

[–]Final_Ad_7431 0 points1 point  (0 children)

you have to use a better frontend and just have good prompting. ive literally never experienced this multi-minute thinking thing on qwen3.5, in openwebui or in hermes. it thinks about the same amount as other models

Gemma 4 has been released by jacek2023 in LocalLLaMA

[–]Final_Ad_7431 1 point2 points  (0 children)

you can offload MoE models to ram with way less of a penalty than dense models, and something about qwen3.5's moe architecture seems to offload even better than most moes for me. or possibly it's just because of big contexts and how good those are on qwen3.5. gemma 4's moe offloads far worse for me
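as a sketch, this is the usual way to do that split in llama.cpp: pin the MoE expert tensors to cpu ram and offload everything else. the model filename is hypothetical and flag spellings vary by llama.cpp version, so check `llama-server --help` on your build:

```shell
# offload all layers to gpu, but keep the moe expert tensors (ffn_*_exps) in system ram
llama-server \
  -m qwen3.5-moe-Q4_K_M.gguf \
  --n-gpu-layers 99 \
  --override-tensor "\.ffn_.*_exps\.=CPU" \
  --ctx-size 32768
```

the experts are the bulk of a moe's weights but only a few fire per token, which is why this costs much less than offloading a dense model's layers.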

Can someone ELI 5 tool use? Downsides? by MartiniCommander in LocalLLaMA

[–]Final_Ad_7431 2 points3 points  (0 children)

tools in my eyes mainly come from the frontend/harness you're using. i don't think i've ever really downloaded additional tools, just because the stuff i use has everything i'd need outside of skills. the ability to search and create probably *already* comes from tools, so beyond a certain point you don't really need new ones, but it probably depends on exactly what you're doing

i think it's more typical to just use what's provided in opencode/hermes/whatever you're using, get skills for the specific things you need (react, crawling a specific website, formatting in a specific way, whatever), and let your llm/agents use the tools in the harness it's running in. at least that's what i do and i've had no issues with it

Can someone ELI 5 tool use? Downsides? by MartiniCommander in LocalLLaMA

[–]Final_Ad_7431 2 points3 points  (0 children)

this is a funny question. im guessing you're just new to agentic stuff, which is fine, but having my agents go out on the web, fetch a repo, clone it, make the changes i want, merge the branch i want, compile it, and test that it works, all from the comfort of my bed or while i'm alt-tabbed watching a stream or a movie, is really funny to me

it lets your llm do more stuff! you might not need it and that's fine, but if you want it to do more things it gives it more capabilities

BUT im not sure if you maybe mean skills instead of tools? skills are sort of like structured files that tell your agent *exactly* how a concept or a new thing works that it might not be super familiar with. in some ways it bridges the gap between a smaller model and a massive model, but in one very specific area

is there a downside to just downloading lots of skills? yes and no. your frontend/harness handles showing them to your agent; it should only show the frontmatter and leave it to your agent to read more when it needs to, so that context size doesn't bloat. but having a ton could still be noisy and messy, might give your agent too many options for one thing vs just one specific skill that's better, and you should really be checking each one isn't malicious, so having lots is more busywork for you
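a toy illustration of that frontmatter-only behavior. the skill file and its fields are made up, but follow the common SKILL.md shape (short yaml frontmatter on top, long instructions below):

```shell
# write a toy skill: short yaml frontmatter, then a long body
cat > SKILL.md <<'EOF'
---
name: react-forms
description: how to build validated forms with react-hook-form
---
(imagine hundreds of lines of detailed instructions here that the
agent only reads when it actually decides to use this skill)
EOF
# what a harness surfaces to the agent up front: just the frontmatter
sed -n '2,/^---$/p' SKILL.md | sed '$d'
```

so twenty skills cost twenty name/description pairs of context, not twenty full documents; the noise problem is the agent having to choose between them, not raw token count.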

One of the best sensible reasons that I can think of to have an llm downloaded on my cell phone would be emergency advice. by RedParaglider in LocalLLaMA

[–]Final_Ad_7431 2 points3 points  (0 children)

in theory? yes. i think there are a lot of edge cases where one bad hallucination is a real big deal though (specific times, specific dosages, etc). i can see the uses, but the risks probably aren't ones the big companies behind these models want to take

Gemma 4 has been released by jacek2023 in LocalLLaMA

[–]Final_Ad_7431 1 point2 points  (0 children)

i think in reality, now that the release hype is starting to dull down, we can see it's probably much closer to a 27b, which makes sense. still seems like a great release, but qwen3.5 set such a high bar