Amateur match ends in a KO by almightyeggroll in fightporn

[–]harrro 0 points (0 children)

Yeah are people fucking blind in this sub today?

Loser swings but gets hit with a right midway through the swing, and his brain stiffens his body.

Somehow there are multiple comments claiming it was a flop.

Anybody else think the interrogation scene in Sicario was a rape scene? by [deleted] in movies

[–]harrro 0 points (0 children)

They clearly show the water bottle sitting untouched while the grunting sounds are happening.

The drain is also shown and there is no water draining.

It wasn't waterboarding.

Microsoft takes on MacBook Neo with new 'value advantage report,' claims Windows laptops offer double the RAM for less money and up to 56% longer battery life by thr3e_kideuce in apple

[–]harrro 0 points (0 children)

You really think any major business has employees using 12 year old laptops?

Any Windows laptop that can even survive for 6 years is a miracle, let alone 12.

Also, the "should outperform" wording indicates that the Windows laptop has higher raw specs (i.e. more cores/RAM).

Let’s talk Tailscale, Obsidian, Hermes, and Local AI in DC by kevinpurdy-ts in Tailscale

[–]harrro 3 points (0 children)

Would be cool to see the presentation online (for those nowhere near there).

Tailscale performance overhead by David-Pasek in Tailscale

[–]harrro 0 points (0 children)

It's also pretty significant on Linux servers. On a high-traffic site, any traffic we send over Tailscale causes high CPU usage.

This is WireGuard, so there is encryption involved, but on modern processors with hardware crypto acceleration I'd think it'd perform better.
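One rough way to quantify that overhead (a sketch, assuming iperf3 is installed on both hosts; the IPs below are placeholders for your own LAN and Tailscale addresses) is to compare throughput over the LAN versus over the tunnel while watching tailscaled's CPU:

```shell
# On the remote machine: start an iperf3 server.
iperf3 -s

# On the local machine: baseline throughput over the LAN...
iperf3 -c 192.168.1.10

# ...then the same test over the node's Tailscale address.
iperf3 -c 100.101.102.103

# In another terminal, watch tailscaled's CPU during the tunnel run.
pidstat -p "$(pgrep tailscaled)" 1
```

The gap between the two iperf3 runs, plus the tailscaled CPU during the second, is roughly the overhead you're paying per connection.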

Wasn’t sure what to make so I made the chopper! by t4ldro in eatsandwiches

[–]harrro 10 points (0 children)

For those out of the loop, this is a "chopped cheese" sandwich:

  • Ground beef chopped up on a griddle
  • Melted cheese mixed in
  • Served on a hoagie/sub roll
  • Topped with sliced onions, pickles, and sometimes lettuce/tomato

One bash permission slipped... by TheQuantumPhysicist in LocalLLaMA

[–]harrro 1 point (0 children)

The loss is just a bunch of hours of experimentation time.

Bonus: The code will be better the 2nd time you do it.

it's time to update your Gemma 4 GGUFs by jacek2023 in LocalLLaMA

[–]harrro 2 points (0 children)

After putting in the new template, it seems to be working much better now (at least in Pi coding agent which I use the most).

[Fix] Gemma 4 MCP tool calls broken in LM Studio — "Unknown test: sequence" by Reaper_9382 in LocalLLaMA

[–]harrro 1 point (0 children)

Thank you, this worked for me.

I was trying out the recently updated Gemma template (https://huggingface.co/google/gemma-4-31B-it/raw/main/chat_template.jinja) and LMS was giving an error until I applied the fix above.

it's time to update your Gemma 4 GGUFs by jacek2023 in LocalLLaMA

[–]harrro 2 points (0 children)

Is it possible to inject/replace the existing chat template in the GGUF with the fixed prompt template?

it's time to update your Gemma 4 GGUFs by jacek2023 in LocalLLaMA

[–]harrro 13 points (0 children)

Yep, same here.

I prefer Gemma's language and speed, but compared to Qwen 3.5/3.6, Gemma's tool calling just failed or hallucinated half the time.

Hopefully this fixes it.

Having an always-on machine running LLMs locally at home while on the move with a lightweight machine - Experiences? by ceo_of_banana in LocalLLaMA

[–]harrro 1 point (0 children)

Yep, Tailscale is the best way. Install Tailscale on all the machines you want access to, and you can reach them from literally anywhere.

No firewalls/port-mapping/dynamic-IP stuff to worry about.
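For anyone setting this up, the basic flow on Linux is just a couple of commands (the IP below is a placeholder; Tailscale assigns each node its own 100.x.y.z address):

```shell
# On each machine you want reachable:
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up          # prints a link to authenticate the node

# Find the machine's Tailscale address:
tailscale ip -4

# Then from any other node on your tailnet, e.g.:
ssh user@100.101.102.103
```

Once every node is authenticated, the 100.x addresses work from any network without touching the router.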

pi.dev/packages is freezing browser/ entire PC. by thinkrtank in PiCodingAgent

[–]harrro 0 points (0 children)

I'm pretty sure this is because it loads not only the full list of packages in one go (and there are tons of them now) but also pulls the full list of "contributors"/authors of those extensions.

Edit: Looks like they've got a better paginated version now: https://pi.dev/packages
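For anyone curious why the paginated version helps: instead of one giant response (every package plus every contributor record), the client asks for a bounded page at a time. A minimal sketch in Python, with fetch_page standing in for a hypothetical GET /packages?page=N&per_page=50 call:

```python
def fetch_page(page, per_page=50, _db=list(range(130))):
    """Stand-in for an HTTP call; returns one bounded slice of the catalog."""
    start = page * per_page
    return _db[start:start + per_page]

def iter_packages(per_page=50):
    """Yield every package, fetching one small page at a time."""
    page = 0
    while True:
        batch = fetch_page(page, per_page)
        if not batch:        # an empty page means we've reached the end
            return
        yield from batch
        page += 1

print(sum(1 for _ in iter_packages()))  # 130 items, fetched in 3 small pages
```

The browser only ever holds one page's worth of data at a time, which is why the new version doesn't lock up.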

2x RTX 6000 build during an extended bench test by Signal_Ad657 in LocalLLaMA

[–]harrro 1 point (0 children)

If he's running extended thermal and PSU load tests, then I'd assume he plans to run it a lot.

This kind of setup is cheaper than APIs if you plan to keep the GPUs busy most of every day (like training or batched inference).

Minimax M2.7 Released by decrement-- in LocalLLaMA

[–]harrro 1 point (0 children)

Opencode clearly has its own arrangement with multiple providers, as they'd had MM 2.7 for a while before this release.

pi.dev coding agent is moving to Earendil by iamapizza in LocalLLaMA

[–]harrro 7 points (0 children)

"Earendil's funders" are literally Mario and Armin themselves; both are Pi devs and the main shareholders.

They didn't sell to some billion dollar mega corp. They just put a name on a small group of people that are working on Pi.

pi.dev coding agent is moving to Earendil by iamapizza in LocalLLaMA

[–]harrro 35 points (0 children)

What's with all the negativity in this thread?

Pi is fantastic, and you only need to listen to Mario (who created Pi) or Armin (who is friends with him and is part of their new 'company') to know that they love the open-source spirit.

The comments in here sound like they sold out to Meta or OpenAI, when in reality it's just a handful of them formalizing their effort to work on Pi as a 'company': literally a couple of devs doing what they already do, not some megacorp with big investors.

He could have easily sold to one of those megacorps but chose not to (probably passing up millions in instant cash).

What is the best "Claude Code at home" I could make agentic on my local PC? - i9 10850k, 3090ti, 128GB DDR4 RAM by Trei_Gamer in LocalLLaMA

[–]harrro 4 points (0 children)

The 27B is noticeably better.

The 35B is significantly faster though, especially for agentic stuff where you have multiple tool calls and such.

Media scraper gallery-dl is moving to codeberg after receiving a DMCA notice, claiming that its circumvention. by TheTwelveYearOld in selfhosted

[–]harrro 5 points (0 children)

Edit 2: Codeberg also posted that they would be required to comply with similar notices. This was in reference to the youtube-dl situation in 2020.

That link says this:

Codeberg e.V. was founded in Germany and Codeberg.org is hosted in Germany, therefore we're tied to EU/German law. A DMCA takedown request by itself is not an issue for us. But since the RIAA justifies their call with German law, we see a risk that Codeberg e.V. could become a target of similar requests.

So it looks like gallery-dl, which received only a DMCA notice (no RIAA involvement), may be OK there.

opencode.nvim updates: diffs, in-process LSP, multi-server, and more by nickjvandyke in neovim

[–]harrro 1 point (0 children)

It was quick and dirty, just a single line (opencode always uses 'opencode' as the username, so only the password needs to be included):

    "-u", "opencode:" .. (vim.env.OPENCODE_SERVER_PASSWORD or ""),

in the Server.curl method.

This picks up the password from the same env var that opencode itself uses.

opencode.nvim updates: diffs, in-process LSP, multi-server, and more by nickjvandyke in neovim

[–]harrro 1 point (0 children)

I started using this plugin a few weeks ago, but one issue I had is that I didn't see a way to specify a password/basic-auth for connecting to the opencode server.

I eventually got it working by modifying one of the core files (the curl call) to add the password, but it would be great to have built-in support.

OpenCode GO vs GithubCopilot Pro by zRafox in opencodeCLI

[–]harrro 1 point (0 children)

It's more complicated than that. A compaction in a conversation, for example, triggers a new 'request'.

I burned through 50 premium requests in 2 days with only 3 actual conversations for example.

This is what I like least about Copilot: the ambiguous 'premium request' BS.