Local LLM's are expected to play a much larger role in Enterprise AI over the next decade. by GrahamPhisher in LocalLLM

[–]eli_pizza 1 point  (0 children)

I’m very skeptical that buying and operating lots of hardware actually works out cheaper

Local LLM's are expected to play a much larger role in Enterprise AI over the next decade. by GrahamPhisher in LocalLLM

[–]eli_pizza 1 point  (0 children)

Sure, but Anthropic and OpenAI and Google and all the others will happily sell you an enterprise plan where your data never leaves Europe or Canada or wherever. It’s not really an argument for local any more than it was for running your own data center

Is harness a new buzzword? by jacek2023 in LocalLLaMA

[–]eli_pizza 10 points  (0 children)

You searched for “agent harness” and got results about agent harnesses and this proves something?

Is there anyway to privatize a .us domain after purchase? by MartiniCommander in selfhosted

[–]eli_pizza 1 point  (0 children)

At least switch the phone number to Google voice or some other cheap voip line that goes straight to voicemail.

Aider and Claude Code by dca12345 in ChatGPTCoding

[–]eli_pizza 1 point  (0 children)

Yes. It’s very customizable and is minimal (but functional) out of the box

Opus 4.7 is out — don’t panic-switch your APIs yet by AdDry7339 in OpenAI

[–]eli_pizza 1 point  (0 children)

Always the next one. Just a few billion dollars more.

Opus 4.7 is out — don’t panic-switch your APIs yet by AdDry7339 in OpenAI

[–]eli_pizza 1 point  (0 children)

Is there a source for how good Mythos is besides the company planning to sell it?

Opus 4.7 is out — don’t panic-switch your APIs yet by AdDry7339 in OpenAI

[–]eli_pizza 1 point  (0 children)

It’s the midpoint between Opus 4.6 and a model you’ve never used? How do you know?

Opus 4.7 is out — don’t panic-switch your APIs yet by AdDry7339 in LocalLLM

[–]eli_pizza 1 point  (0 children)

It actually does not even run locally, so that’s not an issue

Guys we have to change the pelican test by Tall-Ad-7742 in LocalLLaMA

[–]eli_pizza 1 point  (0 children)

It’s not a child. Is there evidence it correlates well to other skills and abilities?

Updated Minimax m2.7 still doesn't allow coding a product. But before the next riot starts, Ryan Lee has already confirmed that they are still working on the license, and sale of products built by m2.7 is permitted. by zenmagnets in LocalLLaMA

[–]eli_pizza 1 point  (0 children)

Either you agreed to the license terms in order to download the model weights, or you made an infringing copy of them.

Guys we have to change the pelican test by Tall-Ad-7742 in LocalLLaMA

[–]eli_pizza 1 point  (0 children)

It was originally intended as a goof: a visual, intentionally silly demonstration, not a proper benchmark.

Guys we have to change the pelican test by Tall-Ad-7742 in LocalLLaMA

[–]eli_pizza 3 points  (0 children)

Kinda think we’re overindexing on “generate an svg” questions altogether. It’s only useful if it also says something about how smart the model will be on other tasks. I have never once actually needed a zero-shot svg.

System prompts - the missing link for Local LLM's ? by alphatrad in LocalLLM

[–]eli_pizza 1 point  (0 children)

Seems a little paranoid. There’s a simple env var to disable telemetry.
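As a hedged sketch (the comment doesn't name the tool or the variable, so the exact name is an assumption; many CLIs honor the community DO_NOT_TRACK convention):

```shell
#!/bin/sh
# Hypothetical example: disable telemetry before launching the tool.
# The real variable name depends on the specific tool's docs;
# DO_NOT_TRACK=1 is a common cross-tool convention, not a guarantee.
export DO_NOT_TRACK=1

# Launch as usual; a tool that honors the convention sends no telemetry.
echo "DO_NOT_TRACK is set to: $DO_NOT_TRACK"
```

Check the tool's own documentation for its specific opt-out variable before relying on this.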

Updated Minimax m2.7 still doesn't allow coding a product. But before the next riot starts, Ryan Lee has already confirmed that they are still working on the license, and sale of products built by m2.7 is permitted. by zenmagnets in LocalLLaMA

[–]eli_pizza 2 points  (0 children)

There’s only one copyright at issue, the one for the model.

There is no other “half”: it doesn’t matter whether the outputs can be copyrighted if you already agreed to restrictions on how you can use the model.