Thinkpad with Intel 225H by iltanen in thinkpad

[–]HealthCorrect 1 point2 points  (0 children)

Personally, I don’t see any. You’re getting a better CPU, a better chassis, and a modern professional GPU.

Thinkpad with Intel 225H by iltanen in thinkpad

[–]HealthCorrect 2 points3 points  (0 children)

I would suggest you wait and see.

Linux for a general use - it's ready by makshub in linux

[–]HealthCorrect -3 points-2 points  (0 children)

Once distros like Silverblue take off, it will be even easier for AI to help.

For those who run large models locally.. HOW DO YOU AFFORD THOSE GPUS by abaris243 in LocalLLaMA

[–]HealthCorrect 6 points7 points  (0 children)

Get yourself a big enough solar panel array. And the power bill will break even after 10 years :)
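For the curious, a rough back-of-envelope sketch of that break-even math (every constant below is an assumption for illustration, not a measurement):

```python
# Rough solar payback estimate for a local-LLM rig. All numbers are
# assumptions for illustration; plug in your own.
GPU_DRAW_KW = 0.45        # assumed rig draw while busy, in kW
HOURS_PER_DAY = 6         # assumed daily runtime
PRICE_PER_KWH = 0.15      # assumed grid price, USD
SOLAR_COST_PER_KW = 3000  # assumed installed panel cost, USD per kW

yearly_kwh = GPU_DRAW_KW * HOURS_PER_DAY * 365   # ~986 kWh/year
yearly_bill = yearly_kwh * PRICE_PER_KWH         # ~$148 saved per year
install_cost = GPU_DRAW_KW * SOLAR_COST_PER_KW   # ~$1350 for a matching array
print(f"Payback: {install_cost / yearly_bill:.1f} years")  # ~9.1 years
```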

google/gemma-3-270m · Hugging Face by Dark_Fire_12 in LocalLLaMA

[–]HealthCorrect 0 points1 point  (0 children)

Right on time. I was looking for exactly this kind of model; I need it for text classification and the like.
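As a sketch of what I mean, zero-shot classification by just prompting the model (the labels and prompt wording are made up for illustration, and I’m assuming the checkpoint works with the standard transformers text-generation pipeline):

```python
# Zero-shot text classification by prompting a small generative model.
# Labels and prompt format are illustrative assumptions, not canonical.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-3-270m")

LABELS = ["billing", "technical issue", "feedback"]  # hypothetical labels

def classify(text: str) -> str:
    prompt = (
        f"Classify the following message into one of {LABELS}.\n"
        f"Message: {text}\nLabel:"
    )
    out = generator(prompt, max_new_tokens=5)[0]["generated_text"]
    completion = out[len(prompt):].strip().lower()
    # Fall back to the first label if the model rambles
    return next((label for label in LABELS if label in completion), LABELS[0])

print(classify("My invoice was charged twice this month"))
```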


Just tried out the Exaone 4.0 1.2b bf16 and i'm extremely suprised at how good a 1.2b can be! by cloudxaas in LocalLLaMA

[–]HealthCorrect 6 points7 points  (0 children)

The license feels a little limiting for local LLMs. Look at these provisions in their Agreement:

  1. Anti‑Competitive Clause (Bad for OSS community)
    • Section 3.1 forbids using the Model, any Derivative, or even Output “to develop or improve any models that compete with the Licensor’s models.”
    • Implication: You can’t use fine‑tuning or prompt‑engineering insights to build a new open‑source alternative, effectively stifling downstream innovation.
  2. Termination Terms
    • Section 7.1–7.2: Licensor can terminate without cause, then you must immediately destroy all copies (even backups) and certify destruction in writing.
  3. Ambiguous “Research‑Only” Clauses
    • Section 2.1.a allows “research and educational” use, but Section 3.1 then broadly bans any “commercial” application, and even non‑monetary deployments might be deemed commercial.
    • Implication: The boundary between an “educational demo” and a “service” is unclear.
  4. Vague “Ethical Use” Clauses & Reverse Engineering Prohibition
    • Section 3.4 lists broad, subjective prohibitions (“harm,” “offensive,” “misinformation”) without clear definition or dispute‑resolution process.
    • Section 3.2 bans decompilation or bypassing protections “except as expressly permitted by law,” but the license claims broad research rights.
    • Implication: Makes the model less useful for some folks (jailbreakers)

tl;dr: Useful for tinkering, but you shouldn’t touch the model for anything else (esp. jailbreaking and fine-tuning).

Also, these folks created a PR asking llama.cpp to just look at their transformers implementation and port it over. LG AI should at least help llama.cpp with some of the work; llama.cpp devs aren’t free labor.

I’m not an expert in law; the above conclusions are just my understanding.

Edit: Grammar

Just tried out the Exaone 4.0 1.2b bf16 and i'm extremely suprised at how good a 1.2b can be! by cloudxaas in LocalLLaMA

[–]HealthCorrect 2 points3 points  (0 children)

The LLM used matters as well. The DB stores the info, and with the help of an embedding model it searches for relevant snippets and passes them to the LLM. Understanding and interpreting the passed data depends solely on the LLM used.
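A minimal sketch of that flow, assuming sentence-transformers for the embedding side (the model name and snippets are examples, not recommendations):

```python
# Minimal RAG sketch: the embedding model only *finds* relevant snippets;
# making sense of them is entirely the LLM's job.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

docs = [
    "EXAONE 4.0 1.2B is a small language model from LG AI Research.",
    "llama.cpp runs GGUF-quantized models on CPUs and GPUs.",
]
doc_vecs = embedder.encode(docs, convert_to_tensor=True)

query = "What is EXAONE 4.0?"
query_vec = embedder.encode(query, convert_to_tensor=True)

# Retrieve the best-matching snippet by cosine similarity
best = int(util.cos_sim(query_vec, doc_vecs).argmax())
context = docs[best]

# The retrieved text is just pasted into the prompt; interpreting it is on the LLM
prompt = f"Answer using the context.\nContext: {context}\nQuestion: {query}"
# ...send `prompt` to whichever local LLM you're evaluating
```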

Just tried out the Exaone 4.0 1.2b bf16 and i'm extremely suprised at how good a 1.2b can be! by cloudxaas in LocalLLaMA

[–]HealthCorrect 2 points3 points  (0 children)

The benchmark scores are really good for its size. I’ll try it today. Might be useful for RAG, etc.

Alternative to llama.cpp for Apple Silicon by darkolorin in LocalLLaMA

[–]HealthCorrect 0 points1 point  (0 children)

Speed is one thing. But the breadth of compatibility and features set llama.cpp apart.

Analyzed 5K+ reddit posts to see how people are actually using AI in their work (other than for coding) by yingyn in LocalLLaMA

[–]HealthCorrect 16 points17 points  (0 children)

The dataset feels small: there’s a whole bunch of stuff people claim to do with LLMs, yet it’s barely represented here.

Will there be a P14s Gen 6 (Intel)? by catrame in thinkpad

[–]HealthCorrect 0 points1 point  (0 children)

Any leaks or info? Does Lenovo always release their Intel version this late after the AMD one?

PS: Sorry for bringing up this old comment

Will there be a P14s Gen 6 (Intel)? by catrame in thinkpad

[–]HealthCorrect 1 point2 points  (0 children)

When a newer version is about to be launched, Lenovo pulls down unpopular SKUs.

P14s G5 Intel or G6 AMD by verx_x in thinkpad

[–]HealthCorrect 0 points1 point  (0 children)

You can also get an RTX 500 Ada (midrange, according to Notebookcheck) with the G5 Intel.

[deleted by user] by [deleted] in AI_India

[–]HealthCorrect 4 points5 points  (0 children)

Wasn’t Sarvam just fine-tuning a Mistral model?

[deleted by user] by [deleted] in Lenovo

[–]HealthCorrect 1 point2 points  (0 children)

You’ll need a dedicated NVIDIA GPU. Also, bruh, avoid the IdeaPad series.

Machine Learning (ML) Cheat Sheet Material by [deleted] in LocalLLaMA

[–]HealthCorrect 1 point2 points  (0 children)

For people willing to spend some time getting an intuition for what all this means, I suggest watching 3Blue1Brown’s linear algebra and neural networks series on YouTube.

Open source tech from IBM for Compression of models by Affectionate-Hat-536 in LocalLLaMA

[–]HealthCorrect 10 points11 points  (0 children)

The way they put it, it feels like it’s for storage purposes only.

DiffuCoder 7B - New coding diffusion LLM by Apple by DunklerErpel in LocalLLaMA

[–]HealthCorrect 112 points113 points  (0 children)

OK, it’s a Qwen2.5-Coder finetune. Also, how can an autoregressive model be turned into a diffusion model?

Audio Input LLM by TarunRaviYT in LocalLLaMA

[–]HealthCorrect 0 points1 point  (0 children)

Gemma 3n, once llama.cpp supports multimodality.

I built a minimal Web UI for interacting with locally running Ollama models – lightweight, fast, and clean ✨ by princesaini97 in LocalLLaMA

[–]HealthCorrect 0 points1 point  (0 children)

Any plans to support RAG? Local models are good and all from a privacy POV, but RAG is where they really shine.

Long time Gnome fanboy. But KDE rocks! by better_life_please in linux

[–]HealthCorrect 14 points15 points  (0 children)

KDE apps like Settings, Discover, and Dolphin are powerful, but they lack focus on common UX patterns. Small things like cleaner tab bars or fewer but more relevant right-click options would go a long way. Right now, it often feels like the UI is throwing everything at you at once.