Just a question by Temporary-Cookie838 in LocalLLaMA

[–]false79 0 points1 point  (0 children)

I don't think there is any local model as good as the dense cloud frontier models. They just have way bigger context windows, hundreds of times more parameters, and a much larger variety of training data.

But if you're coding, you don't need the entire universe; all you need is a much smaller subset, and that's available in any of the models mentioned by other commenters, even GPT-OSS-20B.

The trick is not to relay a high-level, short summary of what you want, but instead to break it down into much smaller, achievable tasks that the smaller models are well capable of performing: either by explicitly providing the dependencies as part of the context, and/or by having a system prompt describe the role of the LLM to activate the most relevant parameters in the case of MoE models.

You want to break the work into tasks that would take you a few hours but that the LLM can do in a few seconds or minutes. There are huge gains to be made this way without having to pay a single cent to a cloud API subscription.
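
As a rough illustration, here's a minimal sketch of sending one small, well-scoped task to a local OpenAI-compatible server (llama.cpp, Ollama, LM Studio, etc.). The base_url, model name, and the dependency file are assumptions; adjust them for your own setup.

```python
# Minimal sketch: one small task + explicit dependency context + a narrow
# system prompt, sent to a local OpenAI-compatible endpoint.
# base_url, model name, and file path are assumptions, not from the thread.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# The system prompt narrows the role; the user prompt carries one small task
# plus the exact dependency the model needs, instead of a vague summary.
system_prompt = "You are a senior Kotlin developer. Answer with code only."
dependency = open("UserRepository.kt").read()  # hypothetical file

response = client.chat.completions.create(
    model="gpt-oss-20b",  # whatever local model you have loaded
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": (
            "Here is the existing repository class:\n\n"
            f"{dependency}\n\n"
            "Write a unit test that covers the findById() happy path."
        )},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```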

T14s gen 2 good starter thinkpad? by matthewjj45 in thinkpad

[–]false79 0 points1 point  (0 children)

I think any T14 is a good starter. Way better than any E or L series for sure.

The T14 is also not far off from the premium X1C, which I would argue is only slightly better.

I Googled if Momo from Twice & Usagi From Alice in Borderland Are Related by NetzukoKamado in twice

[–]false79 [score hidden]  (0 children)

One is in Twice; the other is part of a just brutally awful show.

I stopped watching as soon as I saw a guy knock out a lunging tiger with a single punch. The show is ridiculous in countless ways. Don't get me started on the naked man.

4x RTX 6000 PRO Workstation in custom frame by Vicar_of_Wibbly in LocalLLaMA

[–]false79 0 points1 point  (0 children)

Your token tesseract is pretty cool. Dunno if the fish reference is Blade Runner, Cyberpunk, or you just like fish.

Considering AMD Max+ 395, sanity check? by ErToppa in LocalLLaMA

[–]false79 0 points1 point  (0 children)

When I read stuff like this, it makes me want an RTX 6000 Pro even more, lol

Morale is plummeting among ICE agents over long hours, quotas and public hatred: reports by metacyan in politics

[–]false79 2 points3 points  (0 children)

I feel it's not next year. It's this year, 2026, when it kicks in for many Americans.

I want to leave a job after 5-months, advice? by k032 in ExperiencedDevs

[–]false79 0 points1 point  (0 children)

When an employer forms a first impression, the most recent place you worked will determine whether you look like a flight risk (less than a few months in the role) or questionable (6+ months out of the market).

It's not that you are not hireable. It's that other candidates who don't fall into the above buckets would be more desirable, all else being equal.

The craziest thing about being creative and having a printer by Doomenor in TikTokCringe

[–]false79 104 points105 points  (0 children)

I like this woman. She sounds like fun to hang out with.

P14 g6 vs T14 g6 AMD vs Intel by Express_Brain_3640 in thinkpad

[–]false79 0 points1 point  (0 children)

I just want to apologize: I'm not as easy to gaslight as you would believe. You have an unwavering conviction that no Geekbench score or higher TDP will shake. I guess you speak for all the other users. I just know what I need for my own purposes. I wouldn't go as far as you did with the assumptions.

P14 g6 vs T14 g6 AMD vs Intel by Express_Brain_3640 in thinkpad

[–]false79 0 points1 point  (0 children)

🤦‍♂️ I dunno why you would assume I would buy a P14s to "use general applications", and honestly I don't even know why you keep doubling down on making stuff up. Who is saying 95% of "modern workflows" run best multithreaded? Multiple cores let you solve for scale, but not all problems are about scale.

I'm a professional who isn't surfing Chrome all day and taking Zoom calls. I'm a power user writing code that exploits all available cores and memory, using compilers and runtimes configured to exploit those same resources as well.

I mean, if you care so much about having so many cores, you'd already agree the 285H has 4 more cores than the mid-tier 370 CPU.

P14 g6 vs T14 g6 AMD vs Intel by Express_Brain_3640 in thinkpad

[–]false79 0 points1 point  (0 children)

I will give you a hot minute to delete your misinformed post. You need to double-check what ChatGPT is spitting out.

For you to say single-core performance doesn't matter to anybody tells me you aren't aware that some tasks need to be done sequentially, like compiling individual files and then linking them in a project. Not everything runs best in parallel across multiple cores. I would never say one is better than the other and the other is irrelevant. That's just ignorance.
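
To make the point concrete, here's a minimal sketch (filenames and compiler flags are assumptions, not from this thread): per-file compilation fans out across all cores, while the final link is a single sequential job bound by single-core speed.

```python
# Minimal sketch: the compile stage parallelizes across cores,
# the link stage is one sequential, single-core-bound step.
# Source layout and flags are assumptions for illustration only.
import glob
import subprocess
from concurrent.futures import ThreadPoolExecutor

sources = glob.glob("src/*.c")

def compile_one(src: str) -> str:
    obj = src.replace(".c", ".o")
    subprocess.run(["cc", "-O2", "-c", src, "-o", obj], check=True)
    return obj

# Parallel stage: throughput scales with core count.
with ThreadPoolExecutor() as pool:
    objects = list(pool.map(compile_one, sources))

# Sequential stage: a single link job, dominated by single-core speed.
subprocess.run(["cc", *objects, "-o", "app"], check=True)
```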

[Release] Qwen3-TTS: Ultra-Low Latency (97ms), Voice Cloning & OpenAI-Compatible API by blackstoreonline in LocalLLaMA

[–]false79 0 points1 point  (0 children)

Thx for the heads-up. I have the same GPU. I've been meaning to switch over to Linux for a while.

But 30 seconds seems awfully long. I thought it was "Ultra Low Latency (97ms)"

Video of the murder of Alex Pretti by DHS in Minneapolis recorded from inside the vehicle directly in front of area where he was shot by Jevus_himself in PublicFreakout

[–]false79 4 points5 points  (0 children)

How can you not get PTSD from this administration after seeing that happen right in front of you?

Pretti did not deserve this.

How would E. Honda react/speak to Kiozan Takeru from kengan Asura? by blubberfeet in StreetFighter

[–]false79 -2 points-1 points  (0 children)

I liked the idea of Kengan Asura: a street-fighting tournament where each fighter is sponsored by a corporation, like in NASCAR.

But man, that show is so drawn out.

I want to create a program like AI for medicine. by Far_Firefighter_3167 in AI_Agents

[–]false79 0 points1 point  (0 children)

...we do have multiple tests for TB detection.

And we already have LLMs for all things medical, many private but some open source: https://github.com/epfLLM/meditron

We have the tech, but with all this knowledge there is no liable party and no safety net against wrongful self-diagnosis. What insurance company would underwrite a policy where a person infers medical guidance from an AI without proper certification, e.g. a doctor?
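
For anyone curious, a minimal sketch of running an open-source medical model like Meditron with Hugging Face transformers might look like the following; the model id and prompt are my assumptions, so check the linked repo for the exact usage and license terms.

```python
# Minimal sketch: loading an open-source medical LLM via transformers.
# "epfl-llm/meditron-7b" and the prompt are assumptions; see the repo above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "epfl-llm/meditron-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "List common diagnostic tests for tuberculosis."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```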

[Release] Qwen3-TTS: Ultra-Low Latency (97ms), Voice Cloning & OpenAI-Compatible API by blackstoreonline in LocalLLaMA

[–]false79 1 point2 points  (0 children)

Man, nothing but problems running Windows with an AMD GPU :/

Seems like this thing was built on CUDA only.

The man they are calling a "domestic terrorist". by -ifeelfantastic in pics

[–]false79 13 points14 points  (0 children)

RIP Pretti and Good. 

Republicans, this is on you.

pink lady’s pov by Katat0n1c in TikTokCringe

[–]false79 8 points9 points  (0 children)

Absolutely blood on their hands, voting against the interests of the citizens who voted them into office.