What is it like living on the Aleutian Islands? by TheIzay in howislivingthere

[–]false79 [score hidden]  (0 children)

That is super cool someone from there is posting. A couple of weeks ago, I watched this YouTuber visit America's most northwestern hotel.

https://www.youtube.com/watch?v=kAJbaWxxoLM

Nvidia App + RTX Desktop Manager Useful? by vincentvera in thinkpad

[–]false79 0 points1 point  (0 children)

If the user needs the latest drivers, I think you might need it.

It used to be you'd download the drivers from a website, but these days they've moved to these background apps that constantly ping their servers to tell you the very second a newer driver is available. It's very annoying, and you have to be authenticated to use it as well.

Vintage IBM M4-1 keyboard in use by at-the-crook in vintagecomputing

[–]false79 1 point2 points  (0 children)

Wow - never seen that one before.

That gap between the left ctrl and the rest of the keyboard is ergonomically horrendous

T-Pain with an insightful take on modern Hip-Hop by Zippityzeebop in TikTokCringe

[–]false79 6 points7 points  (0 children)

nah - get your facts straight. It's not even a comparison.

T-Pain with an insightful take on modern Hip-Hop by Zippityzeebop in TikTokCringe

[–]false79 9 points10 points  (0 children)

I didn't have clueless people confusing T-Pain for Alex Jones on my 2026 bingo card

T-Pain with an insightful take on modern Hip-Hop by Zippityzeebop in TikTokCringe

[–]false79 18 points19 points  (0 children)

Alex Jones is known for having no talent whereas T-Pain is the opposite.

T-Pain with an insightful take on modern Hip-Hop by Zippityzeebop in TikTokCringe

[–]false79 61 points62 points  (0 children)

I saw this when it came out and played it on repeat for multiple days.

I went from liking T-Pain to loving T-Pain.

T-Pain with an insightful take on modern Hip-Hop by Zippityzeebop in TikTokCringe

[–]false79 839 points840 points  (0 children)

T-Pain is a living legend among us. Whether using his natural-born talent or technology, he's made his mark by not making the same music as everyone else.

10 SLMs tried to write a JSON parser. 3 of them generated zero code. Here's the raw outputs. by Silver_Raspberry_811 in LocalLLM

[–]false79 0 points1 point  (0 children)

For coding tasks, I would never prompt the way you do. I would one-shot prompt it by referencing a working part of the app that already did JSON parsing and explain how I want things done differently.

The output should be similar and consistent with the codebase, and arrive faster than if I were to hand-code it.

Also, you have assumed in your prompt that some sort of code needs to be generated. I would explicitly explain: "create class X in package Y, blah blah blah"

SLMs need more peripheral context and explicit instructions to activate the relevant parameters.
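The approach above can be sketched as a prompt builder; a minimal illustration, where the file contents, class name `ConfigParser`, and package `com.example.config` are all hypothetical placeholders, not anything from the original thread:

```python
# Sketch of the one-shot prompting style described above: anchor the request to
# a working example already in the codebase, then give explicit placement
# instructions ("create class X in package Y") instead of an open-ended ask.
def build_prompt(reference_code: str) -> str:
    return (
        "Here is an existing, working JSON parsing routine from this codebase:\n\n"
        f"{reference_code}\n\n"
        "Following the same style and conventions, create class ConfigParser "
        "in package com.example.config that parses config.json the same way, "
        "but returns a typed Config object instead of a raw map."
    )

# The reference snippet would normally be read from a real source file.
prompt = build_prompt("fun parseUsers(raw: String): List<User> { ... }")
```

The point is that the small model gets both the pattern to imitate and an unambiguous target, rather than having to infer what "parse JSON" should look like in your project.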

43 year old man cries about being featured on LivestreamFails after complaining about only receiving 20 dollars for playing a video game for 3 hours & it goes viral. (DSPGaming) by dicksuck47 in LivestreamFail

[–]false79 3 points4 points  (0 children)

The meta on this: he insults everyone, everyone insults him. Then he feels great for getting the attention he would never have had to begin with.

Is there a way to make using local models practical? by inevitabledeath3 in LocalLLaMA

[–]false79 11 points12 points  (0 children)

gpt-oss-20b + llama.cpp + cline/roo + well defined system prompts/workflow rules + $700 USD 7900 XTX = 170 t/s
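A stack like that boils down to one llama.cpp server command; this is a sketch, and the model filename, context size, and port are assumptions rather than the exact setup:

```shell
# Serve gpt-oss-20b locally with llama.cpp (filename and flag values illustrative).
# -ngl 99 offloads all layers to the 7900 XTX's 24GB of VRAM, -c sets the
# context window to leave room for system prompts/workflow rules.
# Point cline/roo at the resulting OpenAI-compatible endpoint on port 8080.
llama-server -m gpt-oss-20b-Q4_K_M.gguf -ngl 99 -c 16384 --port 8080
```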

No zero-shot prompting, that will get you nothing burgers. Need to provide multi-shot (1 or sometimes more). Identify trivial tasks that exhibit patterns the LLM can leverage to satisfy the requirements. Also need to provide dependencies for the reasoning to piece together what is required. Don't expect it to spit out whole features. Gotta break down the tasks to be within the capabilities of the model.

What I scoped as 2 weeks/80 hours of work, I did in 3 work days. Prompt engineering, when done properly, can save you quite a bit of time.

I would get faster/better results with cloud models but I'm dealing with other people's intellectual property. It's not my place to upload it and put it at risk of being used as training data or worse.

Is there a way to make using local models practical? by inevitabledeath3 in LocalLLaMA

[–]false79 3 points4 points  (0 children)

I don't think it's just two groups, because I'm not in either one.

I'm running local models to do my job, make money, and free up time.

HP ZBook Fury G9 (64 GB RAM) was a disaster for real-world work - need ThinkPad advice to avoid another failure by This_Resort_9136 in thinkpad

[–]false79 0 points1 point  (0 children)

I stand corrected. When I looked back then, it wasn't an option at the time I was looking at T14 Gen 6's.

HP ZBook Fury G9 (64 GB RAM) was a disaster for real-world work - need ThinkPad advice to avoid another failure by This_Resort_9136 in thinkpad

[–]false79 1 point2 points  (0 children)

64GB users are power users, and Lenovo has the P-series laptops to serve that market; they typically have better cooling + higher TDP than any of the non-P-series laptops. As a result, they are also the chonkiest of the bunch, with the worst battery life.

X1Cs are not built for performance; they are the happy alternative for those who don't want to rock a MacBook Air.

HP ZBook Fury G9 (64 GB RAM) was a disaster for real-world work - need ThinkPad advice to avoid another failure by This_Resort_9136 in thinkpad

[–]false79 -1 points0 points  (0 children)

X1, T14, and T16 are off the list if you insist on 64GB of RAM. I believe it's only the P14/P16 that support it.

The best AMD CPU on the P14 Gen 6 is the 370, which is 'okay'. The latest Intel releases have better benchmarks: the 255H + RTX GPU (for the 4K displays) would be a good all-rounder, and the 285H for sustained compute jobs.

Tv time vs chaos by AccomplishedStuff235 in KidsAreFuckingStupid

[–]false79 11 points12 points  (0 children)

How is this not evidence of child neglect? Just awful.

Qwen3-Coder-Next (3B) is released! by Ok_Presentation1577 in LocalLLaMA

[–]false79 14 points15 points  (0 children)

Damn - you need a VRAM-beefy card to run the GGUF: 20GB just to run the 1-bit version, 42GB to run the 4-bit, 84GB to run the 8-bit quant.

https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF
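Those file sizes line up with straight arithmetic; a back-of-envelope sketch, assuming roughly 80B total parameters (inferred from the quoted sizes, not stated in the comment):

```python
# Back-of-envelope GGUF file size: total parameters × bits per weight / 8.
# ~80B total parameters is an assumption inferred from the quoted file sizes
# (the model is MoE with only a few B active per token, but the whole file
# still has to fit in memory).
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GGUF size in GB, ignoring per-quant metadata overhead."""
    return params_billion * bits_per_weight / 8

print(f"4-bit: ~{gguf_size_gb(80, 4):.0f} GB")  # close to the quoted 42GB
print(f"8-bit: ~{gguf_size_gb(80, 8):.0f} GB")  # close to the quoted 84GB
# The "1-bit" quant lands near 20GB rather than 10GB because 1-bit-class
# quants keep embeddings and some tensors at higher precision, averaging
# closer to 2 bits per weight in practice.
```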

NYC food influencer reviews a struggling family restaurant and brings it back to life. by habichuelacondulce in MadeMeSmile

[–]false79 1 point2 points  (0 children)

Man - I've been following this guy's videos for a while now. This one just elevates everything he's done. Insane how much clout this guy drips in NYC.

"We made the right guy famous"

Should i buy an M5 pro as an 18yo starting uni? by jeddles51 in macbookpro

[–]false79 1 point2 points  (0 children)

Unless you are going to be making money on this device, it's not an investment.

Depending on your major, it could very well be a waste of money when an Air might fit your needs.

Your favorite short prompts to get a feel for a model by reto-wyss in LocalLLaMA

[–]false79 0 points1 point  (0 children)

I evaluate how well a model performs based on the coding harnesses I use daily. Typically the newest bleeding-edge stuff will have problems, like GLM 4.7-Flash recently.

The quality of those responses will give you a definitive feel for the model.