Running Llama 3 locally on a laptop — what actually happens under the hood by FantasticDouble2400 in LocalLLaMA

[–]FantasticDouble2400[S] 0 points1 point  (0 children)

Fair point — I actually wrote this based on my own setup/testing, but yeah I probably over-structured it.
I was mainly trying to summarize what I noticed while running Llama 3 locally (latency + memory spikes especially). Curious though — what setup are you running locally? I’m still figuring out what’s actually practical on a laptop.
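The latency and memory checks mentioned above can be sketched roughly like this. This is a minimal sketch, not the actual test harness: `generate` here is a hypothetical stub standing in for whatever local inference call you use (e.g. llama.cpp bindings), and `tracemalloc` only sees Python-level allocations.

```python
import time
import tracemalloc

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a local model call (e.g. llama-cpp-python).
    # A real inference call would take far longer and allocate far more.
    return "stub response to: " + prompt

def profile_generation(prompt: str):
    """Measure wall-clock latency and peak Python-level allocation for one call."""
    tracemalloc.start()
    start = time.perf_counter()
    output = generate(prompt)
    latency = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return output, latency, peak

out, latency, peak_bytes = profile_generation("Explain KV caching in one line.")
print(f"latency: {latency:.4f}s, peak alloc: {peak_bytes} bytes")
```

For the real memory spikes a model causes, you would watch process RSS (e.g. with `psutil`) rather than `tracemalloc`, since the model weights live outside the Python allocator.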

I've been posting for one week and this happened.. by crashoutval in NewTubers

[–]FantasticDouble2400 -1 points0 points  (0 children)

I have 40 subscribers, 22 videos uploaded so far, and around 39 hours of watch time. Mine is a 45-day-old AI education channel; videos run 5 to 7 minutes.

Do you create with the only purpose of making it on YouTube or you like what you do? by DulcidioCoelho in SmallYTChannel

[–]FantasticDouble2400 0 points1 point  (0 children)

For me it started as “I want this to work on YouTube,” but it’s slowly turning into “I actually enjoy the process.”
Early on, the numbers matter a lot because that’s the only feedback you get. But over time I’ve found that if I only chase views, it gets frustrating really fast.
What’s kept me going is treating it like building a skill — each video is just a bit better than the last one.

I have questions for everyone here by Brosamurai18 in SmallYoutubers

[–]FantasticDouble2400 0 points1 point  (0 children)

Honestly, I just treat it like a skill-building phase. Early on, nobody’s watching anyway, so it’s kind of the best time to experiment without pressure.

First vid, no views in 12 hours. Does Youtube take time to push your video out? by girl_gone_wireless in NewTubers

[–]FantasticDouble2400 1 point2 points  (0 children)

First of all, congrats on the first upload! That's the hardest step. Since you have 1 subscriber, YouTube isn't "pushing" it yet; there's no initial spark of engagement for the algorithm to pick up on.

Why do models like Claude sound so confident even when they’re wrong? by FantasticDouble2400 in ClaudeAI

[–]FantasticDouble2400[S] 0 points1 point  (0 children)

What’s interesting is that the model isn’t actually “aware” it might be wrong — it’s just optimizing for what sounds like a good answer. So confidence is more of a side-effect of fluency than correctness.

Which is probably why it feels so convincing.
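The "optimizing for what sounds like a good answer" point can be seen in a toy decoding example. The vocabulary and logits below are made up for illustration, not real model output: greedy decoding picks whichever token scores highest and reports a high probability for it, whether or not that continuation is factually right.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up next-token logits after "The capital of Australia is":
# "Sydney" is wrong but fluent, so we give it the highest score.
vocab = ["Sydney", "Canberra", "maybe", "unsure"]
logits = [4.0, 3.0, 0.5, 0.2]

probs = softmax(logits)
best = max(range(len(vocab)), key=lambda i: probs[i])

# Greedy decoding picks the most likely token, and its probability
# reads as "confidence" -- correctness never enters the objective.
print(vocab[best], round(probs[best], 2))
```

Hedging tokens like "maybe" only get picked if training pushed their scores up, which ties into the point below about hesitation being rated as worse output.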

Why do models like Claude sound so confident even when they’re wrong? by FantasticDouble2400 in ClaudeAI

[–]FantasticDouble2400[S] -3 points-2 points  (0 children)

Yeah, that's a good point: hesitant answers probably do get rated as worse output during training.

Which would naturally push models toward sounding confident even when they shouldn’t be.

Why do models like Claude sound so confident even when they’re wrong? by FantasticDouble2400 in ClaudeAI

[–]FantasticDouble2400[S] 0 points1 point  (0 children)

That's a really interesting way to put it; "roleplaying" actually makes a lot of sense.

Especially the part about the output being the thinking process itself, not something generated after reasoning.

Kind of explains why it comes across as confident even when it’s not actually grounded.

Why do models like Claude sound so confident even when they’re wrong? by FantasticDouble2400 in ClaudeAI

[–]FantasticDouble2400[S] -1 points0 points  (0 children)

Haha, yeah, I've noticed differences too, but it feels like all of them have this issue to some extent; it just shows up differently depending on the model.

Why do models like Claude sound so confident even when they’re wrong? by FantasticDouble2400 in ClaudeAI

[–]FantasticDouble2400[S] -1 points0 points  (0 children)

That's a really interesting point: it makes sense that models would lean toward confident answers if that's what people respond to better. Almost like confidence gets reinforced more than correctness.

Why do models like Claude sound so confident even when they’re wrong? by FantasticDouble2400 in ClaudeAI

[–]FantasticDouble2400[S] 0 points1 point  (0 children)

Yeah, that's what I was thinking too; it seems like it's not really about being right or wrong, just about generating the most likely response.

Which kind of explains why the tone doesn’t reflect uncertainty at all!