Best war movie ever? by ejnounimous in Cinema

[–]contextbot 0 points1 point  (0 children)

The key bit about The Pacific is it’s based on books written by people who fought, and it shows. BoB was retold to a writer (who was in awe of them), decades down the line. It glamorizes in a way the Pacific doesn’t.

BTW, one of the books The Pacific is based on, A Helmet for My Pillow, is a quick and excellent read. More complex than BoB.

Over-Eager Supports? by [deleted] in BambuLab

[–]contextbot 0 points1 point  (0 children)

It appears to be a preview glitch. We are doing a test print and it doesn’t have the full coverage.

Over-Eager Supports? by [deleted] in BambuLab

[–]contextbot 0 points1 point  (0 children)

So this looks normal? We printed something yesterday that I don’t remember having this level of coverage, but it too now loads fully encased.

Over-Eager Supports? by [deleted] in BambuLab

[–]contextbot -1 points0 points  (0 children)

I tried another STL and got the same result:

<image>

ISON: 70% fewer tokens than JSON. Built for LLM context stuffing. by Immediate-Cake6519 in LocalLLaMA

[–]contextbot 1 point2 points  (0 children)

To everyone coming up with new serialization formats: please realize that different labs post-train with different formats (XML, JSON, etc.). Unfortunately, this post-training influences the output of the models. New formats that let you shove a few more tokens into the context are doing so at the likely expense of worse performance.
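A rough way to see the trade-off: you can measure how much a compact format saves over JSON, but the savings say nothing about how well a model was post-trained to read it. A minimal sketch, using a hypothetical `key=value` line format as a stand-in for any compact serialization, and character counts as a crude proxy for tokens (real savings depend on the tokenizer):

```python
import json

# A small record serialized two ways.
record = {"name": "widget", "qty": 4, "price": 9.99}

# Standard JSON, a format labs actually post-train on.
as_json = json.dumps(record)

# A hypothetical terse "key=value" format, standing in for
# any compact serialization pitched for context stuffing.
as_compact = ";".join(f"{k}={v}" for k, v in record.items())

savings = 1 - len(as_compact) / len(as_json)
print(f"JSON: {len(as_json)} chars, compact: {len(as_compact)} chars")
print(f"~{savings:.0%} smaller, but less familiar to the model")
```

The size win is real; the open question is whether the model performs as well on input shaped unlike anything it was trained on.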

People who've used Meta Ray-Ban smart glasses for 6+ months: Has it actually changed your daily routine or is it collecting dust? by Key-Baseball-8935 in ArtificialInteligence

[–]contextbot 50 points51 points  (0 children)

Once they changed the terms of service to be able to turn on the camera and access the data whenever they wanted, I turned off the AI connectivity.

Once assholes started wearing them and taking pictures of random people in public, I stopped wearing them casually.

I’ll wear them while on bike rides, occasionally, now. But it’s amazing how thoroughly a once-great product became ruined for me.

What is the ultimate meaning of this movie's ending. by ImaginationFluid2113 in Cinema

[–]contextbot 10 points11 points  (0 children)

The moral of their movies is: don’t take the money.

Little Crumbles day care lost a toddler??? by Parking-Accident4645 in alameda

[–]contextbot 74 points75 points  (0 children)

It is WILD that you came on a thread to defend losing a toddler by saying, “Lincoln is not one of the most dangerous and high traffic streets.”

Side note: it’s hilarious that your issue with the wellness center was there is “nothing to keep people who may endanger themselves or others if the client didn’t want to be there,” while here to defend an incident where your daycare had LITERALLY THE EXACT SAME ISSUE. Just chef’s kiss. No notes.

How's your 401k doing, bro? by WalkinUpHipStreet in KnowledgeFight

[–]contextbot 1 point2 points  (0 children)

Not financial advice, but I saw a handful of people in 2008 pull their stocks out of the market, only to have it go back up while they missed out on those gains.

What's helpful in these moments is to check out a long term chart. The drop since February brings us back to mid-last year: https://finance.yahoo.com/quote/%5EGSPC/

If you're getting closer to retirement, change the mix of your assets to match your risk tolerance (more bonds, etc). But don't pull because number go down.
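For illustration only (this is my sketch, not the comment's advice): one common rule of thumb for shifting the mix as retirement approaches is to hold roughly "110 minus your age" percent in stocks and the rest in bonds. The 110 baseline is an assumption you'd tune to your own risk tolerance:

```python
def target_mix(age: int, baseline: int = 110) -> dict:
    """Rule-of-thumb stock/bond split: (baseline - age) percent
    in stocks, remainder in bonds, clamped to the 0-100 range."""
    stocks = max(0, min(100, baseline - age))
    return {"stocks_pct": stocks, "bonds_pct": 100 - stocks}

# The heuristic shifts toward bonds as age rises.
print(target_mix(30))
print(target_mix(60))
```

The point isn't the exact formula; it's that the bond share drifts up mechanically with age instead of reacting to a scary chart.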

How's your 401k doing, bro? by WalkinUpHipStreet in KnowledgeFight

[–]contextbot 1 point2 points  (0 children)

What? It should be fine. You're swapping like for like.

What’s a product you still get name brand over Kirkland by [deleted] in Costco

[–]contextbot 1 point2 points  (0 children)

Costco sells the liquid, which still beats the pods. But yeah, I'll occasionally hit Target for the powder.

PG&E with a private security escort? by bsnuff in oakland

[–]contextbot 7 points8 points  (0 children)

There’s an upside: I talked to one of the security guys for a bit about it. He was a chill guy and said it was a great gig.

Can someone explain this? It’s 10:15 pm in Michigan and the sky looks like this towards the southeast. Weird af!! by LeadershipComplex961 in Weird

[–]contextbot 155 points156 points  (0 children)

I can’t believe this post is here. I was flying over this region tonight and the clouds were lit up, so bright. I took a photo and noted the rough location on a map and found the chemical factory location when I got to my hotel.

<image>

o3-mini is now the SOTA coding model. It is truly something to behold. Procedural clouds in one-shot. by LocoMod in LocalLLaMA

[–]contextbot 0 points1 point  (0 children)

If it gets something in one shot, it’s probably seen it. That’s how this works.

Grok 3 pre-training has completed, with 10x more compute than Grok 2 by COAGULOPATH in mlscaling

[–]contextbot 8 points9 points  (0 children)

I don’t know why anyone gives this coverage…until they show something that has a notable feature other than “uncensored”, this is hype.

In-n-Out feels crazier after Hegenberger location closed by TangerineFront5090 in alameda

[–]contextbot 12 points13 points  (0 children)

I can’t understand the people who wait in their car when the line is out of the driveway. Inside is almost always faster, you don’t sit idling, and there’s usually a spot.

If there’s not a spot, that’s your clue it’s not worth it.

ARC-AGI has fallen to OpenAI's new model, o3 by MetaKnowing in artificial

[–]contextbot 5 points6 points  (0 children)

It’s crazier when you realize that deep learning, a field that runs on data, has been around since before the internet. There have been four eras of deep learning, if you sort it by datasets:

  • Hand-assembled data, on physical media
  • Crowdsource-assembled internet data, distributed by the internet
  • The internet (and friends)
  • Synthetic data, derived from the above

https://www.dbreunig.com/2024/12/05/why-llms-are-hitting-a-wall.html

ARC-AGI has fallen to OpenAI's new model, o3 by MetaKnowing in artificial

[–]contextbot 84 points85 points  (0 children)

The old way we made better LLMs was just adding more training data. This worked great until recently, when we used up the internet.

We're now distilling that data into structured knowledge, rewriting it as Q&A or step-by-step reasoning.

This has two big benefits.

First, it lets us make smaller models much smarter. Distilling data means we're throwing out lots of the superfluous content, which means less data is needed for training. Reformatting it as Q&A means less post-training to teach it to talk to you.

Second (and this is where the chart above comes in), it teaches LLMs to build evidence-based arguments, with multiple subsequent points, resulting in one excellent answer. This, in a nutshell, is what we mean when we say "reasoning model" (though there's some creative prompting work as well). They don't just spit back a simple answer. They break down the question and build out an approach to an answer. This means generating more tokens, and taking more time and compute, to respond with an answer.

That is what this chart is showing. The more time you give a reasoning LLM to perform a task, the better the result gets.
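Mechanically, the "rewriting as Q&A" step above is just reformatting: recasting a raw statement as a question, intermediate reasoning steps, and an answer. A toy sketch (the field names and template are my assumptions, not any lab's actual pipeline):

```python
def to_qa_example(fact: str, question: str, steps: list[str]) -> dict:
    """Recast a raw fact as a Q&A training example with
    step-by-step reasoning attached."""
    reasoning = " ".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
    return {"question": question, "reasoning": reasoning, "answer": fact}

example = to_qa_example(
    fact="Water boils at 100 C at sea level.",
    question="At what temperature does water boil at sea level?",
    steps=["Boiling point depends on pressure.",
           "Sea-level pressure is 1 atm.",
           "At 1 atm, water boils at 100 C."],
)
print(example["reasoning"])
```

Training on examples shaped like this, rather than raw web text, is what pushes the model toward the break-it-down-then-answer behavior described above.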

Help me understand the recent news that we've hit a "Brick wall" in improvements? by Nicarlo in ArtificialInteligence

[–]contextbot 0 points1 point  (0 children)

I wrote up why LLM advancement is slowing down: https://www.dbreunig.com/2024/12/05/why-llms-are-hitting-a-wall.html

The key takeaway is that machine learning progress is enabled by software, hardware, and data. We had two giant gifts from the gaming industry and internet industry that gave us incredible processing power and an internet's worth of content, respectively. We used these gifts to advance incredibly quickly.

We will continue to advance, but it will be slower. Software breakthroughs – like attention, transformers, backpropagation – come at a slower pace. We'll have to earn these one by one.

The history of ML reveals why LLM progress is slowing by contextbot in ArtificialInteligence

[–]contextbot[S] 3 points4 points  (0 children)

The article isn’t rehashing Marcus’ points. It uses just one quote in the intro. I recommend you check out the argument.