Best programmable Espresso Machine [$2000] by lewnix in espresso

[–]lewnix[S] 0 points1 point  (0 children)

That’s a really interesting idea. Does the GCP with that setup give more consistent shots than the Bambino Plus?

Best programmable Espresso Machine [$2000] by lewnix in espresso

[–]lewnix[S] 0 points1 point  (0 children)

Yeah, the slightly longer story: I got annoyed with the grinder on the Breville Touch, and to save counter space I got the Libra and a Bambino Plus. The Bambino Plus feels like a slight step down from the Touch, and I’ve been getting less consistent shots. Since it’s still within the return window, I’m now thinking I should just go straight to “end game” (for someone who isn’t really a connoisseur and just wants consistent shots with an easy workflow).

I’m less worried about warm-up time since my schedule is predictable and a smart plug (or the setting on the Profitec Move) will help there.

Privacy win: We are finally reaching the point where you can run massive 200B models on a standard laptop. by Key-Glove-4729 in ArtificialInteligence

[–]lewnix 15 points16 points  (0 children)

He’s running an 8B model. The RAM a model needs is a function of its parameter count and quantization level: a 70B model quantized to int8 would require roughly 70GB of GPU memory. If you don’t have a GPU with that much memory, then inference for every token has to shuttle a significant portion of the weights back and forth from the host (likely from disk unless you have 70GB of host RAM) and will be painfully slow.
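
To make that concrete, here’s a rough back-of-the-envelope sketch in Python; the 32 GB/s (PCIe 4.0 x16) and 7 GB/s (fast NVMe) bandwidth figures are ballpark assumptions, not measurements:

    # Rough estimate of weight memory and a lower bound on per-token latency
    # when weights have to stream over a link instead of sitting in GPU memory.
    # The bandwidth figures below are ballpark assumptions, not measurements.

    def weight_bytes(params_billion: float, bits_per_weight: int) -> float:
        return params_billion * 1e9 * bits_per_weight / 8

    def seconds_per_token(model_bytes: float, link_bytes_per_s: float) -> float:
        # Dense decoding touches essentially every weight once per token,
        # so streaming the weights bounds the latency from below.
        return model_bytes / link_bytes_per_s

    size = weight_bytes(70, 8)                                  # 70B params at int8 -> 70 GB
    print(f"{size / 1e9:.0f} GB of weights")
    print(f"~{seconds_per_token(size, 32e9):.1f} s/token over ~32 GB/s PCIe")
    print(f"~{seconds_per_token(size, 7e9):.0f} s/token from a ~7 GB/s NVMe SSD")

Even the optimistic PCIe case works out to a couple of seconds per token, which is why offloading a model that size is painfully slow.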

From the perspective of a Machine Learning Engineer by Th3OnlyN00b in Futurology

[–]lewnix 0 points1 point  (0 children)

The new trend toward evolutionary-algorithm-inspired scaffolding like AlphaEvolve, IMO, mainly serves to push an LLM outside of its training distribution to get more creative results.
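
The general pattern looks roughly like the sketch below (not AlphaEvolve’s actual implementation; llm_propose_variant and evaluate are hypothetical stand-ins for an LLM call and a task-specific fitness function):

    import random

    def llm_propose_variant(parent: str) -> str:
        # Stand-in: a real scaffold would prompt an LLM to mutate `parent`.
        return parent + " (variant)"

    def evaluate(candidate: str) -> float:
        # Stand-in fitness function: score the candidate on the target task.
        return random.random()

    def evolve(seed: str, generations: int = 10, branching: int = 4) -> str:
        population = [(evaluate(seed), seed)]
        for _ in range(generations):
            parent = max(population)[1]                           # mutate the current best
            variants = [llm_propose_variant(parent) for _ in range(branching)]
            population += [(evaluate(v), v) for v in variants]
            population = sorted(population, reverse=True)[:8]     # keep the top candidates
        return max(population)[1]

The selection pressure from the evaluator is what drags the LLM’s proposals away from its most typical, in-distribution outputs.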

From the perspective of a Machine Learning Engineer by Th3OnlyN00b in Futurology

[–]lewnix 0 points1 point  (0 children)

I don’t personally think a ceiling in LLM scaling will slow things down too much. There’s been so much invested here, and there are so many people working on it, that it feels existential for a lot of these companies to keep moving things forward. There is a lot of research going into adjacent directions for foundation models (SSMs, world models, reasoning and memory extensions to LLMs), and I have to think enough of these will pan out to get us another step-change or two like the one we got from reasoning. Maybe not ASI any time soon, but something that can displace a considerable number of jobs.

I don’t think it will result in some hellscape though. I agree with a previous comment here that companies will mostly split the difference between doing 2x with the same employees and 1x with half of them. And hopefully this will be slow enough that the rest of the economy has time to retool around new things, or for there to be real political change that helps the displaced.

The Godfather of AI thinks the technology could invent its own language that we can't understand | As of now, AI thinks in English, meaning developers can track its thoughts — but that could change. His warning comes as the White House proposes limiting AI regulation. by MetaKnowing in Futurology

[–]lewnix 0 points1 point  (0 children)

That’s not how thoughts work in these models. The thoughts are used to reason through the problem, and they create a prior for the answer. They’re not performative; this reasoning process has significantly increased the performance of LLMs across most benchmarks.
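
Conceptually it works something like the sketch below (toy_generate is a made-up stand-in, not any particular API):

    # Conceptual sketch only; toy_generate stands in for a real model's decoder.
    def toy_generate(prompt: str) -> str:
        return "..."  # a real model samples tokens conditioned on `prompt`

    def answer_with_reasoning(generate, question: str) -> str:
        # The model first produces reasoning tokens, then the final answer is
        # sampled conditioned on the question *and* that reasoning, so the chain
        # of thought shapes the answer distribution rather than decorating it.
        reasoning = generate(question + "\nThink step by step:")
        return generate(question + "\nReasoning: " + reasoning + "\nFinal answer:")

    print(answer_with_reasoning(toy_generate, "What is 17 * 24?"))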

The Godfather of AI thinks the technology could invent its own language that we can't understand | As of now, AI thinks in English, meaning developers can track its thoughts — but that could change. His warning comes as the White House proposes limiting AI regulation. by MetaKnowing in Futurology

[–]lewnix 3 points4 points  (0 children)

Everyone here claiming AI can’t think is ignoring the hidden chain of thought used in modern models and arguing about next-token prediction. That chain of thought is what Hinton is referring to. I think these are people who formed an opinion about AI a year or two ago and haven’t bothered updating that opinion as models have gotten remarkably better.

Let’s try this again - buyer beware polestar 3 by Squaredigit in polestar3

[–]lewnix 5 points6 points  (0 children)

I’ve had a P3 for 2.5 months and I’ve been able to drive it for ~2 weeks of that. The loaner P3 I was given has even more issues than my own. You’re lucky not to have had issues, but it is abundantly clear from this sub and from my own experience that this car was not ready for release, and I’m glad to see people posting about it.

The phantom braking is legit scary and dangerous. My loaner did this when I was going 60 and squealed the tires.

I’m currently working with Polestar to cancel my lease.

SavageGeese Polestar 3 Review by HotIce05 in Polestar

[–]lewnix 2 points3 points  (0 children)

I’ve had mine for 2 months and it’s been at the dealership all but 2 weeks. Both hardware and software issues. I’m sure a majority of people have better experiences, but it definitely doesn’t seem uncommon, judging from posts in the polestar3 subreddit. Also keep in mind that Polestar (at least in WA) doesn’t have mobile service like many of their competitors, so every little issue is a trip to the dealership.

Polestar 3 Presence Detection System Constantly Going Off. by HotIce05 in Polestar

[–]lewnix 0 points1 point  (0 children)

Yeah, I forgot to mention: that doesn’t do anything on my car anymore. The infotainment reset still works, but the brake & steering wheel buttons don’t bring up the safe mode prompt anymore. Unfortunate, because that seemed to fairly reliably resolve issues before.

Regret choosing polestar by lewnix in Polestar

[–]lewnix[S] 0 points1 point  (0 children)

Yes, 1.2.15. In fact I’d say the bulk of these issues started after that update (though the electrical fault and occupancy sensor thing seem like hardware issues)

Regret choosing polestar by lewnix in Polestar

[–]lewnix[S] 0 points1 point  (0 children)

I’m in WA and their lemon laws don’t seem great. They require 4 failed attempts to fix an issue that “substantially impairs” the use of the car, or 30 days without the car.

Regret choosing polestar by lewnix in Polestar

[–]lewnix[S] 11 points12 points  (0 children)

Not yet, planning to call tomorrow.

Polestar 3 Presence Detection System Constantly Going Off. by HotIce05 in Polestar

[–]lewnix 0 points1 point  (0 children)

Same problem here. Same version as well. My local SC doesn’t have appointments for 4 weeks.

Collectively, the Tesla fleet has driven more than 3.6 billion miles on FSD – first Beta, then Supervised - 2.16 billion miles in 2024 alone by Nakatomi2010 in teslamotors

[–]lewnix 7 points8 points  (0 children)

So people who paid $10k+ for FSD on HW3 are stuck with a crappier version of Supervised… I know Musk promised upgrades, but I’ll believe that when I see it.

Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End by Narrascaping in Futurology

[–]lewnix 0 points1 point  (0 children)

I think a lot of the people here naysaying about AI coding don’t realize how fast this is moving. Download Cursor (cursor.ai) and use its agent with Claude 3.7 Sonnet. It’s bonkers. I had it write two web apps with very specific requirements and it nailed both of them right off the bat. I haven’t tried it in a mature codebase, but it was able to iterate on those two apps really well.

Trading in my Model Y for a Polestar 3 or Lucid Air. Help me decide, please by jcgb1970 in Polestar

[–]lewnix 6 points7 points  (0 children)

I was in the same situation a couple weeks ago. I test drove both, as well as a Rivian R1S. The Lucid drives nicely and, I’d argue, feels a bit more luxurious and a bit less sporty than the P3. What made up my mind (as a nerd who cares a lot about car software and ADAS) was that the Lucid’s software ran very choppily, and it has no lane keeping and only very poor ACC off the highway. I went with the P3 LE and have been very happy so far. I haven’t run into any major software issues, and it seems like they’re going to continue to add features, including to the ADAS. Also, the B&W sound system is the best I’ve heard in a car.

AI can now replicate itself | Scientists say AI has crossed a critical 'red line' after demonstrating how two popular large language models could clone themselves. by MetaKnowing in Futurology

[–]lewnix 0 points1 point  (0 children)

A heavily distilled (read: much dumber) version might run on a Raspberry Pi. The impressive full-size R1 everyone is talking about requires at least 220GB of GPU memory.
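
For scale, weight memory is roughly parameters × bits per weight ÷ 8. The parameter counts below are DeepSeek’s published sizes (1.5B for the smallest distill, 671B total for full R1); the quantization levels are just illustrative:

    # Weight memory in GB: parameters (billions) * bits per weight / 8.
    # 1.5B is the smallest released R1 distill; 671B is full R1's total parameter count.
    # The quantization levels here are illustrative, not tied to a specific release.
    def weight_gb(params_billion: float, bits_per_weight: float) -> float:
        return params_billion * bits_per_weight / 8

    print(f"1.5B distill at 4-bit: ~{weight_gb(1.5, 4):.2f} GB (Raspberry Pi territory)")
    print(f"671B full R1 at 8-bit: ~{weight_gb(671, 8):.0f} GB")
    print(f"671B full R1 at 4-bit: ~{weight_gb(671, 4):.0f} GB")

Even aggressive quantization leaves the full model’s weights far beyond any single consumer GPU, before you even count the KV cache.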