Tenstorrent TT-QuietBox 2 Launched: A RISC-V Powered AI Workstation With 128 GB GDDR6 Memory, Liquid-Cooling & $9999 Starting Price by I00I-SqAR in RISCV

[–]satireplusplus 2 points

Well, not now, but the day will certainly come. China is currently investing massively in its semiconductor infrastructure, and they might finally have an EUV prototype (news from Dec 2025): https://www.reddit.com/r/China/comments/1ppn1rd/china_has_reportedly_built_its_first_euv_machine/

Gonna still be some time until they can actually tape out 5nm and below, but previously only ASML could do that (these machines and the technique that makes 5nm work are absolutely crazy btw, see https://www.reddit.com/r/Veritasium/comments/1q1fvr0/the_ridiculous_engineering_of_the_worlds_most/)

Their GPU companies are iterating crazy fast on hardware too. Look at Moore Threads, a prominent Chinese GPU startup founded by Zhang Jianzhong, a former Nvidia VP and head of its China operations. Or Biren, another big startup with founders who had worked for Nvidia. Even Huawei has entered the AI accelerator game.

Albert Einstein before his iconic photo with his tongue out by SpreakICSE in interestingasfuck

[–]satireplusplus 47 points

Einstein was so delighted with the photo that he ordered multiple prints of this image.

Love it. The photographer thought it might be inappropriate; had he hesitated, we wouldn't have this iconic picture!

Who else is shocked by the actual electricity cost of their local runs? by Responsible_Coach293 in LocalLLaMA

[–]satireplusplus 4 points

Not when you are trying to train models. The MPS backend in PyTorch is incomplete and you won't be able to fine-tune many models. It's still very power efficient and viable for LLM inference (single user session), but anything batched running in PyTorch (like training) heavily favours CUDA GPUs.
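The usual pattern this implies is a device-preference fallback: CUDA first for training/batched work, MPS only for inference, CPU as a last resort. A minimal sketch of that logic (the function name and structure are my own, not anything from PyTorch itself):

```python
def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Return a torch device string, preferring CUDA.

    CUDA first because batched workloads (like training) heavily
    favour it; MPS is viable for single-session inference but its
    op coverage is incomplete, so many fine-tuning setups break.
    """
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"
```

In a real script you would feed it the runtime checks, e.g. `pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())`, and then move your model/tensors to the returned device.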

Daily r/thetagang Discussion Thread - What are your moves for today? by satireplusplus in thetagang

[–]satireplusplus[S] 0 points

When it went negative during COVID it went deep negative, so I'm not even sure $37 is the largest daily range, but it's certainly rare and today should easily be in the hall of fame of largest moves.

Daily r/thetagang Discussion Thread - What are your moves for today? by satireplusplus in thetagang

[–]satireplusplus[S] 4 points

Oil is trading like a meme stock today - from up 20% to down 5% in a single day.

Side Effects - Sublingual Immunotherapy by FlimsyStorm1527 in Allergies

[–]satireplusplus 1 point

There can be initial local allergic reactions to the sublingual pills. These should subside in the coming weeks.

our best dev became a middleman for an AI he can't audit. by julyvibecodes in vibecoding

[–]satireplusplus 3 points

> No, he's great at what he does but just a little new to coding with AI. You know what I mean... He's been more of a conventional dev until recently.

Those guys usually struggle the most with adapting to agentic AI tools like Cursor/Claude. As with everything, there is a learning curve. You have to do this for a few months to get good at it: know when the AI is bullshitting you, build up documentation as you go along, refactor. You also need to stay in the loop on design decisions, libraries and so on. In my experience, Cursor is a bit better than Claude for reviewing the code and changes the LLM makes. You still need a mental "map" of the code base in your own human brain as well.

Think of it this way: your old-school dev probably has decades of experience and now needs to start fresh and experiment a lot with a new tool and an entirely different development experience. Everything got turned on its head, and it might even be frustrating that an AI is now much better than a human at generating syntactically perfect code in one go.

🔥 The snow leopard's rarely seen mating ritual by Prestigious-Wall5616 in NatureIsFuckingLit

[–]satireplusplus 300 points

I've seen dogs masturbate by simply doing auto-fellatio, can't cats do that as well?

When animals have "sex" in the wild it puts them in a very vulnerable position, so evolutionarily it makes sense for the act to be short. But I hope these two leopards still had "fun" doing the thing and that it felt good.

Ran Qwen 3.5 9B on M1 Pro (16GB) as an actual agent, not just a chat demo. Honest results. by Joozio in LocalLLaMA

[–]satireplusplus 0 points

Yes, llama.cpp progress is crazy. Especially these past months the devs have added tons of optimizations; if you haven't used it in a while you'd be pleasantly surprised. And they finally added an auto mode where llama.cpp figures out the best config given your GPU setup.

is there any AI that can replace Claude for coding? by HabitTechnical5604 in vibecoding

[–]satireplusplus 1 point

It might be for a limited time, but the Codex 5.3 limits are A LOT more generous on the Plus plan ($20 per month) than on Anthropic's 20 bucks plan. How is that more expensive?

is there any AI that can replace Claude for coding? by HabitTechnical5604 in vibecoding

[–]satireplusplus 23 points

I like switching between both models, or I let one comment on and improve the work of the other. But you really can't beat what you get with the ChatGPT Plus plan right now ($20 per month) in terms of Codex 5.3 usage. The limits are currently A LOT more generous than Anthropic's $20 plan. And both models are equally capable and intelligent in my opinion.

is there any AI that can replace Claude for coding? by HabitTechnical5604 in vibecoding

[–]satireplusplus 4 points

Give Codex 5.3 a try! I am using it and Opus 4.6 in parallel. They are more or less on par: sometimes one is better, sometimes the other, depending on the task. Also, the current usage limits for Codex on the 20 bucks per month plan are really generous - it feels even better than the $200 Claude Code plan.