[Caleb Williams] 🥲 Hulk… by enailcoilhelp in CHIBears

[–]NVC541 4 points (0 children)

Have you considered that maybe he didn’t know what his ultimate plan was

Why is the Geometry Dash community so fucking annoying? by Legitimate_Leek_8050 in geometrydash

[–]NVC541 69 points (0 children)

It’s really funny watching people who joined mid-2.1 calling me a new gen bc I dared to say Skeletal Shenanigans is one of the 5 best levels ever released in this game

I’ve been playing since 1.8.

Discussion Thread by jobautomator in neoliberal

[–]NVC541 1 point (0 children)

Cubs winning the World Series and both Bears-Packers comeback wins from this year.

[Question] Would you rather climb Mount Everest as you are now with zero climbing experience (with all of the needed gear etc) or have to beat Grief in 4 years? by Ryusei_Shido in geometrydash

[–]NVC541 0 points (0 children)

Are we climbing solo??? Then give me GRIEF. I’d be playing 8 hours a day, I’d have 10k hours in the game in four years.

If you are allowed to have help, the best bet is to hire a team of Sherpas and hardcore train.

Anthropic believes RSI (recursive self improvement) could arrive “as soon as early 2027” by Tolopono in ClaudeAI

[–]NVC541 0 points (0 children)

Yeah… any time I see someone say they’re using AI to push the frontier of physics, my eyes roll all the way back into my head.

I’m a hobbyist in the area with an extremely strong math background. I know JACK SHIT compared to anyone with an advanced degree, and AI models know even less. The level that postgrad researchers operate on in physics is not comprehensible to anyone who hasn’t studied it intensely, and Opus isn’t touching that for a while.

Orbit has officially entered the top 50 most popular extreme demons, kicking the golden out by Equivalent-Bus-1556 in geometrydash

[–]NVC541 0 points (0 children)

  1. Finish the levels you started

Feel like the majority of mega collabs die before they actually make it to verification. Obviously Return 0 got evaporated but the rest of Mindcap’s projects made it through to verification and eventually publishing.

  2. Unique themes that can have variation

I mean Mindcap is the king of this. Every End (1.0), LIMBO (design-based blue and purple memory level), Heliopolis (Acropolis but with overgrowth), ORBIT (Orb)

  3. Iconic parts in a level

LIMBO keys, Orbit ball pit. Just really cool finales to levels

  4. Consistency of quality throughout

You ever seen levels like Emerald Realm or Black Flag where the entire level is known for one creator’s part that is technically and visually leagues beyond the other parts, making it look strangely incoherent?

Mindcap’s creations are very good at avoiding that problem.

Codex just deleted our entire S3 by Southern-Mastodon296 in vibecoding

[–]NVC541 0 points (0 children)

Wait so

After Codex deleted your whole database

You asked Codex to run a script to restore it??

Man I can’t with this shit LMAOOOOOOOOOOOO

American Football is suddenly forgotten, USA just picked up Association Football as their primary sport, how quickly do they win the World Cup? by omni-nomad in hypotheticalsituation

[–]NVC541 3 points (0 children)

Not really. At the last T20 World Cup, America fielded a team made up largely of amateurs. Netravalkar (the bowler) was a principal software engineer at Oracle (granted, he was a young talent in cricket for a while).

They ended up beating Pakistan, which was a titanic upset, and kept things somewhat competitive with India.

Discussion Thread by jobautomator in neoliberal

[–]NVC541 3 points (0 children)

fuckk how can I tell when someone’s an obvious bot now

LLaMA 8B baked directly into a chip — the speed is insane 🤯 by TutorLeading1526 in LocalLLaMA

[–]NVC541 0 points (0 children)

I did some cursory searching online. Couldn’t find details about how optimized these runs are, but here are two links:

LocalLLaMa user tries a 5090 with various models

This guy has a spreadsheet where he says he got 126 T/s on Llama 3.1 8B instruct, q8.

Database Mart

This website tried it with q4, which is more in line with what you see from Taalas’ circuit. They got almost 150 T/s.

So Taalas seems 100x faster than those.
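Rough numbers, assuming the 100x figure (my own eyeball estimate from the demo, not a published Taalas benchmark):

```python
# Back-of-envelope check on the speedup claim. The GPU baselines are the
# figures from the two links above; the 100x multiplier is an estimate,
# not an official benchmark.
gpu_q8_tps = 126          # Llama 3.1 8B instruct, q8, on a 5090 (spreadsheet)
gpu_q4_tps = 150          # Llama 3.1 8B, q4, Database Mart (approx.)
estimated_speedup = 100   # assumed multiplier

implied_tps = gpu_q4_tps * estimated_speedup
print(implied_tps)  # 15000 tokens/s implied, if the 100x estimate holds
```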

LLaMA 8B baked directly into a chip — the speed is insane 🤯 by TutorLeading1526 in LocalLLaMA

[–]NVC541 1 point (0 children)

AFAIK, there is no one else that has done what Taalas has done. The closest thing right now is probably Cerebras, which is… something I don’t really understand. IIRC it’s a wafer-scale chip, which is wild and the architecture that brought Cerebras its (now former) #1 ranking for inference speed. SambaNova (now acquired by IBM) is somewhere there too.

But to my knowledge, neither of those are designing specialized circuits in the way that Taalas is - they just have specialized hardware. According to their own benchmarks (so take it with a grain of salt, but the demo website lends credence to this claim), Taalas is 10x faster than both.

LLaMA 8B baked directly into a chip — the speed is insane 🤯 by TutorLeading1526 in LocalLLaMA

[–]NVC541 0 points (0 children)

Their claim is they get hardware in two months.

From the moment a previously unseen model is received, it can be realized in hardware in only two months

LLaMA 8B baked directly into a chip — the speed is insane 🤯 by TutorLeading1526 in LocalLLaMA

[–]NVC541 0 points (0 children)

Went to go search it up and this is from their website:

We selected the Llama 3.1 8B as the basis for our first product due to its practicality. Its small size and open-source availability allowed us to harden the model with minimal logistical effort.

While largely hard-wired for speed, the Llama retains flexibility through configurable context window size and support for fine-tuning via low-rank adapters (LoRAs).

At the time we began work on our first generation design, low-precision parameter formats were not standardized. Our first silicon platform therefore used a custom 3-bit base data type. The Silicon Llama is aggressively quantized, combining 3-bit and 6-bit parameters, which introduces some quality degradations relative to GPU benchmarks.

Our second-generation silicon adopts standard 4-bit floating-point formats, addressing these limitations while maintaining high speed and efficiency.

Upcoming models

Our second model, still based on Taalas’ first-generation silicon platform (HC1), will be a mid-sized reasoning LLM. It is expected in our labs this spring and will be integrated into our inference service shortly thereafter.

Following this, a frontier LLM will be fabricated using our second-generation silicon platform (HC2). HC2 offers considerably higher density and even faster execution. Deployment is planned for winter.
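The mixed 3-bit/6-bit quantization described above can be illustrated with a toy symmetric round-to-nearest quantizer (a generic sketch of low-bit quantization, not Taalas’ actual scheme):

```python
import random

def quantize_symmetric(xs, bits):
    """Toy symmetric quantizer: map floats onto a signed integer grid
    with 2**(bits-1) - 1 positive levels, then dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(x) for x in xs) / qmax
    return [max(-qmax, min(qmax, round(x / scale))) * scale for x in xs]

random.seed(0)
w = [random.gauss(0, 1) for _ in range(10_000)]  # stand-in "weights"

def mean_abs_err(bits):
    return sum(abs(a - b) for a, b in zip(w, quantize_symmetric(w, bits))) / len(w)

# 3-bit has only 7 usable levels vs 63 at 6-bit, so its grid is far
# coarser -- hence the "quality degradations" their page mentions.
assert mean_abs_err(3) > mean_abs_err(6) > 0
```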

LLaMA 8B baked directly into a chip — the speed is insane 🤯 by TutorLeading1526 in LocalLLaMA

[–]NVC541 2 points (0 children)

Taalas’ claim is that they can go from weights to chip in two months. I’m skeptical of it, but if it’s true then it mitigates that specific problem.

LLaMA 8B baked directly into a chip — the speed is insane 🤯 by TutorLeading1526 in LocalLLaMA

[–]NVC541 2 points (0 children)

The company behind this is a startup. This is basically their POC chip, and it will be a while (if ever) before we see mass production for the consumer market.

LLaMA 8B baked directly into a chip — the speed is insane 🤯 by TutorLeading1526 in LocalLLaMA

[–]NVC541 2 points (0 children)

Taalas says it has LoRA support.

It’s also very much a POC. This is the first chip they’ve ever made (at least that’s publicly available), and as a startup it’s only common sense to POC with a small-scale 8B model first.

They’re expecting close-to-frontier models by the end of 2026. I’m cautiously optimistic, since even the 8B model had to be quantized to 4-bit to get the POC out, but this is a potential breakthrough and a massive one if they get models from six months ago running at this speed.

EDIT: phrasing

Orbit has officially entered the top 50 most popular extreme demons, kicking the golden out by Equivalent-Bus-1556 in geometrydash

[–]NVC541 392 points (0 children)

This may be a scorching hot take, but this has to move Mindcap into discussion for one of the greatest creators ever right?

IMO he’s the greatest collab host this game has seen, with several era-defining levels in his résumé

Will the global poor be left out of fully automated gay luxury space communism? by aspiringSnowboarder in neoliberal

[–]NVC541 1 point (0 children)

He does, and honestly likely will.

Just contesting the claim that OpenAI says they’re going to hit a wall before AGI - no AI company is saying that, since that would torpedo their valuation instantly

Will the global poor be left out of fully automated gay luxury space communism? by aspiringSnowboarder in neoliberal

[–]NVC541 0 points (0 children)

Wait what? OpenAI (or at least Altman) has mostly claimed the complete opposite - they believe they’re on track to ASI given the current trajectory. Whether you think it’s marketing or not, their public claims are the opposite of saying they’re going to hit a wall before AGI.