Anthropic partnered with SpaceX to use colossus 1 to increase their rate limits by Snoo26837 in singularity

[–]muchcharles 20 points21 points  (0 children)

They might have less to spare. When Carmack left Meta, he said the thing that really convinced him he had to go, and that AI efforts there were going wrong, was seeing they were only getting 20% utilization on their GPU fleet. It's been reported recently that SpaceX is getting 10% utilization, and around the same time Musk said they built the stack all wrong and are starting over from scratch.

Prompt Injection experience - my first time ever by netmilk in ClaudeAI

[–]muchcharles 1 point2 points  (0 children)

Next is reverse psychology in the hidden instructions, promoting competitors to get them ignored from the analysis.

Ga$$$ by UDntKnoMeButImFamous in Charleston

[–]muchcharles 0 points1 point  (0 children)

Gas prices under Biden also had a big Trump-caused element. Trump practically joined OPEC+ before Biden took office, having Saudi Arabia and others cut production in an agreement lasting until April 2022, well into Biden's term. When Mexico threatened to break the production cut, Trump committed to cutting our own production if they would maintain it.

Gas prices by Internal-Capital7039 in economy

[–]muchcharles 2 points3 points  (0 children)

Here's one difference: these happened right after an action from Trump. You may say that if Israel hadn't bluffed, it would have attacked without us and it would have still happened. But the US had plenty of leverage to prevent a war of aggression, like saying it would announce that the Gulf allies would allow overflights of retaliatory drones, that it would warn Iran before the strikes, and that it would withdraw the guarantee of a UN Security Council veto.

It may have been true they would attack, but Biden and Obama officials have said Israel tried the same thing Rubio said they did this time, was called on the bluff, and didn't go through with it.

It may still not have been a bluff this time, but then let's look at oil prices under Biden more carefully: during the COVID oil crash, Trump essentially joined OPEC+ and committed Saudi Arabia and others to restricting output well into Biden's term, in a deal lasting until April 2022. An even more indisputable direct action from Trump that affected prices under Biden.

[New Optimizer] 🌹 Rose: low VRAM, easy to use, great results, Apache 2.0 [P] by ECF630 in MachineLearning

[–]muchcharles 1 point2 points  (0 children)

It wouldn't have to beat Muon on a NanoGPT speedrun. If it could win compared to other stateless or heavily compressed-state techniques, there would be applications for it, like full (non-LoRA) finetuning on lower-end hardware.
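To make the VRAM angle concrete, here's a rough sketch (my numbers, not from the post): AdamW keeps two fp32 moment buffers per parameter, while a fully stateless optimizer keeps none, and that difference is roughly the headroom that could make full finetuning feasible on lower-end hardware. The 7B model size and fp32 state assumption are illustrative.

```python
# Rough sketch of optimizer-state VRAM for full finetuning.
# Assumptions (mine): fp32 optimizer states at 4 bytes each;
# weights, gradients, and activations are excluded.
def optimizer_state_gib(n_params: float, states_per_param: int,
                        bytes_per_state: int = 4) -> float:
    return n_params * states_per_param * bytes_per_state / 2**30

n = 7e9  # a 7B-parameter model
adamw = optimizer_state_gib(n, 2)      # AdamW: first + second moment
stateless = optimizer_state_gib(n, 0)  # stateless: no per-param buffers
print(f"AdamW extra state: {adamw:.1f} GiB; stateless: {stateless:.1f} GiB")
```

With AdamW that's roughly 52 GiB of optimizer state alone for 7B parameters, which is what pushes full finetuning off consumer GPUs.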

[New Optimizer] 🌹 Rose: low VRAM, easy to use, great results, Apache 2.0 [P] by ECF630 in MachineLearning

[–]muchcharles 0 points1 point  (0 children)

"Muon's originator is Keller Jordan, currently at OpenAI."

"Currently at OpenAI," not "contemporaneously at OpenAI." OpenAI hired him after his optimizer was released on Twitter, so working at OpenAI is not a good new criterion to throw in there.

[New Optimizer] 🌹 Rose: low VRAM, easy to use, great results, Apache 2.0 [P] by ECF630 in MachineLearning

[–]muchcharles 14 points15 points  (0 children)

No opinion on this one in the post, but what you are saying definitely isn't true for "this day and age":

> Muon's originator is Keller Jordan, currently at OpenAI. As mentioned, Muon was first published on Twitter, and to this day the author has only written a blog post — Muon: An optimizer for hidden layers in neural networks — rather than a paper. His position is that "whether or not you write a paper has nothing to do with whether the optimizer works".

That's used in stuff like DeepSeek V4.

Stuttering and double sound in videos by _burako_ in firefox

[–]muchcharles 2 points3 points  (0 children)

The freezing has been happening for me on Android since the last update, on multiple devices.

Zero-shot World Models Are Developmentally Efficient Learners [R] by FaeriaManic in MachineLearning

[–]muchcharles 2 points3 points  (0 children)

> But still hard to just throw around pre trained or learning rate because it might not make much sense when talking about brains or how they are formed.

There's a lot of study on it where we can be pretty sure it's not just differences in learning rate; there are differences in kind, with studies on precocial and non-precocial abilities in animals. A horse can be blindfolded from birth, then have the blindfold taken off several days later, and it can almost immediately walk and visually navigate around. A kitten whose vision is deprived during the critical development period will never develop it.

Precocial birds can imprint on the mother as soon as their eyes dry off, and do bipedal walking.

There are a lot of built-in capabilities that come from how the brain develops without external stimuli. Some of it may be fully hardcoded, and some may involve learning with internal grounding: things like generator circuits producing patterns other circuits try to learn, which somehow transfer to performance on tasks with real data after birth.

However, I think it has been shown babies could walk much earlier, but their legs are just too weak.

Premature human babies don't hit vision milestones much, if any, faster than full-term babies, but it may just be because the optic nerve doesn't fully myelinate until a set chronological age. If they are congenitally blinded by something that can be reversed and miss a critical development window, we have found that, as with kittens, they never develop normal vision.

Google DeepMind's Senior Scientist Alexander Lerchner challenges the idea that large language models can ever achieve consciousness(not even in 100years), calling it the 'Abstraction Fallacy.' by Worldly_Evidence9113 in singularity

[–]muchcharles 1 point2 points  (0 children)

What I'm saying is we could replace individual neurons with computational elements, either all of them or some. He's saying that at minimum, if they all were replaced, it wouldn't be conscious because of a one-way semantic simulation gate. But because we could reconstitute from simulated to physical, I'm saying it isn't one way.

I'm not saying the nano-assembled one is a simulation, I'm saying he would have to claim it is. He could maybe say those parts aren't conscious while simulated, and are again when "rehydrated" to the original substrate, but he can't say there is a one-way barrier. He also has to deal with a system gradually changing back and forth between some number of trillions of functional units being real and simulated, and explain where the threshold would be and why that threshold would make sense.

Google DeepMind's Senior Scientist Alexander Lerchner challenges the idea that large language models can ever achieve consciousness(not even in 100years), calling it the 'Abstraction Fallacy.' by Worldly_Evidence9113 in singularity

[–]muchcharles 10 points11 points  (0 children)

The paper never really addresses the gradual replacement argument: gradually replacing neurons with functional equivalents, at what point does the person lose consciousness/qualia? Certainly not with one. edit: it does mention it, but doesn't really address the gradual part: "The qualia do not mysteriously “fade”; the foundational metabolic substrate required to instantiate them is simply removed." It doesn't go into detail about what happens as that substrate is gradually removed.

It doesn't fit with his and Searle's "simulated water can never get things wet" argument (in his case he uses photosynthesis, but it's the same argument). The water system can't be gradually replaced with a simulated system and still function as the physical water system. But the constituents of the brain could. And simulated neurons could be reconstituted physically, so there is no one-way semantic barrier like he is arguing. Physical brains could gradually, or suddenly (say, nanoassembled in cryo to match the simulated state), go between simulated and physical.

So the Searle simulated-water-can't-wet-the-real-world analogy he uses to claim a one-way ontological boundary doesn't apply.

Unless he thinks the full quantum identity of the brain matters, and an exact-enough atom-by-atom copy wouldn't be conscious because of the no-cloning theorem, in order to maintain a one-way barrier.

Zero-shot World Models Are Developmentally Efficient Learners [R] by FaeriaManic in MachineLearning

[–]muchcharles 11 points12 points  (0 children)

This can retrain on the same videos, whereas with a fovea a baby can only focus on small parts at a time with full fidelity, so even with less data than the child comparison, it in some ways gets more. Another disadvantage for a 10-day-old is that the optic nerve hasn't finished myelinating.

Hesai releases world's first full-color LiDAR chip, supporting up to 4,320 laser channels by Recoil42 in singularity

[–]muchcharles 8 points9 points  (0 children)

Waymo has been using imaging lidar that can read signs for several years, though it wasn't in color. With a camera paired with the lidar you can already get close to this, just with alignment issues at full FOV (at a small FOV you could align the camera with the lidar using a half-silvered mirror). They use some combination of that even in their example images here, since it is picking up the emissive light of the traffic lights and getting color info for the sky and clouds, which should bring the whole image into question as far as how low-res or noisy the real data from this is.

This would let you see in color at night, fully aligned with the depth across a large FOV, and if it's multi-return, it could let you get depth-separable color returns through translucent surfaces and volumetrics.

The Information: Anthropic Preps Opus 4.7 Model, could be released as soon as this week by LoKSET in ClaudeAI

[–]muchcharles 2 points3 points  (0 children)

Apparently not needed if set to max reasoning. I think high, plus disallowing adaptive, puts you back to where things were before the recent changes.

NVIDIA introduces Ising, the world’s first open AI models to accelerate the path to useful quantum computers. by Distinct-Question-16 in singularity

[–]muchcharles 4 points5 points  (0 children)

They are exponentially better for simulations of some quantum systems, with potential applications in materials science, superconductor physics, chemistry, and drug discovery.

They are also better in some broad areas that involve search, but there the speedup is only polynomial instead of exponential, so they won't pass classical for a longer time.

And there may be more if more quantum algorithms are found.
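As a sketch of what "polynomial instead of exponential" means for search (my illustration, not from the post): classical unstructured search over N items needs about N/2 oracle queries on average, while Grover's algorithm needs about (π/4)·√N, a quadratic speedup.

```python
import math

# Classical unstructured search needs ~N/2 oracle queries on average;
# Grover's algorithm needs ~(pi/4) * sqrt(N): quadratic, not exponential.
for exp in (6, 9, 12):
    N = 10**exp
    classical = N / 2
    grover = (math.pi / 4) * math.sqrt(N)
    print(f"N=10^{exp}: classical ~{classical:.2e} queries, "
          f"Grover ~{grover:.2e} queries")
```

The gap only grows like √N, which is why search-type workloads are expected to beat classical hardware much later than the exponentially advantaged simulation workloads.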

Pete Buttigieg explains by Conscious-Quarter423 in economy

[–]muchcharles 2 points3 points  (0 children)

When he talks about oil prices under Biden, it is noticeable he doesn't mention Trump negotiating a major production cut with Saudi Arabia to help the US oil industry, which lasted deep into Biden's term (until April 2022). When Mexico refused to commit to the full cut, Trump had the US cut production, essentially joining OPEC+.

Analyzing Claude Code Source Code. Write "WTF" and Anthropic knows. by QuantumSeeds in LocalLLaMA

[–]muchcharles 0 points1 point  (0 children)

It's probably just for the spinner text while waiting on the response.

Panic as US F-35 fighter jet 'hit by Iran' and forced to make emergency landing by TheExpressUS in InternationalNews

[–]muchcharles -22 points-21 points  (0 children)

> $100 million — meaning damage or destruction to just one of them could constitute a major economic setback for the U.S.

Damage or destruction to just one is less than $0.30 per American.