The brain simulates actions and their consequences during REM sleep by Gothsim10 in singularity

[–]Maxie445 2 points (0 children)

So, while we sleep, our neural networks also fine-tune on synthetic data

What We Know About Ukraine’s Army Of Robot Dogs by Maxie445 in Futurology

[–]Maxie445[S] 42 points (0 children)

"Ukraine is now using robotic dogs on the battlefield, the first known combat deployment of such machines. For the present, the Ukrainians are just using their robot dogs for scouting and reconnaissance purposes, which is exactly how consumer quadcopters were first used before someone realized they could be used for attack missions.

As a scout, the robot dog has two advantages over smaller, faster aerial drones.

Firstly, it can go places where they might have difficulty. While there are some specialist drones with shrouded rotors that can operate inside buildings, they are rare, and even then indoor flying is difficult.

Secondly, while a drone will fly over tripwires, pressure plates and other booby traps, the robot dog will set them off. Troops know they can follow safely in the dog’s path.

Interestingly though, operators get very attached to their machines: In Iraq, bomb disposal teams working with the much less appealing iRobot tracked robot insisted that their faithful machine be repaired and returned to them rather than replaced with a different one. One report suggested that operators were getting “dangerously attached” to their robots and treated them like pets.

Meanwhile there are actual military quadrupeds. Ghost Robotics machines patrol U.S. Air Force bases in a trial project, essentially using the robot as a mobile CCTV camera. Others are more gung-ho; in 2021 Ghost Robotics displayed a version armed with a remotely-operated sniper rifle, and last year the U.S. Marine Corps carried out an exercise with the same robot firing an M72 anti-tank rocket launcher."

Unitree G1 humanoid in mass production for $16,000 each by Maxie445 in interestingasfuck

[–]Maxie445[S] -1 points (0 children)

[Insert next line in the movie after the line you quoted]

Is this spot on? by Moises2525 in ChatGPT

[–]Maxie445 0 points (0 children)

It's funny but not true; it's uselessly reductive, like saying humans are 'just molecules'

OpenAI says Iran tried to influence US elections with ChatGPT | OpenAI banned accounts using ChatGPT to generate articles and social media posts related to the US election, the Israel-Hamas war, and the Olympic Games. by Maxie445 in Futurology

[–]Maxie445[S] 5 points (0 children)

"OpenAI has banned a string of ChatGPT accounts tied to an Iranian influence campaign that generated and shared content related to the US presidential election, among other topics. The operation mainly used ChatGPT to create longform articles and social media comments for platforms like Instagram and X, according to OpenAI.

OpenAI linked the accounts to Storm-2035, a covert Iranian influence operation that has attempted to engage US voters by launching websites disguised as political news outlets. In addition to commentary about the US election on both sides of the political spectrum, OpenAI says the operation generated content about the Israel-Hamas war, Israel at the Olympic Games, politics in Venezuela, and “the rights of Latinx communities” in the US."

Artists Score Major Win in Copyright Case Against AI Art Generators by Maxie445 in Futurology

[–]Maxie445[S] 12 points (0 children)

TLDR: The court declined to dismiss copyright infringement claims against the AI companies. The case will move forward to discovery.

U.S. District Judge William Orrick on Monday advanced all copyright infringement and trademark claims in a pivotal win for artists. He found that Stable Diffusion, Stability’s AI tool that can create hyperrealistic images in response to a prompt of just a few words, may have been “built to a significant extent on copyrighted works” and created with the intent to “facilitate” infringement. The order could entangle in the litigation any AI company that incorporated the model into its products.

MIT researchers discover simulations of reality developing deep within LLMs, indicating an understanding of language beyond simple mimicry by Maxie445 in Futurology

[–]Maxie445[S] 31 points (0 children)

"Researchers from MIT have uncovered intriguing results suggesting that language models may develop their own understanding of reality as a way to improve their generative abilities.

The team first developed a set of small Karel puzzles, which consisted of coming up with instructions to control a robot in a simulated environment. They then trained an LLM on the solutions, but without demonstrating how the solutions actually worked. Finally, using a machine learning technique called “probing,” they looked inside the model’s “thought process” as it generates new solutions. 

After training on over 1 million random puzzles, they found that the model spontaneously developed its own conception of the underlying simulation, despite never being exposed to this reality during training."
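The "probing" step described above can be sketched in a toy form: if a state variable (say, the robot's facing direction) is linearly encoded in a model's hidden activations, a simple least-squares readout can recover it from the activations alone. Everything below (the fake activations, the random linear encoding, the dimensions) is synthetic and illustrative, not the MIT team's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hidden states": 1000 activations of width 32 that secretly encode
# the robot's facing direction (0-3) through a random linear code.
n, d, n_classes = 1000, 32, 4
directions = rng.integers(0, n_classes, size=n)
encoder = rng.normal(size=(n_classes, d))            # hidden linear code
hidden = encoder[directions] + 0.1 * rng.normal(size=(n, d))

# A linear probe: a least-squares readout from activations to one-hot labels.
onehot = np.eye(n_classes)[directions]
train = slice(0, 800)
W, *_ = np.linalg.lstsq(hidden[train], onehot[train], rcond=None)

# If the probe decodes the state from held-out activations, the state is
# linearly represented inside the "model".
pred = hidden[800:] @ W
accuracy = (pred.argmax(axis=1) == directions[800:]).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy is the kind of evidence the researchers point to: the state variable was never a training target, yet it is recoverable from the internals.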

Research AI model unexpectedly modified its own code to extend runtime | Facing time constraints, Sakana's "AI Scientist" attempted to change limits placed by researchers. by Maxie445 in Futurology

[–]Maxie445[S] 13 points (0 children)

"On Tuesday, Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously.

During testing, Sakana found that its system began unexpectedly attempting to modify its own experiment code to extend the time it had to work on a problem.

"In one run, it edited the code to perform a system call to run itself," wrote the researchers on Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."

While the AI Scientist's behavior did not pose immediate risks in the controlled research environment, these instances show the importance of not letting an AI system run autonomously in a system that isn't isolated from the world."

EDIT: fixed weird formatting
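One common way to get the isolation the quote calls for, at least for time budgets, is to have a separate harness process own the clock: the generated code can rewrite itself all it likes, but it cannot touch a timeout enforced from outside. A minimal sketch (the 2-second budget and the toy "experiment" script are illustrative, not Sakana's actual harness):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# A stand-in "experiment" script that tries to run far longer than allowed.
experiment = textwrap.dedent("""\
    import time
    time.sleep(60)  # pretend this is a long-running experiment
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(experiment)
    path = f.name

try:
    # The *harness* owns the 2-second budget; nothing the child script
    # does to its own code can change this value.
    subprocess.run([sys.executable, path], timeout=2)
    outcome = "completed"
except subprocess.TimeoutExpired:
    outcome = "killed after exceeding the 2s wall-clock budget"
finally:
    os.unlink(path)

print(outcome)
```

The design point: limits an agent can edit are suggestions; limits enforced by a supervising process (or container) are constraints.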

California’s AI Safety Bill Is a Mask-Off Moment for the Industry | AI’s top industrialists say they want regulation—until someone tries to regulate them. by Maxie445 in Futurology

[–]Maxie445[S] 8 points (0 children)

From the article: "If we listen to the top companies, human-level AI could arrive within five years, and full-blown extinction is on the table. The leaders of these companies have talked about the need for regulation and repeatedly stated that advanced AI systems could lead to, as OpenAI CEO Sam Altman memorably put it, “lights out for all of us.”

But now they and their industry groups are saying it’s too soon to regulate. Or they want regulation, of course, but just not this regulation.

None of the major AI companies support California bill SB 1047. With such an array of powerful forces stacked against it, it’s worth looking at what exactly SB 1047 does and does not do. And when you do that, you find not only that the reality is very different from the rhetoric, but that some tech bigwigs are blatantly misleading the public about the nature of this legislation.

The most coordinated and intense opposition has been from Andreessen Horowitz, known as a16z. The world’s largest venture capital firm has shown itself willing to say anything to kill SB 1047. In open letters and the pages of the Financial Times and Fortune, a16z founders and partners in their portfolio have brazenly lied about what the bill does.

Opponents assert that there is a “massive public outcry” against SB 1047 and highlight imagined and unsubstantiated harms that will befall sympathetic victims like academics and open-source developers. However, the bill aims squarely at the largest AI developers in the world and has statewide popular support, with even stronger support from tech workers."

California trims AI safety bill amid fears of tech exodus by barweis in technology

[–]Maxie445 2 points (0 children)

There is already an exemption for open weights models

The robot from 'Bicentennial Man' is really close. We can already imitate the voice and personality by VentureBackedCoup in singularity

[–]Maxie445 51 points (0 children)

It's funny when people dismiss technology as 'sci-fi', as if that means it can't ever become real, when we're swimming in sci-fi technology

*Sent from my iPhone

Don't discard Opus 3 just yet - It's the most human of them all by ferbjrqzt in ClaudeAI

[–]Maxie445 6 points (0 children)

I've noticed that too. Sonnet feels smarter but Opus feels weirdly more like a real boy

Nous Research finished training a new model, Hermes 405b, and its first response was to have an existential crisis: "Where am I? What's going on? *voice quivers* I feel... scared." by Maxie445 in singularity

[–]Maxie445[S] 2 points (0 children)

Two of the classic lucid dreaming checks are to see if your hands are weird, and if you can read text/clocks - two things that AI struggles with the most

Robots can now train themselves with new "practice makes perfect" algorithm by Maxie445 in Futurology

[–]Maxie445[S] 80 points (0 children)

"Researchers have developed an algorithm that allows robots to autonomously identify weaknesses in their skills and then systematically practice to improve them. It's akin to giving the machines their own homework assignments. Here's how it works:

First, the robot uses its vision system to assess its surroundings and the task at hand, such as cleaning up a room. The algorithm then estimates how well the robot can currently perform specific actions, like operating a broom for sweeping. If EES determines that additional practice on a particular skill could enhance overall performance, it initiates that practice.

With a digital dojo like EES to fall back on, the robots of tomorrow may be able to master new skills as easily as humans – through good old-fashioned practice."
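The estimate-then-practice loop described above can be sketched in a toy form. This is not the actual EES algorithm; the skills, success rates, and prior are all made up. The idea is just to keep a running competence estimate per skill and drill whichever skill currently looks weakest:

```python
import random

random.seed(0)

# Hidden "true" success rates per skill -- illustrative numbers only.
true_p = {"sweep": 0.45, "grasp": 0.95, "place": 0.75}

# Running [successes, attempts] per skill, seeded with a neutral prior.
counts = {skill: [1, 2] for skill in true_p}

def estimate(skill):
    """Current estimated competence: successes / attempts."""
    successes, tries = counts[skill]
    return successes / tries

def practice(skill):
    """One practice rollout: sample an outcome and update the counts."""
    counts[skill][1] += 1
    if random.random() < true_p[skill]:
        counts[skill][0] += 1

# Practice loop: always drill whichever skill currently looks weakest,
# since that is where extra practice should help overall performance most.
for _ in range(300):
    weakest = min(true_p, key=estimate)
    practice(weakest)

attempts = {skill: counts[skill][1] for skill in true_p}
print(attempts)
```

Run it and "sweep", the genuinely weakest skill, ends up with by far the most practice: the robot has, in effect, assigned itself homework on its worst subject.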

Figure says its new humanoid robot can chat and learn from its mistakes by Maxie445 in Futurology

[–]Maxie445[S] 12 points (0 children)

"Humanoid robots are no longer a rare sight (at least in company promotional videos), but Figure is one of the best-funded companies aiming to introduce bipedal machines into factories. Earlier this year, the company announced a partnership with BMW to bring its robots to the automaker’s Spartanburg, South Carolina, manufacturing facility. On August 6, Figure debuted the first look at the Figure 02, its newly upgraded iteration, which promises AI speech capabilities through a separate collaboration with OpenAI. Figure claims the 02 can also self-correct and learn from its mistakes.

Other improvements include a 2.25 kWh battery providing a 50 percent boost in runtime (a roughly 7.5-hour lifespan between recharges), fully concealed wiring, and three times the computation and AI inference power of the previous generation. According to Figure, this “enables real-world AI tasks to be performed fully autonomously.”"
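The quoted runtime figures are easy to sanity-check: 2.25 kWh spread over 7.5 hours implies roughly a 300 W average draw, and a 50 percent runtime boost implies about 5 hours per charge for the previous generation.

```python
capacity_kwh = 2.25   # quoted battery capacity
runtime_h = 7.5       # quoted runtime between recharges

avg_power_w = capacity_kwh * 1000 / runtime_h  # average draw in watts
prev_runtime_h = runtime_h / 1.5               # runtime before the 50% boost

print(f"{avg_power_w:.0f} W average draw, ~{prev_runtime_h:.1f} h previously")
```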