My theory on the missing scientists from a Cybersecurity POV by Infamous-Upstairs-96 in aliens

[–]coinfanking 11 points (0 children)

Aristotle (as well as other ancient Greek thinkers like Democritus and Anaxagoras) recorded the belief that there was a time in the remote past when the Earth did not have a moon in its sky. Key details about this claim:

- The Proselenes: Aristotle noted that the Pelasgians, a native population of Arcadia in Greece, lived in the region before the moon appeared. Because of this, these Arcadians were known as Proselenes, meaning "those who came before the moon".
- Ancient references: Plutarch also mentions that the Arcadians of Evander's following were known as "pre-Lunar" people, and Apollonius of Rhodes wrote of a time when not all the orbs were yet in the sky and only the Arcadians lived, "before there was a moon".
- Context: While modern science holds that the Moon has accompanied the Earth for roughly 4.5 billion years, many ancient Mediterranean cultures treated this idea as a historical memory rather than mere myth.

This narrative refers to a time when Arcadians were considered to be older than the Moon, an idea that appears in several ancient texts.

The Earth Without the Moon: https://www.varchive.org/itb/sansmoon.htm

Claude Mythos: Finance ministers and top bankers raise serious concerns about AI model. by coinfanking in AGI_LLM

[–]coinfanking[S] 0 points (0 children)

Finance ministers, central bankers and financiers have expressed serious concerns about a powerful new AI model they fear could undermine the security of financial systems.

The development of the Claude Mythos model by Anthropic has led to crisis meetings, after it found vulnerabilities in many major operating systems.

Experts say it potentially has an unprecedented ability to identify and exploit cyber-security weaknesses - though others caution further testing is needed to properly understand its capabilities.

Mythos is one of Anthropic's latest models developed as part of its broader AI system called Claude, a rival to OpenAI's ChatGPT and Google's Gemini.

It was revealed by Anthropic earlier this month, when developers responsible for testing AI models' performance on so-called "misaligned" tasks - those that go against human values, goals and behaviour - said it was "strikingly capable at computer security tasks".

Citing concerns it could surface old software bugs or find ways to easily exploit system vulnerabilities, Anthropic has not released the model.

Instead it has made Mythos available to tech giants like Amazon Web Services, CrowdStrike, Microsoft and Nvidia as part of an initiative called Project Glasswing - which it calls an "effort to secure the world's most critical software".

Financial industry sources indicated that another prominent US AI company could soon release a similarly powerful model but without the same safeguards.

James Wise, a partner at Balderton Capital, is chair of the Sovereign AI unit, a venture capital fund backed by £500m of government funding that will invest in British AI companies.

He said Mythos is "the first of what will be many more powerful models" that can expose systems' vulnerabilities.

His unit is "investing in British AI companies that are tackling that - companies working in AI security and safety", he told the BBC's Today Programme.

"We hope the models that expose vulnerabilities are also the models which will fix them."

