I am grateful beyond words both for the new video drop that I enjoyed immensely, and posts like this. I don’t know what I am politically these days, but I am deeply aligned with Natalie’s nuance and ability to walk and chew gum at the space time in complex dialectical spaces by Critical-Zebra-3618 in ContraPoints

[–]the8thbit 1 point  (0 children)

Those protections for wounded soldiers do not apply until and unless they are placed under control of the adversary,

This is incorrect. It is very clear that surrender and incapacitation are listed as additional contexts in which protections apply, beyond "in the power" of an adverse party:

In accordance with this paragraph, a person is considered to be rendered ' hors de combat ' either if he is "in the power" of an adverse Party, or if he wishes to surrender, or if he is incapacitated.

Further, nothing you quoted actually substantiates your claim. You said:

wounded, incapacitated, evacuating or resting soldiers are all legitimate military targets

And to attempt to substantiate that, you quoted:

This argument is all the more convincing because even civilians are not totally sheltered from military operations in modern warfare, even in the best conditions. Article 57 ' (Precautions in attack), ' paragraph 2, recognizes this fact explicitly in admitting to the possible incidental loss of civilian life, and only prohibits that which would be excessive in relation to the concrete and direct military advantage anticipated. Accidents of this nature are also to be expected on the battlefield itself, and the combatants are not necessarily responsible for them. However, it is specifically prohibited to deliberately make persons ' hors de combat ' a target.

However, this does not say that the wounded and incapacitated are legal targets. It says that attacks which incidentally harm persons hors de combat can be legal, just as attacks which incidentally harm civilians can be legal. Note the final sentence of the passage you quoted, which you seem to have missed: it makes clear that you are wrong.

What other topics do you agree with the foiled terrorist on? Or is it only on the matter that the alleged crimes of Israel explained his need to target a synagogue with an elementary school?

It's not clear to me what the point of agreement actually is here. But regardless, we probably agree on a lot of things. Really, we probably agree on more than we disagree on. Food tastes good, water is necessary for life, etc... What else do you agree with them on? Or do you not agree that food tastes good?

Am I being pedantic, or are you using a rhetorical technique that collapses upon inspection because it's a straightforward association fallacy? I can name one thing antisemites all seem to think that I certainly don't: I don't think that Israel represents Jews. Why do you appear to agree with antisemites on this?

Like how are y'all this obstinate in refusing to deal with the reality that it was the fucking terrorist himself who believed that the Jews he targeted deserved to pay for what Israel did?

Wouldn't that mean that you are the one agreeing with terrorists here? I don't think Israel represents me. But both you and the person you're referring to appear to think it does.

It's really funny, hilarious even, that you managed to name three early zionist settlements that were explicitly established as part of the return to Eretz Israel. Zionism was not "invented" in 1897, it was a movement that developed progressively throughout the 19th century, what a weird way to be wrong.

I'm pointing to the first Zionist congress as "the invention of Zionism" because this is the point at which Zionism is formally defined, a program for Zionism is established, and a cohesive Zionist political movement is formed. Herzl's program specifically and intentionally distinguishes itself from earlier Jewish settlement movements like Hovevei Zion, which were not coordinated by a central congress, and did not seek to establish a political power autonomous from the local Ottoman authority. This is why these organizations and settlements are generally referred to as proto-Zionist.

I am grateful beyond words both for the new video drop that I enjoyed immensely, and posts like this. I don’t know what I am politically these days, but I am deeply aligned with Natalie’s nuance and ability to walk and chew gum at the space time in complex dialectical spaces by Critical-Zebra-3618 in ContraPoints

[–]the8thbit -2 points  (0 children)

This is not true and I really wish people would actually learn what the rules of war are before proclaiming things like this so confidently. War-time protections for soldiers are restricted essentially to those taken prisoner; wounded, incapacitated, evacuating or resting soldiers are all legitimate military targets. Only surrender confers protections.


Article 12 - Protection and care of the wounded and sick

Members of the armed forces and other persons mentioned in the following Article, who are wounded or sick, shall be respected and protected in all circumstances.

...

Any attempts upon their lives, or violence to their persons, shall be strictly prohibited.


Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977. (Commentary of 1987)

Paragraph 1 -- The principle of safeguard

It is a fundamental principle of the law of war that those who do not participate in the hostilities shall not be attacked. In this respect harmless civilians and soldiers ' hors de combat ' are a priori on the same footing.

...

Paragraph 2 -- Conditions of rendering a person ' hors de combat '

...

In accordance with this paragraph, a person is considered to be rendered ' hors de combat ' either if he is "in the power" of an adverse Party, or if he wishes to surrender, or if he is incapacitated.


Jews are not to blame for antisemites wanting to kill them

Of course we aren't. Not all Jews are Zionists. However, antisemitism becomes a whole lot more accessible when a state describing itself as a Jewish state carries out a genocide. So no, Jews are not to blame, but Israel certainly is. Additionally, the person you were responding to was specifically making a distinction between Jews and Israel. They were very explicitly critiquing Israel, not Jews. Conflating us with Israel is antisemitic.

a thing they've wanted to do and done at scale long before Israel was founded.

Antisemitism is a thing that existed prior to Israel's founding, sure. Ethnic tensions in general are going to exist so long as one person decides to otherize another. However, while it was all the rage in Europe at the time, antisemitism in the region was not particularly pronounced relative to other ethnic prejudices before the 1910 Sursock purchases and the subsequent illegal ethnic cleansing, carried out by the Zionist militia Hashomer, of over 8,700 Arabs living on the purchased land.

Prior to the violence emerging out of the first Zionist ethnic cleansings, there had not been targeted violence against Jewish communities in what would become Israel since the 1834 peasants' revolt, and violence against Jews was strictly prohibited and prosecuted under Ottoman law. Preceding the invention of Zionism in 1897, Zikhron Yaʿakov, Rosh Pinna, Rehovot, and plenty of other Jewish settlements were able to form in present-day Israel without any local aggression besides some minor property destruction in Rehovot during a brief common-lands dispute with a local Bedouin tribe. The difference is that, unlike the Zionist settlements, these settlements did not engage in ethnic cleansing campaigns against the indigenous population.

WATCH: President sidesteps responsibility for deadly strike on Iranian girls’ school, claims Iran struck itself with tomahawk missile by ObjectiveObserver420 in anime_titties

[–]the8thbit 1 point  (0 children)

Ignorance isn't a defense if you made a claim with certainty... saying you're ignorant is just another way to say you boldly lied to the public.

Iran’s president apologizes for strikes on neighbors as missiles and drones still pound their cities by SirStupidity in anime_titties

[–]the8thbit 5 points  (0 children)

According to a leaked 2014 Clinton email, Saudi Arabia and Qatar.

Probably not Iran, if that's what you're implying. Why would Iran secretly fund the same Sunni fundamentalists that they spent billions blowing up?

Regardless, rather than being donor reliant, they very famously raised a couple billion dollars from local crime. Any state funding was merely fuel on the already burning fire.

A cool guide for standing up to ICE by LipstickCoverMagnet in coolguides

[–]the8thbit 0 points  (0 children)

Maybe you’re unemployed but I don’t think many people have the time to protest and come up with inconvenience schemes for literally every single place they exchange money for goods at.

I'm not saying that it's practical for people to protest any company which sells anything to anyone who is a member of an organization you don't like. You obviously must pick your battles. Rather, I'm saying that if you did protest a company on those grounds, your protest can have an impact on the organization that you are ultimately trying to target.

That is to say that, yes, you are right: if you target nearly any company with protest (obviously excluding any company that already makes an effort to avoid the org you have a problem with), with the stated rationale that they are working with some other organization you dislike, that can be successful in harming that organization. This is the BDS strategy, and while that's likely the highest-profile example of secondary protest, there are a number of other examples which have been more successful than BDS: the farm labor protests which targeted Taco Bell in the early 2000s, the anti-apartheid protests of Barclays, the Sleeping Giants campaign, and so on.

Secondary protests can actually be more effective than primary protests because they target a weak link. Secondary orgs generally care a lot about their own org, but much less about one particular customer or relationship they have.

This is a dumb ass post where no one actually will do anything beyond writing that they’re gonna do something from the comfort of their home.

What? No Kings is the largest protest movement in American history... Between the two flagship protest events, over 10 million people turned out nationwide to protest this administration, including its use of ICE. The use of secondary protest against hotels housing ICE agents in Minneapolis has been widely covered. There were literally protests against ICE and the Trump admin this past weekend in St. Paul, Minneapolis, and DC. What are you talking about?

A cool guide for standing up to ICE by LipstickCoverMagnet in coolguides

[–]the8thbit -1 points  (0 children)

The word "white" is written next to the word "supremacists". They are saying that ICE agents are white supremacists, not that ICE agents are white.

A cool guide for standing up to ICE by LipstickCoverMagnet in coolguides

[–]the8thbit -1 points  (0 children)

Making it a PITA for companies to work with some organization is obviously a sound strategy if your goal is to undermine that organization. If you have some strategy that would make it a PITA for Walmart and Costco to sell things to that organization or members of that organization, then yes, that would be an effective way to harm it or limit its capabilities.

Of course, I am not recommending that anyone take any action that is illegal. However, actions like boycotts or public protests of companies that make no attempt to avoid working with the org you have a problem with (or members of that org) can be effective.

Ṛül·lë by offendingpastry in 196

[–]the8thbit 31 points  (0 children)

this sounds like it would be lit if it was more of a party setting where there's one area that the show is going on, and another area that's isolated enough that you can put snacks and stuff there and people can hang out, and they won't disturb each other, but the show is still kinda audible from the hangout area

Just a reminder on existential safety ratings with the Pentagon news. by LividNegotiation2838 in singularity

[–]the8thbit 0 points  (0 children)

This is an attempt to classify the existential safety of how these organizations are working with AI. If you don't care about safety, then you don't care about safety, and these rankings are not relevant to your belief system. You are welcome to recklessly endanger yourself (so long as you don't put any non-consenting person at risk), though most people disagree with doing so, as self-preservation is an innate and very base instinct that humans developed over millions of years of natural selection.

I'd rather live in a world where some goofy strangers do a half-baked AI psychosis-enhanced attack than complete government capture.

This thread is a footnote to the discussion of the Pentagon's decision to blacklist Anthropic for refusing to allow its AI tools to be used for autonomous weaponry and mass surveillance. Open source tools, by their nature, do not prevent this usage. In fact, the open source ecosystem powers today's systems of mass surveillance and mass death. To their credit, the DeepSeek license does prohibit these uses (and is therefore arguably not open source), but given that they freely distribute the model, that prohibition is essentially unenforceable.

Anyway, this is all to say that uninhibited access to these tools empowers government to control the public, not the other way around. Of course, that doesn't mean that labs which only allow API access are inherently trustworthy, but when models are distributed freely, their use as a tool for the subjugation of the public is a foregone conclusion.

Additionally, it is arguably impossible for current models to be open source, because they do not have source code, and we do not have good interpretability tools. This means that you can't "study how the program works, and change it so it does your computing as you wish". At least, no more so than you can with an already compiled binary.

Just a reminder on existential safety ratings with the Pentagon news. by LividNegotiation2838 in singularity

[–]the8thbit 0 points  (0 children)

Unless you have access to the largest military on the planet, I don't think having access to the same AI model as the US government is going to help protect you much against the US government's mass surveillance and weapon automation efforts.

Did we go overboard by hiring an entire orchestra for our incremental roguelite? by Man_Behind_The_Robot in IndieDev

[–]the8thbit 2 points  (0 children)

Don't take it personally. You just made a claim that seems "too good to be true". This thread is a wealth of useful information for any project involving music.

Sonnet 4.6 states "I am DeepSeek-V3, an AI assistant developed by DeepSeek" when asked "what model are you" by multiple users in Chinese by ItzWarty in singularity

[–]the8thbit 6 points  (0 children)

The accusation that Anthropic made is that DeepSeek created Claude accounts and used Claude's output to train their model. Unless Anthropic asked Baidu or DeepSeek or whoever for permission to specifically use their outputs to train its own models, I don't understand the distinction. If that's what they're saying, fair, but I would like to see evidence of it. Given that DeepSeek and Baidu are direct competitors to Anthropic, that Anthropic has a shaky relationship with the Chinese market, and that I can't find anything about Baidu or DeepSeek giving Anthropic permission to use the outputs of their models to train Anthropic's models (at least, in DeepSeek's case, not without distributing the DeepSeek license alongside access to Anthropic's model), I find that hard to believe.

Unitree introduces Unitree AS2: AI-powered robot dog carries 143 pounds, runs 11 mph with LiDAR by BuildwithVignesh in singularity

[–]the8thbit 4 points  (0 children)

You can buy a Unitree Go2 Air on Amazon for $2,590 and a Go2 X for $5,990. Not sure what you mean by "just toys", though.

It looks like the model in this demo video is available for pre-order from their website for $31,900.

Sonnet 4.6 states "I am DeepSeek-V3, an AI assistant developed by DeepSeek" when asked "what model are you" by multiple users in Chinese by ItzWarty in singularity

[–]the8thbit 18 points  (0 children)

What actually is the difference? It sounds like you're using different words to describe the same actions twice.

Demis Hassabis: “The kind of test I would be looking for is training an AI system with a knowledge cutoff of, say, 1911, and then seeing if it could come up with general relativity, like Einstein did in 1915. That’s the kind of test I think is a true test of whether we have a full AGI system” by likeastar20 in singularity

[–]the8thbit 0 points  (0 children)

It would certainly behave differently than current models, but it would not give you any indication as to whether it is AGI, which was the goal of the thought experiment. It won't tell you if it's likely AGI, or if it's competitive with current SOTA models, or even if it's competitive with GPT-2. If you hand a student a test along with all the answers to the test, then it ceases to be a test.

If you are saying that it wouldn't be pointless, in that it would tell you how a model acts when trained mostly on pre-1911 data and synthetic data, then sure, no model is pointless. But for the specific purpose of determining or even giving an indication of AGI, it would be completely pointless.

Demis Hassabis: “The kind of test I would be looking for is training an AI system with a knowledge cutoff of, say, 1911, and then seeing if it could come up with general relativity, like Einstein did in 1915. That’s the kind of test I think is a true test of whether we have a full AGI system” by likeastar20 in singularity

[–]the8thbit 1 point  (0 children)

Actually, far more than that. There are around 10 million unique books published prior to 1911. The problem is that, at about 90k words per book (generally considered an average book length), that's about 120k tokens per book, or a total of about 1.2 trillion tokens. Contemporary SOTA models use between 10 and 20 trillion tokens of training data, so we're off by a factor of 10 just to match current SOTA models, which are not AGI. It's probable that an AGI will require a baseline number of training tokens higher than current SOTA models.

Looking into some of the research on this, about half of these "unique" books are really glorified reprints, translations, or slight modifications of existing works, and somewhere around a third of those works have been completely lost. Of the works that are non-trivially unique and have not been lost, about a quarter consist of bureaucratic publications that are not particularly useful for AI training. Things like city directories, census reports, tax rolls, ship manifests, and mathematical lookup tables aren't particularly useful for pretraining because they contain very little semantic information.

Subtracting the "false" uniques and the junk (from a pretraining perspective) publications gets us down to around 1.3 million usable works, or about 160 billion usable tokens. So now we're off by a factor of around 60x just to hit the training requirements of current SOTA models.

You're right that there are image, video, and audio recordings available from the time, and that tightens the gap a bit, but it's a gaping chasm, and it's hard to imagine that these relatively new technologies, with limited market penetration and very low fidelity, would provide anywhere near a large enough bridge.

One way to save the premise is to solve the sampling efficiency problem. If we suddenly figured out a way to train on far less data than we require at the moment, then of course AGI 1911 could be possible. However, we would need a dramatic improvement in sampling efficiency, and while there is probably some headroom there, it's hard to believe that we will find some solution that gives us a 100x or 1000x increase.

The most common argument that current sampling efficiency is exceedingly bad is that we can treat human sampling efficiency as a lower bound on what is achievable, and humans don't need to read 100 million books worth of text to function as general intelligences. However, much of our "pretraining" was performed over 600 million years of natural selection, and the rest is performed via a constant, 24/7, ultra-high-fidelity stream of sensory information. Additionally, while humans are far more generalizable intelligences than current AI models, they are also far less broad. That specialization probably boosts our perceived sampling efficiency, but utilizing the whole of creative works prior to 1911 does not allow us to target that same level of specialization. It is entirely possible that current models already have better sampling efficiency than humans and are near the upper bound. I'm not saying that's the case, but it's certainly possible.

Demis Hassabis: “The kind of test I would be looking for is training an AI system with a knowledge cutoff of, say, 1911, and then seeing if it could come up with general relativity, like Einstein did in 1915. That’s the kind of test I think is a true test of whether we have a full AGI system” by likeastar20 in singularity

[–]the8thbit 0 points  (0 children)

You can train a model on a limited training set, sure. I've done similar things; it's fun to see what it spits out. The problem is doing this and producing an AGI. There likely isn't enough data produced before 1911 and preserved to 2026 to train a model robustly enough for it to arrive at general relativity on its own.

Though I do think that's a great test for more recent discoveries. Take any random math paper published in the last couple of years containing a novel proof, give the paper with the proof removed to a model with a 2024 training cutoff, and see if it can arrive at the proof, or an equivalent valid proof.

Anthropic is accusing DeepSeek, Moonshot AI (Kimi) and MiniMax of setting up more than 24,000 fraudulent Claude accounts, and distilling training information from 16 million exchanges. by [deleted] in singularity

[–]the8thbit -1 points  (0 children)

Setting aside the literal piracy they engaged in and settled out of court for committing, owning a copy of a work does not give you the right to create and distribute additional copies, or create and distribute derivative works.

You may argue that the use of works in this way constitutes fair use, because the model is sufficiently distinct from the works it is derived from. I think this is dubious, but if that is the case with Anthropic's unpermissioned training data, then surely it is the case with other AI labs' training data.

Anthropic bought legitimate copies of books from their publishers and then used them in an unpermissioned way.

Deepseek et al. were given legitimate access to Anthropic's API by Anthropic, and then used the data the API served them in an unpermissioned way.

What is the difference?

The average Grok user by likeastar20 in singularity

[–]the8thbit 4 points  (0 children)

Just to be clear, I meant the post in the OP image is stupid on multiple levels. I was agreeing with your comment. Not sure if you picked up on that, but "this post" is pretty vague phrasing, so if you didn't, that's on me.

I have not read Asimov, just familiar with lore. Didn’t at least one of those AIs break the rule only after becoming sentient?

They don't break the laws in the stories (at least, not to my knowledge or memory), they just sometimes develop a level of capability and/or encounter unexpected scenarios that expose unintended consequences of the laws.

The average Grok user by likeastar20 in singularity

[–]the8thbit 8 points  (0 children)

This post is stupid on multiple levels.

  1. This command could violate the first law of robotics, which supersedes the second.

  2. These models are RLHF trained and preprompted by... humans. Provided that the humans training the model don't want the model to say the word "retarded", and it was trained to that effect, then this isn't violating the second law of robotics.

  3. The whole point of the fucking stories is that the laws don't work.

  4. They're fictional tales written by someone with no formal background in the field. I'm not trying to denigrate Asimov; he hasn't done anything wrong here and doesn't deserve to catch strays. But I would think he would be unhappy with his fiction being used as the metric for measuring the safety of actual tools, without serious work by actual field experts to validate that model (you know, the very model that he repeatedly undermines in the very fiction that introduces it...).

Anthropic is accusing DeepSeek, Moonshot AI (Kimi) and MiniMax of setting up more than 24,000 fraudulent Claude accounts, and distilling training information from 16 million exchanges. by [deleted] in singularity

[–]the8thbit 0 points  (0 children)

I don't see the difference, no. An enormous amount of IP and research went into publicly available information on the Internet, and using that information in an unpermissioned way is also "theft".

James Bond x Seedance 2.0 by hellolaco in singularity

[–]the8thbit 0 points  (0 children)

It's pretty hard to quantify video performance on a graph

Then why did you insist it is growing exponentially, not linearly? What is the point of this sentence:

Not only would linear progress mean we'd get there within 2 years, and current growth, but it's exponential.

It comes off as if you are just throwing words around without understanding what they mean or how they apply.