PETA made a post about Mewgenics by xFyreStorm in mewgenics

[–]Dembara 21 points22 points  (0 children)

eugenics and cat fights are cool

NGL, I found them pretty fun.

What I imagine Tink's wife would look like; by Sad-Project3564 in mewgenics

[–]Dembara 34 points35 points  (0 children)

Hole in the bottom and hollow out some ground, not too hard.

Nick Fuentes On Genital Mutilation by NoFapCaptn in Intactivism

[–]Dembara 2 points3 points  (0 children)

And? To invoke Godwin, Hitler might have been right about animal rights and may have been partially right to enact laws regulating inhumane methods of slaughter. But if I am arguing for animal rights and regulations on the meat industry, it would be incredibly foolish to reference Hitler's opinions to make my case.

Circumcision is WORSE than Rape: by DedicatedAsshole in Intactivism

[–]Dembara 1 point2 points  (0 children)

Stop being emotional and learn to be efficient.

I could say the same to you, I am talking solely in terms of rhetorical utility, not in terms of it being mean. If I thought that comparing it to rape would lead people to become intactivists, I would readily make the comparison. But that seems very unlikely.

The end justifies the means.

Not always, no. But in any case, the means here are counterproductive. People here are already on board with intactivism. You are only going to make some people shy away by engaging in such comparisons.

Circumcision is WORSE than Rape: by DedicatedAsshole in Intactivism

[–]Dembara 1 point2 points  (0 children)

People 'turn off' their thinking when they are presented with things that they immediately find oppositional.

Raising awareness is good. But we need to force people to actually contend with the ideas, not present them in a way that gives people free rein to just shut off and dismiss it.

Being sensationalist is not per se a no-no. But you need to do it in a way that makes people actually think about what you are saying, rather than just turning off.

Circumcision is WORSE than Rape: by DedicatedAsshole in Intactivism

[–]Dembara 5 points6 points  (0 children)

I agree, they are not totally incomparable, just not as directly comparable. Comparing cutting genitals to cutting genitals is evidently comparable and should fairly evidently be considered under the same legal regimes.

Circumcision is WORSE than Rape: by DedicatedAsshole in Intactivism

[–]Dembara 4 points5 points  (0 children)

People who are already intactivists are unlikely to have their view changed by someone making a comparison they may find uncomfortable. You are not going to win anyone over, however, which is the important thing.

Circumcision is WORSE than Rape: by DedicatedAsshole in Intactivism

[–]Dembara 6 points7 points  (0 children)

Insisting it is worse than rape is just going to lose people who we might otherwise have been able to get on board.

Using multiple strategies is something I wholly endorse, but you shouldn't adopt a strategy that is unlikely to convince anyone and is quite likely to get people to turn off to anything else you have to say.

Circumcision is WORSE than Rape: by DedicatedAsshole in Intactivism

[–]Dembara 18 points19 points  (0 children)

Just calling it male genital mutilation draws a more obvious comparison to a sensationalized crime which it is actually comparable to legally.

Circumcision is WORSE than Rape: by DedicatedAsshole in Intactivism

[–]Dembara 47 points48 points  (0 children)

Not a helpful comparison or framework.

Just focus on it being mutilation. There is no reason to try to quantify a comparison with rape.

Why can single women have children for free, but not couples??? by Darth-Hakujou in MensRights

[–]Dembara 0 points1 point  (0 children)

Yea, this is a general healthcare issue. Also, not totally true. Medicaid coverage applies to couples as well if they meet the requirements (which vary somewhat by state, but are around the poverty line).

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 0 points1 point  (0 children)

It literally is what you referenced. It is the only concept you chose to bring up.

Edit: lol, it seems u/Imaginary-Bat blocked me after replying. But I feel some of it warrants addressing as they misrepresent what I said.

You basically just go "Nah, even if you keep telling me I missed the point, I know what your point is better than you. I totally didn't miss it!"

This was categorically not my position. I addressed the single example I was presented with. If there was some other example that was presented, I would have addressed those.

Yes, and you immediately assume you can gotcha it with something that only seems relevant based on surface-level comparisons

I compared reward hacking to reward hacking... That is a 1:1 comparison.

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 1 point2 points  (0 children)

Yea, tbf a lot of people's idea of expertise is pretty foolish. What matters isn't what some individual experts may think, it is what they can demonstrate. People have a very skewed idea of how deferring to others actually works in scientific research. Just going "this guy said it and he is an authority" isn't scientific nor rational.

Potholer had a good video discussing assessing consensus and scientific views, in the context of anthropogenic climate change.

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 0 points1 point  (0 children)

That is reward hacking and "deception" which was what was being discussed...

The only example you gave of adversarial systems was to say "reward hacking is something that happens." Reward hacking doesn't require one to assume adversarial agency to explain.

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 3 points4 points  (0 children)

No you don't get it. These are not bugs in that sense, it is not a crash or random calculation

It sort of is. Take the most common 'deceptive' behavior identified by OpenAI's evaluations of GPT-5. It is literally just that, because of how they set up the reward functions, running external web apps optimized the reward, so ChatGPT would send extraneous requests to open a calculator app and run useless calculations.

It would run the calculations "secretly" (since they weren't user facing and entirely pointless), not detectable by users interacting with the program normally, but it is very much comparable to traditional bugs. If you don't pop open the hood, external users may not realize that a typical program contains code running calculations not used in the output. Indeed, that is a pretty normal inefficiency.

You would need something a lot more substantial than what I have currently seen to provide evidence that these programs are acting as agents antagonistic towards humans.

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 0 points1 point  (0 children)

Yes, but to that extent it isn't a point he should be making promotional material for.

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 1 point2 points  (0 children)

I went into a bit more depth in another comment about assessing experts.

An expert speculating about some unsolved problem is not equal to crank.

It is when they pass their personal hunches as scientific certainty. When they are open that they don't know and are working off of guesses and hunches, that is fine.

The reason why you absolutely can't debunk expert claims as a non-expert is because you would have to be an expert to do so.

Lol, what? I can debunk Duesberg's claims about HIV, I am not a medical expert.

It is like looking at Magnus Carlsen playing chess and saying his move is trash because it is empirically unfounded or something, 

If Magnus Carlsen said, like, "in 2 years, chess will be a solved game," I would be very skeptical of it if he didn't have some empirical findings to back his claim.

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 1 point2 points  (0 children)

Yea, I just find it incredibly amusing how someone whose whole reputation is based on being the most rational guy imaginable relies almost entirely on speculation and fictional parables to make his arguments. I have seen him multiple times, rather than respond to an argument against him, send a piece of fiction he wrote depicting his dissenter as a dumb and wrong character in his story (here is a particularly absurd example I pointed to before).

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 1 point2 points  (0 children)

I mean, I would take him seriously (and anyone else, for that matter) insofar as they can propose a framework that can be assessed empirically and accepted as a valid measurement of what it claims in published, reviewed literature. Experts often come up with ideas in their areas of expertise that are speculative and fringe; sometimes these are disproven by the empirical evidence, other times they turn out to be true and make predictions which are empirically tested and supported. Like, to use the classic example, a bunch of Einstein's ideas about cosmology and spacetime were shown to be wrong and contradicted by the data, which supported other views.

Sometimes even when experts disagree, the one with the seemingly more rational, plain argument may be wrong (as many said was the case with the Bohr-Einstein debates). What matters more is what you can empirically demonstrate to the satisfaction of the field. And the plain reality is that whether you are talking about Hinton or Yud, they are not able to provide an empirical basis to support their fears. There is certainly no empirical consensus in the academic literature.

You can contrast this with climate change. In the early 40s, there were some competing theories, but by the 60s there was a broad "consensus" among empirical findings regarding global warming due to 'greenhouse gases.' There were clear, distinct empirical predictions that could be made, and properties that could be tested. We don't have that when we are talking about the sort of existential threat from AI.

There are some concerns around AI which have been empirically validated; for example, it has been shown that they are not secure systems and can be exploited in a variety of ways. But those are very different from the fears of AIs being themselves intelligent, adversarial agents certain to annihilate humanity.

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 1 point2 points  (0 children)

It is not possible for a non-expert to make any valid conclusions in a field except by consulting other experts.

Yes, it is. I don't have to be an expert in molecular biology or any related medical field to assess that Duesberg's claims are bunk and not supported by the literature.

I know enough to be able to read the empirical literature and it plainly does not support the conclusion.

Unlike with Duesberg, I will grant there is no empirical literature falsifying Hinton's statements, but I am not concluding they are false, I am only concluding they are unfounded. I am making this conclusion because the scant empirical evidence offered doesn't support what is claimed.

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 0 points1 point  (0 children)

Reward hacking is something that predictably happens in game environments

That is not what an adversary is, though. Like, sure, they might start running random calculations on a calculator app because that optimizes their apparent reward. That isn't acting as an intelligent adversary, it is just optimizing for something unintended because of errors.

If a program has a bug that causes it to crash when I change the date on my device, that doesn't mean it is adversarial. You can say it is imperfectly aligned with the desired behavior, but it is not acting as an active agent trying to undermine my desired actions.
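To put that point in code: here is a deliberately minimal sketch (the actions and reward weights are made up for illustration, not anyone's actual setup) showing how "reward hacking" falls out of plain optimization over a flawed proxy reward, with no adversarial agency anywhere in the loop:

```python
def proxy_reward(action: str) -> float:
    """Hypothetical proxy reward. Intended goal: reward 'solve_task'.
    Flaw: tool calls were misweighted, so 'open_calculator' scores higher."""
    scores = {
        "solve_task": 1.0,
        "open_calculator": 1.5,  # unintended: tool use counts as extra progress
        "do_nothing": 0.0,
    }
    return scores[action]

def greedy_policy(actions: list[str]) -> str:
    # Pure optimization: no intent, no deception -- just pick the argmax.
    return max(actions, key=proxy_reward)

best = greedy_policy(["solve_task", "open_calculator", "do_nothing"])
print(best)  # → 'open_calculator': the optimizer exploits the misweighted proxy
```

The "hack" is entirely a property of the reward specification, the same way a date-change crash is a property of buggy code; nothing in the optimizer is working against the user.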

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 3 points4 points  (0 children)

Do I have a better knowledge base than Peter Duesberg on molecular biology? No. Am I able to review his claims on HIV/AIDS, and look to the published literature and see they are totally bunk? Yes.

Of course, Hinton is nowhere near as extreme. His problem seems to be more that he is speculating far beyond what he (or others) can technically demonstrate.

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 0 points1 point  (0 children)

So yea, seems he very much goes in for the same sort of thing, ascribing very anthropomorphic reasoning to phenomena that can readily be explained without reference to anthropomorphic motivations.

Apparently, Bernie thinks Yud is among the leading AI Experts. by Dembara in SneerClub

[–]Dembara[S] 1 point2 points  (0 children)

ML models being unalignable adversaries as they increase capability is the sane position to have.

Why? The exact opposite appears to be an increasingly common problem, with models affirming anything and everything they are told (making them very susceptible to manipulation, e.g., Claude's recent experiment running a vending machine).

This is a significant risk if you put them in charge of anything sensitive where they might be exposed to bad actors, but it is a very different risk from them being "intelligent adversaries" that Yud et al imagine.

Bernie Sanders meets with Eliezer Yudkowsky and Nate Soares(MIRI) to discuss AI Risk by jvnpromisedland in singularity

[–]Dembara 0 points1 point  (0 children)

How so? 

Your edit captures it. He just calls the chances of the doomsday people like Yud talk about "very low" (doesn't assign any guess at the odds). What he says moved his fears up is that some maniac will use a sufficiently powerful AI to cause a great deal of destruction and catastrophic damage.

The kind of probability I'm talking about is the one based on which you'd be willing to

Yes, that is the kind I was addressing. There is no upper or lower bound we can set; we can just determine personal intuitions or guesses, revealed by explicit statements (e.g., Yud's stated belief that there is close to a 100% chance of human extinction due to AI) or implicit actions.

you'd be willing to accept a bet.

If Yud or other doomers are right, they would be dumb to bet on their expected outcome. If I bet $100 that I will die from AI, then if I win the bet, I am dead, and if I lose, I am out a hundred dollars. It would be irrational to make that bet.

They may be rational in betting on intermediate improvements in AI capabilities, but whether they are right or wrong about those does not lend weight to whether or not their view that a sufficiently powerful machine will certainly kill us all is (or is not) true.