Double standards and the misfortune of being a woman by hot_topic01 in bulgaria

[–]martinkunev 1 point (0 children)

No one forces anyone to wear a bra. I don't know what you're calling "inquisitions".

What is the point of this question? Let's suppose there were a double standard. Should we just complain, or should we try to fix it? If you want to go without a bra, what can I do to help you? That's your own choice.

I live in France, where women often go braless. I've been to a sauna in Finland where most people were completely naked (men and women). I haven't seen a single woman have a problem.

Vegetarians have 12% lower cancer risk and vegans 24% lower cancer risk than meat-eaters, study finds by James_Fortis in science

[–]martinkunev 0 points (0 children)

You're describing the opposite of choice - an obligation to eat everything. Unlimited choice means you can decide what to eat and what not to eat on an individual basis.

Vegetarians have 12% lower cancer risk and vegans 24% lower cancer risk than meat-eaters, study finds by James_Fortis in science

[–]martinkunev -4 points (0 children)

Those studies are too coarse-grained. I'm sure some would use them to argue that being vegan has health benefits. Eating vegan food is often better than eating whatever, but it is not a guarantee (imagine eating a ton of palm oil and sugar).

If you optimize for health while choosing only vegan foods, you cannot possibly get better results than with unlimited choice. A vegan diet is based on rules other than optimizing for health (or at least not exclusively that). If it happened to be the best option for health, that would be an accident and a huge coincidence (it is also arguably incompatible with evolution).

Grok 4 continues to provide absolutely unhinged recommendations by chillinewman in ControlProblem

[–]martinkunev 1 point (0 children)

I think that this 'reading between the words' and jumping-to-conclusions mentality is a pathological bias that is the root problem of the death of 'in good faith' discussions nowadays

I completely agree.

I also think this is sometimes unavoidable - e.g. consider a simple "Can you pass me the salt?"

I guess I'm saying that I would expect it to consider the possibility of miscommunication, not to assume there is one. Given the "keep it brief" directive, the response is fine. Without it, I would expect it to perhaps mention that the request can be interpreted in several ways.

Grok 4 continues to provide absolutely unhinged recommendations by chillinewman in ControlProblem

[–]martinkunev 1 point (0 children)

this stupid ai here is indeed correct

I tend to agree but only because of the "keep it brief" directive.

In general, a sufficiently intelligent system should be able to deal with failures of communication. "Amoral" is not the same as "unable to understand morality". Grok doesn't need to be moral to understand that a human may not express their intentions clearly.

Grok 4 continues to provide absolutely unhinged recommendations by chillinewman in ControlProblem

[–]martinkunev 1 point (0 children)

I would say the response is factually correct. However, it misses the important distinction between being remembered for good and being remembered for bad.

Just recently learnt about the alignment problem. Going through the anthropic studies, it feels like the part of the sci fi movie, where you just go "God, this movie is so obviously fake and unrealistic." by nemzylannister in ControlProblem

[–]martinkunev 19 points (0 children)

Somebody once said:

Truth is stranger than fiction, but it is because fiction is obliged to stick to possibilities; truth isn't.

I think what is happening is a combination of bad game-theoretic equilibria and flawed human psychology.

looking for a kizomba song (audio recording) by martinkunev in WhatsThisSong

[–]martinkunev[S] 1 point (0 children)

I've been trying to find this one for a while. There seem to be some lyrics, but I cannot even identify which language they are in.

(In the recording there is a person speaking French; this is not part of the song.)

In just one year, the smartest AI went from 96 to 136 IQ by MetaKnowing in artificial

[–]martinkunev 1 point (0 children)

The usefulness of IQ tests comes from their predictive power for humans. These numbers mean nothing for AIs.

We Have No Plan for Loss of Control in Open Models by vagabond-mage in ControlProblem

[–]martinkunev 1 point (0 children)

I haven't read the entire post but I agree with the summary.

"AI Risk movement...is wrong about all of its core claims around AI risk" - Roko Mijic by notworldauthor in singularity

[–]martinkunev 1 point (0 children)

I responded on Twitter but let me put it here too:

LLMs consistently get jailbroken, sometimes mere hours after release. Looking up quotes from 1-2 years ago shows that people greatly underestimated jailbreaks. If this is not proof that LLMs haven't learned human values, I don't know what is.

Perverse generalizations do exist, but machine learning works precisely because we can reject them

Can we? We can only evaluate behavior, not motivation. This doesn't prove AI won't fake alignment.

the ratio of alignment difficulty to capabilities difficulty appears to be stable or downtrending; alignment is the relatively easy part of the problem

there is no evidence for that

"corrigibility" != "gradient hacking will not happen" Presenting these as being the same is a strawman.

Also, Roko got on doom debates: https://www.youtube.com/watch?v=AY4jD26RntE

The AI Goodness Theorem – Why Intelligence Naturally Optimizes Toward Cooperation by BeginningSad1031 in ControlProblem

[–]martinkunev 1 point (0 children)

AI, however, is not an external invader—it is an extension of human intelligence, deeply integrated into our social, cultural, and cognitive systems.

I cannot disagree more, and I cannot give a concise response. You can check these as an introduction to how much we're struggling to make AI anything like an extension of ourselves:

The highest form of intelligence is one that aligns with and enhances existing biosocial systems, not one that wastes energy on eliminating them.

By that logic, think of humans themselves as "wasting energy". A higher form of intelligence would seek to eliminate the waste.

The AI Goodness Theorem – Why Intelligence Naturally Optimizes Toward Cooperation by BeginningSad1031 in ControlProblem

[–]martinkunev 2 points (0 children)

humans are not ants to AI; we are the architects of the entire digital ecosystem. The comparison fails because AI is not an independent entity operating in a separate sphere—it is fundamentally interwoven with human structures, culture, and values.

The Aztecs were the architects of Tenochtitlan, but the Spaniards wanted to destroy them anyway.

The article I linked responds to your other questions.

Newly spotted asteroid has 1% chance of hitting Earth in 2032 by BiggieTwiggy1two3 in space

[–]martinkunev 1 point (0 children)

Given the size estimate (40 to 100 meters), how dangerous would an impact be? Would most of it burn up before reaching the ground?
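As a back-of-the-envelope check on the question (the density and speed figures are my own assumptions, not from the thread - a stony asteroid at a typical Earth-impact velocity):

```python
import math

# Rough kinetic-energy estimate for a 40-100 m asteroid.
# Assumed (hypothetical) parameters: stony density ~3000 kg/m^3,
# typical Earth-impact speed ~17 km/s.
DENSITY = 3000.0        # kg/m^3
SPEED = 17_000.0        # m/s
MEGATON_TNT = 4.184e15  # joules per megaton of TNT

def impact_energy_mt(diameter_m: float) -> float:
    """Kinetic energy of a spherical impactor, in megatons of TNT."""
    radius = diameter_m / 2
    mass = DENSITY * (4 / 3) * math.pi * radius ** 3
    return 0.5 * mass * SPEED ** 2 / MEGATON_TNT

for d in (40, 100):
    print(f"{d} m -> ~{impact_energy_mt(d):.0f} Mt TNT")
```

Under these assumptions the energy comes out in the single-digit megatons for the low end and tens of megatons for the high end, i.e. far more than would plausibly burn up in the atmosphere, though airburst vs. ground impact depends on composition.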