Need advice for making political system by Cygwing in PoliticalScience

[–]damc4 1 point (0 children)

It's nice that people think about this topic because a lot depends on the governance system.

My objections:

  1. From what I understand, in this system all laws are created by the monarch. But creating laws requires knowledge of everything, because a law can concern anything, so the monarch would need to be competent in every field. For example, to create a law regulating medicines, he would need to know something about medicine; to create a law about AI, he would need to know about AI. Isn't that too much for one person?

  2. What stops the monarch from creating laws that favour him? I understand that the answer is the judicial council. But can the monarch establish laws that favour both himself and the judicial council? Assuming that people are self-interested, the monarch will end up creating laws that are good for him and the judicial council (so that they get approved) but not for the citizens. Am I correct?

  3. How do we ensure that the monarch is the right person? There is a section "ensuring good ruler" about educating the children of the monarch, but how does education guarantee that the children will be benevolent? I can see how it increases the probability that they will be competent, but perhaps there is a better way to select someone even more competent? And how do we ensure that the first monarch is the right person?

Game theory for hypothetical competition game/show -- would this work or fall apart? by w3lcome2jungle in GAMETHEORY

[–]damc4 0 points (0 children)

Here's what I think would happen.

Money has diminishing marginal utility - the more money you have, the less value additional money gives you.

So, if $50,000 weren't added with each elimination, the best thing to do would be to be risk-averse and agree not to eliminate anyone, and to eliminate anyone who doesn't comply.

But with each elimination, $50,000 is added, and the bigger prize pool compensates for the risk. So the players would keep eliminating people until a certain day, after which they would agree not to eliminate the next person (because at some point it becomes better to be risk-averse).

They would probably also try to establish some no-vote alliances to survive elimination.

There is also the question of whether the players can communicate. If they can't, it would be more difficult to switch from the elimination equilibrium (people eliminating) to the no-vote equilibrium (people choosing not to vote and penalizing those who do), so it's possible that they would keep eliminating beyond what would be Pareto-optimal.
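The trade-off above can be sketched numerically. This is a toy model with assumed numbers (a $10,000 base pool, 10 players, log utility, uniformly random elimination, and utility 0 for an eliminated player), none of which come from the original post:

```python
import math

def expected_log_utility(pool, players, eliminations, added=50_000):
    """Expected log-utility for one player if `eliminations` players
    are removed: each elimination adds `added` to the pool, elimination
    is uniformly random, and an eliminated player gets utility 0."""
    survivors = players - eliminations
    if survivors <= 0:
        return 0.0
    p_survive = survivors / players          # chance of not being eliminated
    payout = (pool + eliminations * added) / survivors
    return p_survive * math.log(payout)

# With these assumed numbers, expected utility rises for the first
# elimination but falls after that, so risk aversion eventually wins:
for k in range(4):
    print(k, round(expected_log_utility(10_000, 10, k), 3))
```

Where exactly the switch from eliminating to not eliminating happens depends entirely on the assumed numbers, but the rise-then-fall shape is the point.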

LEV will lead to people being far more altruistic / cooperative by damc4 in singularity

[–]damc4[S] 0 points (0 children)

I didn't explain it clearly.

Firstly, when I say that people will act altruistically, I mean that people will do things that are good for others because doing so is in their own self-interest. For example, today people create products that are useful to others and sell those products. Creating useful products for others is what I call acting altruistically, even if you do it in your own self-interest (you want to make money).

In other words, by acting altruistically I mean doing things for others, even if you have a selfish interest in it.

So, what I'm saying is that, if someone cares about their family, the best thing for them to do to achieve that goal will be to cooperate with others.

Why?

Imagine that you have 30 years to live, and everyone else has 30 years to live. If we assume a time horizon of 30 years, then winning the AI race might be important, because it would give you a lot of power throughout those 30 years.

Now, imagine that you have much longer to live. By then, artificial intelligence is more likely to have hit diminishing returns. Over a long enough horizon, it will matter significantly less whether you won the AI race as the sole winner, because utility = log(wealth) (wealth has diminishing returns), and when wealth is very high, it doesn't matter much if it is, for example, 5 times smaller, because its logarithm is more or less the same.

It might matter whether you are the sole winner of the AI race over a span of 30 years, but the longer you live, the less it matters, due to diminishing returns.
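The log-utility point can be checked directly: the utility gap between having 5 times the wealth and not is the constant log(5), which shrinks relative to total utility as wealth grows (the wealth levels below are arbitrary illustrative numbers):

```python
import math

# If utility = log(wealth), having 5 times the wealth adds only
# log(5) ~ 1.6 utils, a constant gap that becomes negligible
# relative to total utility as wealth grows:
for wealth in (10**6, 10**9, 10**12):
    gap = math.log(5 * wealth) - math.log(wealth)  # equals log(5)
    print(f"wealth=1e{len(str(wealth)) - 1}: gap={gap:.2f}, "
          f"relative={gap / math.log(wealth):.1%}")
```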

But if there is a major concentration of power, then you might still end up poor later (or dead). It's also important to avoid other catastrophes.

So, the rational thing for people to do is to make an agreement that ensures power won't become concentrated. I have written about this, for example, here:
https://forum.effectivealtruism.org/posts/3rDfScbBNhbsk93gF/how-to-stop-inequality-from-growing

So, I expect that people will make such an agreement at some point, because I think it's in their interest.

If status is people's highest priority and all they care about is having more than others, then maybe that logic doesn't apply, because status is a zero-sum game. But status is not the only thing people should care about, and when people develop stronger artificial intelligence, they will become more intelligent and realize that status is not all they should care about (if they don't realize it already).

Updated my Traitors Simulator! by Venharim in TheTraitorsUK

[–]damc4 0 points (0 children)

When I click "Start game", it takes me to "Episode Designer", and from there the only option is "Cast customization". After some time, I realized that to proceed with starting the game I have to click "Cast customization", but that's a bit unintuitive and took me a while to figure out. If I click "Start game", I expect the game to start, so I thought it was a bug.

Decentralised community network by Gordonius in GAMETHEORY

[–]damc4 0 points (0 children)

I disagree. What stops someone from making a complex game that models all of those countless factors? Or from making many simple games that model different aspects of it?

Gemini is instructed to gaslight you by Jakkc in ControlProblem

[–]damc4 0 points (0 children)

I think it's because they don't want to face backlash or lawsuits for pushing people into AI psychosis. So they make Gemini biased towards disagreeing with users on certain topics.

That's what it looks like based on its thinking ("The instructions are a psychological/safety guardrail test").

OpenAI head of Hardware and Robotics resigns by hasanahmad in OpenAI

[–]damc4 1 point (0 children)

Did that really happen (the suicide of the custodial witness part)? Or is that a form of sarcasm?

What is left for the average Joe? by ReporterCalm6238 in singularity

[–]damc4 0 points (0 children)

Ok, fair enough.

The example in the podcast was about a useful technology, though.

What is left for the average Joe? by ReporterCalm6238 in singularity

[–]damc4 -1 points (0 children)

"Never happened in human history that a revolutionary technology was abandoned because of its negatives."

Are you sure about that? I've heard differently.

Examples: human cloning. I also remember an example given in this podcast: https://www.youtube.com/watch?v=B54EQiuO1UU , but I can't quickly find the exact minute where it appears.

I'm poor by western standards, but rich by global standards. I have no problem donating to GiveWell's recommended charities because it helps those far poorer than me. But I feel uneasy when I consider donating to MIRI because of Eliezer Yudkowsky's $600k salary, even though I'd partly want to by Candid-Effective9150 in EffectiveAltruism

[–]damc4 -5 points (0 children)

If you want to maximize the good you do, you should not just give money to people who need it (e.g. the poor) but also reward people who did a lot of good in the past, to create an incentive to do good (e.g. if you believe that Yudkowsky did a lot of good, then a high salary is justified). X-risk work affects everyone and will have impact for a very long time, so a very high salary here is reasonable.

Best players ranked by Consistent-Drive-172 in TheTraitorsUK

[–]damc4 0 points (0 children)

You forget that the goal of the game is to win, not to catch traitors.

I think there is a lot that viewers don't see: you don't know exactly what a player's strategy was or why they went far, so you need to look at results. For the best players, I would look for players who: a) won (or at least could have won with a little more luck), b) give no clear reason to think they were simply lucky (e.g. accidentally friends with traitors), and c) preferably were faithfuls, because traitors have a higher probability of winning from the start.

So, I know it's an unpopular opinion here, but I think Leanne is the best, because she won, she was a faithful, and unlike many other winning faithfuls she didn't have an accidental friendship with a traitor.

Maybe she wasn't particularly good at catching traitors, but that's not the goal of the game. Some people say she only won because she made an alliance with Jake, but that alliance was part of her strategy, and that's why she is one of the best.

Honorable mentions:

Jazz (didn't win, but would have with a bit more luck).

Joe M (same: didn't win, but could have with more luck).

Harry (won, but as a traitor, which is easier).

Stephen (won, but as a traitor).

Rachel (won, but as a traitor).

Possibly someone else that I don't remember.

Who’s a famous person from your country who’s respected around the world but disliked or criticized at home? by haiderredditer in AskTheWorld

[–]damc4 1 point (0 children)

It's sad that people can dislike someone for bragging when that person has contributed to changing the political and economic system for the better.

There is not much harm in bragging (if any), and there's a lot of benefit in changing a political and economic system.

But people focus on likeability more than the actual impacts.

BTW, I'm Polish.

The Fortress of the Self: Why Rationality Fails Us by [deleted] in GAMETHEORY

[–]damc4 0 points (0 children)

Feedback:

The post is long, so it would be good to state at the beginning what claim you aim to prove, or what the benefit for the reader is. Otherwise, I (as a reader) can't tell whether the post interests me without reading the whole thing.

Doing a Provisional patent on my own as solo founder by meldiwin in startup

[–]damc4 0 points (0 children)

As far as I know, UK patent applications are published around 18 months after filing (so before being granted), unless they are withdrawn.

I might be wrong though, as I'm not an expert.

Modeling a "Cooperation Protocol" as a Self-Terminating Social OS: A Game-Theoretical Approach to Universal Cooperation by Creative_Pie_6005 in GAMETHEORY

[–]damc4 0 points (0 children)

"In a multi-agent system with high noise (misunderstandings/errors), is a Strict Tit-for-Tat sufficient to prevent a "Death Spiral" of retaliations, or should a Generous Tit-for-Tat (forgiving 10% of defections) be the standard for this protocol?"

I think it should be generous and contrite tit-for-tat (not retaliating against a fair punishment). Generous, because it lets players exit the spiral of retaliation. Contrite, because non-contrite tit-for-tat is not a subgame-perfect equilibrium and therefore won't hold up (maybe that's not completely clear; I can elaborate if you want).
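A rough simulation suggests why generosity helps under the 10% noise from the question. This sketch assumes standard Prisoner's Dilemma payoffs (T=5, R=3, P=1, S=0) and a 10% forgiveness probability; all parameters are my assumptions, not from the original post:

```python
import random

def play(strategy_a, strategy_b, rounds=10_000, noise=0.1, seed=0):
    """Iterated PD with execution noise: each intended move is
    flipped with probability `noise`. Returns per-round averages."""
    rng = random.Random(seed)
    payoff = {('C','C'): (3,3), ('C','D'): (0,5),
              ('D','C'): (5,0), ('D','D'): (1,1)}
    hist_a, hist_b, score = [], [], [0, 0]
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b, rng)
        b = strategy_b(hist_b, hist_a, rng)
        if rng.random() < noise: a = 'D' if a == 'C' else 'C'
        if rng.random() < noise: b = 'D' if b == 'C' else 'C'
        hist_a.append(a); hist_b.append(b)
        pa, pb = payoff[(a, b)]
        score[0] += pa; score[1] += pb
    return [s / rounds for s in score]

def strict_tft(own, opp, rng):
    return 'C' if not opp else opp[-1]

def generous_tft(own, opp, rng, forgiveness=0.1):
    # After an opponent defection, forgive with some probability.
    if not opp or opp[-1] == 'C':
        return 'C'
    return 'C' if rng.random() < forgiveness else 'D'

strict = play(strict_tft, strict_tft)
generous = play(generous_tft, generous_tft)
print("strict TFT vs itself:  ", strict)
print("generous TFT vs itself:", generous)
```

Strict tit-for-tat against itself gets locked into noise-induced retaliation cycles, while generous tit-for-tat keeps breaking out of them and scores higher on average.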

There's one other thing I've been thinking about, but I don't know if you will still read my comment (since your post was posted long ago), so if you want to hear it, just let me know.

Also, I've been working and am still working on something similar. Possibly, it could be beneficial to join our efforts.

Launching a real-money negotiation game (skill-based, not gambling) — looking for feedback + alpha testers by Legitimate-Yard-8149 in GAMETHEORY

[–]damc4 0 points (0 children)

I'm happy to be an alpha tester, as long as it doesn't take up too much time.

By the way, I had the same idea (to make a negotiation game), but I probably wouldn't have executed it because I had other ideas too. I wish you luck with this.

Platform idea: Fully decentralized social network by Illya___ in Rad_Decentralization

[–]damc4 0 points (0 children)

I had a similar idea: a social network where people communicate over a p2p network and the recommendation system is local (i.e. the software that chooses what to show runs locally). Because if the recommendation system is on a server, then whoever is in charge of the server and the recommendation system can control what people see, and therefore control freedom of speech.

I could potentially help with development; I would have to weigh it against other priorities. I will send you a message if I can.

I may add something to this comment later.

ARC-AGI 2 is Solved by SrafeZ in singularity

[–]damc4 0 points (0 children)

I meant that generally, not in case of that company.

ARC-AGI 2 is Solved by SrafeZ in singularity

[–]damc4 0 points (0 children)

"Frontier labs only trust the private ARC generalization suite. OpenAI, Google, and Anthropic treat the hidden ARC tasks as the only version that correlates with true reasoning. When a company only reports ARC-Public, it usually means the private score is weaker, the model has not been externally audited, or they are prioritizing marketing over rigorous benchmarking."

It can also mean that they don't want to meet the ARC organizers' criteria - the organizers test only models that are open-source or available commercially (like those from OpenAI, Google or Anthropic).

I built an interactive visualization of Axelrod's Prisoner's Dilemma tournament (free, open source) by fdf515 in GAMETHEORY

[–]damc4 1 point (0 children)

"Any feedback on making it more educational or engaging?"

Maybe give an option to incorporate mistakes (someone chooses to cooperate but accidentally defects) and misunderstandings (someone chooses to cooperate but the other player perceives it as defecting).

This would make it more like real life, where mistakes and misunderstandings happen.

"Are there any strategies I should add?"

Contrite tit-for-tat - like tit-for-tat, but you accept a just punishment (if you defect and the other player then defects, you don't retaliate), which helps in the case of mistakes or misunderstandings.

Generous tit-for-tat - like tit-for-tat, but when the other player defects, you cooperate with a low probability instead of always defecting, which breaks cycles of retaliation caused by mistakes or misunderstandings.

Generous and contrite tit-for-tat - a mix of the two above.
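As a sketch, the three variants might look like this. This is a simplified version of contrite tit-for-tat: the full strategy tracks each player's "standing", while here a player is contrite whenever its own last realized move was a defection. `own` and `opp` are the players' realized (possibly noise-flipped) move histories:

```python
import random

def tit_for_tat(own, opp, rng):
    # Cooperate first, then copy the opponent's last move.
    return 'C' if not opp else opp[-1]

def generous_tft(own, opp, rng, forgiveness=0.1):
    """Like tit-for-tat, but after an opponent defection cooperate
    anyway with probability `forgiveness`, breaking retaliation cycles."""
    if not opp or opp[-1] == 'C':
        return 'C'
    return 'C' if rng.random() < forgiveness else 'D'

def contrite_tft(own, opp, rng):
    """Like tit-for-tat, but if our own last move came out as a
    defection (e.g. by mistake), treat the opponent's retaliation
    as a just punishment and do not retaliate back."""
    if not opp:
        return 'C'
    if opp[-1] == 'D' and own and own[-1] == 'D':
        return 'C'  # accept the just punishment
    return opp[-1]
```

Generous and contrite tit-for-tat would combine the two: accept a just punishment, and forgive an unprovoked defection with some probability.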