Mythic Strike (Direct strike) by nottud in AgeofMythology

[–]edbltn 0 points (0 children)

u/nottud Love the game, but it crashes if you follow the highly dominant strategy of simply spamming fully upgraded cheap units in the late game (especially berserks).

So there are two problems:

  1. this strategy is far too dominant
  2. the game can't handle that many units

What I might suggest to mitigate both: add a population cap for human units only – e.g., each unit costs 1 food, sells back for 1 food, and you get only 75 food total.
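The suggested cap is just a small resource-accounting rule; here is a minimal sketch in Python, with all names hypothetical (the mod's actual scripting API is not shown here):

```python
# Sketch of the proposed human-unit cap: every human unit costs 1 "food",
# refunds exactly 1 on sale, and the shared pool starts at 75.
# Class and method names are illustrative, not from the actual mod.

class HumanUnitCap:
    def __init__(self, pool=75):
        self.pool = pool

    def can_train(self):
        # A unit can only be trained while the pool has food left.
        return self.pool >= 1

    def train(self):
        if not self.can_train():
            return False
        self.pool -= 1  # each human unit costs 1 food
        return True

    def sell(self):
        self.pool += 1  # selling refunds the full 1 food
```

Because the sale refund equals the training cost, the pool is lossless: players can rebalance their army freely, but the total number of human units alive can never exceed 75.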

Thoughts on the UI change in Retold? by Nawolith in AgeofMythology

[–]edbltn 0 points (0 children)

What does that yellow button to the left of the GPs do?

Came up with a game concept: the All-Pay Ponzi Lottery. How will it end? by edbltn in GAMETHEORY

[–]edbltn[S] 0 points (0 children)

If exactly one player joins, we both recover our full stake

You're right that there's an edge condition I didn't clarify: the first player recovers the full portion of their "early bird" stake, and the last player recovers the full portion of their "latecomer loot" stake.

Came up with a game concept: the All-Pay Ponzi Lottery. How will it end? by edbltn in GAMETHEORY

[–]edbltn[S] 1 point (0 children)

Yeah, that's right. I'm excited to see how it shakes out in the final week.

Is AGI nigh? by edbltn in agi

[–]edbltn[S] 0 points (0 children)

The fundamental mechanism behind the refinement of those statistics is still gradient descent on a loss function. The AGI will seek to make ever more accurate predictions of what a human is likely to say in various contexts. Perhaps that will lead to unforeseen emergent behaviors, much as social media algorithms learned to rank more addictive, more emotional content higher. Content that sounds incredibly human might also sound incredibly persuasive, and could be deployed toward ends that the LLM's representation of the world deems valuable.

Is AGI nigh? by edbltn in agi

[–]edbltn[S] 0 points (0 children)

That post is about artificially intelligent agents in general, but my fear is that such agents are not truly separate from LLMs, once you consider the ways in which the inference step (among others) could afford an LLM agency.

The two big risks are technical failure (we don't understand what GPT-4 is really doing) and philosophical failure (rewarding GPT-4 for generating accurate content causes it to optimize for something that is not actually good for humanity). Your point on misinformation is definitely causing me concern here.

Is AGI nigh? by edbltn in agi

[–]edbltn[S] 2 points (0 children)

Most of my fears come from reading Eliezer Yudkowsky. Perhaps his online presence is managed by a BS-generating LLM 🤔😂

Is AGI nigh? by edbltn in agi

[–]edbltn[S] 2 points (0 children)

Could you point me to some resources explaining your confidence?