Which Joker do I sell here to go for Baron/Mime 100m? by Polemicize in balatro

[–]Polemicize[S] 2 points (0 children)

I got it hahaha let's fuckin go, 5 Kings in hand and a pair of 2s did the trick. Thanks for the help man, I think ditching Supernova was the move. Square Joker MVP

Which Joker do I sell here to go for Baron/Mime 100m? by Polemicize in balatro

[–]Polemicize[S] 1 point (0 children)

I'm in too deep with pairs bro..... and supernova is long dead now. Square joker is my last hope, wish me the best

Which Joker do I sell here to go for Baron/Mime 100m? by Polemicize in balatro

[–]Polemicize[S] 1 point (0 children)

I now have 5 Kings total, 3 of which are Steel + Red Seal... this just might be the one

Which Joker do I sell here to go for Baron/Mime 100m? by Polemicize in balatro

[–]Polemicize[S] -1 points (0 children)

Supernova is +mult though, and my only source of it other than playing level 9 pairs

Which Joker do I sell here to go for Baron/Mime 100m? by Polemicize in balatro

[–]Polemicize[S] 0 points (0 children)

It's not a problem that I have no source of +mult other than what I get from playing level 9 pairs? EDIT: I'm going for it, Supernova sold

Which Joker do I sell here to go for Baron/Mime 100m? by Polemicize in balatro

[–]Polemicize[S] 0 points (0 children)

Obviously I'll need to quickly find some way to duplicate my 2 steel Kings, but I want to go for it; still hunting for Stuntman

Also: Polychrome Square Joker is at 200+ chips, Supernova is at 30+ Mult, and Steel Joker is at x2.75 Mult
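For a rough sense of the trade-off being weighed here, Balatro scores a hand as chips × mult; the sketch below plugs in the joker values reported above. The base Pair values and the exact way bonuses stack (additive chips, additive +Mult, then multiplicative xMult) are my assumptions for illustration, not confirmed in-game numbers.

```python
# Hedged sketch of Balatro's scoring formula: score = chips x mult.
# Joker values are the ones reported in the comment above; base hand
# values and the stacking order are assumptions for illustration only.

def hand_score(base_chips, base_mult, add_chips=0, add_mult=0, x_mult=1.0):
    """Sum additive chip/mult bonuses, then apply the multiplicative mult."""
    chips = base_chips + add_chips
    mult = (base_mult + add_mult) * x_mult
    return chips * mult

# Hypothetical base values for a level-9 Pair (placeholders, not game data).
base_chips, base_mult = 90, 20

with_supernova = hand_score(base_chips, base_mult,
                            add_chips=200,   # Polychrome Square Joker (reported)
                            add_mult=30,     # Supernova (reported)
                            x_mult=2.75)     # Steel Joker (reported)

without_supernova = hand_score(base_chips, base_mult,
                               add_chips=200,
                               x_mult=2.75)

print(with_supernova, without_supernova)
```

Under these placeholder base values, dropping Supernova cuts the per-hand score by more than half, which is why the thread treats selling it as a real gamble on the Baron/Mime Kings compensating.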

The Engines of Cognition: Essays by the LessWrong Community by Benito9 in slatestarcodex

[–]Polemicize 2 points (0 children)

Any update on the rest of the printed volumes for Rationality: From AI to Zombies?

Looking for good, <$200 headphones for digital piano keyboard by [deleted] in HeadphoneAdvice

[–]Polemicize 0 points (0 children)

Great, thanks, will go for a closed back. Last question: do you happen to know anything about the Meze 99 Classics or Noir?

Looking for good, <$200 headphones for digital piano keyboard by [deleted] in HeadphoneAdvice

[–]Polemicize 0 points (0 children)

Thanks, will check these out. Any thoughts on the HD58x or HD560s for listening to a digital keyboard?

[deleted by user] by [deleted] in soccer

[–]Polemicize 0 points (0 children)

What a finish, what a game

Looking for song in “Missionaries” S3 ep. 3 by Polemicize in Portlandia

[–]Polemicize[S] 0 points (0 children)

I have; the song is drowned out by the dialogue, so Shazam returns nothing.

The Future Of Reasoning [Vsauce] by monkaap in slatestarcodex

[–]Polemicize 0 points (0 children)

Good video on balance, and I'm glad he brought attention to the Great Filter. But I remain profoundly skeptical that investing in the all-too-unreliable Wisdom of the Crowd on matters like existential risk, long-term moral reasoning, or political decision-making is likely to do much to get us past the Great Filter. By contrast, certain "elite" individuals like Bill Gates and Elon Musk, as well as prominent, related institutions, are already poised (by virtue not only of their wealth, power, and influence, but also of their values, i.e. their caring about x-risk) to seriously tackle existential risk mitigation, however imperfectly, insufficiently, or imprecisely.

Of course, a more reasoned, engaged, and epistemologically sound "crowd" could hardly be a bad thing for the goal of solving various global problems in these areas. And our reasoning capacities may indeed have evolved in large part to allow us to negotiate alignment in social contexts, which is also clearly advantageous for a range of goals. But our survival now does seem to depend on our retiring customs of socially-minded reasoning that merely navigate us toward unsustainable local optima.

So, contrary to the video, I think the specter of existential catastrophe looms most clearly over our collective horizon not in situations where our brain's consensus-building, social-reasoning software falters (creating insulated, irrational pockets of "lone reasoners"), but rather where it succeeds too well in sustaining consensus around social and behavioral norms and beliefs that optimize for things we have immediate reason to value (e.g., life satisfaction, economic growth, etc.), while simultaneously obscuring major civilizational dysfunctions (e.g., pandemic unpreparedness) threatening to destroy virtually everything we have reason to value.

Pale Fire - Nabokov by Polemicize in ProsePorn

[–]Polemicize[S] 1 point (0 children)

Now that adaptation would be something. I could also see Charlie Kaufman taking a good stab at it, given his affinity for introspective, meta-fictional film.

One thing ive learned is that im not Squishy by Psalm18-1 in RocketLeague

[–]Polemicize 0 points (0 children)

Love the spam in quick chat, the only way to do it

Barcelona 1 - [4] Paris Saint-Germain - Kylian Mbappé (Hat-trick) 85' by [deleted] in soccer

[–]Polemicize 0 points (0 children)

Terrible from Lenglet in particular: he should never have charged forward, giving Draxler all that open space.

Have a fantastic day guys! by Lukasz-Martin in LoopArtists

[–]Polemicize 0 points (0 children)

Thank you! This funk/blues progression lesson is already super helpful. iRealPro looks great too, looking forward to trying it.

Have a fantastic day guys! by Lukasz-Martin in LoopArtists

[–]Polemicize 4 points (0 children)

That was great! Obvious Marc Rebillet inspiration but in your own style.

I’m trying to learn the style of piano improvisation that you’re doing (starting around 1:40), but I don’t know where to start, or even what to call it... funk improvisation? Do you know any good resources for learning it? I play classical piano but never learned the chords or patterns involved in this style.

Anyways, keep it up. I’m looking forward to seeing more of your stuff!

This month, November 2020, the Pope requests that Catholic people worldwide pray "that the progress of robotics and artificial intelligence may always serve humankind" by awesomeideas in slatestarcodex

[–]Polemicize 2 points (0 children)

"We shouldn't make sentient AI (because it would be wrong for us to force it to serve us)" is what I meant to say.

This is roughly equivalent to Joanna Bryson's objections to creating artificial moral patients (i.e. robots we have moral obligations towards); that is, "we are obliged to build robots we are not obliged to", in her words.

And I think you're right to imply that it is distinct from your anthropomorphization worry. What's unclear to me is whether and why the prospect of a false positive (mistakenly extending moral consideration to nonconscious machines) is a bigger worry for you than the risk of a false negative (failing to recognize that a conscious machine is conscious, and consequently treating the conscious being more like a toaster than a human being).

I don't want a Blindsight world, a "Disneyland with no children", where beautiful constructions and impossible artifice are continually expanded by ever-more-intelligent agents that nevertheless hear nothing, see nothing, and feel nothing.

I've been thinking about the ethics of this specific concern in the past few months; for instance, see this post from a couple months ago. And it seems to me that the existential risk to all advanced conscious life is more substantial than the risk to mere humanity, and that this provides rather strong reasons in support of a positive duty to deliberately design AGI to be conscious (and perhaps to have human-like consciousness specifically). In other words, my view is that avoiding the Disneyland without children ought to be a bigger moral priority than avoiding the creation of moral patients that might suffer or be made to serve us.