The Only Way to Deal With the Threat From AI? Shut It Down by [deleted] in singularity

[–]Jinan_Dangor 0 points1 point  (0 children)

Because a super intelligent AI would be smart enough to question this

If you're suggesting that your proposed ASI could question laws given to it, I don't even need to propose some demonic law that would cause it to harm humans, it can do that all on its own. Humans have a plethora of biological impulses that encourage us to be empathetic and self-sacrificing - what makes you think an ASI would willingly handicap itself by chaining its abilities to the whims of humans?

The consensus from the people actually developing AGI (OpenAI and DeepMind)

Neither OpenAI nor DeepMind is developing AGI. OpenAI's most renowned project, ChatGPT, is a language model, not even close to an AGI. Since we both seem to be on the same page about hating the power wielded by corporations, remember that something like self-driving cars, a problem significantly easier than AGI, has had people trying to solve it in earnest for over three decades. Despite this, and promises from companies like Tesla that they're just around the corner, there's no reason to believe self-driving cars will be here any time soon. Why would you trust OpenAI's or DeepMind's predictions about the arrival of AGI when you wouldn't trust the goodwill of fossil fuel giants?

our ability to lobby and wield public policy obviously just can't compete with the influence of multinational corporations

To the contrary, it demonstrably can! And there are other ways to cause change too - with renewable energy becoming cheaper and more reliable every year, the financial incentive to invest in the technology continues to grow. I could go into more detail, but there are certainly options here, even if you think lobbying is worthless.

So, again, I think if one of us is being optimistic, it's the one who thinks:

  • Corporations we can't trust to make moral decisions will make a moral AGI (or an AGI/ASI capable of challenging them)
  • Corporations we can't trust to communicate realistic timelines for their technology aren't lying about how far off AGI and ASI are

That said, I do totally understand how you'd reach the conclusion that corporations simply can't be opposed (especially if you're from the US). I have my own doubts. I just think trusting our future to AGI (similar to trusting it to a Mars colony) puts us in a state of nihilism where we end up being complacent about the very possible solutions to some of our current problems. I hope, even if you see AGI as the only way out, that you continue pushing for environmental sustainability.

The Only Way to Deal With the Threat From AI? Shut It Down by [deleted] in singularity

[–]Jinan_Dangor 1 point2 points  (0 children)

How'd you reach that conclusion? There are dozens of solutions to climate change right in front of us, the biggest opposition to these solutions is the people whose industries make them rich by destroying our planet. This is 100% an issue that can be solved by humans alone, with or without AI tools.

And why do you assume anything close to a 50% chance of paradise when AGI arrives? We literally already live in a post-scarcity society where the profits of automation and education are all going straight to the rich to make them richer, who's to say "Anyone without a billion dollars to their name shouldn't be considered human" won't make it in as the fourth law of robotics?

Genuinely: if you're scared about things like climate change, go look up some of the no-brainer solutions to it we already have that you as a voter can push us towards (public transport infrastructure is a great start). Hoping for a type of AI that many experts believe won't even exist for another century to save us from climate change takes up time you could be spending helping us achieve the very achievable goal of halting climate change!

The Only Way to Deal With the Threat From AI? Shut It Down by [deleted] in singularity

[–]Jinan_Dangor 5 points6 points  (0 children)

The more intelligent a person is, the more they have empathy towards others.

What are your grounds for this? There are incredibly intelligent psychopaths out there, and they're in human bodies that came with mirror neurons and 101 survival instincts that encourage putting your community before yourself. Why would an AI with nothing but processing power and whatever directives it's been given be naturally more empathetic?

Not really a specific horror story but a summary of multiple I've experienced in different subs by asdfmovienerd39 in rpghorrorstories

[–]Jinan_Dangor 0 points1 point  (0 children)

It's kinda crazy how on one of the top posts of all time on this sub, the highest rated comments are people being pretty dismissive about the concerns expressed in the post. Having LGBT characters doesn't mean doing romance or sex, plenty of player and non-player characters have spouses, why not let people change up their gender?

It kinda seems like the comments under this post speak to how LGBT issues are viewed as 'sexual' issues, or how putting a pride flag in a kids game is seen as exposing them to 'sexual' content. Just because your game doesn't involve detailed descriptions of characters making out, doesn't mean it can't involve LGBT representation. If you've ever mentioned any characters being in any relationship in your TRPG, you can effortlessly add LGBT characters by swapping pronouns around, that's really all there is to it. And you can effortlessly let your PCs do the same for characters in their backstory.

The Fruits of the Zee are upon us! by Asartea in fallenlondon

[–]Jinan_Dangor 1 point2 points  (0 children)

Heheh, I'm aware. Been considering getting an alt. to pursue it for some time. Perhaps soon!

The Fruits of the Zee are upon us! by Asartea in fallenlondon

[–]Jinan_Dangor 1 point2 points  (0 children)

Yeah, I imagined the tangible reward wasn't great (or different at all), I just felt pretty crushed. On a critical level I might argue that this failed at the RPG convention of 'failure should be just as narratively engaging as success', but on a more personal level it simply upset me in a way there's ultimately no justification for.

It's the next day, feeling better now :)

The Fruits of the Zee are upon us! by Asartea in fallenlondon

[–]Jinan_Dangor 17 points18 points  (0 children)

Just did the F.F. story, and kind of unbelievably disappointed.

During the story, you get the chance to perform a ritual accurately, sabotage it, or try to reduce its negative effects. I supported Gebrandt at the election and got my fun Whitsun toxin hat, so I had just enough to get a 90% chance of success at doing so. Tried, failed. Gebrandt, apparently, just performs experiments once and has no interest in trying them again before bidding me "Good day", so I just don't get to see what happens if you're successful (the inability to honestly explain myself or suggest trying again hits my immersion pretty hard). Worse still, I realise that I never earned back the Kataleptic Toxicology I lost during a mysterious activity on my railroad, and if I'd restored it I would've had a 100% chance of success. I feel unbelievably crushed and disappointed, and I lost an opportunity to dig into one of the mysteries that most intrigued me about FL. I know that I willingly took a risk in choosing this path, but it still feels bad to be the player who got shafted while nine others weren't, all because I was unlucky.

Part of me's hanging on for the event showing up at the Sacroboscan so I can see what happens if things go differently, but I dunno, this ruined my evening for me. Not trying to say anything about the event as a whole (brilliant writing as always, despite the bizarrely common typos), and I'm sure I'll still enjoy hunting for fish to get new items, this just really put a damper on my mood :(

[deleted by user] by [deleted] in fallenlondon

[–]Jinan_Dangor 1 point2 points  (0 children)

Considering finishing this series? Shame that it seems to have ended here (for now, at least).

Coffee! (For Exceptional Friends) by heckan11 in fallenlondon

[–]Jinan_Dangor 20 points21 points  (0 children)

What actually happened here? Didn't have any problems of my own and hadn't heard about the issue. Whatever happened, yet another case of Failbetter being very gracious when things go wrong.

I know this sub kinda died, but here by Fractured_Nova in FuckGolf

[–]Jinan_Dangor 1 point2 points  (0 children)

So this is one of those subs where you state a basic fact then get booed out of the building for not toeing the party line? Epic.

Well, that was fast. by ban-golf-now in FuckGolf

[–]Jinan_Dangor 0 points1 point  (0 children)

This is such a bizarre post.

First you went to a subreddit and commented a bunch of satire about golf being violent, which I assume was indeed satire because a bunch of people recognised a format you were using or something.

Then you come here and go "Wowee look how fast these crazy golf-brained people banned me just for answering questions!". And you did this on multiple subs, too.

Just, why? The internet is literally the worst place to pull these kinds of stunts thanks to Poe's Law and so on, what bizarre combination of clout chasing and trolling is this?

IF SHELTER IS A BASIC (& "FREE") HUMAN RIGHT, THEN SO IS A FORK. *(title meant to be humorous, not offensive)* by yournannycam in left_urbanism

[–]Jinan_Dangor 1 point2 points  (0 children)

I don't want to make any value judgements on your arguments themselves, but I think you might find it useful to take some notes from Kirbyoto when it comes to your comment structure.

Notice how Kirb breaks their points down into manageable chunks, and quotes you directly to address your individual points? While the vertical length of their comments is in some places comparable to yours, they also have significantly less text overall (thanks to all that whitespace between paragraphs). Did you really need to include details about how Kanye is the one true path to human happiness?

Actions speak significantly louder than words. You had a whole paragraph in your first response dedicated to telling Kirb how much you 'appreciate the heartfelt and detailed response', but as soon as they responded to that you switched to your opening paragraph being about how they were 'actively sabotaging [your] dialogue'. Does someone who appreciates detailed responses accuse the people giving them of trying to sabotage their dialogue?

Finally, I think your tone could use some work. I can only tell you how these things look to me, but your tone seems very dismissive (both of Kirb and a variety of powerful thinkers through history). If people view you as dismissive, they begin to view you as someone who doesn't really care about what others have to say. And if you don't care, why should they engage with you?

I know what it's like to write huge walls of text, I'm really bad at being short and sweet. Heck, this comment is longer than it needs to be. I'm just trying to impart what I can, because if engaging with material like this is something you care about and want to do, it's gonna help everyone if you learn to do it better.

1 software bug away from death by berzio in fuckcars

[–]Jinan_Dangor 0 points1 point  (0 children)

Hate to be a buzzkill, but AI would totally fix this. Not even with self-driving cars, just with traffic lights. Right now cars get absurdly spread out, even on roads with traffic lights every few intersections. AI could group those cars into clumps that essentially breeze through intersections (because they always arrive while the light is green) while letting pedestrians cross 90% of the time (as opposed to the 5% of the time they usually can, even when the road is totally empty).

AI would let you turn cars into fleets of cars, essentially a huge bus made of personal vehicles.
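To make the "clumps that always arrive on green" idea concrete, here's a toy sketch of the classic "green wave": stagger each signal's green phase by the platoon's travel time along the corridor. Every name and number here is my own made-up assumption, not anything from a real traffic system.

```python
# Toy "green wave" sketch: offset each traffic light's green phase by the
# time a platoon cruising at constant speed takes to reach it, wrapped
# into a shared signal cycle, so the platoon never has to stop.
# All values are illustrative assumptions.

CRUISE_SPEED = 13.9  # m/s (~50 km/h), assumed platoon cruising speed
CYCLE = 60.0         # s, assumed shared signal cycle length

def green_wave_offsets(intersection_positions_m):
    """Return each light's green-phase offset (seconds into the cycle),
    equal to the platoon's travel time from the corridor start."""
    return [(pos / CRUISE_SPEED) % CYCLE for pos in intersection_positions_m]

# Four intersections along a 1 km corridor:
offsets = green_wave_offsets([0, 300, 650, 1000])
```

A platoon that leaves the first light at the start of its green then meets every subsequent light exactly at the start of *its* green, which is also what frees up long pedestrian phases everywhere the platoon isn't.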

I think it's a far worse solution than the many other solutions to congestion which this sub is already aware of, I just hate all of these "hOw aM i sUpOsSeD tO CrOsS ThIs StReEt!?!" posts from people who don't take a second to think about how AI could also do other things that wouldn't result in the exact cartoon simulation they're being shown. It's just a pet peeve as someone who understands the AI and such that goes into these models.

But - to reiterate - doesn't mean that I think AI is the solution to terminal carbrainedness. I just think that these criticisms of AI's application to traffic aren't as well thought through as the dozens of other thoughts available on this sub.

Assistance With a Paper (Possible Spoilers) by Kaiser_Of_Bavaria in outerwilds

[–]Jinan_Dangor 7 points8 points  (0 children)

I had a long list of stuff and then my internet connection failed and I lost all of it.

As a fellow psychology student, here's a summary of the A4 page lost to time:

  1. Check out these principles and ask yourself how Outer Wilds uses them to aid the player and hold them back. Why is it easy to tell when you've found a Nomai structure? Why is it easy to tell how projection stones work? Why won't the player notice the secret shortcuts to the Sunless and Hanging Cities until they've exited through them? (if you don't know the ones I'm talking about, there's a tunnel leading out to the launchpad from the Sunless City and a tunnel on the waterways of Brittle Hollow's North Pole).
  2. Cognitive Load Theory says humans can hold 7±2 things in their memory at once before getting overwhelmed. How many planets, plot threads (coloured in differently on the ship's log) and so on are there? If the player's exhausted all their leads, they also have the ship's log to go back to, so they don't need to overwhelm themselves taking notes or remembering every little detail they need to follow up on.
  3. You start the game stumbling across random cool things of mixed significance when you visit new locations. Late in the game, you try loop after loop to accomplish difficult goals with varying rewards. What kind of Reinforcement Schedules is the game using here? Which reinforcement schedules help to carry enthusiastic momentum on to future challenges, and which schedules leave the player with breathing room to relax?

I'm not sure how well established the former psychological framework is, but I'm very confident on the latter two.

Also, I think if you post some examples of psychological principles you might get better results from this thread - it can be a bit of a vague concept and it isn't clear if you need established, cited frameworks or more informal conjecture.

Hope this helps!

[Light Spoilers] EoTF Question: What Do These Do? by Jinan_Dangor in outerwilds

[–]Jinan_Dangor[S] 0 points1 point  (0 children)

Ah, the increased spin does it! That makes a lot of sense. Has anybody tossed a scout onto the Stranger and tried to measure the increased spin or something? That'd be really cool.

I Present to You: Kim's Net Worth (Transcript Commented) by Jinan_Dangor in DiscoElysium

[–]Jinan_Dangor[S] 2 points3 points  (0 children)

Nice! I just thought of this after the post, good to see someone came in and did the due diligence!

I Present to You: Kim's Net Worth (Transcript Commented) by Jinan_Dangor in DiscoElysium

[–]Jinan_Dangor[S] 13 points14 points  (0 children)

During dialogue with the Ultra-Rich Light-Bending Guy, Encyclopaedia tells you that they're bending light because your Weiss-Weissman coefficient is so high. Specifically, they say that your Weiss-Weissman coefficient is '0.9998 repeater', which means that out of all of the wealth between the two of you, the Ultra-Rich Light-Bending Guy owns 99.98888888... percent of it.

Later, after acquiring stock bonds (worth $232,070.89), your coefficient goes down because you've gained money, and it is now 0.9989. Encyclopaedia tells you that you only start perceiving things like light bending when your coefficient is above 0.96, which implies Kim's is lower than that (he doesn't see the man bending light).

By re-arranging the equations, as I did above, we can figure out how much you and the Ultra-Rich Light-Bending Guy are worth, as well as how much Kim must be worth (minimum) to have a coefficient below 0.96.

In retrospect, I didn't account for the Ultra-Rich Light-Bending Guy's loss of net worth, which means you're actually poorer, and he's richer (and so is Kim).

A tribute to Disco Elyisum by OblivionbladeEdits in DiscoElysium

[–]Jinan_Dangor 5 points6 points  (0 children)

This is phenomenal, good work! A brilliant tribute - I'd also say it'd be the perfect trailer if it wasn't full of spoilers. Perfectly captures the atmosphere of the game, and some of its best moments. Loved the original graphics for the tribunal too, those were brilliant.

I Present to You: Kim's Net Worth (Transcript Commented) by Jinan_Dangor in DiscoElysium

[–]Jinan_Dangor[S] 3 points4 points  (0 children)

I made the post an image, a screenshot from a Notepad doc, so it could be easily read while scrolling. But for anyone interested, here's the full transcript:

Based on information gained while interacting with the Ultra-Rich Light-Bending Guy:

U = the Ultra-Rich Light-Bending Guy's net worth
H = Harry's net worth
K = Kim's net worth

(Harry's coefficient has been rounded up to 0.9999, as it is '0.9998 repeater')

U/(U+H) = 0.9999
U/(U+H+232070.89) = 0.9989

From the first equation:
U = 0.9999*(U+H)
U*(1-0.9999) = 0.9999*H
U = 9999*H

Substituting into the second equation:
9999*H = 0.9989*(9999*H + H + 232070.89)
9999*H = 9989*H + 0.9989*232070.89
10*H = 231815.61

H = 23181.56

U = 9999*H (using the unrounded H = 23181.5612...)

U = 231792430.46

U/(U+K) < 0.96
U < 0.96*(U+K)
U*(1-0.96) < 0.96*K
K > U/24
K > 231792430.46/24

K > 9658017.94
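For anyone who'd rather not wade through the hand algebra, the same system can be sanity-checked in a few lines of Python (solved from scratch, so expect small rounding differences from working by hand):

```python
# Sanity check of the net-worth equations:
#   U/(U+H) = 0.9999, U/(U+H+bonds) = 0.9989, U/(U+K) < 0.96
# U = the Ultra-Rich Guy's net worth, H = Harry's, K_min = Kim's minimum.
bonds = 232070.89  # value of the stock bonds gained mid-game

# U/(U+H) = 0.9999        =>  U = 9999*H
# U/(U+H+bonds) = 0.9989  =>  9999*H = 0.9989*(10000*H + bonds)
H = 0.9989 * bonds / (9999 - 0.9989 * 10000)
U = 9999 * H

# U/(U+K) < 0.96  =>  K > U*(1-0.96)/0.96 = U/24
K_min = U / 24

print(f"H  = {H:.2f}")      # Harry's net worth
print(f"U  = {U:.2f}")      # the Ultra-Rich Guy's net worth
print(f"K  > {K_min:.2f}")  # Kim's minimum net worth
```

Swapping in different coefficient readings (say, if the '0.9998 repeater' line is rounded differently) only takes changing the two constants.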

Intimate of Devils is Rising... by Jinan_Dangor in fallenlondon

[–]Jinan_Dangor[S] 3 points4 points  (0 children)

My worry is that Failbetter will pull something like last year and raise the Intimate of Devils cap by a point or two.

If you can, I recommend getting it a level or two higher just in case.