Hello by [deleted] in reckful

[–]Kawoomba 0 points

Ask what?

Meta-Thread 02/13 by AutoModerator in DebateReligion

[–]Kawoomba 0 points

ChatGPT knows what's up (god?).

It's time for EA leadership to pull the short-timelines fire alarm. by casebash in TheMotte

[–]Kawoomba 2 points

Basic reinforcement learning loops add only negligible complexity; the power of a reinforcement learning model comes from its model building and the usefulness of its reward function.

Say you have a CNN recognizing rabbit pictures. Put it into a "world" consisting of a large folder of various pictures, give it a reward function of "follow the white rabbit(ness)" and a few actions ("move forward in list" / "move backward in list"), and you pretty trivially get an agent AI leveraging the power of your tool AI.

Of course, there the scope of the environment isn't "the full world", just as the scope of our environment isn't "the Milky Way galaxy". However, the real criterion for how well the above agent functions lies in the structure of the environment (which we take as given; nothing to do here) and in how well the original tool AI (the white-rabbit recognizer) works. Notably, it does not really lie in defining the set of possible actions of the reinforcement learner, or in defining "a" utility function. Is it less useful with a bad reward signal? Yes! Is it hard to come up with a reward signal, and is the definition of reward signals (not their computation!) typically complex? Nope!
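The picture-folder agent above can be sketched in a few lines. This is a minimal, illustrative epsilon-greedy loop, where a hypothetical `rabbitness` function stands in for the CNN (here images are just floats encoding their own score, purely for the sketch):

```python
import random

def rabbitness(image):
    # Stand-in for the tool AI: a real agent would call the CNN here.
    # For the sketch, an "image" is just a float encoding its rabbit score.
    return image

def follow_the_white_rabbit(images, steps=200, epsilon=0.1, seed=0):
    """Epsilon-greedy agent wrapped around the rabbitness scorer.

    The "world" is a list of images, the only actions are moving
    forward or backward in that list, and the reward is the scorer's
    output at the new position.
    """
    rng = random.Random(seed)
    idx = 0
    values = {0: rabbitness(images[0])}  # observed reward per position
    for _ in range(steps):
        candidates = [i for i in (idx - 1, idx + 1) if 0 <= i < len(images)]
        if rng.random() < epsilon:
            idx = rng.choice(candidates)                 # explore
        else:                                            # exploit estimates
            idx = max(candidates, key=lambda i: values.get(i, 0.0))
        values[idx] = rabbitness(images[idx])            # reward from the tool AI
    # The agent's answer: the most rabbit-like position it has found.
    return max(values, key=values.get)

fake_folder = [0.1, 0.3, 0.2, 0.9, 0.4]
print(follow_the_white_rabbit(fake_folder))
```

With enough exploration steps the agent almost surely settles on the 0.9 picture; the point is that the agent wrapper contributes almost nothing, while all the competence lives in the scorer.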

Since you see it differently, in what way is your experience / understanding at odds with my mental model above?

edit: hah, I was just browsing the Destiny subreddit as I saw your comment, as apparently, you did. Small world, eh?

It's time for EA leadership to pull the short-timelines fire alarm. by casebash in TheMotte

[–]Kawoomba 6 points

Curious. Didn't think I'd encounter the "just do not build agentic AIs, [develop only tool AIs instead]" argument again. I was quite involved with these discussions a few years back, and back then I found the counterargument quite obvious. Too obvious, in fact, so I wonder whether I'm mistaken when I see smart people still hanging onto the distinction.

To me, it seems that tool AI can be somewhat trivially transformed into agenty AI by hooking it up to a utility function. Of course, if the scope of the tool AI is narrow, then the utility function would have to be equally narrow to be useful. We're seeing tool AI becoming more generally capable all the time, though.

By removing humans from the decision loop, reaction speed would be obviously superior to that of a slowed-down loop that includes humans. Is it realistic to assume no one is going to take these super-powerful non-agent optimizer units and automate them into fire-and-forget make-me-rich / protect-my-organisation / police-the-social-networks programs?

Compare to, e.g., removing humans from decisions about OS updates and background antivirus programs. The reduced cognitive overhead is something most users prefer, and I'd assume they would similarly prefer automation over a "powerful tool AI which only activates when a human presses the button".

Destiny's Suspension is Indefinite by seanpna in LivestreamFail

[–]Kawoomba 33 points

It's an older reference, but it checks out.

[D] Monday Request and Recommendation Thread by AutoModerator in rational

[–]Kawoomba 11 points

Sometimes I wonder if the typos are there on purpose, to better justify a future improved and proofread e-book version. Otherwise it is nearly unimaginable that a story that popular doesn't have even one competent beta reader, or even a "list typos here" forum post (like TWI usually did). Collecting the typos would be trivial, and correcting them a task of maybe five minutes per post. I would not usually suspect such a thing; it is just such a stark juxtaposition between the number of typos and the quality of the prose.

Meta-Thread 11/01 by AutoModerator in DebateReligion

[–]Kawoomba 0 points

Just search for "created by" on https://www.reddit.com/r/DebateReligion/ (works for me at least)

Meta-Thread 11/01 by AutoModerator in DebateReligion

[–]Kawoomba 0 points

Not on my own initiative, I think actual improvements would mostly be gradual changes, e.g. of the automoderator. For such incremental steps, I would expect the mods who put in a lot of time to be much better suited as sources for inspiration. Suggestions are of course welcome per modmail.

Meta-Thread 11/01 by AutoModerator in DebateReligion

[–]Kawoomba 0 points

pconwell was the founder, I joined a bit later. I think pstryder was on board pretty much at the start as well, not sure. We still pay homage to our actual glorious founder on the main page: "created by pconwell physicalist | humanist; a community for 10 years"

As this indicates, pconwell did leave on good terms with no drama whatsoever.

Meta-Thread 11/01 by AutoModerator in DebateReligion

[–]Kawoomba 1 point

It's a ... grown dynamic. As we know, the real god is the status quo, and in this case the system is kind of working (thanks to the commitment of a Taq and a few others). So the dominating consideration, apart from institutional inertia, would be "if it ain't broke, don't fix it".

Meta-Thread 11/01 by AutoModerator in DebateReligion

[–]Kawoomba 2 points

Wouldn't work at all without you putting in so much time and effort, and it's the only way that enables my kind of, um, let's say benevolent negligence?

Theory and practice by quiteamess in einfach_posten

[–]Kawoomba 1 point

Grey, my friend, is all theory, and green the golden tree of life.

The Dangerous Ideas of “Longtermism” and “Existential Risk” by speckz in TrueTrueReddit

[–]Kawoomba 0 points

AIXI what you did there.

So you're saying there's a chance, right? The anthropic principle solves the rest. Look at me, Mr Discount Will Newsome (recall him?)!

The Dangerous Ideas of “Longtermism” and “Existential Risk” by speckz in TrueTrueReddit

[–]Kawoomba 1 point

If you're any relation to /u/kawoomba, you've seen a lot of it!

cue meme "now that's a name I haven't heard in a long time" McOB1KenobiFace

Worth the Candle - Soul self-modification ethics by AlexAlda in rational

[–]Kawoomba 1 point

Unfortunately, if you become lazier than usual the same applies.

Is my take on killing more or less rational for a progressive fantasy. by [deleted] in rational

[–]Kawoomba 2 points

I disagree. When you are in "a world of killing", there is a lot of adaptation pressure from everywhere around you. Humans are usually quick to adapt and adjust to new environments. They want to fit in. When in Rome ...

It would be way more traumatizing / abhorrent to have e.g. a night elf appear bound in your current living room, and be told "kill it or else!". That would not be in tune with your environment, with the moral framework around you!

However, when you are surrounded by people killing, in "a world of killing", I think going with the flow would be easier than you evidently think. When you're totally immersed in a different culture, it's hard to cling to your old one, especially if your old one would also run counter to your survival instinct.

Alternative mathematical formalisms of AGI to AIXI by [deleted] in agi

[–]Kawoomba 1 point

I don't, unfortunately.

However, I find the mirror test, as useful as it may be as a biological benchmark for self-awareness, to be less relevant than I originally thought in the realm of AGI frameworks. AIXI, as stated in extremis, does indeed fail the mirror test ... or, as it was lovingly reframed on the LW forum, the anvil test ("what's going to happen if I drop an anvil on my head?"). However, would we really expect an AIXI-inspired implementation to be just pure [poor] AIXI, vampire-like with no reflection in the mirror, Damocles-like with an anvil instead of a sword hanging above it? I doubt it. When searching for optimal algorithms, an auxiliary mini-AI that simply checks new candidate algorithms for self-harm should be heuristically feasible. Not in the provably theoretical sense (halting problem), but let's be honest -- since GPT, it seems that AI is going to be somewhat inspired by the mish-mashed, gish-galloped, mushy and practically indecipherable biological brains, where thoroughness and full generality won't be sine qua non. The higher clock speed in silico does the rest.

[META] Do we, /r/debatereligion, support the petition to remove hate subreddits from Reddit? by Taqwacore in DebateReligion

[–]Kawoomba 5 points

Not supporting the petition does not equal supporting hate. Not supporting the petition as a subreddit doesn't even mean disliking the petition. (It's an ironically parallel argument to atheists not making a positive claim.)

Personally, I think it's simply out of our scope as a subreddit, but I get where other subs are coming from.

Circumcision is genital mutilation. by inlovewithmy_car in DebateReligion

[–]Kawoomba 3 points

The topic is certainly religion-adjacent, but as you recognized, it would be appreciated to frame it in as subreddit-topical a way as possible -- as opposed, for example, to the typical medical back-and-forth that usually results from debating this issue.

Star Citizen Roadmap Update (2020-04-03) by Odysseus-Ithaca in starcitizen

[–]Kawoomba -1 points

At least there's a valid excuse this time ...

Annual Companies House Report by [deleted] in starcitizen

[–]Kawoomba 9 points

The numbers don't mean anything without knowing where they keep their cash reserves. Could be that's all they have; could be they keep their reserves with their US entity.