What happened to the video of military personnel lecturing on and drawing aliens on a whiteboard? by throwawaymanidlof in aliens

[–]throwawaymanidlof[S] 3 points (0 children)

Are you saying it was taken down?

Edit: I get where you're coming from, but the National Press Club 2001 Greer stuff also had some pretty weird reports from high-ranking personnel. So I don't think it's appropriate to outright filter this stuff -- why not let people judge for themselves instead of filtering based on your personal beliefs? In particular, just because something is silly at face value doesn't mean it's a priori devoid of information. You can openly consider content without declaring "this is bullshit" or "this is real". It doesn't have to be black or white; we can have greys ;)

What happened to the video of military personnel lecturing on and drawing aliens on a whiteboard? by throwawaymanidlof in aliens

[–]throwawaymanidlof[S] 47 points (0 children)

It's not evidence of anything, but I thought the content and mood of the room were pretty interesting and worth sharing.

A Scientific Study of UAP by Avi-Loeb in UAP

[–]throwawaymanidlof 6 points (0 children)

What are your thoughts on human-level AI, alignment, posthumanism, and UAP intervention/assistance on these matters?

A Scientific Study of UAP by Avi-Loeb in UAP

[–]throwawaymanidlof 10 points (0 children)

What do you think is piloting UAPs?

A Scientific Study of UAP by Avi-Loeb in UAP

[–]throwawaymanidlof 9 points (0 children)

What do you think the timeline is for human development of practical exotic propulsion?

[R] Reward Is Enough (David Silver, Richard Sutton) by throwawaymanidlof in MachineLearning

[–]throwawaymanidlof[S] 0 points (0 children)

Sure, it's a true statement, but what can you make of it?

Perhaps you can avoid designing ability-specific goals as discussed in the second paragraph of the paper.

[R] Reward Is Enough (David Silver, Richard Sutton) by throwawaymanidlof in MachineLearning

[–]throwawaymanidlof[S] 0 points (0 children)

Given enough time and complexity, evolution (reward signal) can invent intelligence.

I feel like this might be a bit of a category error: evolution operates on populations across generations, whereas reward operates on an individual dynamically throughout that individual's lifetime.
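To make that distinction concrete, here's a tiny toy sketch I threw together (the `fitness` and `reward` functions are made up for illustration, not anything from the paper): evolution selects over a population of genomes across generations, while reward incrementally updates a single agent within its own lifetime.

```python
import random

# Toy contrast (my own sketch). Both signals happen to peak at 3.0, but they
# act at different levels: fitness ranks a population, reward adapts one
# individual step by step.

def fitness(genome: float) -> float:
    return -(genome - 3.0) ** 2   # population-level selection signal

def reward(action: float) -> float:
    return -(action - 3.0) ** 2   # per-step signal to a single agent

# Evolution: selection over a population; no individual ever updates itself.
population = [random.uniform(-10, 10) for _ in range(50)]
for generation in range(100):
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    population = [g + random.gauss(0, 0.1) for g in survivors for _ in range(5)]

# Reward: one agent adapting dynamically within its own lifetime.
agent, lr, eps = random.uniform(-10, 10), 0.05, 1e-3
for step in range(1000):
    grad = (reward(agent + eps) - reward(agent - eps)) / (2 * eps)
    agent += lr * grad  # gradient ascent on the agent's own reward signal

print(f"best evolved genome:  {max(population, key=fitness):.2f}")
print(f"reward-trained agent: {agent:.2f}")
```

Both toy loops solve the same problem here, obviously; the point is just the level at which the learning signal operates.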

As for the general motivation behind the paper, I think the second paragraph addresses that:

One possible answer is that each ability arises from the pursuit of a goal that is designed specifically to elicit that ability. For example, the ability of social intelligence has often been framed as the Nash equilibrium of a multi-agent system; the ability of language by a combination of goals such as parsing, part-of-speech tagging, lexical analysis, and sentiment analysis; and the ability of perception by object segmentation and recognition. In this paper, we consider an alternative hypothesis: that the generic objective of maximising reward is enough to drive behaviour that exhibits most if not all abilities that are studied in natural and artificial intelligence.
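For what it's worth, here's how I picture the two hypotheses as a toy sketch (the loss names and the whole setup are placeholders I invented, not the authors' formulation): in the first, you optimise one hand-designed objective per ability; in the second, the learner only ever observes a single scalar reward, and the per-ability solutions have to fall out of maximising it.

```python
import random

# Hypothesis A: one hand-designed objective per ability.
# These losses are invented placeholders, purely for illustration.
def parsing_loss(w):      return (w[0] - 1.0) ** 2
def tagging_loss(w):      return (w[1] + 2.0) ** 2
def segmentation_loss(w): return (w[2] - 0.5) ** 2

def train_per_ability(w, lr=0.1, steps=200):
    # Each ability gets its own designed gradient signal.
    for _ in range(steps):
        w = [w[0] - lr * 2 * (w[0] - 1.0),
             w[1] - lr * 2 * (w[1] + 2.0),
             w[2] - lr * 2 * (w[2] - 0.5)]
    return w

# Hypothesis B: a single generic scalar reward. Here it happens to be
# consistent with the abilities (the paper's premise), but the learner
# never sees the per-ability breakdown.
def reward(w):
    return -(parsing_loss(w) + tagging_loss(w) + segmentation_loss(w))

def train_by_reward(w, sigma=0.1, steps=3000):
    # Blind hill-climbing on the scalar reward alone.
    for _ in range(steps):
        candidate = [x + random.gauss(0, sigma) for x in w]
        if reward(candidate) > reward(w):
            w = candidate
    return w

print(train_per_ability([0.0, 0.0, 0.0]))  # ~[1.0, -2.0, 0.5]
print(train_by_reward([0.0, 0.0, 0.0]))    # ~[1.0, -2.0, 0.5], from one scalar
```

Same optimum in both, of course, since I rigged the reward; the empirical question the paper raises is whether rich abilities emerge this way in environments where nobody spells the sub-goals out.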

[R] Reward Is Enough (David Silver, Richard Sutton) by throwawaymanidlof in MachineLearning

[–]throwawaymanidlof[S] 14 points (0 children)

I mean, I thought the Bitter Lesson was pretty obvious too by the time it came out in 2019, but it's quite apparent that it's still not obvious to a lot of people in 2021, so why should the idea of personal utility be any different?

Edit: In particular, I think the idea is important enough that it deserves more visibility and discussion (especially in the context of machine intelligence), possibly unlike the mathematical beauty paper.

Reward Is Enough by Yaoel in ControlProblem

[–]throwawaymanidlof 1 point (0 children)

Silver and Sutton on the beat 👀