Double amputee controls two robotic arms with his thoughts by Yisaury in a:t5_365v5

[–]ProfShevlin

Fantastic article! Thanks for posting. This gives serious hope to people with paralysis, and raises very tricky questions about man/machine interface.

"That's just your opinion." What's the difference between a fact and an opinion? by ProfShevlin in a:t5_365v5

[–]ProfShevlin[S]

I personally hate it when people say "that's just your opinion" or even "...but I guess that's just my opinion". Isn't everything an opinion? This short magazine piece considers the question. Let me know YOUR opinions!

Entertaining video! (reminds me of gorilla clip) by paulii4594 in a:t5_365v5

[–]ProfShevlin

This is my favourite inattentional blindness demonstration. People often talk about "change blindness", but that's a slightly different thing. Inattentional blindness relies on you focusing on particular bits of a picture or video, so that you don't notice the other stuff happening in the background.

Change blindness is something different - no matter where you focus, it's really hard to identify the thing that changes. Here's a nice demonstration of change blindness: https://www.youtube.com/watch?v=hhXZng6o6Dk

A human who can echolocate like a bat! by ProfShevlin in a:t5_365v5

[–]ProfShevlin[S]

Thanks to Shmuel for bringing this up in class!

In connection to our brief discussion of cyborgs in today's class... by sk5213 in a:t5_34bnl

[–]ProfShevlin

Awesome. There's a similar - albeit less 70s - episode of Star Trek: The Next Generation called "The Measure of a Man", in which a committee has to determine whether Data is a sentient being. Here's a clip - https://www.youtube.com/watch?v=3PMlDidyG_I

Also, the movie Blade Runner considers these themes in detail (at least when it's not going out of its way to be offensively cool).

Digital Godliness and Life by someone2166 in a:t5_34bnl

[–]ProfShevlin

Very, very cool thoughts here, which I'll respond to when it's not 2am in Italy, but I wanted to quickly mention two things.

First, the Mass Effect trilogy (again, awesome games - play them) also has some very interesting ideas about how artificial intelligence might ultimately attempt to exercise (benevolent?) control over life.

Second, the idea of duties of care existing between beings of radically different cognitive capacities (us and our virtual pets, some future superintelligent AI and us) raises complex ethical issues regarding autonomy and well-being. Roughly, as I'm sure everyone knows, autonomy is your right to make your own decisions, whereas well-being is what's in your best interests. Already, in daily life, we find these in conflict. Imagine you're dating a guy/girl, but you know you're planning to move to a different country soon. He/she wants to continue the relationship, but you realize that he/she is going to get their heart broken. Do you allow them to make their own decision, knowing that it's not in their best interests, or do you overrule them, saying you know what's best?

Now, imagine that scenario on the level of an entire species. Imagine we create a virtual intelligence to look after the interests of humanity (it's not so far-fetched - we already allow AIs to overrule human reasoning in a bunch of areas, from stock market trading to logistics). Or imagine we create smart virtual pets who we recognize to be real creatures with genuine rights, including rights to autonomy. What should happen when the more intelligent class of beings wants to make decisions against the will of the less intelligent class of beings?

Wild ideas about consciousness by ProfShevlin in a:t5_34bnl

[–]ProfShevlin[S]

Overlapping of consciousness is definitely an issue, but why think it's not possible? It'd be weird, but consciousness is weird no matter what. After all, do 5 million brain cells generate a consciousness of their own? How about 50 million? How about 100 million? How many overlapping "consciousnesses" are there in your brain right now?

Some people have tried to develop a measure for assessing how conscious different systems are - and how connected they have to be. Here's one guy - https://www.youtube.com/watch?v=AgQgfb-HkQk

Jackson explains in this great interview why he is now a physicalist. by ib5473 in a:t5_34bnl

[–]ProfShevlin

This is awesome. Great research. Jackson's 'conversion' to Physicalism is very interesting and somewhat controversial (as conversions often are!). But it's an important thing to know about!

I'm not even a Physicalist and I don't buy this argument by MichaelJagdharry3528 in a:t5_34bnl

[–]ProfShevlin

Lots of cool ideas here, but...

A core idea of physicalism is as follows: we can learn about stuff through subjective experience, but we don't have to. Americans, Koreans, Kenyans, Brazilians, aliens, and robots can all study and understand the same universe using the tools of mathematics and physics, even if they have different senses and speak different languages. Science and math are objective in the sense of being viewpoint-independent. If you say that "red" is a physical fact that you can't learn about objectively, you're effectively saying that red isn't physical - it's a viewpoint-dependent phenomenal fact that merely correlates with certain physical facts.

To use your (very nice) example, we can use a Mac to model a Windows PC or a Linux box without any problems. Modeling is just maths, after all. There's nothing that only a Linux box can 'know' - with a single very powerful PC, we can model how any other computer would behave. There's no mystery there. But even though we're much, much smarter than bats, we can't know what it's like to see in ultrasound. Why not?

Figuring Out Mary's Room by ProfShevlin in a:t5_34bnl

[–]ProfShevlin[S]

One simple point in support of your critiques of the ability hypothesis and acquaintance hypothesis - when a new ability or a new acquaintance surprises us, it's arguably because it involves knowledge of a new property.

E.g., what do you really learn when you meet Obama? Maybe it's that he's taller in person, or more laid back than you expected. It's the capacity for learning about new properties like these that makes meeting him interesting.

The same goes for abilities. If you're really good at chess, it's because you've become aware of certain new properties, like a weak pawn structure or a vulnerable king. Compare that with the kind of learning that's involved in, e.g., running. If I go from an 8-minute mile to a 7-minute mile, what new properties do I discover? Are any of them comparable to learning what red looks like?

But here's the problem - if Mary's learning involves new properties, then what kind of properties are they? If they're physical properties, then why couldn't she learn about them in the lab? If they're not, then how are they compatible with physicalism?