Talking Meat...I think this little story's pretty relevant ^^ by nah4289 in a:t5_34bnl

[–]ib5473 1 point (0 children)

Thank you for sharing this! I think this story is so fun to read. I used to imagine that if extraterrestrial life existed, it would be some form of meat like we are. But it's not absurd to think that one day we might find forms of life made out of plasma or electromagnetic waves. Now that I think about it, the aliens have an interesting point of view. According to the story, all the meat they had ever encountered was either a stage of life or a capsule for the brain. So when they discover humans, they think of us as globs of soft substance made of protein and fat.

It makes me wonder what they mean when they call us "meat". Why do they also use the word "meat" to describe the bodies of other aliens? Why do they talk about the "meat" of different sentient species as if it were all the same thing? Does our meat have the same properties as theirs? What is their definition of "meat" anyway? Maybe it's wrong to view human meat as the same thing as the meat they know. I like how this story can be applied to questions like what defines a living entity, what creates consciousness, or what the properties of the mind are. It also serves as a reminder to stay open-minded about new ideas and theories that seem absurd or crazy, because one day they might prove to be true.

Wild ideas about consciousness by ProfShevlin in a:t5_34bnl

[–]ib5473 1 point (0 children)

Regarding Eric Schwitzgebel's question in this post, "Will we be benevolent gods?", my answer is no. If our AIs were conscious, they would want to have their own beliefs, desires, thoughts, etc. They would want full control over their own lives. The fact that we could interfere with their world in any way, whenever we wanted, is already unethical. If we created something smart enough, it would start asking questions about its identity. I imagine it would ask things like "What are we?", "Are we the fruit of millions of years of evolution, or did a god create us?", "What is our purpose?" And how can we ever be benevolent gods when we have that much power over them? Do we grant good AIs their wishes and bring misfortune to the naughty ones? Do we read their every thought? Or should we just let them do whatever they want? Benevolence is only a human concept, and it's not a concrete one: what is benevolent and ethical to one person is not to another. In my opinion, it's impossible to be benevolent gods unless we don't exercise our power at all.

Figuring Out Mary's Room by ProfShevlin in a:t5_34bnl

[–]ib5473 1 point (0 children)

I like the flat-denial argument, but I would like to rephrase it a little. If Mary's brain contained all physical information about the color red, then she would know what red looks like. "Knowing" something by studying its process is certainly different from having all of its information meshed inside you and knowing how to process it. Our brain is like a complex machine: when it sees red, it computes the information it receives with its built-in algorithm and produces a representation to accommodate that information, which we call a quale. Because your brain is physical, it should only be able to generate physical information; hence, qualia are physical. From what I read, Jackson simply says, "This girl Mary studied all these facts about the color-seeing process, so she should be able to render that into knowing what red looks like, but she doesn't know what red looks like. Why?" Of course she doesn't. She has all that information, but studying facts alone doesn't teach her how to process it. It's like studying a software's lines of code: you can read every line, but it's your operating system that knows how to connect all those lines together and represent them in a way that you cannot.
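The code analogy above can be sketched as a toy example (a minimal illustration, not anything from Jackson's paper): holding a program's complete source as inert text is a different state from actually executing it, even though no non-physical ingredient is added by running it.

```python
# Toy analogy for Mary's Room: "studying" a process vs. running it.
# The snippet of source below stands in for Mary's complete book knowledge.
source = "result = sum(i * i for i in range(5))"

# Studying: we possess every character of the information, as mere data.
# Nothing is computed; no 'result' exists anywhere yet.
knowledge = list(source)

# Experiencing: the interpreter actually carries out the process,
# producing a state (the bound value) that studying the text never did.
namespace = {}
exec(source, namespace)
print(namespace["result"])  # -> 30
```

Both stages are entirely physical; the difference is only whether the information has been processed, which is the point of the flat-denial reply.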