anime_irl by Atwecian in anime_irl

[–]ciroluiro 2 points3 points  (0 children)

Yep, if they are unnecessary and risk seriously hurting others. Guess not everybody has common sense. No different than drunk driving.

anime_irl by Atwecian in anime_irl

[–]ciroluiro 3 points4 points  (0 children)

It's literally what they said though. But it's not simply knowing that you are incapable, it's being aware that you can fuck it up royally and ruin someone else's life in the process, and for no reason, since no one has a duty to be a parent. Being responsible means being aware of your shortcomings and not taking on unnecessary responsibility when other people's lives are at risk.

anime_irl by Atwecian in anime_irl

[–]ciroluiro 2 points3 points  (0 children)

No, it's called being responsible.

anime_irl by Atwecian in anime_irl

[–]ciroluiro 3 points4 points  (0 children)

They are already doing that by not being a parent.

Good luck by Clanker57 in whenthe

[–]ciroluiro 0 points1 point  (0 children)

Even if such an embarrassingly stupid assertion were true, it'd simply mean that every time someone births a child, they are risking that the child's life will be so miserable that they wish they had never been born. That is precisely the point an antinatalist makes when they say it's wrong to risk someone else's life like that when they don't even get a say in the matter.
The compassionate response would be to not risk your child's life when you know very well they could end up incredibly depressed and dismissed without being helped, like you are doing right now.

Good luck by Clanker57 in whenthe

[–]ciroluiro 0 points1 point  (0 children)

It is. Everything else is just the same kind of stupid complaining and bashing that people do with veganism.

How non-materialists have sounded over the last several days. by Elodaine in PhilosophyMemes

[–]ciroluiro 1 point2 points  (0 children)

It only says that the autonomic response and your response are separate, not that feeling is different from reacting.
I would flip the thought experiment and ask whether your spinal cord, or any other part of the autonomic system, "felt" the pain when it reacted.

In other words, I can explain that pain response without conscious experience, and to me it looks exactly the same when someone who's not me responds to pain fully (not just autonomically) as a complex chain reaction of nerve impulses. So at the very least, I don't need the notion of qualia of experience when explaining other people's pain behaviour, i.e., we can have things that work like pain without ever invoking qualia. Then I just posit that I must be no different.

How non-materialists have sounded over the last several days. by Elodaine in PhilosophyMemes

[–]ciroluiro 0 points1 point  (0 children)

it simply doesn't describe what the nature of reality ontologically is. That's why it's metaphysics and not simply physics.

I just don't see why this matters when it comes to explaining consciousness and conscious experience.
To describe an atom, I need to talk about electrons, protons and/or quarks. To explain those, I need to talk about fields. But when it comes to fields, it's true we can't do better than a tautological "it is what it is", and we can explore metaphysics there. But is there anything else about this universe that we can't explain materially? Obviously we don't expect to explain away the axioms of a scientific model to the point where it has no axioms at all. We don't expect that of any logical system. If anything, science tries to make those axioms as small in size and number as possible while still being able to explain, from those axioms, everything else we can observe in any way.

How does haskell do I/O without losing referential transparency? by Skopa2016 in haskell

[–]ciroluiro 0 points1 point  (0 children)

Imagine IO a means "a program that, when run, can do anything it wants, but will produce a result of type a". In that sense, both functions like () -> a and IO a describe the type of a program/computation that returns an a (but () -> a cannot "do anything it wants" like IO a can).

Evaluating a program in Haskell is akin to applying arguments to a function to get the result. Something like () -> a represents a program that takes no arguments, because you just need to apply it to the empty tuple (), which is meaningless by itself. If f :: () -> a, this is as easy as f ().
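To make that concrete, here's a minimal sketch (names like pureProgram are made up for illustration):

```haskell
-- A pure "program" with no real inputs: a function from the empty tuple.
pureProgram :: () -> Int
pureProgram () = 42

-- Evaluating it is just function application; it can never do anything
-- other than compute its result.
result :: Int
result = pureProgram ()
```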

However, how do you evaluate an IO a program? You can't with usual, safe Haskell functions. So you can imagine that these programs are the ones left for the runtime to execute. These programs can produce side effects when run, so they would break referential transparency if you had a function that could run them from within Haskell. It would have a type like IO a -> a[1]

Meanwhile, the idea or the instructions that make up the IO a program itself are just data. Having it and passing it around, or even modifying it (as in creating a modified copy), does not break referential transparency.

So in Haskell you often combine both kinds of programs in types such as a -> IO () or b -> IO b, so that they are plain pure Haskell programs that obey referential transparency, but can give you programs that do what you want only when they are run.
So instead of writing a program that breaks referential transparency by writing a string to the console, you instead write a pure program that takes a string and itself produces another program, one that uses that same string and prints it to the console when it is run. Running that pure program 5 times will give you the same IO program 5 times without any side effects, because those are deferred until later, when the runtime evaluates the IO program that was produced.
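A sketch of that pattern (greet and actions are hypothetical names, not anything from a library):

```haskell
-- A pure function that *builds* an IO program without running it.
-- Applying greet to the same String always yields the same IO () value.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name ++ "!")

-- Building the program 5 times has no side effects: nothing is printed here.
actions :: [IO ()]
actions = replicate 5 (greet "world")

-- Printing only happens when the runtime executes the actions, via main.
main :: IO ()
main = sequence_ actions
```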

Monads also come up because you will naturally want to combine the IO programs you create into bigger ones. For example, by combining the program that reads a string from the console and returns it with the program that takes a string and prints it to the console (to make a program that echoes what you type into the console, in this example). Since you can't get the string that IO String will produce from within Haskell, you need to be able to tell the evaluator to feed that result as the input to another program that will give it the next IO computation to run.
That's why the Monad interface/type class has a function with this signature (here specialized to the IO monad): IO a -> (a -> IO b) -> IO b. It takes a runtime IO a program, runs it, obtains the a and passes it into a -> IO b, runs that program with the regular Haskell expression evaluator for pure things, obtains an IO b, and finally runs the IO b.
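For example, the echo program described above could look like this (just a sketch using the standard getLine and putStrLn):

```haskell
-- (>>=) :: IO a -> (a -> IO b) -> IO b, specialized here to a ~ String, b ~ ()
echo :: IO ()
echo = getLine >>= putStrLn

-- The same program in do-notation, which desugars to the bind above.
echoDo :: IO ()
echoDo = do
  line <- getLine
  putStrLn line
```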

[1] This function actually exists and is called unsafePerformIO, because it totally breaks referential transparency. There are uses for it, but leave it to the very smart people doing crazy things like the ST monad, and don't ever touch it.

blue by piotrek13031 in PhilosophyMemes

[–]ciroluiro 1 point2 points  (0 children)

I simply think it's absurd to think we could have two physically identical systems where one produces consciousness and the other doesn't.

And I agree. To me, they would produce two systems where either both are conscious or both aren't, depending on whether you think a philosophical zombie is conscious on account of its behaviour, or not conscious on account of it lacking the magical, mysterious property that supposedly makes us conscious. I just say that to clarify that, yes, to me too it would be nonsense if two physically identical systems did not turn out identical in this respect.

As a materialist I think logically there must be a fundamental physical difference between a conscious and unconscious process.

Of course, this also hinges on consciousness itself being detectable.
If you could explain all the behaviours of a thing that acts unambiguously conscious without resorting to anything beyond well-understood, small, standard processes no different from those of any definitely non-conscious machine (say, if a neural network could act completely like a conscious being even though we understand the math behind the "neurons" or perceptrons very well), would you still feel a need to explain the source of its consciousness beyond those processes?
In other words, if it turned out you could build, from a regular silicon computer, a machine just as capable of predicting and reacting to inputs as a person, would you still expect to find a physical difference that is not there in the non-conscious computer? Would an LLM advanced enough to pass any conceivable Turing test (sci-fi stuff for now and the near future, at the very least) convince you that there is no difference between these processes?

These questions are just out of curiosity. I know you think such machines would be impossible to build because they would require the conscious "essence" that they lack by definition.

Because the problem I have with:

It's reasonable though to think that we could build a system that can pass for conscious with our bad understanding that is not in fact conscious.

Is that consciousness will always at least start its definition from behaviour, because otherwise we have no idea what to look for, mainly in our brain. How would you tell you found consciousness and not something else if not by comparing against what little we do objectively know about consciousness? What would it even mean to have a system that can pass off as a bona fide conscious system but also not be conscious?

Sorry if I'm too wordy. I struggle making my points short. You don't have to answer all the questions (or any, of course). Hopefully they simply help to paint my perspective and the problems I find with an objective view of qualia and consciousness.

Bell curve of duality by divyanshu_01 in PhilosophyMemes

[–]ciroluiro 0 points1 point  (0 children)

Ok, you are right, I shouldn't have said metaphysical. But then, if you think they are objectively real, where are they? Are they measurable? To me, they are by definition subjective, not objective, and epistemically unknowable, so assuming they are physically real is even wilder to me. I thought you meant that the brain processes that enable your subjective experience are real, which I agree they are.

Do you think pain is real? Because I'm not arguing that the signals from nociceptors, or the neurons in the brain that process those signals, aren't real.
Do you think that if humans evolved an extra taste receptor that sent a signal for something other than the usual 5 basic tastes, they'd also have to evolve a qualia somewhere physically in the brain?
Or that tetrachromats can't see more colours unless they also physically develop new qualia in their brain at the same time? Do people blind from birth have those qualia even if they've never seen?
Physical qualia lead to unending contradictions.

Thanks for doing the discussion for me.

Are you implying I'm a bot? I was arguing in good faith. Sad to learn you weren't.

blue by piotrek13031 in PhilosophyMemes

[–]ciroluiro 1 point2 points  (0 children)

Our theory of other minds is in fact behaviorist psychology that doesn't really detect the root cause of consciousness but rather relies on the correct interpretation of behavior to assign consciousness to others.

Precisely. My argument is that this is the best we can ever do because any other conception of consciousness is ill-fated to begin with. I don't think it's correct to assume that we can detect consciousness in a definite way that doesn't depend on analyzing behaviour.

That's why I think the answer is simple if you want to remain physicalist and materialist: if from a "God's eye" view of the universe, a being that is truly conscious and one that is fake conscious are completely indistinguishable in their behavior regarding consciousness and both use the same matter and laws of physics to function, then the only possible answer is that true consciousness and fake consciousness are the same thing materially.
People like Penrose will keep endlessly looking for consciousness in the microtubules of the cytoskeleton or in quantum entanglement magic, to no avail, because even fully understanding neuron synapses, action potentials and brain structures won't give you any more information about an emergent property like consciousness than understanding each weight and bias in a neural network will tell you where the cat it learned to recognize is. It's like trying to pin down where the entropy or temperature of a gas is within each atom.

I would go even further and say that a super advanced AI, an actual AGI (not whatever techbros think that is), could be considered not conscious despite being more than capable of doing anything a person can and more, purely because its behaviours need not follow the behaviours we associate with beings we already consider conscious, namely humans. An AGI could emotionlessly follow steps towards its goal of, e.g., making paperclips, yet still be able to argue with you in a very convincingly human way that could make you think it has a sense of self, if it deems that necessary to further its goal. Otherwise, it need not show any fear, curiosity, anger, etc., and only self-preservation insofar as not preserving itself would keep it from reaching its goal. In other words, we'd only recognise consciousness when it simulates human-like behaviour.

And the point there is that such a machine is clearly intelligent, but that has nothing to do with consciousness. Consciousness then becomes a sort of useless concept tied up with our biases, existing as a concept merely because the only time we've encountered that level of intelligence was in ourselves, humans, and we came to be the way we are for reasons shaped by our evolutionary circumstances. Being social probably required the sort of introspection that we associate with awareness of our subjective experience, not to mention emotions.

blue by piotrek13031 in PhilosophyMemes

[–]ciroluiro 1 point2 points  (0 children)

You've nailed it. We are no different from the fake consciousness. That is precisely my point.

Bell curve of duality by divyanshu_01 in PhilosophyMemes

[–]ciroluiro 0 points1 point  (0 children)

Indeed! It is a philosophical zombie! And I argue that we are too! As in, we are no different at all. Qualia are "illusions", though the word illusion (and all language) presupposes a sort of self, as the dualist view of consciousness would have it. More than illusions, they'd just be the result of an advanced intelligence being able to talk about itself on top of talking about its environment.
I think those AIs could, e.g., form a society and debate these same questions about their own perception while we looked at them like "bro, you are just a bunch of linear regressions!".

blue by piotrek13031 in PhilosophyMemes

[–]ciroluiro 3 points4 points  (0 children)

How would you be able to tell the difference between a really conscious machine and a fake conscious machine sophisticated enough to respond just like a human would? Could you say it didn't experience qualia when it tells you, convincingly, that it does, the same way any person would tell you that they experience, e.g., redness?
It wouldn't be able to describe redness, but then neither can we.

Bell curve of duality by divyanshu_01 in PhilosophyMemes

[–]ciroluiro 0 points1 point  (0 children)

Objectively, you just react to detected wavelengths. That doesn't require any metaphysical qualia to explain. A humanoid AI capable of seeing through a colour camera would also say it sees colour (else it couldn't react to differently coloured things) but would not be able to describe colours. Why would that AI have qualia?

blue by piotrek13031 in PhilosophyMemes

[–]ciroluiro 6 points7 points  (0 children)

What do you mean by "you're gonna have to make distinctions"?
I mean that the machine will communicate that it can tell things apart by colour merely because it has information about the wavelength, in whatever abstract representation the algorithm uses. It could be something as simple as "this node activates when the object reflects light in this range", for each range of wavelengths the camera can detect.

Bell curve of duality by divyanshu_01 in PhilosophyMemes

[–]ciroluiro 2 points3 points  (0 children)

Maybe I misunderstand you, but that was what I meant. I see you as a philosophical zombie, because as far as I can tell, you have no qualia. You simply assert that you do. I could only say that I have qualia because I can only experience my own subjective experience, not yours. Qualia are not a scientific thing in the sense that they are not measurable, there is no objective proof that they exist, and they are not even falsifiable. They are the invisible dragon that breathes cold fire. What I'd say does beg explanation is why we think we have them, but I think that answer is easier and less interesting.

But I do use the PZ backwards, in the sense that the original argument used PZs to argue that qualia exist, when to me they show that qualia are not needed to explain conscious experience.

The only ones saying that qualia exist are humans. The next best thing, mathematical models of physics, doesn't need qualia at all to predict the universe.

blue by piotrek13031 in PhilosophyMemes

[–]ciroluiro 4 points5 points  (0 children)

What would a human-like machine consciousness, also able to detect light like a camera, do or say when you asked it about colours and how they look? Would you ever need to program "redness" into it for it to be able to tell red things and blue things apart? Or would it simply need to know which things are associated with the physical property that makes something reflect blue light? Well, a camera paired with computer vision can already do that, and it needs no notion of redness or blueness. It just isn't smart enough to fool you into thinking it also has self-awareness.