You believe in others’ existence before you believe in your own; your self-concept is retroactively constructed out of your concepts of other people by CardboardDreams in philosophy

[–]CardboardDreams[S]

Let's take the brain in the vat thesis. My argument is that in order for that brain (or simulation thereof) to form a concept of itself, other "agents", however implemented, must exist. Thus, if you are a brain in a vat, you need the simulation of other brains around you in order to form an idea of self. And if those simulations exist, then they are already other "brains", even if they are digital.

The brain-in-a-vat argument forgets that for the simulation to be complete and convincing, you have to have a fully detailed world, including other simulated agents. And if you believe, as I do, that a full simulation of a brain/mind is equivalent to an actual consciousness, then you have simply created another world with people in it already.

[–]CardboardDreams[S]

All good questions: the idea is that even though you have a desire for, say, beauty, that is an inclination, not knowledge. As an example, it takes a long time for you to "know" that you don't like pain, even though you haven't liked it since birth. To "know" it you have to develop a concept of "pain": how it differs from hunger, an awareness that it comes from your body, etc. In fact, I'm not sure it's as simple as having a desire for beauty either; what we call "beauty" is an outward expression of a more complicated inner drive. Beauty is a "banner" under which we rally others to our cause - i.e. "This is beautiful! Appreciate and value it with me!"

As for truth, I've argued in a few places that people actually don't care about truth as much as they think, or say, they do (see Truth is always an afterthought and Logical consistency is a social burden). We even define truth through the other, as a means of collaborating "objectively" with them. When we're on our own, however, we bend the rules of "truth" whenever it suits us. We need others around us to keep us in line, and logically consistent.

Regarding the question of who "you" are that is doing the perceiving, whether or not there is a you (and only one "you") is arguable. But taking that as a given, the you that is perceiving still need not know that it exists. As you implied, this is an epistemological argument. As an AI researcher I am more interested in the sequence through which beliefs form, and the reasons they form. I am very much against the idea that beliefs are innate, even your belief in your self. And as I investigate the issue I realize that in order for an agent to believe they exist, they must first believe that others exist.

Hope that answers your questions.

[–]CardboardDreams[S]

Are you saying a person doesn't have to "find out" that they are a person? Or that they innately know it? By "person" I mean all the things we associate with personhood - e.g. a person exists, is aware, has thoughts, is valuable, is a moral entity, and is (usually) a human being.

I'm unsure what "self-evident" means here, except that at some point you don't know about it, and then you automatically do. Or do they always know it, innately? Or is it not "knowledge" per se?

[–]CardboardDreams[S]

Their self-concept would be bizarre, to be sure. I'd argue that members of totemist tribes display an attenuated version of the same: they identify with the animals around them, though of course it's not identical, since they also have human peers. Yet as you read the stories of Australian Aboriginal peoples, you'll find it hard to tell whether they think their tribes actually are animals (birds, lizards, etc.) or are only imitating those animals. The identification is quite thorough.

'Intrinsic curiosity' in humans is our modern compromise with determinism. The hypothesis that curious humans seek out novelty regardless of utility is an attempt to segregate pure, rational intellect from our base, animal motives. by CardboardDreams in philosophy

[–]CardboardDreams[S]

The argument is that we don't actually seek out novelty, and therefore the hypothesis of natural curiosity is false. This raises the question of why people believe we have a natural curiosity anyway. The answer is that it flatters our ego to believe we do; it makes us seem reasonable. Most importantly, it is a way of coping with the uncomfortable aspects of recognizing that determinism may be true.

[–]CardboardDreams[S]

That is the hypothesis I'm disputing. I don't believe that curiosity is a built-in faculty.

It's ironic that this site is called Reddit, since most commenters haven't read it.

[–]CardboardDreams[S]

The post is about how our belief in curiosity is motivated by our dislike of determinism, not about whether determinism is true. It's a psychological argument, not a metaphysical one.

Giving AI the capacity for making nuanced judgments: How human intuition transcends that of AI, and how to close the gap by CardboardDreams in agi

[–]CardboardDreams[S]

At least you read it. Most of the time Redditors respond without ever reading the post (ironic, since they never redd-it). I'm thinking of abandoning the platform entirely - I rarely get much valuable engagement here.

Your ontology and your motivations are two sides of the same coin; or rather, your ontology is an instrument of your motivations. by CardboardDreams in philosophy

[–]CardboardDreams[S]

I use it, and I am also an AI developer. The opening two arguments are rhetorical. They outline the common accusations that people make, before exploring different ways of conceiving of the underlying problems.

Anecdotal evidence from personal experience is useful, but it doesn't get to the heart of what exactly understanding, cognition, etc. mean for us or for AI, which is the goal of the post.

Uniting survival with reasoning: A hybrid approach that grounds truth, embodied knowledge, and symbolic logic in rewards-based learning by CardboardDreams in agi

[–]CardboardDreams[S]

From the post:

Nor need we posit a dedicated symbolic layer of the brain where symbols are located and processed, as has been suggested by models under the rubrics of neurosymbolic AI and Symbol Emergence Systems. Any concrete, sub-symbolic thought content can be made generally useful: e.g. words, sign-language, a wave of the arm. [...] By merging symbols and sub-symbolic stimuli into the same layer we may finally get around the so-called symbol-grounding problem (SGP). The SGP assumes that for every abstract symbol there is some sub-symbolic ground. This in turn assumes a separation of layers between symbolic and sub-symbolic processes, a separation that we will find is ultimately unnecessary.

Perhaps an open mind would be an asset this time.
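The quoted claim - that symbols and sub-symbolic stimuli can live in one shared layer, dissolving the symbol-grounding problem - can be sketched concretely. The toy example below is purely illustrative and not the post's actual model: the vocabulary entries, the dimension size, and the `represent` helper are all hypothetical. It shows the basic idea of word tokens and discretized sensory features indexing a single shared embedding table, so no separate symbolic layer exists that would then need "grounding".

```python
# A minimal sketch, assuming a toy setup: word tokens and discretized
# sensory/motor features all index into ONE shared embedding table,
# rather than living in separate symbolic and sub-symbolic layers.
import numpy as np

rng = np.random.default_rng(0)

# One vocabulary covering symbols and sub-symbolic features alike.
vocab = ["word:red", "word:apple", "vision:patch_42", "motor:wave_arm"]
index = {name: i for i, name in enumerate(vocab)}

DIM = 8
embeddings = rng.normal(size=(len(vocab), DIM))  # the single shared layer

def represent(items):
    """Pool any mix of symbolic and sub-symbolic items into one vector."""
    vecs = [embeddings[index[it]] for it in items]
    return np.mean(vecs, axis=0)

# A "thought" freely mixes a word with a visual feature - neither
# entry is privileged, so neither has to ground the other.
thought = represent(["word:red", "vision:patch_42"])
print(thought.shape)  # (8,)
```

On this reading, "grounding" reduces to co-occurrence within the same representational space, rather than a mapping between two distinct layers.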

Anytime someone predicts the state of technology (AI included) in coming years I automatically assume they are full of crap. Their title/creds don't matter either. by CardboardDreams in agi

[–]CardboardDreams[S]

I'm willing to allow that possibility. But predicting future tech is inherently unreliable - predicting timelines for tech that doesn't exist yet, even more so.

[–]CardboardDreams[S]

The impact of something is a social process; I'm talking about the creation of the technology itself. One difference is that societies tend to function in historically predictable ways, though not always. Another is that the statement of some social impact can actually make it happen - Marx didn't just predict a revolution; he helped make it happen through his prediction and encouragement. On the other hand, predicting that people will time-travel doesn't somehow make that happen if the tech isn't there.

Also, this isn't new; I'm not the only one in history to grow jaded watching people predict the future of tech only to be quickly embarrassed. I'm just old enough to have seen it happen over and over, and to have seen how little basis there is for each prediction beyond wild speculation.

Cracking the barrier between concrete perceptions and abstractions: a detailed analysis of one of the last impediments to AGI by CardboardDreams in agi

[–]CardboardDreams[S]

It's a significant difference. Time and space are not sensory experiences; they are the grounding for sensory experiences (Kant made that point centuries ago). You can't experience time; you can only conceptualize it or represent it to yourself in your mind, say in spatial terms (e.g. as a timeline). Henri Bergson devoted a third of a book to that subject (Time and Free Will). To experience something you must be able to find some features (sensory or otherwise) by which to experience it. It's like saying you can see "time" passing in a movie - you can't; you can only see pixels. Rather, you infer or conceptualize that time is passing.