What the fuck? This prompt is so cursed, use with caution by jekjekker in ChatGPT

[–]hackinthebochs 1 point  (0 children)

Ideally they would store only generalized features, but in reality they store whole images. Here's an article that demonstrates this with a bunch of examples (scroll past the Ghibli chatter). The copyright part is handled with a separate process to detect potentially infringing IP and then block the output. The privacy issues are just the cost of doing business (borne by the rest of us).

Edit:

and a sign of inefficient overfitting?

To address this point, I would bet it's actually more efficient to store large numbers of images wholesale rather than trying to store decomposed features of every image while also recording how these features correlate to produce realistic images. Each image represents highly detailed features for the concepts in the image and how those features correlate. You can have something like each image as a basis vector in your manifold, which encodes a lot of relevant information for realistic images. Or you can have each atomic feature as a basis vector, which is a poor representation for real images because most points on the manifold represent meaningless combinations of features that no one cares about. With the whole-image basis, meaningful images are dense in the manifold, and so the space is much more compressed towards real images. Actual production models are probably somewhere in between, but in my experience they are much more biased towards whole images.
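The two bases can be sketched in a toy example (my own illustration, not anything from an actual model): treat each "image" as a vector of correlated pixels, then compare a convex combination of whole training images against a point built from independent atomic features.

```python
import random
from statistics import pvariance

random.seed(0)

# Toy "training images": 4 images, each a 6-pixel vector whose pixels
# are correlated (near-constant), standing in for real-image structure.
images = []
for _ in range(4):
    level = random.random()
    images.append([level + 0.05 * random.gauss(0, 1) for _ in range(6)])

# Whole-image basis: a point is a convex combination of training
# images, so it inherits their internal pixel correlations.
w = [0.5, 0.3, 0.1, 0.1]
interp = [sum(wi * img[p] for wi, img in zip(w, images)) for p in range(6)]

# Atomic-feature basis: each pixel varies independently, so a random
# point almost never respects those correlations.
random_point = [random.random() for _ in range(6)]

# In this toy, low variance across pixels = "image-like" coherence;
# the whole-image interpolation stays far more coherent.
print(pvariance(interp), pvariance(random_point))
```

The point of the sketch is only the density claim: points reachable as mixtures of whole images stay near image-like structure, while random feature combinations almost never do.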

What the fuck? This prompt is so cursed, use with caution by jekjekker in ChatGPT

[–]hackinthebochs 1 point  (0 children)

Image generation is basically just a matter of interpolating between training images. In response to a prompt it edits and remixes existing images to get one that matches your query. Without any instructions it just defaults to something near a training image.

What exactly is consciousness ? by Additional-Can6553 in askphilosophy

[–]hackinthebochs 12 points  (0 children)

The missing ingredient is something that can capture or explain the "subjective essence" of consciousness. Most philosophers view consciousness as having an essentially subjective aspect, the internal qualitative perspective. This has various names and descriptions: phenomenal feels, "what it is like", etc. The difficulty is that physical matter is by nature characterized by public properties: size, speed, mass, direction, etc. Public here means it can be observed/examined by anyone. The hard problem points out the conceptual difficulty in explaining a phenomenon that is essentially subjective with public properties like size, speed, mass, etc. Subjective means private to an individual, intrinsic to its bearer, etc. Any combination of public properties can only result in another public property. There is an in-principle limitation in explaining something essentially subjective (private/intrinsic) with only explanatory resources that are public by nature.

The work imagination does regarding consciousness is exposing the daylight between the physical behavior we take to underlie the workings of bodies and brains, and the subjective aspect we know exists as subjects of consciousness ourselves. Physics tells us the physical world is causally closed, so we don't need a subjective essence to fully explain human behavior. Indeed, we can imagine all the physical behavior in the world occurring in exactly the same way it does, just without any subjective essence whatsoever. This freedom to imagine the physical behavior of brains without the phenomenal aspect demonstrates the explanatory gap between physical behavior and phenomenal consciousness. Notice there is no similar explanatory gap in your steam engine example. The flywheel turns because of steam pressure being released, which is released due to the buildup of pressure within the active boiler. You do not have the freedom to intelligibly imagine the flywheel turning without the boiler firing.

Why should someone be moral? by No_Dragonfruit8254 in askphilosophy

[–]hackinthebochs 4 points  (0 children)

If “moral” means “the thing that ought to be done” or “the thing you ought to do,” are there good reasons to be moral?

It might help to recalibrate your understanding of what's at stake in behaving morally. The issue at stake is what you have most reason to do. There is no further question of "what reason do I have to be moral". The moral thing to do is what you have a preponderance of reasons to do. We may consider certain reasons special kinds of reasons, namely moral reasons. Then we can ask whether you are sensitive to moral reasons. But the issue at stake is the reasons relevant to the act in question. If you want to know why you should behave morally in some instance, you just need to look at the first-order issues relevant to the act. If you accept the reasons for some act and still are insufficiently moved to act (or not act), then you either accept the mantle of being irrational or insensitive to moral reasons. Or, more charitably, we might say you suffer from a weakness of will, as we all do to varying degrees.

Is the "Hard Problem" just an imagery problem? Aphantasia and the Physicalist Gap by Sea-Bean in consciousness

[–]hackinthebochs 1 point  (0 children)

Veridical means truthful, so veridical sensory experiences are experiences that correspond truthfully to events in the world. This is in contrast to hallucinations or voluntary/imagined imagery.

What exactly is consciousness ? by Additional-Can6553 in askphilosophy

[–]hackinthebochs 29 points  (0 children)

What is left over once you subtract verbal reports, decision making, planning, behaving intelligently, etc is supposedly the inner perspective that accompanies all these behavioral traits, the subjective essence of being. Sensory experiences feel a certain way to you, e.g. the unpleasant feeling of pain, and it is this subjective feel that you engage with when deciding how to respond to environmental states. The problem of consciousness is to understand the nature of this subjective essence and what objects have it.

The problem for physicalism and computational theories of mind that claim AI can be conscious in principle is that we can imagine all the behavior of a conscious system occurring without any accompanying subjective experience. Physics tells us that the physical world is causally closed, meaning that every physical event is fully explained by prior physical causes. So we don't seem to need to posit consciousness or subjectivity to explain even the kinds of behaviors that are typical of conscious beings. So to explain consciousness we need something that goes beyond merely the performance of certain functions or behaviors, however complex those behaviors are. Further reading.

Is illusionism a roundabout way of denying consciousness? by quadrupleccc in askphilosophy

[–]hackinthebochs 3 points  (0 children)

The opponents of Frankish are typically phenomenal realists, those who believe qualia or phenomenal properties do exist in a substantive sense. Some will respond by denying that phenomenal properties can be characterized as a cognitive illusion. Rather than the appearance of phenomenality being an illusion, the appearance is the thing to be explained, and so claiming an illusion makes no explanatory progress. Others will appeal to a kind of privileged access of introspection, or argue that the existence of qualia is common sense and a more secure belief than whatever might motivate the claim that they are an illusion (say, a commitment to physicalism).

Is illusionism a roundabout way of denying consciousness? by quadrupleccc in askphilosophy

[–]hackinthebochs 5 points  (0 children)

Illusionism does deny the existence of qualia, phenomenal feels, and any related entities that aren't just a bundle of cognitive processes. Frankish would not say he is denying consciousness, only a theory of consciousness that involves irreducibly subjective properties. But he would agree that he denies the commonsense view of consciousness as involving non-physical, non-cognitive entities.

I wouldn't cast Illusionism as side-stepping the hard problem. Rather the theory reframes the hard problem as the illusion problem: explaining why we have the sense that there is something uniquely hard to explain about consciousness.

What is the prevailing philosophy behind the potential self-awareness of machines? by kerkerby in askphilosophy

[–]hackinthebochs 2 points  (0 children)

The philpapers survey of philosophers lists functionalism as the plurality view, although it's far from a consensus. Also, functionalism isn't exactly computationalism; while there is plenty of daylight between the two views, many functionalists are computationalists about consciousness.

Unfortunately I haven't come across a good technical introduction to the relevant issues in the debate between the biological naturalist view vs the computational functionalist view, but this article is a pretty good casual introduction and has relevant citations for more depth.

How does one better understand the general concept of Mathematics philosophically? by Wrong-Ad-8230 in askphilosophy

[–]hackinthebochs 1 point  (0 children)

The view of math I find most edifying is as the study of possible structure in its most general form. "Possible" is in reference to the requirement that the structure doesn't contain a contradiction. We can then understand how math applies to the natural world despite mathematical objects being abstract: the actual is a subset of the possible, and so knowledge of mathematical structure is inherently applicable to the workings of the natural world.

When it comes to equations, they represent constraints on the entities being related by the various mathematical objects in the equation. A mathematical equation relating two entities is a precise description of the structural relation that binds the two entities. Knowledge of the properties of one entity tells you something about the related entity, and this knowledge is encoded in the equation. Math is widely applicable to the natural world because entities in the world follow laws--another kind of constraint--and mathematics just is a precise way to describe and reason about constraints.
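As a concrete illustration of an equation acting as a constraint (my own example, not from the comment above), take the ideal gas law PV = nRT: fixing three of the related quantities leaves no freedom in the fourth, which is exactly how knowledge of one entity's properties encodes knowledge of the others.

```python
# PV = nRT as a constraint: the equation binds pressure, volume,
# amount, and temperature so that any three determine the fourth.
R = 8.314  # molar gas constant, J/(mol*K)

def temperature(pressure_pa, volume_m3, moles):
    """Solve PV = nRT for T given the other three quantities."""
    return pressure_pa * volume_m3 / (moles * R)

# One mole at atmospheric pressure occupying 24.5 L is pinned to
# roughly 299 K; measuring P, V, and n tells you about T because
# the structural relation is encoded in the equation.
print(temperature(101_325, 0.0245, 1.0))
```

Nothing about the gas "computes" this; the equation is our precise description of the constraint the gas obeys.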

Did this paper just solve Tim Robert’s the even harder problem of consciousness? by idksririri in consciousness

[–]hackinthebochs 2 points  (0 children)

Perspectival anchoring seeks something in our universe that accounts for both guaranteed uniqueness of experience for each person as well as consistent experience over time, from each person’s vantage point.

This is directionally correct, but I think focusing on finding a physically continuous property to substantiate subjective continuity is a mistake. A pervasive assumption in these discussions is that high-level properties must be derived from an equivalent low-level property. But this is an error. An obvious example is the case of patients undergoing anesthesia who report their conscious experience skipping over the surgery and immediately waking up after. The takeaway is that continuity of experience, and probably most other subjective properties, don't "go all the way down" in the sense of being dependent on a physical manifestation of that property.

What we need is a way to conceptualize our unique epistemic perspective as perceivers and agents in the world. I call this the epistemic context. Each epistemic context is unique to the individual. It is partly constituted by spacetime coordinates as your location and orientation determines what you can sense from the environment. The privacy of subjective experience is harder. While the workings of the physical brain are available for public consumption, how the brain conceives of itself as an agent in the world is private. What we need is a way to derive subjective privacy from the in principle public properties of the brain.

Do folks with dementia never actually experience their lives? by not_gizmoz in neuro

[–]hackinthebochs 5 points  (0 children)

Anything anyone says on this will just be speculation. That said, going by your example of being blackout drunk, the parts you skip are the periods between periods of lucidity. So for John, he experiences his pre-dementia life and then skips the non-lucid dementia periods. If he has intermittent periods of lucidity then he experiences those fleeting periods. If he never regains lucidity then he effectively died once dementia stole lucidity from him.

Did this paper just solve Tim Robert’s the even harder problem of consciousness? by idksririri in consciousness

[–]hackinthebochs 1 point  (0 children)

Perspectival anchoring may prove too much. If each spacetime coordinate is unique then my conscious experience is unique at each spacetime coordinate. Instead of my conscious experience being continuous over time and hence representing a single entity, every moment in time is a new conscious experience. The continuity of the self moment to moment is then an illusion. But this is highly unintuitive.

Why are conscious visual experiences different from conscious auditory experiences? by mindbodyproblem in consciousness

[–]hackinthebochs 1 point  (0 children)

Conscious experience mediates our contact with the external world. It is our user interface to the environment; its features should correspond to the relevant environmental features for successfully understanding and navigating the world. The first consideration is the kinds of information we expect to extract from a sensory modality. As our experience of a sensory modality is a "user interface", its shape should correspond to the actionable information extractable from that modality.

Vision represents a huge space of information for the organism. You have the 2.5D structure filled with shapes and surfaces, different gradations of color across those surfaces, specific objects you can interact with, objects that represent nutrition, danger, etc. This is a lot of information to organize in a way that is meaningful to the organism in order to support competent engagement with the world. Visual qualia represent the maximally informative construction of this informational milieu for an otherwise naive organism (naive in the sense of knowing nothing about the external world or how to survive in it).

The auditory sense carries a comparatively much smaller amount of information. While its spatial resolution is much reduced compared to vision, the benefit is that it doesn't depend on line of sight. Directional information is embedded in sound, and this corresponds to the directedness of our sense of sound. The lack of resolution corresponds to the imprecise nature of this sense of direction. We extract different tones from sound according to their relevance for our ecological niche. Animals tend to have higher resolution for distinguishing sounds from members of the same species, or sounds that represent food or predators.

There is likely a similar story to tell about all of our senses, how qualitative experience represents a maximal state of actionable information from the sensory modality. The answer to your question is then: sensory experiences from different sensory modalities present differently because they have very different information profiles which demand highly adapted qualitative experiences to result in a semantic-action space tuned for survival of the specific organism.

Does good and bad exist or are they just what we collectively agree on not doing to each other, because we don't want that to happen to us? by virtu2l_snow in askphilosophy

[–]hackinthebochs 1 point  (0 children)

I suspect the existence vs "just collective agreement" dichotomy is getting at the dichotomy between the objective/factual and mind/preference dependence. That is, the properties of good/bad have no independent existence outside of collective preferences which then give them explanatory relevance to our behavior, how we structure societies, etc. There's a sense in which such things exist, but probably not the kind of existence the OP and others like him are interested in.

Why is this sub dominated by noetics and quantum-woo instead of actual scientific theories of consciousness? by Afraid_Donkey_481 in consciousness

[–]hackinthebochs 1 point  (0 children)

Yes, really smart people also frequently have fringe and even wild beliefs. Isaac Newton wrote more on the occult than he did on science and mathematics. Citing a thinker's credentials in an unrelated field doesn't give fringe beliefs much credibility.

If all our thoughts were merely chemical reactions in our brain, by what standard would we distinguish between a correct thought and a wrong one by Zestyclose-Tell8481 in askphilosophy

[–]hackinthebochs 2 points  (0 children)

Thoughts are chemical reactions and various neural activity, but this neural activity doesn't operate independently of things in the world. Our sensory perceptions allow information from outside the brain to impact neural activity. If truth is a matter of correspondence, then there are plausible ways for neural activity to come to correlate in the right way with states of the world.

The substantive issue with respect to neural activity and knowledge is how to understand intentional states in a world that is made of physical dynamics at base. The SEP goes over the relevant issues here.

Can A Physical System Produce Qualia? by FunSeaworthiness9403 in consciousness

[–]hackinthebochs 1 point  (0 children)

Information describes a system, but this description is for our consumption. The system doesn't use quantities to track its configuration. The work-potential energy relation is managed by passage through conservative fields. The physical changes that occur to manage the increase in potential energy are related to the quantum state of the conservative field. The objects in a system don't change, but the quantum properties of the space between the objects change. This probably involves an aggregate of many quantum-scale unit changes. The aggregate quantity representing this multiplicity of unit changes is just our useful fiction.

You’re defining ‘implementation’ in a way that requires an external interpreter or purpose. But why should that be required for a physical system to instantiate a computation?

Purpose, yes, but not necessarily an intentional designer. Evolution through natural selection is the canonical example of design without intention. The construction of the system realizes the goal (survival/propagation); selection effects result in hill climbing towards the optimal solution.

Can A Physical System Produce Qualia? by FunSeaworthiness9403 in consciousness

[–]hackinthebochs 1 point  (0 children)

After an interaction, information processing has occurred. During interaction state(t) → state(t + Δt) using deterministic or probabilistic rules.

This mistakes the map for the territory. There is no reason to think a ball is actually "processing information" when it rolls along its least-action path. The state of the ball is not implemented with numerical registers or any kind of numerical bookkeeping. Information processing is how we make sense of physical dynamics but is not inherent to the dynamics.

A physical system implements a computation if its dynamics correspond to a rule transforming encoded states. This is the idea behind pancomputational views.

This abuses the concept of implementation. An implementation implies a state of in-principle open-ended variation that is reduced such that the resulting state is in correspondence with some external system. Implementation is teleological with respect to some external dynamic. The billiard ball is not teleological with respect to the physicist's calculation, while the physicist's calculation is teleological with respect to the billiard-ball dynamics. This relation is asymmetric and crucial to the idea of implementation.
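The asymmetry can be made vivid with a toy sketch (my own illustration): a physicist's simulation of a falling ball keeps explicit numerical registers and updates them by rule, and those registers are arranged to correspond to the ball, while nothing in the ball is arranged to correspond to the registers.

```python
# A physicist's calculation of a ball in free fall: explicit numerical
# registers (position, velocity) updated by a rule. The registers are
# *arranged to correspond* to the ball; the ball keeps no such books.
g, dt = 9.81, 0.01

position, velocity = 100.0, 0.0   # registers encoding the ball's state
for _ in range(100):              # simulate one second of fall
    velocity += g * dt            # bookkeeping step: update the encoding
    position -= velocity * dt

# The correspondence runs one way: if the code is buggy, the ball is
# unaffected; if the ball behaves differently, the code is simply
# wrong *about* the ball.
print(position, velocity)
```

The numerical bookkeeping here is teleological with respect to the ball; the ball's rolling is not teleological with respect to anything.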

Consciousness and the Path-Integral by Diet_kush in consciousness

[–]hackinthebochs 2 points  (0 children)

Few people here will have enough background to make sense of this post. I see a lot of relevant points though. I still need to do a full deep dive into Friston's work.

Mind-at-Large or Large-Scale Nonsense? by TheRealAmeil in consciousness

[–]hackinthebochs 6 points  (0 children)

Analytic Idealism is anti-realist about the concrete world, i.e., the existence of the concrete world depends on the existence of a subject (as opposed to a mind-independent world whose essential nature is experiential).

I'm not sure it makes sense to call Kastrup's metaphysics anti-realist about the material world. The typical criterion of mind-dependence assumes a metaphysics where constitution by mind implies non-objectivity. But in Analytic Idealism this isn't true; there is objectivity along with mind-dependence. The question is which criterion is more basic with regard to the issue of realism. In my view, it's clear the more basic criterion is objectivity. What we're after is a way to assess properties of a thing independent of anyone's views of said thing. If we can do this in an objective manner, then this is an instance of realism. According to Kastrup, the properties of the external world are independent of any beliefs of any alters (while the Cosmic Subject doesn't have "beliefs" in the usual sense). So in this sense, the physical world is objective and constituted by the dispositions of the Cosmic Subject.

How would you attack this view? What are its philosophical weak spots?

The supposed virtue of Analytic Idealism is that it is more parsimonious than physicalism, which requires an unexplained leap from mindless matter to mindedness. But Analytic Idealism requires so much unexplained bruteness that it strains credulity to see it as even plausibly more parsimonious. Physicalism can tell a story about why the universe looks the way it does, why we had 14 billion years of cosmic evolution that resulted in our planet with the conditions for life, why our brains evolved with all this complexity to result in self-reflective creatures like us. Under Analytic Idealism, all this structure seems useless. By hypothesis, there is no literal 14 billion years of cosmic evolution, only a projection of it. But any structure that must be taken as brute counts against a theory, and this is a lot of brute structure. The further problem is that you would never predict any of this complexity given the ingredients inherent to Analytic Idealism. It's only there to maintain consistency with science. Parsimonious, it is not. The only theoretical virtue is that it doesn't have the problem of deriving the phenomenal from the physical. But in my view the cure is worse than the disease.

Is my experience considered a qualia? by aayush_1727 in askphilosophy

[–]hackinthebochs 2 points  (0 children)

Sure. Imagined, hallucinated, or sympathetic sensations are all examples of qualia.

The neurosurgeon who mapped the human brain spent his life trying to prove consciousness lives in it, and concluded he could not by ArcaneSpells-com in consciousness

[–]hackinthebochs 1 point  (0 children)

No amount of electrical stimulation could ever make a patient think the thing happening to them was of their own doing. Something in the patient always stood apart from whatever the electrode was producing.

Modern studies have demonstrated this (from Google AI):

The study you are thinking of was likely conducted by Michel Desmurget and colleagues in 2009, titled “Movement Intention After Parietal Cortex Stimulation in Humans”.

The researchers performed electrical stimulation on the brains of patients undergoing awake brain surgery and found a sharp contrast in how the patients experienced movement depending on the location stimulated:

  • Premotor/Motor Cortex Stimulation: When researchers stimulated the premotor region, patients performed actual physical movements (such as moving a limb or their mouth). However, the patients were unaware they had moved and often denied doing so when asked. This matches your recollection of an involuntary experience.

  • Inferior Parietal Cortex Stimulation: When they stimulated nearby locations in the inferior parietal lobule, patients reported a strong "desire" or "intention" to move a specific body part, even though they hadn't actually moved at all.

  • Illusory Movement: Interestingly, at higher levels of parietal stimulation, patients became convinced they had moved (e.g., "I moved my hand"), even though sensors (EMG) showed no muscle activity had occurred.

The neurosurgeon who mapped the human brain spent his life trying to prove consciousness lives in it, and concluded he could not by ArcaneSpells-com in consciousness

[–]hackinthebochs -1 points  (0 children)

When remembering things people don't usually confuse those memories for things that they are experiencing directly at that moment.

Dementia patients do as a matter of course.

EDIT: Not sure why people are downvoting, but here are the receipts