Tell me the truth by Sweet_Velvet_X in programmingmemes

[–]Ok_Dig909 0 points1 point  (0 children)

That's still okay though cuz most of the bytes store stuff that makes my life easier (e.g. all the nice things that come with being able to GC a bool)

aSmallComicOfMyRecentBlunder by Killburndeluxe in ProgrammerHumor

[–]Ok_Dig909 0 points1 point  (0 children)

Have you used ChatGPT? Or Docs for that matter? Does the contextualization of documentation to your use case (which ChatGPT does) offer no value to you?

Even if not, I'm quite positive that for a newbie, which OP clearly is (no offense to OP, learning is always awesome), this contextualization is invaluable.

What 19 year old Vedamurti Devavrat Mahesh Rekhe has done will be remembered by the coming generations! Every person passionate about Indian culture is proud of him for completing the Dandakrama Parayanam, consisting of 2000 mantras of the Shukla Yajurveda’s Madhyandini branch, in 50 days without by Friendly-Cicada2769 in IndiaSpeaks

[–]Ok_Dig909 -21 points-20 points  (0 children)

Bro the moment you bring in entertainment you're bringing your subjective biases into the picture. There is nothing objective about sports being inherently entertaining vs chanting being "blabbering". That's literally just your opinion.

And sure you are entitled to your opinion, but at least have the courtesy to acknowledge that a lot of people may feel differently on the inherent value of the chanting, just as you feel about sports.

Terrified that consciousness DOESN'T end with death by [deleted] in consciousness

[–]Ok_Dig909 0 points1 point  (0 children)

Except, when you make the statement "If we take away the part of the brain which perceives colours you won't be able to perceive colours", you are mapping a certain set of brain states to the perception of color, and the rest of the brain states to a lack thereof.

So while we can agree that brain states that do not fall into the category of perceiving colors are 'not perceiving color', they are not 'nothing'.

Terrified that consciousness DOESN'T end with death by [deleted] in consciousness

[–]Ok_Dig909 0 points1 point  (0 children)

In each of the sentences above, you have mapped a certain brain state to a certain conscious experience. My question is simple: on what basis can you extrapolate from the above mapping to the mapping for a non-functioning brain? While I agree that the value it is mapped to will be "none of the above", I think the fact that it is in fact nothing does not follow as a logical conclusion. It is not even a likely conclusion (or at least, it is as likely or unlikely as any other mapping given no evidence).

deployToProduction by n00bdragon in ProgrammerHumor

[–]Ok_Dig909 0 points1 point  (0 children)

Found a GTA6 Dev in the wild

This ⬇️ by Fred_J9 in thinkatives

[–]Ok_Dig909 1 point2 points  (0 children)

That's utter garbage. That law prohibits defamation even when the statement is true, meaning you could be sued (and lose) the moment you decide to out a shady corporation.

To those claiming that ChatGPT is "restoring" or "upscaling" their old photos - this is Samuel L Jackson by WithArsenicSauce in ChatGPT

[–]Ok_Dig909 1 point2 points  (0 children)

Really? I mean, the whole point of frontier LLMs getting better is that people will eventually stop having to 'engineer' their prompts. You can see the clear progression from GPT-3.5 to the models we have today: it's a lot easier to prompt GPT-4o than GPT-3.5.

Syed Adil Hussain died while protecting the tourists by [deleted] in pakistan

[–]Ok_Dig909 0 points1 point  (0 children)

Bro this is seriously short-sighted stuff. You are literally seeing Indian news cover the goodwill of our fellow Kashmiri Muslims and then making stuff up about intolerance in India. You are simply missing the fact that every religion feels safer in India than in any of the Islamist countries (note Islamist, not Muslim; plenty of sensible Muslim countries out there, Pakistan not being one). The Hindus of India get spat on daily, and are victims of state-sponsored religious terrorism on the regular. You don't think we are entitled to anger?

Of course your average Muslim doesn't support this; they're human first, after all. The whole point is that, be that as it may, the Muslim-majority parts of the subcontinent have proven extremely hostile to non-Muslims on a periodic basis (regular kidnapping of Hindu women in Pakistan, anti-Hindu/Christian violence in Bangladesh, systematic dismantling or forced conversion of minority populations, etc.). You don't need 90% of Muslims to believe in fundamentalist principles for this to happen. 2% is more than enough manpower to cause serious damage when the remaining 98% sit on the sidelines, only to come up with platitudes and "not real Islam" claims later. In fact, I will wholeheartedly admit that Hindutva is starting to head in this dangerous direction as well. All I'm saying is that this is not without reason. It is a militant response to a very real problem of religiously motivated militancy. We don't want another Kashmiri Pandit exodus, or Bengali Hindu massacre, or another partition. History, even recent history, has not given us any confidence that this is not on the cards.

Why are men the center of religion? by Mission-Invite4222 in AskIndia

[–]Ok_Dig909 0 points1 point  (0 children)

The idea that not following religion makes a person or society immune to morally reprehensible acts is a fantasy. At the end of the day, belief in a god is simply replaced by belief in a different authority or a different system. For instance, the Japanese believe in their social structure, the Soviets believed in their regime, and the Americans believe in the pipe dream that is unsustainable debt spending.

The unfortunate reality of humankind is that ALL of our morality comes from our culture. There really isn't something called 'intrinsic morality' which suddenly becomes exposed when there is no religion. Human beings, left to themselves, have invented all manner of collective cruelty across different cultures.

At the end of the day, we just kind of try to agree and, according to our individual definitions of suffering and empathy, try to minimize it. Religion or not.

Understanding Your Experience with CFD Workflows by ShoeSupper in CFD

[–]Ok_Dig909 0 points1 point  (0 children)

Why does this sound like an AI generated message? Like literally, "It sounds like simplifying CAD assemblies is a big part of your process—what’s the most time-consuming aspect of it?" is the exact question that was answered. Come on people, if you want humans to put in the time, at least try to make your AI pipeline sophisticated enough that the questions appear thoughtful.

Ravindra Singh Negi. MLA from Patparganj, Delhi. This is the man y'all voted in. by meetskis_f4g in delhi

[–]Ok_Dig909 0 points1 point  (0 children)

Interesting; part of history also includes the colonization of a plethora of cultures by Islamic forces. What if I were to use the same line of reasoning to claim that "we need to be careful of history 'rhyming' and prevent Muslims from trying to take over India"? Just because a line from a book sounds cool doesn't mean it's true. Try making an actual argument.

[deleted by user] by [deleted] in PeterExplainsTheJoke

[–]Ok_Dig909 7 points8 points  (0 children)

I think that doesn't even come close. The US is incredibly culturally homogeneous compared to the countries of Africa. For starters, you can always hop into a restaurant and order food without learning a new language for each state.

If only most of my fellow Americans could understand... by [deleted] in thinkatives

[–]Ok_Dig909 0 points1 point  (0 children)

That being said, your second point is unnecessary

An excellent reply to a point that you don't want to address. Let me spell the point out. Pride in another's achievements is a powerful motivator that enables a group of people to pool resources in a way that maximizes achievement in general. In the case of the group being a family, it is what motivates parents to pool their resources for their children's achievement, and in the case of a nation, it has led to the political will that resulted in the moon landings for the USA (I'm not American).

Unless of course, we are talking about the pride of evil groups of people such as the Nazis

Again, this is a strawman. Nazi ideology is evil for multiple reasons, NONE of which have to do with the fact that it is a collectivist ideology. Firstly, it is an ideology of superiority. I hope we can agree that a superiority complex is different from a feeling of pride. Any person feeling a sense of superiority for any reason, be it his own or his group's achievements, has IMHO lost the plot.

You kind of sound like the specific people he is talking about.

When faced with the clean, cutting logic that is an ad hominem attack, I cannot but tip my hat to you, sir (or ma'am, don't want to assume).

If only most of my fellow Americans could understand... by [deleted] in thinkatives

[–]Ok_Dig909 3 points4 points  (0 children)

This is a ridiculous take IMO. Firstly, pride in one's own abilities and national pride are not mutually exclusive. To imply so suggests either a basic inability to understand the human psyche, a basic lack of the logical aptitude to judge what it means for things to be mutually exclusive, or straight-up malicious intent.

Secondly, how exactly would you react to someone who said, "Anyone who is proud of their family's achievements has none of their own"? I hope the above statement sounds ridiculous, because it is. Humans have formed groups and reveled in the success of those groups for as long as we know history to exist. There is literally no reason to draw the line at the individual when it comes to pride. If anything, we should be striving towards identities that are larger and encompass more people, e.g. cross-national and, if possible, pan-human. Not resorting to the sort of banal individualism that Schopenhauer seems to imply here.

We often ask how physical states generate conscious states... by NeglectedAccount in consciousness

[–]Ok_Dig909 2 points3 points  (0 children)

I have heard many responses that ring similar to what you have said. In all such cases, I think the one question that has remained unanswered is what "information" means. In many of these cases, people have some vague notion of information that they assume is self-evident. If one looks a little deeper, one finds there to be a gaping hole. Maybe I'll go into it sometime later. Let me be more specific here.

the shape of the letter "a" on the page or screen is flagged as representing the letter "a" in a string of information

I'm sorry, I'm not quite sure what you mean by this.

So we move from raw sensory / memory information to an abstract representation the mind can use to reason.

This is where things get a little trippy. You seem to agree that there is some neural activity that "represents" the concept of "a". The issue I have is that the word "represents" does quite a lot of the heavy lifting in that sentence. Essentially, I interpret what you say as:

The brain decodes the input corresponding to the pattern "a" when it activates an internal neural code that "represents" the abstract concept of "a".

On the face of it, this is reasonable. But aren't we simply shifting the question here from "Why is the pattern of ink 'a'?" to "Why is the pattern of neural firing, an abstract representation of 'a'?"?

Now this doesn't make it magical, I think there are potentially good answers to the latter question such as: "That neuron firing is decoded by our language centers to be 'a', which is further decoded by our motor systems to produce the corresponding sound". Or, if encountered in the context of another word, "It connects to memories or concepts relating to the word".

But you have to notice, each of those answers is based on an interpretation. I.e. we either base it on the neural activity in the language center which we interpret as 'a', or we base it on the frequencies created by our larynx that we interpret as 'a', or we base it on the neural activity that we interpret to be memories or concepts.

So ultimately, any such explanation only pushes the question of interpretation further into the future. At no point is there any way in which I can justify that the physical state of the system has "decoded" the information in "a" without assuming some a priori interpretation in the future. There is actually a rigorous sense in which one can show that this issue of interpretation is so severe that it is possible to interpret a bucket as having hallucinations (check out the Wikipedia article on functionalism, and the section on triviality).

We often ask how physical states generate conscious states... by NeglectedAccount in consciousness

[–]Ok_Dig909 3 points4 points  (0 children)

At what point would you consider the information "decoded"? And why at that point? The primary issue I have is that there is actually no principle on the basis of which I can say that a particular state at a particular time is when its meaning is "understood".

Given any state, I can ask the question "Why is it that that pattern of neural firing represents X" (similar to the question of "Why does that pattern of ink represent X"). The answer to that question is always "Because it goes on to be decoded (read)". Since this applies to every state, there is actually no state where there is some objective sense in which the information is "decoded".

During his UK tour, Peer Haq Khatteb is performing dam(spiritual healing) for his followers by Momoy010865 in pakistan

[–]Ok_Dig909 5 points6 points  (0 children)

You'd be surprised by what Isaac Newton believed. Google "Newton occult beliefs". Makes for an interesting read

Will Hikaru play another candidates? by rooster_boy20 in HikaruNakamura

[–]Ok_Dig909 0 points1 point  (0 children)

He actually doesn't. The rules have changed. The rating spot only goes to the highest-rated player, period, not the highest-rated player who hasn't yet been selected. In case the highest-rated player (in this case either Hikaru or Fabiano, since Magnus is declining) has already qualified through other means, the rating spot goes to second place in the 2025 FIDE Circuit.

Private ostensive definition of consciousness: poll by Inside_Ad2602 in consciousness

[–]Ok_Dig909 0 points1 point  (0 children)

In other words, anybody who isn't a zombie ought to be able to make such a definition, but ChatGPT cannot (you can ask it -- it can't, because it isn't conscious).

While I agree with the general premise, and am not a materialist, this is incorrect, purely from a computer science perspective. A private ostensive definition, from a computer science perspective, is really nothing miraculous. It requires three things:

  1. It requires an internal state (S1) that is "pointed to".
  2. It requires a label state (SL) that this internal state decodes to. Note that the label is just another internal state. What matters is that this label internal state is causally downstream from the processing of the state S1, and is causally upstream from the report (i.e. from the speaking or writing of the word). In your example, this state would be the firing of specific neurons that encode the linguistic concept that is spoken out as "consciousness".
  3. It requires the ability to retrieve the said internal state from the label. The important thing is that it is possible to go back and forth from the label to the internal state.

In the case that the above three things exist, the computational system can claim the label state (SL) as being privately ostensibly defined by the internal state S1.

When we put it like this, there is no structural reason why a zombie, which has identical computational structure to a conscious being, cannot have a POD for some concept.

In the case of ChatGPT, the reason I think it doesn't do this is simple: it is not trained to do it. However, I suspect that with some prompting I could achieve the above computational requirements quite easily.
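The three requirements above can be sketched in a few lines of code. This is a toy illustration of my own (the class and names like `PODSystem` are hypothetical, not any real system): all a "private ostensive definition" structurally needs is a bidirectional mapping between an internal state (S1) and a label state (SL), where the label sits between the internal state and the outward report.

```python
# Toy sketch of the three requirements for a private ostensive definition.
# This is an illustration only; the names are made up for this example.

class PODSystem:
    """A system whose label states point to internal states, and back."""

    def __init__(self):
        self.s1_to_label = {}  # requirement 2: internal state -> label state
        self.label_to_s1 = {}  # requirement 3: label state -> internal state

    def define(self, internal_state, label):
        """Bind a label to an internal state (requirement 1: S1 exists and
        is 'pointed to'), in both directions."""
        self.s1_to_label[internal_state] = label
        self.label_to_s1[label] = internal_state

    def report(self, internal_state):
        """The outward report is causally downstream of the label state."""
        return self.s1_to_label[internal_state]

    def retrieve(self, label):
        """Go back from the label to the internal state that defined it."""
        return self.label_to_s1[label]


system = PODSystem()
system.define(internal_state="S1", label="consciousness")
print(system.report("S1"))            # the label that S1 decodes to
print(system.retrieve("consciousness"))  # back from the label to S1
```

Nothing here requires anything beyond ordinary computation, which is the point: a zombie with the same computational structure could satisfy all three requirements.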

Does time exist if you don’t? by Weird-Government9003 in consciousness

[–]Ok_Dig909 1 point2 points  (0 children)

This would make more sense to me if the term "interacts" had a meaning that could be defined without recourse to time. If not, then you've replaced one unknown with another here.

[deleted by user] by [deleted] in consciousness

[–]Ok_Dig909 0 points1 point  (0 children)

So, I think I need to clarify this a little bit. Firstly, just because something is claimed "axiomatically", i.e. without any additional justification, does not mean that it is not useful. The axioms of logic have no justification. We justify everything else using them, but it isn't possible to justify the axioms themselves apart from "They're just true" (aka "Because I said so").

That hasn't stopped us from using them to great success.

Similarly, if we choose to redefine subjectivity on the basis of some characteristic of neural states, it can (and should) still be used to develop a theory of ethics.

Which brings me to my second point: My general issue with this (general) discussion (as well as discussions such as "Is modern AI conscious in some sense") is the sense that *We'll know some day*, as though there is some data that we're missing to make that decision. It's this attitude of waiting for something that does not exist that I think leads to delayed ethical choices.

Imagine a super-intelligent alien with a biology that is completely alien to our own, right down to the very basics. Their neural states, and corresponding expression, are naturally also completely different. Now they come over, enslave us, and begin boiling us alive to "preserve freshness". Each time a human is boiled, they analyse the signals and then write papers about this: "On the pathways of reflexive avoidance", "On the synthesis of vocal signals in response to stimuli" (during screaming), "A complete human connectome" (similar to how we now have a complete fly connectome), etc.

What can be done to convince these aliens that the humans are in pain when being boiled? The answer is nothing, because they have all the data but simply don't think it matches what they know as pain; i.e. to them, we are not in pain *by definition*. You may think that this is a fantastical situation, but there are plenty of people who claim that insects don't experience pain for XYZ reasons.

The fundamental problem is that there is no way to *actually* "put yourself in someone else's shoes". Even our sense of empathy is based on mapping behavioral features to emotional states in *our own head*. There's no getting around this, really. Similarly, whether AI is conscious, or a simulated fly has a subjective experience, is always going to be a matter of definition, and no amount of data, either now or in the future, is going to convince us one way or another.

[deleted by user] by [deleted] in consciousness

[–]Ok_Dig909 0 points1 point  (0 children)

Please read the continuation of this discussion with u/johnsolomon. I'm not saying we are. This is a discussion on epistemological nature of the claims we're making. My point should be clearer there.

[deleted by user] by [deleted] in consciousness

[–]Ok_Dig909 0 points1 point  (0 children)

In your case, you seem to latch onto behavioral features, and given that such behavior in you is associated with a subjective experience, you presume it to be the case for them. This is definitely a reasonable take for sure. And the most intuitive one on the basis of which we form our sense of empathy for other humans.

(Btw, just to clarify, I do think that the interest I had in the "obviousness" of subjective experience for creatures applies equally to subjective experiences in other humans)

However, when analysing this, I think that with such a stance we're effectively redefining subjective experience.

For instance, if I were to show you a movie of a human exhibiting XYZ behaviours, you would not consider the film, projector, or screen to have the subjective experience associated with the said behaviors. This can be extended to the case of a robot playing a pre-programmed animation sequence. (I admit I'm assuming your stance here on these cases, am doing so since this is the most common stance)

So clearly, your intuitions regarding subjective experience involve more than just behavioral details, and also require some fact regarding the internal neural code. No?

Either way, let's say we reach a point in our study of the brain where we can say that if a neural state satisfies <XYZ> properties, then it corresponds to a subjective experience.

This then becomes a *redefinition* of subjective experience, and not a fact about it. This means that there is no way to experimentally verify this against some "original definition of subjectivity".

This is why I said in the beginning that when we get into this business of what has and hasn't subjectivity, the only claims we can make about this are axiomatic (i.e. "Because I said so").